Unraveling the Mechanistic Origins of Epoxy Degradation in Acids
Water diffusion into polymers like thermosetting epoxies is well-studied; however, comparably little has been reported thus far on the related but very different mechanism of acid diffusion and the corresponding influence on material degradation. The diffusion of hydrochloric acid into an amine-cured epoxy system was studied in this work using concurrent gravimetric analysis and dielectric monitoring, and the mass uptake behavior was observed to differ significantly from that of water, being faster by an order of magnitude. A unique 3-stage diffusion of acid into epoxy was observed due to the influence of Coulombic interactions between oppositely charged ionic species diffusing at different rates. Material characterization studies revealed that the dominant degradation mechanism is physical in nature, with the formation of surface cracks driven by the swelling stresses arising from core–shell swelling in highly concentrated hydrochloric acid, leading to an erosion-type degradation phenomenon. The insights gained from understanding acid electrolyte diffusion could serve to design a more effective and efficient process to enable thermoset recycling by facilitating rapid material breakdown, or the design of acid-resistant materials for applications such as chemical storage tanks, batteries, and protective coatings in corrosive environments.
■ INTRODUCTION
Resource depletion and societies' difficulty in addressing the end-of-life management of cross-linked polymer products have led to trending research interests in related topics such as recyclable thermosets 1−4 and fiber recovery from composite wastes. 5 While most reported works focused on elucidating the enabling chemical reactions that permit controllable degradation, typically with acids and/or heat, the physical chemistry aspects of the diffusion-limited degradation process are poorly understood. Furthermore, since such materials are often used in their glassy state, the diffusion behavior of acids into glassy thermosets is expected to be similar, regardless of whether the polymer is designed to be degradable or not. Thermosetting epoxies are often the material of choice for applications handling corrosive media, but even then, the diffusion mechanism and the corresponding material degradation behavior are not well understood. Epoxy resins are considered among the top-line materials in terms of chemical corrosion resistance; however, most relevant investigations into the origin of chemical resistance tend to be restricted to qualitative evaluations such as those data published in material handbooks or chemical compatibility guides, which only provide a superficial insight into the environmental conditions and resistance duration. 6,7 In contrast to the common assumption that acid degradation is a chemical degradation process, it was realized through this work that the actual mechanism is a physical degradation process driven by the aggressive plasticization of the amine-cured epoxy thermoset matrix as the diffusing ionic species bind with the polar moieties and disrupt the noncovalent bonding within the polymer network. The ensuing nonequilibrium swelling behavior led to physical erosion, which was observed to be the main degradation mechanism. Understanding how the diffusion of acidic electrolyte into a thermoset matrix leads to a corresponding degradation in material properties 8,9 or erosion would facilitate the educated selection of suitable materials and assist in the molecular design of tailor-made resin precursors, either for applications requiring improved chemical resistance or for controlled chemical degradation of the thermoset resin matrix to enable composite fiber recovery 5,10 and polymer recycling. 1,2 Recent interest in the diffusion of acids in polymers is related to understanding the mechanism behind the resolution limitation of chemically amplified photoresists to enable the design of more precise materials for nanoelectronics. 11,12 Even though many contemporary works have been reported on the topic of acid-degradable thermosets for sustainable polymer chemistry, these studies focused on the acid-degradable polymer chemistry and largely ignored the mechanism of acid diffusion. 1−4 Thus, in this work, the objective was to understand comprehensively the accelerated diffusion mechanism of acid electrolytes into glassy thermosets and propose explanations for the various phenomena observed leading to eventual material degradation.
Neogi compiled an excellent overview of various phenomena and models related to diffusion into polymers, and while most of the diffusion systems reviewed were thermoplastics, the concepts are expected to hold true for thermoset systems. 13 Hojo et al. presented the classical phenomena of the erosion–corrosion process by which a thermoset undergoes chemical degradation when immersed in corrosive media. 14 A similar mechanism in biodegradable polymers was reported by Gopferich et al., who presented a theoretical model based on the ratio between the penetrant diffusion rate and the polymer degradation rate. 15,16 While these studies focused on the consequential degradation behavior, they provide little insight into the basis behind the rapid diffusion of acidic electrolytes into the thermoset matrix compared with benign media like water.
In contrast, water diffusion into epoxy thermosets has been studied extensively and the network effects well understood; 9,17−20 however, controversy remains due to the many proposed diffusion theories that have yet to be definitively disproved or established as the dominant mechanism. 21−26 In spite of the debates, recent pieces of evidence point toward the dominant effect of network polarity over other factors such as free volume or cross-link density in determining equilibrium water sorption and the uptake rate. 9,23,26−30 Most of the epoxy–water diffusion systems investigated were able to reach equilibrium sorption, indicating the lack of chemical degradation or erosion observed within the timespan of the studies; however, Toscano et al. utilized a photoelastic stress analysis technique to reveal the evolution of swelling stresses over the period of water sorption. 31 Thus, systems with a larger mass uptake would be expected to undergo erosion by swelling-induced fractures along the swelling stress boundaries. Dielectric studies were used to correlate the relationship between dielectric permittivity and absorbed moisture, as well as to identify the bound state of water within the epoxy matrix, 32−34 which might be useful to monitor the diffusion of more polar electrolytes into the epoxy matrix. Miszczyk et al. further developed methods to correlate impedance data from electrochemical impedance spectroscopy to obtain the corresponding capacitance values, which were found to vary with gravimetric data of amine-cured epoxy coatings immersed in water. 35 The aim of this paper was to describe in detail the sorption behavior of hydrochloric acid (HCl) into an amine-cured epoxy by concurrent gravimetric measurements and dielectric monitoring. The swelling mechanisms leading to eventual physical erosion were investigated by modulated differential scanning calorimetry (mDSC) and Fourier transform infrared-attenuated total reflection (FTIR-ATR). The epoxy-amine system was studied in various stoichiometric ratios to briefly address the effect of network polarity and cross-link density in relation to HCl sorption. For one-dimensional Fickian diffusion into a plane sheet, the fractional mass uptake is given by

$$\frac{M_t}{M_\infty} = 1 - \sum_{n=0}^{\infty} \frac{8}{(2n+1)^2 \pi^2} \exp\!\left[ -\frac{D (2n+1)^2 \pi^2 t}{l^2} \right] \qquad (1)$$

where M_t is the total amount of penetrant that entered the substrate at time t relative to the initial dry specimen mass, M_∞ is the equilibrium penetrant content after a very long time, l is the substrate thickness, and D is the diffusion coefficient.
D can be calculated from the slope of the M_t versus t^1/2 curve by rearrangement of eq 1 into the following short-time approximation

$$D = \pi \left( \frac{l}{4 M_\infty} \right)^2 \left( \frac{\mathrm{d}M_t}{\mathrm{d}\sqrt{t}} \right)^2 \qquad (2)$$

It must be noted that similar works in the literature may or may not have taken into consideration the effect of the substrate thickness variation due to swelling. 17−21,27−29 While this effect may be considered minuscule and thus negligible in epoxy/water swelling systems, the thickness variation was observed to be quite significant for epoxy/HCl acid swelling systems, which experience upward of 50% mass increase and over 20% thickness increase before fracture. Due to the non-Fickian nature of the epoxy/acid diffusion system and the above-mentioned issue of sample fractures due to nonequilibrium swelling, eq 2 may not be the most suitable equation to model the entire diffusion process. However, after considering the complexity of the acid swelling process, the lack of alternative models, and the insights it delivers compared with the well-studied epoxy/water systems, it was decided that this model is still relevant, in spite of its limitations, to represent the intrinsic differences of epoxy/acid diffusion systems. As D cannot be directly obtained using eq 2 because the equilibrium mass is unmeasurable due to fractures, the initial linear slope portion of the M_t versus t^1/2 curve, S_i, is proposed to be a representative diffusion function proportional to D.
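As an illustration of how S_i and, where an equilibrium uptake is measurable, D can be extracted from gravimetric data, the following sketch fits the initial linear region of the M_t versus t^1/2 curve. The data values, the cutoff for the "initial" region, the assumed thickness and equilibrium uptake, and the use of NumPy are assumptions for demonstration, not values taken from the study.

```python
import numpy as np

# Hypothetical gravimetric record: immersion time (h) and mass uptake M_t (% of dry mass)
t_hours = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 9.0])
m_uptake = np.array([0.0, 3.1, 4.4, 6.2, 8.8, 10.7, 13.1])

sqrt_t = np.sqrt(t_hours)

# S_i: slope of the initial linear portion of M_t vs t^(1/2)
# (here taken as the first five points; the choice of cutoff is an assumption)
initial = slice(0, 5)
S_i, intercept = np.polyfit(sqrt_t[initial], m_uptake[initial], 1)

# If an equilibrium uptake M_inf and thickness l were measurable, the short-time
# form of eq 1 would give D = pi * (S_i * l / (4 * M_inf))**2 (eq 2).
l_cm = 0.2    # assumed specimen thickness, cm
M_inf = 20.0  # assumed equilibrium uptake, %; unmeasurable for HCl per the text
D = np.pi * (S_i * l_cm / (4.0 * M_inf)) ** 2  # cm^2 h^-1

print(f"S_i = {S_i:.2f} % h^-0.5, D ~ {D:.2e} cm^2/h")
```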
The idealized Fickian diffusion behavior can typically be observed only in lightly cross-linked elastomers above their T_g, as the mechanism of solvent uptake is understood to be dominated by the occupation of free volume sites with minimal interactions between the solvent and the polymer. For most polymer systems that undergo swelling, the diffusional behavior cannot be adequately described by the idealized Fickian theory; 37 thus, eq 1 can be simplified into the empirical power law

$$\frac{M_t}{M_\infty} = k t^{n} \qquad (4)$$

where M_t is the mass uptake, k is a constant, t is the immersion time, and n = 1/2 for Fickian behavior. Non-Fickian anomalous diffusion is characterized by 1/2 < n < 1.
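A minimal sketch of how the diffusion exponent n in eq 4 can be estimated from uptake data by linear regression on log-transformed values; the data and the restriction to M_t/M_∞ < 0.6 (the range where the power law is conventionally applied) are illustrative assumptions rather than values from the study.

```python
import numpy as np

# Hypothetical normalized uptake data (M_t / M_inf) over immersion time t (h)
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 9.0])
m_ratio = np.array([0.12, 0.17, 0.25, 0.36, 0.45, 0.57])

# Fit log(M_t/M_inf) = log(k) + n*log(t); keep only the early-time region (< 0.6)
mask = m_ratio < 0.6
n, log_k = np.polyfit(np.log(t[mask]), np.log(m_ratio[mask]), 1)

kind = "Fickian" if abs(n - 0.5) < 0.05 else ("anomalous" if 0.5 < n < 1 else "other")
print(f"n = {n:.2f}, k = {np.exp(log_k):.3f} -> {kind} diffusion")
```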
■ RESULTS AND DISCUSSION
Difference between Water and HCl Diffusion by Gravimetric Measurement. Amine-cured epoxies with different cross-link densities and network polarities were immersed in water and 10 M HCl in an attempt to identify the difference in the diffusion mechanism and to characterize the subsequent thermoset network response.
Diffusion of water into epoxy thermosets has been studied extensively, but there remains uncertainty regarding the effect of free volume or other network interactions such as adsorption and polarity effects; thus, to address this issue, multifunctional amine and epoxy monomers were cured in various amine hydrogen/epoxide ratios (r = 0.5−1.5) to form
samples ranging from amine-deficient to amine-rich, thus bringing about a diversity of network polarities and cross-link densities to evaluate the diffusion behavior (Table 1).
Interestingly, while it was predicted that the dielectric constant would be indicative of the network polarity and thus proportional to the amine content of the amine-cured epoxy samples, this relationship was not observed in the obtained dielectric constant results. The measured cross-link density also did not scale proportionally with the cross-link density projected from stoichiometric predictions. The deviations from prediction were proposed to be due to the formation of specific, thermodynamically preferable oligomeric conformations during the initial cure process, which result in the excess end group moieties being frozen in place during subsequent vitrification, thus leading to the observed deviation in network polarity and cross-link density from stoichiometric predictions. Nonetheless, the variation in material characteristics across the various amine hydrogen/epoxide ratios (r = 0.5−1.5) suffices for this study to investigate the effects on diffusion behavior in both water and concentrated HCl. The results of water diffusion were then compared with the diffusion study of amine-cured epoxy in 10 M HCl to identify the different mass uptake characteristics and phenomena.
The mass uptake behavior of epoxies in water (Figure 1A) was observed to be in agreement with that of other studies, 23,27 in that water diffuses more readily into networks with a higher amine content (larger r value) and reaches a higher equilibrium mass. Free volume, which is theoretically dependent on density and cross-link density, did not appear to influence the mass uptake or equilibrium mass at all with regard to water diffusion. The mass uptake rate in 10 M HCl (Figure 1B), as represented by the obtained S_i value, was observed to be over an order of magnitude higher compared with that of water. No equilibrium mass was observed in 10 M HCl due to eventual physical erosion and disintegration before equilibrium could be reached. Again, networks with higher amine content were observed to absorb HCl faster and physically disintegrate within a short immersion time. Sample R0.9, with a higher cross-link density, was observed to maintain physical integrity longer and achieve the highest mass uptake before disintegration among the samples. These results indicate that while the diffusion rate is dependent on network polarity, the cross-link density also plays an important role in maintaining physical integrity, especially for systems that experience significant swelling. The results were observed to be in agreement with the study by Li et al., 23 in that binding with polar amine sites within the thermoset network is an essential factor determining the equilibrium mass for both water and HCl, but the achievable equilibrium mass was also found to be limited by mechanical constraints before physical degradation occurs.
The diffusion behavior was evaluated according to eq 4. Water diffusion (Figure 1C) was observed to adhere closely to Fickian diffusion behavior with n ≈ 0.5 at various r values, indicating that while the network polarity and cross-link density were expected to influence the diffusion coefficient, they did not affect the diffusion behavior, as the thermoset remained glassy until the equilibrium mass was reached. Conversely, the diffusion behavior of HCl was observed to follow a 3-stage mechanism, followed by eventual physical disintegration before equilibrium mass was achieved (Figure 1D).
On first impression, it was proposed that the 3-stage HCl diffusion mechanism was due to erosion of the samples under excessive swelling. However, erosion of the samples was only observed after some period of swelling, and different samples began eroding at different times; thus, while the erosion phenomenon was observed, it was not likely to cause the 3-stage HCl diffusion mechanism. Furthermore, in spite of the different network polarities and cross-link densities of the amine-cured epoxy samples, the three stages were observed to occur over the same time frame, indicating that the mechanism should be due to the specific diffusion behavior of the HCl acid electrolyte instead of the thermoset matrix characteristics.
It was proposed by Zaikov et al. that the diffusion of the electrolytes into polymers should take into consideration the
diffusivity of individual ionic components of the electrolyte media instead of simply assuming the diffusion of the electrolyte as a homogeneous penetrant. 9,38 Consequently, the different diffusivities of cationic and anionic penetrants into the polymer bulk would lead to the formation of an interior electric field (E_int) between the cationic and anionic propagation fronts. The E_int would restrict the mobility of the ionic species, resulting in the reduced diffusivity observed in stage II of HCl diffusion. Eventually, an equilibrium would be reached between the diffusion of ionic species driven by the concentration gradient and the restrictive E_int, resuming diffusive propagation as a unified layer in stage III of HCl diffusion (Figure 1E). Thus, the 3-stage mechanism of HCl acid electrolyte behavior appears to be in agreement with the theoretical proposition made by Zaikov et al. over 30 years ago. The cross-link density reported in Table 1 reflects the covalent network density, as M_c is obtained from the rubbery modulus at T_g + 40 °C. Thus, the cross-link density would not have much of an effect on E_int, which is dependent on the electrostatic interactions between the bound ionic species. As observed from the mass uptake curves of R0.5−1.5 (Figure 1B), the diffusivity was found to be determined by the concentration of polar amine species (secondary and tertiary amine), instead of the expected covalent cross-link density. The ability of the amine moieties to attract ionic acid penetrants overshadows the restriction of network relaxation imposed by the cross-link density.
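The coupling that Zaikov et al. invoke can be made concrete with the standard ambipolar (Nernst–Hartley) result for a symmetric 1:1 electrolyte; this expression is a textbook illustration consistent with the stage III "unified layer" picture, not an equation taken from the present paper.

```latex
% For a 1:1 electrolyte such as HCl, electroneutrality forces H+ and Cl- to
% migrate together once the internal field E_int balances the concentration
% gradient, so the coupled (ambipolar) salt diffusion coefficient is
\[
  D_{\pm} \;=\; \frac{2\,D_{+} D_{-}}{D_{+} + D_{-}},
\]
% which lies between the individual ionic diffusivities: the faster ion (H+)
% is slowed by E_int and the slower ion (Cl-) is accelerated, consistent with
% the reduced uptake rate in stage II and the unified front in stage III.
```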
To summarize, cross-link density does not appear to have any effect on E_int or the diffusion of ionic penetrants.
Effect of HCl Concentration on Swelling Behavior. As R0.9 was observed to tolerate the largest swelling in HCl, it was used to study the effect of HCl concentration on the mechanisms of physical degradation during acid swelling. It is interesting to note that HCl with a concentration above 25 wt % (∼8 M) is classified as a corrosive liquid, whereas it is only considered an irritant at concentrations below 25 wt %. Thus, while no formal research has been performed to determine the diffusion mechanism of HCl into polymers, industrial experience and qualitative studies indicated the influence of HCl concentration on acid diffusion into the polymer matrix. As expected, increasing HCl concentration corresponds to an increase in the mass uptake rate (Figure 2). During the course of the study, it was observed that at high HCl concentrations above 8 M, R0.9 experienced core−shell swelling, leading to rapid disintegration due to swelling stresses and eventual erosion of the surface. The concurrent erosion mechanism during swelling complicates the mass uptake measurements, as only the net weight gain could be recorded. Thus, although the reported net mass uptake values for 8 and 10 M appear to be lower than those for 5 M, the true uptake is much higher owing to the erosion loss. The higher acid concentration would lead to a higher concentration gradient between the surface and the interior bulk, thus leading to higher diffusivity. The surface erosion phenomenon, while attributed to the swelling-induced physical degradation that is dependent on diffusivity, is also dependent on the available surface area. The mass uptake values for 10 M HCl appeared to be double those for 8 M, as the epoxy specimen swells much more, with a thicker swollen shell than at 8 M, but the erosion rate was only slightly higher due to the limited surface area available for erosion. It must be noted that even with crazes and cracks, the fractured but not yet eroded surface still experiences significant swelling that contributes to the observed mass uptake. At lower HCl concentrations (<8 M), in contrast, although the diffusion of HCl into amine-cured epoxy appeared to be relatively stable with little erosion loss, the significant swelling of the exterior layers led to the formation of stress gradients, which caused warpage and eventual fracture of the sample disks after prolonged swelling (refer to Supporting Information S4).
Core−shell swelling has previously been reported in various polymer systems 14−16,37 and was described in detail by C. E. Rogers as Case II diffusion, in which the diffusion was observed to be so rapid that a distinct boundary formed between the swollen exterior and the glassy inner core. In the case of R0.9 immersed in HCl concentrations greater than 8 M, the rapidly swollen exterior was observed to be only a thin surface layer, which cracked and eroded away from the unswollen bulk. Even when core−shell swelling behavior was not observed, the evolution of strained layers in the exterior due to swelling-induced stress transfer to the unswollen core was reported to influence the penetrant diffusion rate. 39 Thus, when also taking into consideration the electric-field-restricted electrolyte diffusion and the thermophysical changes of the polymer matrix due to plasticization effects, modeling the acid diffusion process presents a highly complicated challenge in light of so many variables and concurrent processes.
Dielectric Changes during HCl Uptake. To validate the theory of electric field restricted diffusion and investigate the noncovalent polar binding interactions between the diffusing HCl species and the amine-cured epoxy matrix, dielectric measurements were performed concurrently with weight monitoring when R0.9 was immersed in deionized water and 10 M HCl at 60°C. The dielectric constant measured at 1 kHz
is indicative of the ionic polarization response when the penetrant diffuses into the amine-cured epoxy network.
The obtained result for the dielectric response of water diffused into R0.9 (Figure 3A) was observed to be consistent with that reported by Garden et al. on dielectric changes due to water diffusion in another epoxy system, in that the increase in dielectric constant does not correspond proportionally to the increase in water uptake mass, which was explained by a network confinement effect restricting the dipolar response. 32 In contrast, the measured dielectric constant was found to increase accordingly with HCl mass uptake over time (Figure 3B), and a sharp increase in the dielectric constant by a few orders of magnitude was observed after the onset of HCl diffusion stage II. Initially, it was proposed that the onset of stage II during electrolyte diffusion was due to the restriction of ionic mobility to maintain charge neutrality; 9,38 however, the exceptionally large increase in dielectric constant, up to almost 5000, cannot be explained without also taking into consideration the effects of polar interactions with the matrix. For comparison, the dielectric constant of R0.9 was measured to be about 3.5 initially, that of 10 M HCl was measured to be around 5, and that of water was around 80. Thus, the large dielectric constant could only be explained by the ionic polarization between HCl and the tertiary amine moieties within the amine-cured epoxy network, which stabilizes the charge separation between the electrolyte ionic species and thus contributes to the large increase in the measured dielectric constant. The exceptionally large dielectric constant was only observed at relatively low frequencies, which correspond to ionic polarization, and the effect disappears at higher frequencies, further confirming the phenomenon to be due to the charge separation effects caused by the polar interactions between HCl ionic species and the amine-cured epoxy network (Figure 3D). Water does not dissociate in the epoxy network (Figure 3C); thus, while the slight increase in dielectric constant could be attributed to the interaction with polar moieties such as hydroxy and amine functionalities, 32 no significant increase in dielectric constant similar to HCl swelling was observed. The formation of E_int is only possible in the heavily plasticized, swollen state, whereby the ionic acid penetrants completely bind to the polar network moieties and
thus replace the polar/hydrogen bonding network with a polyelectrolyte-like network, as described by E_int. Thus, E_int is an indication that the heavily plasticized, swollen state has been reached, at least in the outer shell regions. The subsequent swelling-induced physical degradation occurs due to the stress-relief mechanisms of crazing and cracking, which lead to surface erosion. Even when no core−shell swelling is observed, as in the case of HCl concentrations <5 M, swelling and the associated plasticization still occur throughout the epoxy specimen with a more gradual concentration gradient. E_int still exists, but the magnitude of the dielectric contribution is lower due to a lower degree of plasticization, which corresponds to a lower degree of network relaxation. A comparison of the dielectric response with mass uptake for epoxy in 3 M and 10 M HCl is included in the Supporting Information S9.
Changes in Surface Morphology due to Swelling during HCl Uptake. Optical monitoring of acid diffusion and reaction with cured epoxies could be performed with in situ UV−vis spectroscopy 40 and micro-ATR/FTIR, 41 and often the objective was to obtain information related to the kinetics of acid reaction with the epoxy matrix. Kinetic studies of acid diffusion into cured epoxy systems could not be performed by FTIR techniques due to the strong, broad absorbance of acids in the 3500−3000 cm−1 range. FTIR analysis in the transmission mode tends to yield little information due to spectral saturation. Instead, a surface characterization technique like FTIR-ATR was performed to evaluate the physicochemical response of the epoxy exterior when immersed in 10 M HCl. This technique is especially useful for evaluating the characteristics of the swollen exterior layers produced by core−shell swelling. Sample disks of R0.9 were immersed in 10 M HCl at 60 °C, removed after 1, 2, 3, and 24 h, and immediately analyzed by FTIR-ATR. The sample disks were subsequently dried in an oven at 180 °C for 1 h and then analyzed again by FTIR-ATR to evaluate whether HCl is physically adsorbed or covalently reacted with the epoxy matrix.
The absorbance peak at 1510 cm−1, attributed to the stretching mode of the aromatic ring in the epoxy monomer, was identified as the invariant band. During immersion in 10 M HCl (Figure 4A), the broad peaks corresponding to −OH and −NH bands in the 3500−3200 cm−1 region were observed to increase significantly in intensity with respect to the invariant peak at 1510 cm−1 over time, attributed to the contribution by adsorbed HCl hydrates within the amine-cured epoxy matrix. Furthermore, it could be observed that while the initial −OH peak was centered at 3350 cm−1, another peak at 3250 cm−1 emerged after HCl immersion, leading to the formation of a plateau-like broad peak, which could be attributed to the ionic bonding between HCl and tertiary amine moieties within the epoxy network to form an ammonium chloride salt complex. The formation of salts disrupts the noncovalent polar interactions within the amine-cured epoxy matrix and facilitates further plasticization by HCl, leading to the anomalous diffusion behavior and the core−shell swelling as observed. After the HCl was dried off by heating in an oven (Figure 4B), no significant deviation was observed in the fingerprint region from 1500 to 650 cm−1, indicating that very little degradation could be attributed to chain scission events due to acid-catalyzed hydrolytic degradation of ether and amide linkages. Instead, the broad plateau attributed to HCl hydrates and the ammonium chloride salt complex remains, despite being lower in intensity, evidence of the strong binding between HCl and tertiary amine moieties. A further comparison between the initial R0.9 before immersion, after immersion in 10 M HCl for 24 h, and after further oven-drying has shown that, other than the polar interactions indicated by the broad peak in the 3500−3200 cm−1 region as explained earlier, no significant chemical degradation reactions were observed (Figure 4C); thus, it can be concluded that the HCl acid degradation of this amine-cured epoxy system proceeds via a physical degradation mechanism due to swelling stress.
Water is known to exist in bound and unbound states within amine-cured epoxy networks, based on dielectric studies, 34 diffusion studies, 18,42 and thermogravimetric analysis. 43 Modulated DSC was presented in this work as another technique to concurrently evaluate the binding states of adsorbed HCl and the extent of plasticization of the epoxy matrix by adsorbed HCl. Again, R0.9 was immersed in 10 M HCl at 60 °C for 1, 2, and 24 h; then, the sample was removed, wiped dry, and immediately crimped within hermetic aluminum pans for mDSC heating and cooling run cycles. The desorption endotherm (Figure 4D) indicated that the adsorbed HCl within the epoxy matrix exists in either unbound or bound states, due to the presence of two distinct endotherms, namely, below and above the boiling point of water. The unbound state refers to the adsorbed HCl hydrates within the free volume of the swollen epoxy, as they would be rapidly removed upon thermal removal of water as outgassing HCl. The bound state corresponds to the ammonium chloride salt complex previously described in the FTIR-ATR study. As the HCl desorption endotherm was observed to be broad and shifts toward higher temperatures over prolonged immersion in 10 M HCl, the adsorbed and bound states do not appear to be distinct states but rather occur as a complex distribution of bound states due to the amorphous nature of the amine-cured epoxy network morphology. The shift of the endotherms toward higher temperature is due to the increase in thermal stability of the adsorbed HCl owing to lower entropy as the acid/epoxy diffusion system approaches thermodynamic equilibrium. Furthermore, the adsorbed HCl would increase the heat capacity of the system by increasing the chain mobility of the amorphous network, whereas after desorption, the system returns to its glassy amorphous state. Therefore, the difference in heat capacity measured during heating and cooling cycles could be used to compare the degree of plasticization of the acid/epoxy system (Table 2). In general, longer immersion time corresponds to more significant swelling and a higher degree of plasticization, and mDSC is a suitable technique to monitor such changes.
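A brief sketch of how the heat-capacity difference between heating and cooling cycles could be reduced to a relative plasticization index; the Cp values below are placeholders for illustration, not the data in Table 2.

```python
# Hypothetical reversing heat capacities (J g^-1 K^-1) from mDSC at a common
# temperature, for R0.9 after different immersion times in 10 M HCl.
cp_heating = {"1 h": 1.62, "2 h": 1.71, "24 h": 1.93}  # plasticized, HCl still sorbed
cp_cooling = {"1 h": 1.48, "2 h": 1.49, "24 h": 1.51}  # after desorption, glassy again

for label in cp_heating:
    delta_cp = cp_heating[label] - cp_cooling[label]
    print(f"{label}: dCp = {delta_cp:.2f} J/(g K)  (larger dCp -> more plasticization)")
```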
To verify and visualize the physical degradation resulting from the proposed swelling-stress-induced cracks and the erosion mechanism, electron micrographs were obtained from vacuum-dried sample disks of R0.9 immersed in 10 M HCl at 60 °C for 2, 4, and 7 h (Figure 5). The initial amine-cured epoxy surface can be considered relatively smooth and homogeneous (Figure 5A); the particles observed on the
surface can be attributed to dust residues, which settled on the epoxy during the casting process and are unlikely to affect the subsequent acid degradation mechanism. After immersion in 10 M HCl for 2 h (Figure 5B), one can observe cracks, pinholes, and pits on the specimen surface. It is important to note that the captured electron micrograph is representative of the deswollen exterior surface after vacuum-drying to remove HCl hydrates, owing to their tendency to outgas. First, the proposed mechanism behind the formation of pinholes can be attributed to the chemical attack by HCl on the unreacted resin precursors or partially reacted clusters within the amine-cured epoxy network. These clusters that are susceptible to HCl acid attack were formed due to the vitrification of the entire network, which impedes the subsequent diffusion of unreacted species from achieving reaction completion. The formation of pits as observed might be caused by the formation of HCl gas pockets from the unbound HCl hydrates during the drying procedure of sample preparation for FESEM imaging. Similar pits were previously observed in injection-molded ABS samples and were attributed to volatile outgassing during molding. 44 The pits were mostly observed to form in the layer below the delaminated exterior, which supports the theory that such pits are likely to be artifacts from sample preparation for FESEM imaging. Lastly, the cracks did not appear to originate from any specific defect sites and were distributed homogeneously throughout the specimen surface. It was also obvious that the cracks propagated beneath the exterior layers and resulted in the delamination of the surface from the bulk. The only reasonable explanation was that the cracks propagated along the interface between the swollen exterior and the unswollen interior bulk. As observed from the change in specimen surface degradation over a period of immersion time, it appeared that the physical degradation occurred in cyclic repetition between exterior swelling (Figure 5B) and exterior delamination (Figure 5C,D), leading to the observed surface erosion phenomena. While this proposed mechanism may appear to be similar to environmental stress cracking of polymers, the key difference in this mechanism is that the origin of the stress is the swelling stress and not an externally applied stress.
■ CONCLUSIONS
In this work, a diffusion study of amine-cured epoxy in concentrated HCl was presented to unravel the mechanistic origins of the observed physical degradation phenomena, which were previously presumed to arise from chemical degradation reactions. Through concurrent gravimetric and dielectric monitoring, it was revealed that the diffusion mechanism of HCl into amine-cured epoxy networks is markedly different from that of the well-studied water/epoxy diffusion systems. The partial dissociation of the acid electrolyte within the epoxy network led to the observation of an anomalous 3-stage diffusion mechanism, which was also characterized by a concurrent sharp increase in the dielectric constant of the HCl-swollen epoxy system due to the formation of E_int. Further evidence supporting swelling-induced physical degradation as the dominant degradation mechanism was the lack of chemical changes within the fingerprint region of the FTIR spectra over the HCl swelling duration, as well as the observed desorption of bound and unbound HCl from the swollen epoxy matrix. Electron micrographs of the dried epoxy specimen surfaces over a period of HCl swelling have shown characteristic features of environmental stress cracking of polymers, which was attributed to the complementary action of the swelling-induced stress due to the large mass uptake and the plasticizing effect of acid on the epoxy network, leading to the erosion-type degradation mechanism observed.
Thus, the proposed epoxy/acid degradation mechanism based on the failure analysis insights does not appear to be restricted to either this specific epoxy system or the HCl media. Instead, the degradation mechanisms could be generally applied to the diffusion or degradation studies of a polar thermoset matrix immersed in concentrated electrolytes. Therefore, specialty solvents could be designed to harness the presented accelerated diffusion mechanism for faster, higher-yielding thermoset disintegration processes to enable thermoset recycling or composite fiber recovery applications. The large dielectric response could also be used to develop in situ sensors utilizing dielectric constant measurements with cheap handheld LCR meters to monitor the safe usage lifetime of thermoset composites used in corrosive environments.
■ EXPERIMENTAL SECTION
Materials. N,N-Diglycidyl-4-glycidyloxyaniline, alternatively known as triglycidyl-p-aminophenol (TGAP) epoxy resin, with an epoxide equivalent weight of 100 g mol−1, was obtained from Sigma-Aldrich. Bis(aminomethyl)norbornane (NBDA) with an amine hydrogen equivalent weight of 38.5 g mol−1 was obtained from Tokyo Chemical Industry. The liquid resin precursors were used as received. Hydrochloric acid (37%) was obtained from Merck and diluted to the respective concentrations (0.1−10 M) with deionized water generated in our laboratory.
Sample Preparation. The epoxy samples were prepared by mixing TGAP and NBDA according to predefined amine hydrogen/epoxide equivalent molar ratios (r) of 0.5, 0.7, 0.9, 1.0, 1.3, and 1.5 to vary the network polarity and cross-link density. The mixtures were subsequently degassed by centrifugation and cast in silicone molds with 1.27 cm diameter disc cavities with a depth of 2 mm. The samples were cured at 25 °C for 12 h, followed by postcuring at 120 °C for 2 h. The cured sample disks were stored in dehumidifier cabinets and preconditioned by overnight drying in a vacuum oven at 80 °C prior to testing (Figure 6).
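The stoichiometric mixing amounts follow directly from the stated equivalent weights (EEW = 100 g mol−1 for TGAP, AHEW = 38.5 g mol−1 for NBDA); the helper below is a hypothetical convenience function for illustration, not part of the authors' reported procedure.

```python
def nbda_mass_per_100g_tgap(r, eew=100.0, ahew=38.5):
    """Mass of NBDA (g) to mix with 100 g of TGAP for an amine-H/epoxide ratio r."""
    epoxide_equivalents = 100.0 / eew          # 1.0 eq of epoxide in 100 g TGAP
    amine_h_equivalents = r * epoxide_equivalents
    return amine_h_equivalents * ahew          # g of NBDA

for r in (0.5, 0.7, 0.9, 1.0, 1.3, 1.5):
    print(f"r = {r}: {nbda_mass_per_100g_tgap(r):.1f} g NBDA per 100 g TGAP")
```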
Gravimetric Measurements with Dielectric Monitoring (Figure 6). Acid sorption studies were performed by immersing the sample disks in hydrochloric acid of various concentrations at 60 °C. The samples were periodically removed from the acid, quenched in deionized water twice, gently patted dry on filter paper, and immediately measured. The sample disc thickness was measured with a Mitutoyo digital micrometer screw gauge with a precision of 0.001 mm, and the discs were then weighed with an A&D GR-202 semimicro balance with a precision of 0.01 mg. Three samples were measured for each sample set. Subsequently, one sample was selected out of each set for measurement of its dielectric properties using a Keysight E4980A LCR meter equipped with the ASTM D150-compliant dielectric test fixture 16451B. The dielectric test fixture, based on a parallel-plate design with gold-coated stainless steel electrodes, was cleaned with acetone before and after every measurement to remove residues. A reference HDPE film sample was measured periodically to ensure the equipment was within calibrated limits. The samples were monitored until large fractures occurred. The experiment was performed entirely within a fume hood due to the outgassing of HCl gas from the acid-swollen specimens during measurements.
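For a parallel-plate fixture such as the 16451B, the relative permittivity follows from the measured capacitance and the specimen geometry; the capacitance, electrode diameter, and thickness below are illustrative assumptions, not measurements from the study.

```python
import math

EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance_f, thickness_m, electrode_diameter_m):
    """Parallel-plate estimate: eps_r = C * d / (eps_0 * A), ignoring edge effects."""
    area = math.pi * (electrode_diameter_m / 2.0) ** 2
    return capacitance_f * thickness_m / (EPS_0 * area)

# Example with assumed values: 15 pF measured at 1 kHz on a 2 mm thick disc
# with a 38 mm guarded electrode gives eps_r of about 3, a glassy-epoxy-like value.
print(relative_permittivity(15e-12, 2.0e-3, 38e-3))
```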
Other Characterizations. Dynamic mechanical analysis was performed with a TA Instruments Q800 DMA utilizing the single cantilever mode with a temperature ramp from ambient to 300 °C to estimate the molecular weight between cross-links in accordance with rubber elasticity theory, 46 using the storage modulus value at T_g + 40 °C according to the following equation

$$M_c = \frac{3 \rho R T}{E'}$$

where ρ is the density, R is the gas constant, T is the absolute temperature, and E′ is the rubbery storage modulus. Density measurements were performed using an Ultrapyc 1200e helium gas pycnometer by Quantachrome Instruments. Modulated differential scanning calorimetry (mDSC) was performed using a TA 2920 mDSC with aluminum pans under dry nitrogen gas. The samples were equilibrated at −20 °C, then heated at 10 °C min−1 with a temperature modulation of 1.6 °C min−1 up to 180 °C, followed by controlled cooling at 10 °C min−1 with a temperature modulation of 1.6 °C min−1 until −20 °C. This technique was able to identify different types of bonding between the adsorbed HCl and the epoxy network, as well as to serve as a guide to determine the degree of plasticization of the epoxy network based on the difference in measured heat capacity between the heating and cooling cycles.
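A worked numeric example of the rubber-elasticity estimate described above; the storage modulus, density, and temperature values are hypothetical placeholders, not the measured values for these samples.

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def m_between_crosslinks(density_g_cm3, storage_modulus_mpa, temp_k):
    """M_c = 3*rho*R*T / E' with rho converted to kg/m^3 and E' to Pa; returns g/mol."""
    rho = density_g_cm3 * 1000.0          # kg/m^3
    e_prime = storage_modulus_mpa * 1e6   # Pa
    return 3.0 * rho * R * temp_k / e_prime * 1000.0  # g/mol

# Assumed example: rubbery modulus 25 MPa at T_g + 40 °C = 473 K, density 1.2 g/cm^3
print(m_between_crosslinks(1.2, 25.0, 473.0))  # ~570 g/mol between cross-links
```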
FTIR-ATR was performed using a Perkin Elmer Frontier in the mid-IR range with a diamond ATR accessory from 4000 to 650 cm−1. ATR correction was performed using the algorithm included with the accompanying Spectrum software platform to reduce spectral distortion. This technique was performed on samples before and after drying to understand the chemical interactions within the epoxy network at different immersion times.
Secondary electron micrographs were obtained using a JEOL JSM-6340F with a cold-field-emission source at an accelerating voltage of 5 kV and a working distance of 7 mm. The acid-immersed samples for imaging were prepared by quenching twice with deionized water, followed by reduced-pressure drying in a vacuum oven at 80 °C and −28.8 inHg. The samples were coated with platinum prior to imaging.
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsomega.9b00859.
Figure 6. Schematic of the experiment workflow, starting with specimen immersion in HCl for a specified duration, followed immediately by thickness, mass measurements, and dielectric analysis. 45 The specimen was returned to HCl immersion, and the monitoring cycle was repeated until disintegration.
Calculations involved in sample preparation, results to supplement the M_c calculations, photos showing sample disintegration behavior, heat capacity results from mDSC, and dielectric constant monitoring data over the duration of 10 M HCl immersion with explanation, discussions involving the effects of acid concentration on diffusion behavior and dielectric constant, core−shell swelling stress fields (PDF)
Notes
The authors declare no competing financial interest.
■ ACKNOWLEDGMENTS
This paper was supported by Temasek Laboratories@NTU and School of Materials Science and Engineering, Nanyang Technological University, Singapore.
"year": 2019,
"sha1": "29ee9edb061ba8015e852a067f7ab5b2be74dc6f",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.9b00859",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "672132540ae75cfe7d45278b6c7743aa7484838f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Predictors of Engagement and Success in a Flipped EFL Classroom in China
The present study adopted a predictive-correlational design to explore whether self-reported motivated strategies for learning and language-learning strategies of Chinese EFL learners (N = 97) were associated with the following: (a) engagement with a learning management system (LMS), (b) engagement with flipped-classroom materials, and (c) final grades in a flipped EFL course. The Motivated Strategies for Learning Questionnaire (MSLQ) and the Strategy Inventory for Language Learning (SILL) were administered to sophomore-level English writing students in a cross-border higher education setting in the Henan province of China; predictor variables were then explored for association with end-of-term criterion variables operationalized by behavioral and performance data. Results indicated that the MSLQ and SILL failed to predict engagement or success with flipped classroom materials. Instead, the amount of time learners actually spent engaging with flipped materials was significantly, positively associated with student success measured by final grades.
Introduction
With Google for Education, Bring Your Own Device, flipped classrooms, and other trends of increasingly blended learning environments, the problem of motivating students to get online to study at home still represents a central concern (Hall & Dufrene, 2016; Webb, Doman, & Pusey, 2014). Models such as flipped classrooms employing pre-class asynchronous activities rely on students doing the homework to capitalize on meaningful two-way communication and practice in the classroom (O'Flaherty & Phillips, 2015), so, as always, motivated action remains an important variable to consider whether learning happens face to face or online.
In spite of the number of studies appearing in recent years, many rely on self-reported attitudes toward flipped classrooms, with less attention being paid to how much and why or why not learners engage with pre-class flipped materials (O'Flaherty & Phillips, 2015). As a result, a gap exists in our knowledge on flipped classrooms in general, and in our knowledge of how English-as-a-foreign-language (EFL) learners' motivated behavior interacts with engagement with blended and flipped classroom materials.
The need exists to further understand relationships between self-reported motivation of EFL learners in flipped, blended classrooms and both the effort and the performance in such classes. Research that reliably predicts which learner differences lead to ideal and less-than-ideal effort and performance could benefit teachers, administrators, and researchers. For instance, teachers could learn which students remain at risk for test anxiety and lower overall motivation to learn English. Teachers may then take measures to attempt to make instruction meaningful to those particular learners, and already-motivated learners may not warrant as much concern. Meanwhile, administrators could green-light the building of courses in Chinese EFL contexts that work to instill motivation, such as smaller student-centered classrooms instead of lecture-style English classes that may decrease the motivation of Chinese EFL learners (Li, 2014). Finally, future research could focus not so much on whether flipped classrooms motivate learners but on which kinds of flipped activities work better than others. As Kim, Kim, Khera, and Getman (2014) noted:

Tasks [that] are mediated by new technologies can help minimize students' fear of failure, embarrassment, or losing face; they can raise students' motivation to take risks and be creative while using language to make meaning; and they can enable students to meet other speakers of the language in remote locations, opening up transformative exposure to authentic language environments and cultural enactments, along with tremendous additional sources of input. (p. 4)

Finally, efforts are already underway to establish official policies on e-learning in the Asia-Pacific region, including in major Chinese cities such as Beijing, Hong Kong, and others in the Taiwan region, in which policymakers are tackling issues related to infrastructure, curriculum integration, student learning, teachers' professional development, and leadership building (Kong, Chan, Huang, & Cheah, 2014).
Though criticized for lacking applicability to EFL or Global-English contexts (Dörnyei, 2009; Oxford & Sheardin, 1994; for a review, see Dörnyei, 2005), Gardner's (1985, 2001a, 2001b) theory of motivation from a socio-educational model of second language acquisition (SLA) still seems relevant to a Chinese EFL cross-border higher education (CBHE) context. It provides researchers a number of useful, testable postulations related to learner differences that impact learners' performance and behaviors that learners tend to exhibit in classrooms. The present study sought to draw from this theory to correlate and explain learners' self-reported motivated strategies for learning and language-learning strategies to see if they significantly correlated with success in a flipped EFL class.
Literature review
The following review of the literature aims to underscore both a theoretical and a practical gap in our knowledge. A look at relevant motivation theory and recent work on motivation highlights that additional room exists not just to describe Chinese EFL learners but to predict behavior in flipped EFL classrooms in Chinese CBHE contexts. Next, a look at blended and flipped classroom studies, especially those set in Asian and, specifically, Chinese contexts, uncovers a gap in our practical knowledge. Taking steps to close this gap would benefit teachers, administrators, and researchers in EFL contexts and beyond.
Integrativeness
Integrativeness is "reflected in an integrative orientation toward learning the second language, a favourable attitude toward the language community, and an openness to other groups in general (i.e., an absence of ethnocentrism)" (Gardner, 2001b, p. 8); Dörnyei (2009) proposed that, from the L2 Motivational Self System theory of motivation, Gardner's concept of Integrativeness played a cen-tral role in L2 motivation, "mediating the effects of all other attitudinal/motivational variables on the two criterion variables Language choice and Intended effort to study the L2" (p.26), and that the concept of Integrativeness proved synonymous with "the L2-specific facet of one's ideal self" (p.27).
Attitudes toward the learning situation
Attitudes toward the learning situation cover any attitude "toward any aspect of the situation in which the language is learned" (Gardner, 2001b, p. 8); learners could hold such attitudes toward any aspect of the learning environment, such as toward teachers and classroom techniques, which vary among learners and contexts over time.
Motivation
For Gardner (2001b), motivation in a socio-educational model requires the following three elements: (a) that a learner persistently and consistently "expends effort" to learn the target language, "by doing homework, by seeking out opportunities to learn, by doing extra work, and so on" (p. 8); (b) that the learner wants and desires to learn the target language successfully; and (c) that the learner enjoys learning the target language, referring to the task as "fun, a challenge, and enjoyable" (p. 8). Gardner (2001a, 2001b) also noted that only motivated learners were likely to use language-learning strategies.
Chinese learners of English
Previous research on what motivates Chinese learners of English identified the significant impact of English as a mandatory requirement in many levels of learners' educational experiences (Li, 2014; Liu, 2012; Ning & Hornby, 2013; Peng & Woodrow, 2010; Zhang & Guo, 2012; to name a few). Ruan (2014) found Chinese EFL writers' previous experiences with classroom instruction might have led to attitudes of English being "imposed" upon them and, in that way, restricting spontaneous efforts to think and communicate (Liu, 2012), more so than for Chinese learners in English-as-a-second-language (ESL) contexts (Li, 2014). Chinese learners may prove more motivated to do well in English classes than to integrate into English-speaking communities (Liu, 2012). Although cooperative learning may increase Chinese learners' intrinsic motivation (Ning & Hornby, 2013), Chinese sociocultural norms that view classroom communication as a time-wasting activity may impact Chinese EFL learners' anxiety and motivation to speak up in English class (Peng & Woodrow, 2010).
Although Chinese EFL learners with higher proficiency levels have reported lower levels of motivation (Zhang & Guo, 2012), motivation has correlated significantly with performance, with Chinese female learners tending to report higher motivation than males (Lamb, 2004; Liu, 2009, 2012; Yang, Liu, & Wu, 2010). Li (2014) concluded that Chinese EFL learners appeared more extrinsically motivated to learn English (to pass exams and to do well in class) as well as more anxious about speaking English, with "attitudes to learning English being the most important predictor variable for the EFL learners" (p. 455); in addition, anxiety in tandem with adverse social pressure negatively impacted motivated learning behavior. Teacher-centered approaches offering fewer chances for meaningful interactions in large lecture-style English classrooms in China might have resulted in EFL learners forming less favorable attitudes toward learning English and in spending less time trying to learn it (Li, 2014).
Blended learning
Among the first to do so definitively, Graham (2006) defined blended learning systems as combining two historically distinct teaching models, resulting in a model that melded face-to-face instruction with computer-mediated instruction. Blended learning constitutes an instructional mode of delivery, which replaces 20-80% of face-to-face time with online work (Harrington, 2010). Harrington (2010) called for further discussion of the potential problems related to blended learning in EFL contexts, naming fractured identity development, "forced individualism" (p. 3), and a muting or stunting of learners' academic discourse and "authorial self" (p. 5) as potential problems worthy of sustained attention.
Motivated by a lack of studies on blended learning framed by theory, Grgurović (2014) considered blended learning from the perspective of the Diffusion of Innovations theory and gathered data through interviews and observations; results included participants reporting that they saw "the value of online pronunciation, speaking, and listening activities" (p. 168). Earlier, Grgurović (2011) employed a case-study design to collect interview and observational data of ESL students in blended learning contexts; results included participants reporting that "online speaking and pronunciation activities added value to instruction because they were helpful and unique" (p. 113).
Blended learning in Asian classrooms has also received attention. Korean EFL learners self-reported that activities using wikis, blogs, and online forums in blended classrooms seemed "novel, easy, [and] fun" (Miyazoe & Anderson, 2010, p. 192). Wang (2014) pointed out the potential of deepening cultural and linguistic understanding in language classes. Still, Kang, Sung, Park, and Ahn (2009) found that low and intermediate-level EFL learners' writing performance remained resistant to improvement as a result of engagement with online blended-learning activities.
The flipped or inverted classroom
Among the first to do so, Lage, Platt, and Treglia (2000) explained that inverting the classroom has meant "events that have traditionally taken place inside the classroom now take place outside the classroom and vice versa," with the inherently student-centered delivery aiming to "provide a menu of options for the students to use in learning" (p. 32). In addition to its learner-centeredness, according to recent research, flipped classrooms may involve creativity and higher-order learning skills that promote deep learning (Sweet, Blythe, & Carpenter, 2014).
Blended learning and flipped classrooms in China
Although challenges pertain to introducing out-of-class blended learning, such as resistance from entrenched teacher-centered classroom expectations (Tham & Tham, 2013), research on blended learning in China has reported benefits, mostly regarding learners' attitudes toward the blend. Wu and Liu (2013), for instance, analyzed learners' self-reported satisfaction with blended learning in an EFL context in China, relying on survey data to determine that students mostly held positive attitudes toward blended learning. Liu (2013) also collected data from mandatory end-of-semester course evaluations from Chinese EFL learners of writing and reported that learners held positive attitudes toward the blended course.
Regarding research on flipped classrooms in China, Webb, Doman, and Pusey (2014) asked how Chinese students perceived the flip as well as how instructors viewed the approach's effectiveness.
Participants (N = 240) were observed and surveyed over a fifteen-week period while teachers kept journals (Webb, Doman, & Pusey, 2014). Results indicated that, overall, Chinese learners reported positive attitudes toward flipping the classroom while instructor-perception data highlighted challenges in motivating students to engage with the pre-class materials. In another study, Doman and Webb (2017) compared the attitudes toward the learning experience, technology use, and other learner factors of students in a flipped classroom with those of students in a more traditional face-to-face class. Results of survey and interview data indicated that students in the flipped classroom reported more positive attitudes toward the English-learning experience, and also toward technology use, than did students in the non-flipped class; this suggested, the researchers concluded, that the flipped model may be becoming increasingly applicable in China as communicative approaches predominate (Doman & Webb, 2017).
The present study seeks to extend this literature on flipped English learning in China by exploring time spent engaging with pre-class asynchronous activities in a flipped approach and how it correlates with learner variables, such as motivated behavior and use of learning strategies.
Justification for the present study
Since much of the literature listed above relied only or mostly on participants' self-reported attitudes and motivation, room exists for the present study, which took into account Gardner's (1985, 2001a, 2001b) theory of motivation from a socio-educational viewpoint of SLA, in which motivation includes the three-part aspects of (a) effort, (b) desire, and (c) joy. Researchers have long understood that survey results sometimes prove to be artifacts of the measurement (Gass & Mackey, 2012; Oller & Perkins, 1978). Accomplished researchers of L2 motivation, too, have advised against "direct self-report measures" in "actual language learning contexts" since students' responses might prove unreliable (Gardner, Lalonde, & Moorcroft, 1985). The present study measured not only reported attitudes and motivation but also the number of hours learners used the learning-management system (LMS) and engaged with weekly flipped materials, in order to see whether reported motivation differed from or correlated with observed behavior.
The present study specifically posed the following research question: Could the Motivated Strategies for Learning Questionnaire (MSLQ) or the Strategy Inventory for Language Learning (SILL) predict which Chinese EFL learners in a CBHE context would (a) use the course's LMS most often, (b) engage with flipped materials most often, and (c) earn the highest grades? Guided by Gardner's (1985, 2001a, 2001b) theory, it was hypothesized that the most motivated learners would not only report high levels of motivation but would also expend the most effort online.
Method
All interactions with participants were approved and overseen by Fort Hays State University's Institutional Review Board (IRB) and by the administration of Sias International University (郑州西亚斯学院). At all times, the basic ethical principles detailed in The Belmont Report (1979), that is, respect for persons, beneficence, and justice, guided interactions with participants and with the data they provided after granting informed consent.
In this study, individual students formed the study's unit of analysis, with two surveys operationalizing the concepts of motivation and language-learning strategy use (the study's predictor variables). The study correlated the predictor variables with the criterion variable of success in a flipped classroom, which itself was operationalized by behavioral data (time using the course's LMS and time engaging with weekly flipped materials) and by performance data (final grades in the course). Final grades in the course were determined by student performance on three month-long writing projects. The three projects were a life narrative (25%), a persuasive opinion editorial (25%), and a report on how a company brands itself in China (25%). Attendance and class participation made up the final 25% of the final grade. Each writing project was developed in class cooperatively with the instructor, underwent peer review, and was discussed face-to-face with the instructor before receiving a final grade.
This study took place at a rural, private Chinese university with a long-standing cross-border partnership with a primarily undergraduate-serving U.S. state university. Chinese students enrolled in the cross-border program earned bachelor's degrees from the U.S. university with face-to-face instruction from U.S. instructors living in China. Data were collected in the present study from students enrolled in Composition 2, a second college-level writing course that emphasized research writing and critical thinking about claims and evidence. Syllabi for the writing courses were identical to those used at the U.S. institution, although instructors were required to have received formal TESOL training.
Participants
A version of the MSLQ and the SILL measured the motivation and language-learning strategies of university students (N = 97) studying English Composition 2 in the CBHE context between a private college in Mainland China and a public university in the midwestern United States. All learners were sophomores taking two additional English courses during the semester, one of them an Oral English course that emphasized speaking and the other a content course taught by a professor from the U.S. partner. Ages ranged from 20 to 22, and all participants spoke Mandarin. Of these participants, 54 identified as female and 43 identified as male. The study invited students from five sections taught by the same instructor, who had already planned to experiment with flipping his courses for the entire semester. All participants read and signed informed-consent forms translated into Mandarin Chinese.
SILL
This study employed Rebecca Oxford's (1990) 50-item Strategy Inventory for Language Learning (SILL) to understand participants' language-learning strategy use. The SILL allows for a prompted-production, introspective method of measuring learners' cognitive processes (Gass & Mackey, 2012). It uses a 5-point scale asking participants to respond to statements. Answers in this study ranged from "strongly disagree" to "strongly agree." The SILL measures both direct and indirect language-learning strategies: Direct Strategies include (a) Memory Strategies, such as grouping, semantic mapping, and physical response; (b) Cognitive Strategies, such as formally practicing with sounds and patterns, reasoning and translating, and taking notes and making summaries; and (c) Compensation Strategies, such as guessing intelligently, switching to the mother tongue, and using circumlocution; meanwhile, Indirect Strategies include (a) Metacognitive Strategies, such as delaying speech output to focus on understanding linguistic input, organizing and setting goals, and self-evaluating; (b) Affective Strategies, such as deep breathing or using music and laughter, making positive statements, and discussing feelings in a diary or with a friend; and (c) Social Strategies, such as asking for clarification or correction, cooperating with peers and more competent learners, and developing an understanding of the target culture (Oxford, 1990).
MSLQ
To measure motivational beliefs and strategies for self-regulated learning, this study used a 2-scale, 44-item, 7-point Motivated Strategies for Learning Questionnaire (MSLQ) modified and translated into Chinese. The MSLQ was developed by Pintrich, Smith, Garcia, and McKeachie (1993) and has been used for decades in various fields, containing "robust" coefficient alphas for the motivational scales (p. 808). The Chinese version of the MSLQ measures two scales: (a) Motivation (with subscales of Self-Efficacy, Intrinsic Value, and Test Anxiety) and (b) Learning Strategy (with subscales of Cognitive Strategy Use and Self-Regulation). Since Rao and Sachs (1999) developed a Chinese version of the 44-item MSLQ (MSLQ-CV, a 5-point, 44-item instrument), based on the MSLQ developed for junior high school students (JHS MSLQ, also a 5-point, 44-item instrument), scholars have used the questionnaire with and/or modified it for Chinese participants (Lee, Yin & Zhang, 2010; Rao, Moely & Sachs, 2000; Sachs, Law, Chan & Rao, 2001), participants in Asian regions in general (Rotgans & Schmidt, 2010; Wang, 2012), and EFL learners (Chang, 2010). Because researchers using the MSLQ-CV in China found participants had trouble understanding reverse-coded items (Lee, Yin & Zhang, 2010; Rao, Moely, & Sachs, 2000; Rao & Sachs, 1999), some researchers have combined the two subscales of the learning strategy scale and created a new scale called "methods" that contained the four reverse-coded items. Lee, Zhang, and Yin (2010), however, while determining that the MSLQ-CV contained "good construct validity" (p. 151), predicted that as curriculum reforms in China continued to stress cognitive strategies, learners would eventually learn to distinguish between memorization and understanding. Building off this prediction to measure today's Chinese EFL learners, this study used an in-house translation of the MSLQ-CV that used all five subscales (three Motivation subscales and two Learning Strategy subscales), translating the 44-item instrument into Mandarin and changing it from a 5-point to a 7-point instrument that prompted participants to report to what degree a statement pertained to them (from "not at all true of me" to "very true of me").
Blackboard and Flipped-Approach Delivery
The LMS used in this study, Blackboard, also delivered the flipped-classroom materials. Weekly flipped materials (instructor-created videos describing a target concept, such as formatting essays following APA format or using the university library online, followed by an assignment to apply the concept to the students' own work in Blackboard blog or journal entries) were placed in folders. Blackboard recorded when each user opened the folder, when each user submitted weekly blog or journal entries on the target concept, and when the user closed the folder.
Data analysis
This study adopted a predictive-correlational design, meaning the variables of motivation and strategy use were examined to see whether they helped explain the presence of outcomes (Creswell, 2012), such as time spent engaging with flipped materials and the final grades students received at the end of the course. SPSS Statistics Version 21 was used to run one-way ANOVA tests to see whether motivation and strategy reporting differed among participants based on self-identified gender and on students' semester final scores (A = 100-90 points out of 100; B = 89-80 points out of 100; lower than B = 79 points and lower). Since few students scored below a C (69 points and under), grades were grouped into these three categories for efficiency of the design and for more equal-sized groupings. Next, after core assumptions were checked, t-tests tested for significant differences between participants self-identifying as female and male. Pearson correlation coefficients were then used to explore any significant relation between the predictor variables (survey data on motivation and language-learning strategies) and the criterion variable of success in the flipped classroom (how much time learners used the LMS, how much time learners engaged with flipped materials, and how many percentage points learners earned in the course). Finally, multiple regression analyses explored whether the subscales together impacted behavior and performance in the flipped EFL classroom. A sketch of an equivalent analysis pipeline is given below.
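The study itself ran these analyses in SPSS 21; the sketch below re-creates the same pipeline in Python purely for illustration. The data file and column names (gender, grade_group, lms_hours, flip_hours, final_score, and the mslq_* subscale scores) are hypothetical placeholders, not the study's actual files.

```python
# Illustrative re-creation of the SPSS analysis pipeline; all file and column
# names are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("flipped_course_data.csv")  # hypothetical data file

# One-way ANOVA: does reported self-efficacy differ across grade groups (A / B / <B)?
groups = [g["mslq_self_efficacy"].values for _, g in df.groupby("grade_group")]
f_stat, p_anova = stats.f_oneway(*groups)

# Independent-samples t-test: final scores by self-identified gender
female = df.loc[df["gender"] == "F", "final_score"]
male = df.loc[df["gender"] == "M", "final_score"]
t_stat, p_ttest = stats.ttest_ind(female, male, equal_var=False)

# Pearson correlation: hours engaging with flipped materials vs. final score
r, p_corr = stats.pearsonr(df["flip_hours"], df["final_score"])

# Multiple regression: do the MSLQ Motivation subscales jointly predict LMS use?
predictors = ["mslq_self_efficacy", "mslq_intrinsic_value", "mslq_test_anxiety"]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["lms_hours"], X).fit()

print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"t-test t = {t_stat:.2f}, p = {p_ttest:.3f}")
print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")
print(model.summary())
```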
On average, learners used the LMS 246 hours over the 15-week semester, learners engaged with weekly flipped materials 14.25 hours over the 15 weeks, and learners had a class average final score of 79.56 out of 100 possible points.
Was gender a factor?
Analyses checked to see whether participants who self-identified as female Chinese EFL learners differed from those self-identifying as male Chinese EFL learners. While independent-sample t-tests indicated no significant difference between the number of hours females and males used the LMS and engaged with flipped materials, a significant difference appeared between females' final scores (M = 82.28, SD = 5.77) and males' final scores (M = 76.14, SD = 7.16); t(80) = 4.6, p < .001. No other significant difference appeared between females and males in any of the scales or subscales of the MSLQ or SILL (p > .05).
Motivation and strategy use based on grades of A, B, and lower than B
One-way ANOVA tests separating MSLQ results by learners' grade levels (A = 100-90 points out of 100; B = 89-80 points; lower than B = 79 and under) showed a significant difference in the area of Self-Efficacy (p = .046). Post hoc comparisons were then used to identify which groups differed from one another. Specifically, post-hoc Tukey HSD tests indicated that the mean score of the less-than-B group (M = 5.03, SD = .83) proved significantly (p = .036) lower than the mean score of the B group (M = 5.40, SD = .72). The less-than-B group, however, did not differ significantly from the A group in terms of reported Self-Efficacy (p > .05).
LMS use and flip time based on grades of A, B, and lower than B
One-way ANOVA tests analyzing the final number of hours learners used the LMS and the final number of hours learners engaged with weekly flipped materials showed a highly significant difference between A students, B students, and lower-than-B students (p < .001). Post hoc comparisons using the Tukey HSD test indicated that the mean score of the A group (M = 25.83, SD = 9.63) proved significantly (p = .01) greater than that of the B group (M = 15.46, SD = 8.83) and highly significantly (p < .001) greater than that of the less-than-B group (M = 10.98, SD = 6.63).
Did the MSLQ predict engagement and success in a flipped classroom?
Pearson correlation coefficients were used to determine whether a statistically significant relation existed between learners' self-reported motivated strategy use and behavioral and performance outcomes in a flipped EFL classroom (see Table 2 below). Based on the results, Self-Efficacy correlated with final scores in the flipped classroom (r = .22, N = 97, p = .03), a small effect size according to Cohen (1988). Though the size of this coefficient may be useful simply to explore interrelations between the two variables, the coefficient cannot be used to make predictions (Creswell, 2012). Multiple regression analyses showed that neither the MSLQ's Motivation scale (including Self-Efficacy, Intrinsic Value, and Test Anxiety) nor the MSLQ's Strategy scale (including Cognitive Strategy Use and Self-Regulation) predicted how much learners used the LMS (p > .05), how much learners engaged with flipped materials (p > .05), or how well learners scored in the flipped class (p > .05).
Pearson correlation coefficients, however, did indicate significant correlations when looking at final grades by grade group (A, B, and less than B) instead of final score by average. A significant positive correlation appeared between the number of hours using the LMS and grade group (r = .20, N = 97, p = .048), indicating that as one variable increased, the other variable also tended to increase. In addition, Pearson correlation coefficients indicated a significant correlation between the number of hours learners engaged with flipped classroom materials and final scores separated by letter grade (r = .40, N = 97, p < .001), indicating that as engagement increased, the grade group also tended to increase. The effect size here was medium (Cohen, 1988). Earning an A, then, correlated with more use of the LMS and more engagement with flipped materials.
In conclusion, a statistically significant relationship appeared between the number of hours learners engaged with flipped classroom materials and learners' final average scores in the class (r = .36, N = 97, p < .001) (see Table 2 above), with a medium effect size (Cohen, 1988). In addition, A grades correlated significantly with more LMS use and more flipped-material engagement. One-way ANOVA tests separating SILL results by learners' scores (A, B, and lower than B) showed no significant difference in reported language-learning strategy use (p > .05). Overall, the MSLQ did not provide any statistically significant prediction between the predictor variables of motivation and strategy use and the criterion variables operationalized by LMS use, flipped-materials engagement, and final scores among Chinese EFL learners in a CBHE context.
Did the SILL predict engagement and success in a flipped classroom?
Pearson correlation coefficients were used to determine whether a statistically significant relation existed between learners' self-reported language-learning strategy use and behavioral and performance outcomes in a flipped EFL classroom (see Table 3 below). Based on the results, no significant correlation appeared between self-reported usage of language-learning strategies and the criterion variable (LMS use, flipped-materials use, final score). Multiple regression analyses showed that neither the SILL's Direct Strategy scale (Memory, Cognitive, and Compensation strategies) (p > .05) nor the SILL's Indirect Strategy scale (Metacognitive, Affective, and Social strategies) (p > .05) significantly correlated with how much learners used the LMS, how much learners engaged with flipped materials, or how well learners scored in the flipped class.
Discussion
It was hypothesized that, based on Gardner's (1985, 2001a, 2001b) theory of motivation from a socio-educational viewpoint of SLA, the most motivated learners not only would report high levels of motivation but also would expend the most effort online. Results in this study failed to reject the null hypothesis regarding the first half of the hypothesis. Neither the MSLQ nor the SILL predicted whether learners succeeded in the flipped course. It may be that Chinese EFL learners eschew reporting negative feelings toward the learning situation or the teacher. In some earlier studies, students in China have been shown to seemingly avoid questioning or appearing to challenge teachers or researchers in surveys (Baskir, 2012; Chan, 1999). At any rate, the findings here illustrate why studies on blended learning and flipped classrooms also need to measure how much time learners are logging into and using online materials at home. One-way ANOVA results in this study showed that expended effort differed significantly among the A, B, and lower-than-B groups. What learners reported, then, differed from actual behavior.
Statistically significant findings did appear, however. Though participants self-identifying as female and male did not exhibit significant differences in (a) MSLQ variables, (b) SILL variables, (c) number of hours using the LMS, or (d) number of hours engaging with flipped materials, female participants achieved significantly higher final scores in the flipped classroom than did males. This result differs from previous findings that female Chinese EFL learners tended to report higher motivation than males (Lamb, 2004; Liu, 2009, 2012; Yang, Liu, & Wu, 2010). In addition, significant differences appeared when analyzing participants by grade level, with A learners using the LMS and engaging with flipped materials much more than B and less-than-B learners. Finally, a statistically significant correlation appeared between the number of hours learners engaged with weekly flipped materials and final scores in the flipped writing class.
SILL results in this study both matched and contradicted earlier results from Chinese EFL learners. Adapting the SILL for use in a Chinese EFL context, Rao (2005) found learners tended to be highly motivated, to encourage themselves, to rely on teacher authority and direction in studying, and to prefer repeating and reviewing words and concepts over using them. In the present study, learners reported using all the language-learning strategies at a high range level (3.5-5). The findings here showed that although this group preferred metacognitive strategies (4.05), just as Li and Qin (2006) found, the findings also differ from Li and Qin's (2006) finding that Chinese EFL learners tended to be introverted and to avoid social strategies. In this study, reported social strategy use (4.03) only nominally differed from reported metacognitive strategy use (4.04). In addition, Nisbet, Tindall, and Arroyo (2005) found Chinese EFL learners used metacognitive, cognitive, and social strategies in a high range (3.5-5) and affective, memory, and compensation strategies in the medium range (2.4-3.4), with an overall strategy score of 4.5, which indicated an overall medium-to-high strategy use (Results section, para. 1). The present study, however, showed that learners reported high strategy use in all areas, with a lower overall strategy mean of 3.89. Nisbet, Tindall, and Arroyo (2005) also found that metacognitive strategy use related significantly to proficiency levels. The data here, however, failed to show significant relationships in this area.
MSLQ results in this study likewise matched and differed from those of earlier studies. First, a highly significant correlation was found between the subscales of cognitive strategies and self-regulation (r = .65, N = 97, p < .01), which matches previous findings of MSLQ use in China (Lee, Zhang, & Yin, 2010; Rao & Sachs, 1999; Sachs, Law, & Chan, 2002). Lee, Zhang, and Yin (2010) predicted that, under current curriculum reform that emphasizes cognitive strategies, Chinese learners would learn to differentiate between the cognitive-strategies subscale and the self-regulation subscale, but results here cannot yet confirm this prediction. Previous research also showed no highly significant relations between test anxiety and (a) self-efficacy, (b) intrinsic value, and (c) cognitive strategies (Lee, Zhang, & Yin, 2010); the findings from this study nearly match these results, except that a significant relation did in fact appear between test anxiety and self-efficacy (r = -.32, N = 97, p < .01). The relationship, however, proved only a slight one, of little value for prediction-design studies (Creswell, 2012).
Other findings relevant to previous research include the lower-than-B group not differing significantly from the A group in terms of reported self-efficacy, even while the B group and the less-than-B group differed significantly. This seems similar to earlier findings that Chinese EFL learners with higher proficiency levels tended to report lower levels of motivation (Zhang & Guo, 2012).
The present study sought to extend the literature on flipped EFL classrooms in China (e.g., Doman & Webb, 2017; Webb, Doman, & Pusey, 2014) by exploring time spent engaging with pre-class asynchronous activities in a flipped approach and how, if at all, it correlated with learner variables, such as motivated behavior and learning strategies. Webb, Doman, and Pusey (2014) had reported challenges in motivating students to engage with the pre-class materials. The present study extends this work by suggesting that pre-course-enrollment factors, such as students' strategy use and motivation, may not reliably predict pre-class material engagement. This indicates that factors outside of language-learning factors may be impacting this key requirement: that students arrive at a flipped classroom prepared. Additionally, this study further supports Doman and Webb's (2017) conclusion that the flipped model may be becoming increasingly applicable in China as communicative approaches predominate.
Limitations and future research
The present study had limitations. Quantitative designs alone cannot fully investigate central phenomena. Future research can avoid this weakness by following up with qualitative data gathering, perhaps to investigate why reported motivation or attitudes might differ from actual usage of flipped materials online. Additional studies need to be conducted based on the results of the present research, paying careful attention to the capacity of an LMS and other modes of online learning to record learners' online behavior. In addition, studies guided by Gardner's (1985, 2001a, 2001b) theory of motivation should pay attention not only to reported attitudes but also to expended effort.
Researchers have also pointed out a dearth of research on the effective design of flipped classrooms; they argued that flipped classrooms should do the following: (a) expose learners to concepts before class; (b) motivate students to prepare for class; (c) assess comprehension; (d) link at-home work with in-class activities; (e) provide clear, structured guidance; (f) allow enough time for learners to finish assignments; (g) build a community of learners; (h) deliver prompt and adaptive feedback during individual and group work; and (i) involve easy-to-use, familiar technologies (pp. 44-46). González-Lloret and Ortega (2014), too, argued for the substantial benefits that technology-mediated task-based learning affords students.
"year": 2020,
"sha1": "ff602c7568f039d2a59cea67a42820321bd8f50f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.56040/juni1724",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ee8ac27a09a5a0be7f0e600c7e448501fcb3e996",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
Full-counting statistics of charge and spin transport in the transient regime: A nonequilibrium Green's function approach
We report an investigation of the full-counting statistics (FCS) of transferred charge and spin in the transient regime, where the connection between the central scattering region (quantum dot) and the leads is turned on at $t=0$. A general theoretical formulation for the generating function (GF) is presented using a nonequilibrium Green's function approach for the quantum dot system. In particular, we give a detailed derivation of how to use the method of path integrals together with the nonequilibrium Green's function technique to obtain the GF of FCS in electron transport systems based on the two-time quantum measurement scheme. The correct long-time limit of the formalism, the Levitov-Lesovik formula, is obtained. This formalism can be generalized to account for spin transport in systems with noncollinear spin as well as spin-orbit interaction. As an example, we have calculated the GF of the spin-polarized transferred charge, the transferred spin, and the spin transfer torque for a magnetic tunneling junction in the transient regime. The GF is compactly expressed as a functional determinant in terms of Green's functions and self-energies in the time domain. With this formalism, FCS in spintronics in the transient regime can be studied. We also extend the formalism to the quantum point contact system. For numerical results, we calculate the GF and various cumulants of a double quantum dot system connected to two leads in the transient regime. The signature of universal oscillations of FCS is identified. On top of the global oscillation, local oscillations are found in various cumulants as a result of the Rabi oscillation. Finally, the influence of temperature is also examined.
I. INTRODUCTION
A stochastic process can be characterized by its distribution function. In many cases, the distribution function of a physical quantity is Gaussian and hence only two variables are enough to describe the distribution: its average and second cumulant. Due to the particle nature of electrons and quantum effects, the electron noise spectrum is an intrinsic property that manifests in mesoscopic systems. 1 It was predicted theoretically that the distribution of the electron current is binomial, suggesting that all cumulants of the current have to be included in order to fully characterize the electronic quantum transport process. 2,3 The full counting statistics (FCS) is an elegant way to study current correlations in mesoscopic systems and yields not only the noise but all higher-order cumulants. 4 It calculates the probability distribution function of the number of electrons transferred through a particular terminal during a given period of time, which contains fundamental information about the current fluctuations in the system. 5 The current and its fluctuations in mesoscopic systems have been studied extensively and are very important for characterizing the physical mechanisms and correlations of quantum transport systems. 1 For instance, the effective charge of quasi-particles can be determined from shot noise measurements in the fractional quantum Hall effect. 6 The cross-current correlation can reveal statistical information such as whether the quasi-particle is Fermionic or Bosonic. The study of correlations of entangled electrons can be valuable in quantum information processing. 7 A deep relationship has been found between entanglement and noise in terms of FCS, providing a new framework for quantum entanglement. 8 Furthermore, the equivalence between the fidelity of quantum systems and the generating function for FCS provides a link between the fields of quantum transport and quantum information. 9 In addition, the measurement of cumulants to very high orders has been carried out experimentally for electronic transport in quantum point contact systems. [10][11][12] So far, extensive investigation has been carried out on the FCS of charge transport, while less attention has been paid to the FCS of spin transport. It is the purpose of this paper to address this problem.
The key to FCS is to obtain the generating function (GF), from which the probability distribution P(n, t) and all cumulants are calculated. 5 The GF can be calculated in various ways. Using a gedanken experiment scheme of a "charge counter" in the form of spin precession, Levitov and Lesovik 2-4 gave an analytical expression for the GF in the long-time limit, which can be generalized to a general quantum mechanical variable. 13 The GF has been obtained using the first quantization method, 14 which can be used to study the FCS of dc and ac transport. 15,16 Using the non-equilibrium Green's function (NEGF) 17,18 and path integral (PI) method in the two-time quantum measurement scheme, [19][20][21][22][23] the GF has been calculated to study the FCS of phonon transport [24][25][26][27] and electric transport. 28 In this paper, we generalize the existing formalism of FCS of charge transport in the two-measurement scheme to spin transport in the transient regime. In particular, we obtain GFs for the spin-polarized charge current, spin current, and spin transfer torque in the transient regime for a magnetic tunneling junction, where the spin index is not a good quantum number. We have also extended this NEGF-PI method to quantum point contact systems for charge transport. As an application of this formalism, numerical results are given for the FCS of charge transport in the transient regime for a double quantum dot system.
The paper is organized as follows. In Section II, we give the basic definitions of quantities needed in studying FCS. In Section III, which is the central part of this paper, we present details on how to use the method of path integrals together with NEGF to calculate the GF of FCS for a lead-QD-lead system based on the two-time quantum measurement scheme. This formalism is designed for transient dynamics. The generalization of this formalism to spintronics in the transient regime is provided in Section IV, where we use the magnetic tunnel junction (MTJ) as an example. The GFs for spin-polarized charge transport, spin transport, and spin transfer torque for the MTJ are calculated. In Section V, we generalize the formalism to the quantum point contact system. Section VI is devoted to numerical results, where we apply the formalism to calculate various cumulants of the transferred charge for a double quantum dot system. Finally, concluding remarks are made in Section VII.
II. STATISTICS
The most important quantity in FCS is the GF, from which various quantities of interest can be obtained. In general, the GF is denoted as Z(λ, t), where λ is the counting field. The GF is defined as the Fourier transform of the probability distribution P(∆n, t) of the number of transferred electrons ∆n = n_t − n_0, which can be obtained from the two-time quantum measurement scheme between time t_0 = 0 and t_0 = t, 23

Z(λ, t) = Σ_{∆n} P(∆n, t) e^{iλ∆n},

where ∆n can be either positive or negative. Various moments of the transferred charge ⟨(∆n)^j⟩ can be obtained by expanding Z(λ, t) in powers of λ,

Z(λ, t) = Σ_j (iλ)^j ⟨(∆n)^j⟩ / j!.

The jth cumulant ⟨⟨(∆n)^j⟩⟩ can be calculated by taking the jth derivative of the cumulant generating function (CGF), the logarithm of the GF, with respect to λ at λ = 0:

⟨⟨(∆n)^j⟩⟩ = ∂^j ln Z(λ, t) / ∂(iλ)^j |_{λ=0}.

It is well known that cumulants can be expressed in terms of moments. For instance, the first cumulant (mean value) is ⟨⟨∆n⟩⟩ = ⟨∆n⟩, the second cumulant (variance) is ⟨⟨(∆n)^2⟩⟩ = ⟨(∆n)^2⟩ − ⟨∆n⟩^2, and the third cumulant (skewness) is ⟨⟨(∆n)^3⟩⟩ = ⟨(∆n − ⟨∆n⟩)^3⟩. With the GF, the distribution function for the number of electrons P(∆n, t) can be recovered by the inverse Fourier transform. In particular, the idle-time probability, the probability that no electron has been transferred up to time t, is

Π(t) = P(∆n = 0, t) = (1/2π) ∫_{−π}^{π} dλ Z(λ, t),

from which we can calculate the waiting-time distribution for the electronic transport system in the transient regime. 28

Now let us turn to the discussion of the waiting-time distribution (WTD). In dc steady-state transport, the WTD can be obtained from the idle-time probability, 15 and it depends only on the elapsed time t because of time-translational invariance in the dc (steady-state) case. In the presence of an ac bias, averaging over a driving period is needed so that the WTD again depends only on t. 16 However, in the transient transport regime, time-translational invariance does not exist and there is also no time periodicity as in the ac case. As discussed in detail in Ref. 28, in the transient regime we ask how long we must wait for the detection of the first transferred electron if we set t_0 = 0 as the starting point. We will use W_1 to denote the WTD in the transient regime. 5,28
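Because Z(λ, t) is a Fourier series in λ with integer-valued ∆n, the distribution P(∆n, t) and the low-order cumulants can be recovered numerically once Z(λ, t) is known on a grid of counting fields. The short sketch below illustrates the bookkeeping only; the function gf() is a stand-in toy generating function (binomial statistics), not the Green's-function expression derived later in this paper.

```python
# Minimal sketch: recover P(dn, t) and the first cumulants from a sampled Z(lambda, t).
# gf() is a placeholder generating function (a binomial toy model), not the NEGF result.
import numpy as np

def gf(lam, p=0.3, attempts=40):
    # Toy Z(lambda) = (1 - p + p*exp(i*lambda))**attempts  (binomial statistics)
    return (1.0 - p + p * np.exp(1j * lam)) ** attempts

# P(dn) = (1/2pi) * integral_{-pi}^{pi} dlam Z(lam) exp(-i*lam*dn)
n_lam = 2 ** 10
lam = 2.0 * np.pi * np.arange(n_lam) / n_lam           # grid on [0, 2pi)
z = gf(lam)
p_dn = np.fft.fft(z) / n_lam                            # index k corresponds to dn = k
print("idle-time probability P(0):", p_dn[0].real)      # (negative dn wrap to the top of the array)

# Cumulants from numerical derivatives of ln Z at lambda = 0
dl = 1e-3
lnz = np.log(gf(np.array([-2 * dl, -dl, 0.0, dl, 2 * dl])))
c1 = (lnz[3] - lnz[1]) / (2 * dl) / 1j                  # d lnZ / d(i*lambda)
c2 = (lnz[3] - 2 * lnz[2] + lnz[1]) / dl**2 / (1j**2)   # d^2 lnZ / d(i*lambda)^2
print("mean:", c1.real, " variance:", c2.real)
```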
III. MODEL AND GENERATING FUNCTION

a. Two-time Quantum Measurement
We consider a quantum dot, denoted by S, connected to the left and right leads. The full Hamiltonian of the whole system can be written as H = H_0 + H_T, where H_0 consists of the Hamiltonians of the isolated leads and the isolated central quantum dot, and where we use the index kα to label the states of lead α = L, R and the index n for those of the quantum dot S.
Here ǫ_kα denotes the energy levels in lead α, ∆_α is the external bias voltage applied to lead α, ǫ_n denotes the energy levels of the quantum dot, and H_T is the Hamiltonian describing the coupling between the two leads and the quantum dot, with coupling constant t_kαn.
Obviously we have P_0^2 = P_0 and Σ_{n_0} P_0 = 1, and similar relations hold for P_t.
After the first measurement at time 0, the wave function becomes P_0|Ψ_0⟩, with the probability of finding this state equal to ⟨Ψ_0|P_0^2|Ψ_0⟩. After a time interval t, this state evolves to the new state U(t, 0)P_0|Ψ_0⟩. After the second measurement at time t, which yields the eigenvalue n_t, the wave function becomes |Ψ_t⟩ = P_t U(t, 0)P_0|Ψ_0⟩, where P_t = |n_t⟩⟨n_t|.
Assuming that the initial state is a mixed state with the density operator, we find the joint probability to have measured n 0 electrons at time 0 and n t electrons at time t, Keep in mind that we should add a normalization constant to the joint probability and the GF, Eq. (22). We will normalize the GF when we come to the final result and use the fact that Z(λ = 0) = 1. The probability distribution for the number of electrons ∆n = n t − n 0 measured between two measurements is given by where δ(n) is the Kronecker δ symbol. Using Eq. (13), we have n 0 P 0 = N L (0)P 0 and n t P t = N L (0)P t . The GF associated with the probability P (∆n) is 19,29 Z(λ, t) ≡ Tr[e iλNL(0) P 0 ρ(0)P 0 U (0, t)e −iλNL(t) P t U (t, 0)] where P t disappears after the summation over n t and the prime indicates that the average is with respect to To remove the projection operator P 0 , we represent it using Kronecker delta function then we can easily express ρ ′ (0) in an integral form, Using Eq. (17) and Eq. (20), we express the GF as follows, with where with H γ (t) = e iγNL(0) H(t)e −iγNL(0) .
As mentioned before, the anti-time ordering operator should be used here if t < t ′ .
Since U −λ/2−ξ (t, 0) is from 0 to t and U λ/2−ξ (0, t) is from t to 0, we can use the Keldysh contour as shown in Fig. (2) to combine U λ/2−ξ (0, t)U −λ/2−ξ (t, 0), where for the upper branch of the Keldysh contour and for the lower branch and θ(t) is the step function due to the fact that the first measurement starts at t = 0. Note that for a time t in the upper branch and a time t ′ in the lower branch, we always have t < t ′ . In terms of Keldysh contour, we can express Z(λ, ξ, t) as: where T C is the contour-ordering operator on Keldysh contour which has upper and lower branches discussed above. Noticing the fact that N L (0) commutes with every term except the coupling term H LS in Eq.(8) and from the Baker-Hausdorff lemma we have e iγNL(0) c kL e −iγNL(0) = e −iγNL(0) c kL , we obtain Note that in the modified Hamiltonian the counting field γ only enters the coupling term between the central quantum dot and the L-lead where we count the number of electrons.
Consider a system where the interaction between the quantum dot and the two leads is adiabatically switched on from t = −∞ to t = 0, the non-equilibrium state ρ(0) can be obtained by evolving the system from the initially decoupled state ρ(−∞) = ρ L ⊗ ρ S ⊗ ρ R at t = −∞. This process can be described by We can rewrite Eq. (22) as Similarly, in terms of Keldysh contour, we can express Z(λ, ξ, t) as: where we have used 'K' to denote the contour, from t 0 = −∞ to t and then back to t 0 = −∞, for this adiabatic process. In contrast, t 0 is 0 in the previous contour 'C'. In general, we can discuss the following two initial conditions. 25 (1) Measurement Regime. The system starts at t = −∞ with the three different regions (L,R,S) disconnected. The coupling between them and the dc bias voltage are switched on adiabatically after t = −∞ and the system evolves to steady state up to time t = 0. This is the dc transport regime and the current is independent of time in the steady state. In this case we introduce projector P 0 to take the first measurement. We point out that the measurement will perturb the system. 4 As a result, the current isn't constant in the long time limit. This measurement is mathematically done by simply introducing a parameter ξ in Eq. (19).
(2) Transient regime. In this regime, the coupling between the leads and the quantum dot is switched on at t 0 = 0 + . Then ρ(0) is the product of initial states of decoupled subsystems ρ(−∞) = ρ L ⊗ ρ S ⊗ ρ R . We will see later that the above parameter ξ will not appear under this regime. Obviously, the contour 'C' should be used in the transient regime.
b. Keldysh Formalism
Now we introduce the Keldysh formalism 18,30,31 to derive GF. For this purpose it is convenient to use the Grassmann algebra whose basic knowledge is presented in Appendix A. We divide the Keldysh contour from t = −∞ to t and then back to −∞ in Eq. (31) into 2N equal time interval δt, such that t 1 = t 2N = −∞ and t N = t N +1 = t. We will use the relation of overcompleteness of the Fermion coherent state Eq. (A15) and insert it at each time slice i = 1, 2, ..., 2N along the contour. 30,31 It is important to note that the Grassmann fields φ and φ are completely independent fields Introducing the abbreviation for evolution operator over δt, U (δt j ) ≡ U (t 0 + jδt, t 0 + (j − 1)δt) and using Eq. (A13), we find ( = 1): where the δt j = +δt indicates the forward-time branch and δt j = −δt is for the backward-time branch and we use the index kα to label the states of the lead α and the index n the quantum dot. Remember that Substituting Eq. (33) into Eq. (32) and using Eq. (A16) of the trace formula expressed in coherent states, we obtain the GF with the action where the term iφ (j+1)x φ (j+1)x /δt j in Eq.(35) comes from the relation of over completeness of Fermion coherent states, Eq. (A15). The term iφ (j+1)x φ (j)x /δt j in the above equation that contains two time indices is due to φ j+1 |φ j in Eq.(33) after using Eq. (A12). To avoid integration along the closed time contour, we split the Grassmann field into upper and lower branches of the contour, respectively. 31 Here, we use + and − to differentiate the upper and lower branches. Setting N → ∞ and δt j → 0, we can obtain the continuous expression for the action The last term −ρ(−∞) in Eq. (35) is responsible for the boundary condition at the −∞ to connect the upper and lower branch and this will be easily seen in Eq. (39) later. 31 Now we want to express Eq. (36) in terms of Keldysh Green's function. To do that, we consider the free action of the quantum dot or the leads in the absence of coupling between them or external fields in Eq. (35) where g −1 jj ′ has double time indices. From the basic property of the Gaussian integral for Grassmann algebra we have From Eq. (37) we can write the matrix ig −1 jj ′ in the following form (when N = 3) where h ± ≡ 1 ∓ iǫδ t . As shown in Ref. 30 and 31, we can get the discrete form Green's function of the free quantum dot or the lead by inverting the matrix in Eq. (39). The continuous version of the Green's function can be obtained by taking the N → ∞ limit while keeping N δ t constant and also (h + h − ) N → 1. Then the four correlation functions in the continuum limit are 31 where n F = ρ/(1 + ρ) is the Fermi occupation number. Now we perform the Keldysh rotation. Define the new fields as: whereas "bar" fields transform differently: The effect of this rotation is to transform the matrix form of contour-ordered function A into an upper triangular matrix as follows: with the following relation: where Q = 1 and A a (t, t ′ ) are respectively the usual retarded and advanced Green's function. For Green's functions or self-energies without counting parameter or other parameters involved, we have With the Green's function just defined, we can rewrite the action of Eq. (36) after Keldysh rotation as follows: where we have introduced and the abbreviated notation Here the Green's function in Keldysh formalism is given by where We point out that the coupling coefficients t kLn , t * kLn , t kRn , t * kRn can also depend on τ in Eq. (46). Now we write Eq. 
(46) in a matrix form where we have used the notation Φ (51) Note that the matrix M(τ, τ ′ ) contains both time space and orbital space. t kαn and t * kαn are diagonal matrices in Keldysh space. The upper bound for τ and τ ′ should be t, at which we take the second measurement.
Using functional integration of the Gaussian integral for independent Grassmann fields described by Eq. (A11) and taking into the normalization condition Z(λ = 0, t) = 1 and the fact Λ(λ = 0) = 1 into consideration, we can express the GF as follows, Defining the diagonal matrix we have and A = 1. Here the summation on repeated indices is implied. Using the identity and , we obtain the normalized generating function in a compact form where the determinant can be calculated in discretized time slice and real space grid. In the above equation, we have introduced the following notation: where G is the Green's function of the quantum dot and g = g nn ′ (τ, τ ′ ) denotes the Green's function of the isolated quantum dot, and where Λ(γ(τ ′ )) and Λ * (γ(τ )) is defined in Eq.(48) and the Green's function and self-energy are written in the Keldysh space in time domain. We can see that the counting field only appears in the self-energy of the left lead in which we count the numbers of the electrons. When λ = ξ = 0, we have Σ L = Σ L .
The Green's function G satisfies the Dyson equation defined on the Keldysh contour from −∞ to t and then back to −∞ with the following relation (for transient regime we should replace −∞ with 0): where Σ(τ 1 , τ 2 ) = Σ L (τ 1 , τ 2 ) + Σ R (τ 1 , τ 2 ). We can write it explicitly as follows G r,a = g r,a + g r,a Σ r,a G r,a We point out that if we want to investigate the current correlation between the left and right lead, we should introduce two counting parameters λ L , λ R , one for the selfenergy of the left lead and another for the right lead, and calculate GF with two counting parameters Z(λ L , λ R , t). For instance, we have n L n R = ∂ 2 Z(λL,λR,t) ∂(iλL)∂(iλR) λL=λR=0 . We can also generalize the GF to systems with multiple leads.
The self-energy Σ L (τ, τ ′ ) in the presence of the counting field should be calculated separately at four different time regimes. We find from Eq. (48) and (61) and when 0 < τ < t, −∞ < τ ′ < 0, we can write Σ L (τ, τ ′ ) as (Σ a L = 0): and when 0 < τ, τ ′ < t, Finally, when −∞ < τ ′ , τ < 0, we have λ = 0 and Σ L (τ, τ ′ ) = Σ L (τ, τ ′ ). We can see that in transient regime, we only have the case 0 < τ, τ ′ < t, and the parameter ξ doesn't appear. Now we turn to the cumulants of transferred electrons between t 0 = 0 and time t and current of transient regime. In the transient regime, using the relation ln det Ω = Tr ln Ω we can write the CGF as where I is identity matrix and M is given by 8 Taking the derivative of the CGF with respect to λ and using the relation Tr ln(I − Ω) = − j=1 Ω j /j, we can get various cumulants from Eq. (3). The first cumulant, the mean number of transferred charge, can be expressed as follows which can be written in a more compact form: where the trace is over both time space and real space. Similarly, the charge-charge correlation (the second cumulant) is found to be 28 From ∆n L (t) = t 0 I L (τ )dτ , we find the current at time t, The current here is quite different from Cini's approach (the partition free approach), where the coupling between leads and the central quantum dot is turned on in the infinite past while the bias is applied at t 0 = 0. [33][34][35] In our approach, both the coupling and the bias are turned on at t 0 = 0. It is not difficult to prove that we obtain exactly the same expression for the current as in Eq.(72) in the measurement regime where two measurements are performed in the dc case. However, the second and higher cumulants in the measurement regime are not the same as that of the transient regime. This confirms the fact that the first measurement does perturb the system and therefore the current under dc bias is not a constant after the measurement. Similar behavior has been found previously in the case of phonon transport. 24,25 We can derive the long-time behavior of the generating function which recovers the famous Levitov-Lesovik formula. 3,4,36,37 . This has been discussed in detail in the papers of M. Esposito et al. 19 and Agarwalla et al.. 25 For completeness of this paper, we just give a brief summary here about how to get the long-time limit from the FCS in the transient regime. For convenience we assume that we switch on the interaction between the subsystems at −t/2 and we are interested in the behavior between time −t/2 and t/2. When t → ∞, the interval becomes (−∞, ∞), and the Green's function and the self-energy (73) Using the relation Σ r − Σ a = −iΓ and Σ < α = iΓ α f α , and Eq.(56) we obtain the CGF in the steady state limit with the transmission coefficient for the quantum dot T (ω) = G r Γ L G a Γ R . Next we get the current generating function S s (λ) which is the celebrated Levitov-Lesovik formula. Taking the derivative of the current generating function with respect to λ at λ = 0, we get the current of the steady state in the long time limit which is the Landauer-Buttiker formula. 38 Finally, we wish to emphasize that the formalism discussed here cannot be used to study the short time full counting statistics in dc steady state quantum transport since the first measurement is not non-invasive. A formalism of short time FCS in dc steady state within nonequilibrium Green's function formalism is still unknown.
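For reference, the long-time results invoked above can be stated compactly. The LaTeX block below is a restatement of the standard Levitov-Lesovik and Landauer-Büttiker expressions (with ħ = q = 1, f_α the Fermi distribution of lead α, and T(ω) the transmission coefficient defined in the text); it is quoted from the general literature rather than re-derived here, so the precise prefactors should be checked against the equations of this section.

```latex
\begin{align}
\lim_{t\to\infty}\frac{\ln Z(\lambda,t)}{t}
  &= \int\!\frac{d\omega}{2\pi}\,
     \ln\Big\{1+T(\omega)\big[f_L(1-f_R)\,(e^{i\lambda}-1)
                             +f_R(1-f_L)\,(e^{-i\lambda}-1)\big]\Big\},\\
\langle I\rangle
  &= \int\!\frac{d\omega}{2\pi}\,T(\omega)\,\big[f_L(\omega)-f_R(\omega)\big],
\qquad T(\omega)=\mathrm{Tr}\big[G^{r}\Gamma_{L}G^{a}\Gamma_{R}\big].
\end{align}
```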
IV. GENERALIZATION TO MAGNETIC TUNNEL JUNCTION
In this section, we generalize the formalism discussed above to FCS in spintronics. As an example we study a magnetic tunnel junction (MTJ) in which the left and right ferromagnetic leads are coupled to the nonmagnetic scattering region which is a quantum dot. The magnetic moment M of the left lead is along the z−axis, while the magnetic moment of the right lead is at an angle of θ to the z−axis, which is along the z ′ axis (such that the coordinate system x ′ y ′ z ′ is obtained by rotating the coordinate system xyz by an angle θ along the y direction), the electric current flows in the y−direction. The relative orientation of the magnetizations (parallel or anti-parallel) in the two electrodes will induce an interesting phenomena in this junction, which is the tunnel magnetoresistance (TMR) effect. [39][40][41] Here we present a formalism which is suitable to study the FCS of transient behaviors in MTJ. Treating xyz coordinate system as the frame of reference, the Hamiltonian of the whole system reads where H L and H R describe the Hamiltonian of the left and right lead , H dot describes the non-magnetic scattering region (quantum dot), H T is the Hamiltonian that models the coupling between leads and the quantum dot with hopping matrix.
In these representations, α represents L or R, C † kασ (with σ =↑, ↓ or ±1, andσ = −σ) is the creation operator of electrons at energy level k with spin index σ inside L-or R-lead. Similarly, C † nσ is the creation operator of electrons at energy level n with spin index σ inside the quantum dot.
To diagonalize the Hamiltonian of the right lead, we apply the following Bogoliubov transformation to the creation and annihilation operator of the right lead, 42 where we have used the abbreviation C kα = C kR↑ C kR↓ , c kα = c kR↑ c kR↓ and R = cos θ 2 − sin θ where we used the abbreviation c kα = c kα↑ c kα↓ , c n = c n↑ c n↓ and T kαn = t kαn R † in the expression of H T .
From now on, we use capital cases C † kα , C kα , C † n , C n to denote the creation and annihilation operators of the leads and quantum dot before Bogoliubov transformation while use c † kα , c kα , c † n , c n to denote the creation and annihilation operators after the transformation.
FCS of transferred charge with a particular spin direction
Here we count the number of electrons with spin-up and spin-down in z direction (L-lead) separately in the transient regime. For convenience, we just consider the spin-up case and the case for spin-down is self-evident. As was demonstrated in the last section that the counting field just enters the coupling term between quantum dot and the particular lead so the modified Hamiltonian H γ with regard to the spin up number operatorN can be written as follows, Since we are working in the xyz coordinate system, we have the modified self-energy of the left lead due to the fact that in the transient regime, Λ doesn't depend on time and the parameter ξ disappears. The normalized GF can be written as: where The Green's function g is for the diagonalized Hamiltonian of the central quantum dot, R and R † act on the spin space. Similar expression of GF can be obtained for spin down electrons of the left lead by modification of Eq.(84). If we are interested in the statistical behaviors of the right lead we should use x ′ y ′ z ′ coordinate system as the frame of reference. Similar to case of the left lead, if we count the number of electrons with spin-up in z ′ direction k c † kR↑ c kR↑ , the normalized GF can be written as: where Here we point out that the GF for the spin up electrons k C † kR↑ C kR↑ in the z direction of the right lead is totally different from that of z ′ . For the spin up electrons k C † kR↑ C kR↑ in the z direction of the right lead, the corresponding modified Hamiltonian is witht kRn = e iγ t kRn 0 0 t kRn . After Keldysh rotation, t kRn becomest kRn = t kRn Λ 0 0 t kRn . Because of this we have
FCS of transferred charge current and spin current
We know that the total charge current operator through lead α isÎ while the spin current operator should bê withÎ ασ = q dNασ dt ,N ασ = kĉ † kασĉ kασ and we can set = q = 1 here. The modified self-energy in the GF of the number of total charge transferred in the lead α is (when α = L(R) we consider z (z ′ )direction) and modified self-energy in the GF of the total spin transferred with short notationΛ = exp − iλ 4 σ x . 43 Note that GF for the total transferred charge (or total transferred spin) Z = Z α↑ Z α↓ since the statistics for spin up and spin down transferred electrons are not independent of each other because of the presence of spin flip mechanism. Hence we cannot directly use the GF for the spin up and spin down to obtain the GF for the statistics of the total transferred charge or spin.
FCS of spin transfer torque
The total spin torque operator can be derived from the total spin along the x ′ direction in the right ferromagnetic electrode, [44][45][46] (95) The spin transfer torque operator iŝ Comparing with Eq.(29) for the case of number of transferred charges, we can easily write down the normalized GF for the total spin (whose time derivative is spin transfer torque) as follows, where and Here we point out that Ξ 1 , Ξ * 1 , Ξ 2 , Ξ * 2 act on the Keldysh space while R, R † ,R act on the spin space of self-energy in the GF.
V. QUANTUM POINT CONTACT
In this section, we extend the formalism further to the quantum point contact (QPC) system, which is among the simplest mesoscopic systems and whose transport properties have been studied extensively. The difference between the QPC and the quantum dot system studied in the previous sections is that in a QPC the two electrodes are connected directly by a hopping term; experimentally this is achieved by a narrow constriction between the electrodes. Examples of such two-electrode systems are conductor-superconductor (N-S) and superconductor-superconductor (S-S) systems. 47 Such a system can be described by a simple Hamiltonian H = H_0 + H_T, where H_0 consists of the Hamiltonians of the isolated electrodes, and where we use the index kα to label the states of electrode α. Here ǫ_kα denotes the energy levels in electrode α, ∆_α(t) is the external voltage, and H_T is the Hamiltonian describing the direct hopping between the nearest-neighbor sites in the two electrodes with coupling constant t_LR = t*_RL. We count the number of transferred electrons in the left electrode, and electron flow from the left electrode to the right one is defined as the positive direction of the current. Following the discussion of the quantum dot system in Section III and in accordance with Eqs. (51) and (52), we can express the GF as a functional determinant, where Λ is the same as in Eq. (85) for the transient regime.
For convenience, we introduce the following abbreviated notation, As mentioned in Section III, g −1 LL and g −1 RR contain the Keldysh components and t LR and t RL are diagonal matrices in Keldysh space. Then we write GF as following For Green's function, we have the following Dyson equation in Keldysh space We can write the Dyson equation explicitly as 47 G r,a = g r,a + g r,a t r,a G r,a G k = (I + G r t r )g k (I + t a G a ) with G r,a,k = G r,a,k LL G r,a,k LR G r,a,k RL G r,a,k RR , g r,a,k = g r,a,k LL 0 0 g r,a,k RR , t r,a = t r,a LL t r,a LR t r,a RL t r,a and t k = 0 as previously mentioned that t is diagonal in Keldysh space. Now we turn to the cumulants of transferred electrons between t 0 = 0 and time t and current of transient regime. In the transient regime, from the fact ln det Ω = Tr ln Ω we can write the CGF as Taking the derivative of the CGF with respect to λ and using the relation Tr ln(I − Ω) = − j=1 Ω j /j, we can get various cumulants from Eq. (3). Using the relations Tr t LR G r RL = Tr t RL G r LR and G k = 2G < + G r − G a , the first cumulant, the mean number of transferred charge, can be expressed as, Hence from ∆n L (t) = t 0 I L (τ )dτ , we can get the transient current at time t, We note that similar expression has been obtained in dc case. 47 We point out that the derivation above can be easily generalized to a QPC system with multiple electrodes, or the systems with spin configuration such as N-S or S-S system.
VI. NUMERICAL RESULTS
We now apply the formalism discussed above to a system in which two single-level quantum dots are connected in series between the left and right leads. In the Hamiltonian of this system [Eq. (116)], ǫ_1 and ǫ_2 are the energy levels of the two quantum dots, which are coupled to the left and right lead, respectively, and the two levels are connected to each other with coupling strength t_12. In this system there is a Rabi frequency between the two dots, given by the difference between the two eigenvalues of the Hamiltonian of Eq. (116), as illustrated in the sketch below.
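Since this Rabi frequency sets the period of the local oscillations discussed below, a small sketch makes the relation explicit. The 2x2 matrix used here is an assumed form of the isolated two-dot Hamiltonian (diagonal entries ǫ_1, ǫ_2 and off-diagonal coupling t_12), consistent with, but not reproduced from, Eq. (116).

```python
# Sketch: level splitting (Rabi frequency) of the isolated double dot.
# The 2x2 Hamiltonian below is an assumed form consistent with the text,
# with energies measured in units of Gamma (hbar = 1).
import numpy as np

eps1, eps2, t12 = 6.0, 4.0, 3.0           # parameters used in Sec. VI (units of Gamma)
h_dd = np.array([[eps1, t12],
                 [t12, eps2]])
evals = np.linalg.eigvalsh(h_dd)
splitting = evals[1] - evals[0]            # 2*Delta_omega = sqrt((eps1-eps2)**2 + 4*t12**2)
t_osc = 2.0 * np.pi / splitting            # oscillation period T_osc = 2*pi/(2*Delta_omega), in 1/Gamma
print(f"2*Delta_omega = {splitting:.3f} Gamma, T_osc = {t_osc:.3f} / Gamma")
```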
Taking the band structure of the left and right leads into consideration, we assume that the leads have a finite bandwidth with a Lorentzian form, 34 Γ_α(ǫ) = Γ_α W^2/(ǫ^2 + W^2), where Γ_α is the linewidth amplitude of the left or right lead with Γ_L = Γ_R = Γ/2, and we further assume that both leads have the same bandwidth W. In the numerical calculation the energies are measured in units of Γ, so that 1/Γ and eΓ are the units of time and current, respectively. In this paper, the bandwidth is chosen to be W = 10Γ, and the energy levels of the left and the right quantum dot are ǫ_1 = 6Γ and ǫ_2 = 4Γ, respectively. At t = 0^−, the system is disconnected. At t = 0^+, the system is connected, and the Fermi level of the left lead is ∆_L = 10Γ while the Fermi level of the right lead is zero.
For the double quantum dot system, the GF is written with the counting field attached to the right lead, so that we are counting electrons in the right lead. We also assume that the initial electron occupation of the energy level of the left quantum dot is zero and that of the right quantum dot is one. A detailed description of the calculation of the GF, which is a functional determinant in the time domain, is presented in Appendix B.
In Fig. 4, we show the 1st-6th cumulants of the transferred charge, counted from time t_0 = 0 to time t in the right lead of the system. The figure shows the cumulants as a function of time for different coupling strengths between the two dots, t_12 = 1.5Γ, 3.0Γ, and 6.0Γ, at zero temperature, and we also show the influence of temperature on the cumulants at k_B T = 5Γ for coupling strength t_12 = 3.0Γ, where k_B is the Boltzmann constant. We can see from Fig. 4, especially Figs. 4(d), 4(e), and 4(f), that there are two kinds of oscillations in the cumulants: one is an overall oscillation, and the other is a local oscillation with a specific period. Overall, there are more oscillations of the jth cumulant as one increases j, which shows the phenomenon of universal oscillations in FCS. The universal oscillations of the cumulants in the Coulomb blockade regime have been revealed experimentally by C. Flindt et al. 10 The local oscillation is caused by the two serial quantum dots, since an electron in the quantum dots will oscillate between the two energy levels, and the period of the local oscillations is T_osc = 2π/(2∆ω). The oscillation depends on the ratio of the coupling strength between the two dots to the coupling between the right dot and the right lead. If this ratio is small, the oscillation is not so obvious, since it is easier for the electron in the right dot to tunnel to the right lead. This can be confirmed from Fig. 4: the oscillation of the cumulants of the system with coupling strength t_12 = 1.5Γ is weaker than in the other two cases at zero temperature. However, if the coupling strength between the dots is strong enough, the 1st cumulant, as in the case of t_12 = 6.0Γ in the figure, may take negative values at short times, since the electron tends to oscillate between the dots and is reluctant to flow to the right lead. This in turn creates a vacancy in the right dot and hence a larger probability for an electron in the right lead to tunnel into the right dot, giving rise to a negative current. It is found that the 1st and 2nd cumulants, i.e., the mean value and the variance, do not show many local oscillations and are smooth at longer times.
Regarding the influence of temperature, we compared the cumulants at zero temperature and at k_B T = 5Γ for coupling strength t_12 = 3.0Γ. Finite temperature reduces the probability that an electron transfers from the right quantum dot to the right lead and enhances the probability that an electron tunnels from the right lead into the right quantum dot. As a result, both the overall oscillation and the local oscillation are smeared by the temperature effect.
In Fig. 5, we calculate the WTD (W_1) in the right lead in the transient regime, which is the probability distribution of the time at which the first electron is transferred to the right lead after the coupling between the leads and the quantum dots is switched on at t = 0. The WTD of the system with parameters ∆ǫ = 2Γ and t_12 = 3.0Γ is presented at zero temperature and at k_B T = 5Γ. Apart from the first peak of each curve, we can see from Fig. 5 that the WTD exhibits an oscillation with period T_osc = 2π/(2∆ω), again due to the Rabi oscillation. Temperature does not influence the oscillation period but smears the oscillation amplitude, since temperature only affects the electronic distributions in the two leads.
VII. CONCLUSION
Using the technique of path integrals and Keldysh nonequilibrium Green's functions, we have expressed the GF in a compact form in terms of Green's functions and self-energies in the time domain. This formalism is suitable for studying FCS in the transient regime. For the dc steady-state regime, two measurements are needed to investigate finite-time FCS. As we have shown in this paper, the first measurement actually perturbs the system, and hence the FCS after the measurement does not reflect the properties of the unperturbed system; therefore this formalism cannot be used to study finite-time FCS in the dc steady state. We have generalized the formalism to the magnetic tunnel junction to study the FCS of the spin-polarized charge current, the spin current and the spin-transfer torque. Moreover, we have calculated the GF for the quantum point contact system in the transient regime. We have applied our theory to study the FCS of a double quantum dot system, where both global and local oscillations are revealed. We attribute the global oscillation to the universal oscillations observed experimentally in the Coulomb blockade regime, while the local oscillation can be understood in terms of the Rabi oscillation. Future work may address the transient FCS of charge transport in quantum point contact systems such as normal-superconductor (N-S) and superconductor-superconductor (S-S) systems. In addition, the transient FCS of spin transport in mesoscopic systems with spin-orbit interaction is also worth studying.
APPENDIX B: NUMERICAL DETAILS
Here we present a detailed description of how to calculate the generating function, which is a functional determinant expressed in terms of the Green's function and self-energy in the transient regime. Since the functional determinant is expressed in the time domain, we have to discretize the time indices. The determinant can be calculated through Eq. (67), keeping in mind that both the Green's functions and the self-energies have different Keldysh components. The Green's function is obtained through the Dyson equation, which is Eq. (63) at the matrix level. For the retarded Green's function, we first discretize G^r, g^r and Σ^r, which carry two time indices, with a time slice ∆t; by the rules of matrix multiplication the Dyson equation then becomes a matrix equation, where underlined Green's functions and self-energies denote the corresponding matrices. Given the self-energy and the Green's function g of the isolated central system, we can calculate the Green's function of the full system using G^r = (I − g^r Σ^r ∆t^2)^{-1} g^r, where I is the identity matrix. From Eq. (67) we then obtain G^<, which allows us to calculate the generating function Z(λ, t). However, this method is time consuming, since at every time step a matrix inversion is needed to obtain G^r.
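A minimal sketch of this matrix-level Dyson step (array names are illustrative; g_r and sigma_r stand for the time-discretized retarded Green's function of the isolated system and the retarded self-energy):

    import numpy as np

    def retarded_gf(g_r, sigma_r, dt):
        """Discretized Dyson equation on an N x N time grid:
        G^r = (I - g^r Sigma^r dt^2)^{-1} g^r."""
        identity = np.eye(g_r.shape[0], dtype=complex)
        # solving a linear system is cheaper and more stable than forming the explicit inverse
        return np.linalg.solve(identity - g_r @ sigma_r * dt**2, g_r)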
Below we introduce a method that makes the calculation much more efficient. First we calculate the Green's function of the isolated central system and the self-energy with their different Keldysh components in the time domain [50]. For a quantum dot with a single energy level ǫ_0, g^r(τ_1, τ_2) = −iθ(τ_1 − τ_2) exp[−iǫ_0(τ_1 − τ_2)], where θ(τ_1 − τ_2) is the Heaviside step function, and g^a is the Hermitian conjugate of g^r. g^<(τ_1, τ_2) is zero if the energy level is initially empty, while if the level is initially occupied by one electron, g^<(τ_1, τ_2) = i exp[−iǫ_0(τ_1 − τ_2)]. Then g^k(τ_1, τ_2) is found through the relation g^k = 2g^< + g^r − g^a.
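These isolated-dot components can be tabulated directly on the time grid; a hedged Python sketch (the equal-time convention θ(0) = 1 is an assumption made here for simplicity):

    import numpy as np

    def isolated_dot_gf(eps0, tgrid, occupied):
        """g^r, g^a, g^< and g^k of an isolated single-level dot on a time grid."""
        t1, t2 = np.meshgrid(tgrid, tgrid, indexing="ij")
        phase = np.exp(-1j * eps0 * (t1 - t2))
        g_r = -1j * (t1 >= t2) * phase            # -i theta(t1-t2) exp[-i eps0 (t1-t2)]
        g_a = g_r.conj().T                        # Hermitian conjugate of g^r
        g_less = 1j * phase if occupied else np.zeros_like(phase)
        g_k = 2 * g_less + g_r - g_a
        return g_r, g_a, g_less, g_k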
We know that a contour-ordered matrix A can be written in the upper triangular form

    A = ( A^r  A^k )
        ( 0    A^a )

in Keldysh space after the Keldysh rotation. Since G, which does not contain the counting parameter, possesses this upper triangular form in Keldysh space, and its retarded and advanced components are lower and upper triangular matrices, respectively, in the time domain, it can be simplified to a diagonal matrix. So we can directly calculate the GF by evaluating the determinant of the matrix δ(g^{-1} − Σ_L − Σ_R), which is a block Toeplitz matrix, where δ is the diagonal matrix chosen to satisfy the normalization condition Z(λ = 0, t) = 1. | 2014-12-12T09:42:28.000Z | 2014-07-28T00:00:00.000 | {
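In practice the normalization can be imposed by dividing by the determinant at λ = 0 rather than constructing δ explicitly; a hedged Python sketch, where the callable m_of_lambda (returning the discretized matrix g^{-1} − Σ_L − Σ_R in Keldysh space for a given counting field) is a placeholder:

    import numpy as np

    def generating_function(m_of_lambda, lam):
        """Z(lambda, t) = det M(lambda) / det M(0), which enforces Z(0, t) = 1.
        Log-determinants are used for numerical stability."""
        sign, logdet = np.linalg.slogdet(m_of_lambda(lam))
        sign0, logdet0 = np.linalg.slogdet(m_of_lambda(0.0))
        return (sign / sign0) * np.exp(logdet - logdet0)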
"year": 2014,
"sha1": "3628e74b59ddd5f149d06523d53c54e20c8c46a5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1407.7362",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3628e74b59ddd5f149d06523d53c54e20c8c46a5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225566555 | pes2o/s2orc | v3-fos-license | Strategic Consensus on Organizational Performance: A Contingency Approach of Organizational Culture and Isomorphic Pressure
Purpose: This research aims to test and analyze the consensus of organizational strategy under the contingencies of organizational culture and isomorphic pressure, which affect organizational performance as moderated by human capital. The survey was conducted among owners/managers of Batik Micro, Small and Medium Enterprises (MSMEs) in 18 villages in Pekalongan, Indonesia. Design/methodology/approach: Data were collected using questionnaires and interviews with owners/managers and leaders of Batik groups. Pekalongan is a city of Batik with Batik MSMEs spread across every village area. Therefore, the sampling technique used in this research consists of two stages: proportional area random sampling and purposive sampling. 17 MSMEs were taken as samples in each village; after this selection, purposive sampling was applied, resulting in a sample size of 170 MSMEs. This research uses three variables measured with a semantic scale and two bipolar typologies: isomorphism, organizational culture, and organizational strategy. Organizational performance is measured using a Likert scale. The data analysis technique is Euclidean distance simple regression with a contingency approach, together with moderated Euclidean distance simple regression. Findings: The results are: (1) the higher the degree of alignment between imitation strategy and hierarchy culture, the higher the organizational performance; (2) the higher the degree of alignment between innovation strategy and adhocracy culture, the higher the organizational performance; (3) human capital strengthens the consensus of imitation strategy under the contingencies of hierarchy culture and benign isomorphism, so that organizational performance is higher; (4) human capital does not strengthen the consensus of innovation strategy under the contingencies of adhocracy culture and hostile isomorphism. Research limitations/implications: The sampling technique of this research is purposive sampling, so the findings may not generalize to the whole population. The data collection also uses primary data measured through owner/manager perceptions, which sometimes requires assistance and good understanding from the respondents. Organizational performance will increase when the conditions of hierarchy culture, benign isomorphism, and imitation strategy are supported by the moderating role of human capital, which strengthens their relationship with organizational performance. Practical implications: The higher the degree of alignment between imitation strategy and hierarchy culture, the higher the organizational performance. If an organization tends to choose an imitation strategy, it places more emphasis on low market pressure, support from government regulation, low public and media pressure, and stable politics and security, and vice versa.
Introduction
Since the 19th century, Pekalongan has been a center of batik and has continued to grow. The batik industry in Pekalongan has become a mainstay sector of the creative industry, contributing to employment, economic growth, and poverty alleviation (pro-job, pro-growth, and pro-poor). Pekalongan batik has become the pride of the people of Pekalongan and of Indonesia. Batik artisans in Pekalongan are known for their high spirit of creativity and for always keeping up with the times (Ankafia, Yaniar & Ferianto, 2019). Observations by researchers show, however, that the batik industry in Pekalongan still faces various problems: the availability of raw materials, marketing, technology, and the diminishing interest in becoming batik artisans. There are also other problems, such as batik waste that accumulates and pollutes the environment, a batik process that ignores occupational safety and health, and the consumptive behavior of batik bosses, which affects business sustainability. Batik entrepreneurs must always be dynamic and innovative in every work they produce (Ankafia et al., 2019; Febrianti, 2019; Poerwanto & Sukirno, 2012) so that this has an impact on the sales turnover of the products sold. In fact, imitating batik motifs is a common practice, and imitation is even initiated by the batik entrepreneurs themselves. As a result, similar batik is scattered everywhere, whether in motifs, pictures, or designs, and can be found in many markets with different qualities of batik materials (Febrianti, 2019). If this condition is left unchecked, it will in the long run be passed on from one generation to the next, and it is feared that it will become a norm and a bad behavior that is attached to and believed to be true by successive generations. It is known that the organizational culture inherent in an organization is a set of behavioral norms and values that are understood and accepted by all members of the organization and used as a basis for rules of behavior (Hofstede, 1981). Batik artisans in Pekalongan are usually incorporated in a community consisting of batik groups whose members aim to develop their businesses. The work spirit of the batik artisans enables the community to develop successfully even though it continues to face persistent problems.
The isomorphic pressure faced by Batik MSMEs also makes them conduct their business processes in accordance with formalities in order to gain legitimacy as entrepreneurs who follow the rules and business ethics that develop in the community. Batik MSMEs in Pekalongan tend to conduct business processes that are similar to those of other MSMEs because they face the same environmental conditions. Even so, several MSME groups do not do so. Field observations show that the majority of Batik MSMEs in Pekalongan use synthetic dyes, but there are still MSMEs committed to using natural dyes and oriented toward green business practices. In fact, the most crucial problem besides human capital is that some of the MSMEs still have limited working capital. Sometimes they pursue profits, even relatively small ones, without paying attention to the quality of the products produced. As a result, they implement relatively simple organizational strategies, such as selling their products at a low price, because the most important thing for them is to sell quickly and recover their capital as soon as possible. This imitation strategy is mostly adopted by small businesses rather than medium-sized ones. Likewise, they have difficulty obtaining raw materials because they have to consider the price, starting from the materials used as input to the production process, which has become relatively more expensive. This also constitutes the originality of the research. The existing research gaps include the following: (1) other research tends to examine only the mediating role of organizational strategy in analyzing the effect of contingency variables on organizational performance and does not use a contingency approach, so it cannot detect the level of alignment of organizational strategies in improving organizational performance; (2) research on the level of strategy alignment is useful so that companies can achieve better organizational performance; (3) this research uses the concept of ideal strategy alignment and does not merely test the interaction of two variables that have an impact on organizational performance; (4) other research tends to portray organizational performance only, while this research examines organizational performance with two indicators, namely operational performance and environmental performance.
The research problems to be tested and analyzed are as follows: (1) the level of alignment between organizational culture (hierarchy vs. adhocracy) and organizational strategy (imitation vs. innovation) in improving organizational performance; (2) the level of alignment between isomorphism (benign vs. hostile) and organizational strategy (imitation vs. innovation) in improving organizational performance; (3) the role of human capital in strengthening the consensus of organizational strategy so that organizational performance can be increased.
Contingency Approach and Alignment
The contingency approach holds that something that works well in one organization will not necessarily work well in another. This may be due to various causes, including organizational culture, isomorphism, management style, the technology implemented, and inappropriate management practices (Muafi, 2008; Armstrong, 2008). Riyanto (1999) and Armstrong (2008) added that "the organization's high work system is very unique and it must be adjusted very carefully for each company's situation so that the company performance can achieve optimal results". The notion of strategic alignment is sometimes described as a "fit/alignment" model, which is the core of the strategic management concept. Strategic integration is important to provide alignment between business strategy and human resource strategy, so that the human resource strategy supports the achievement of the business strategy and can help to establish it. The aim is to provide strategic alignment and consistency between human resource policies and business management policies. Fombrun, Tichy and Devanna (1984) stated that just as companies face inefficiencies when they try to implement new strategies with traditional structures, they also face problems when they try to implement new strategies with inappropriate human resource systems. The critical task of management is to unite the formal structure with the human resource system, so that both can direct the strategic goals of the organization. Guest (1997) also stated that strategic human resource management is primarily concerned with the integration problem: HRM is "fully integrated with strategic planning", so that HRM policies cohere across policy areas and hierarchies and HRM practices are used by line managers as part of their daily work (Armstrong, 2008; Delery & Doty, 1996).
The best-alignment approach means that there is external alignment (vertical integration) and internal alignment (horizontal integration). Vertical integration means that business strategy is aligned with human resource strategy in each phase of the business life cycle, with the dynamics of organizational change, and with all organizational characteristics. Horizontal integration means that there is alignment between human resource policies and practices (Armstrong, 2008; Homburg, Krohmer & Workman Jr, 1999). The point of emphasis of contingency theory in this study is that organizations must align their organizational strategy with organizational culture and isomorphism if they want to achieve higher performance. Managers need to play an active role in determining the methods and techniques that are best applied in each case. Different situations require different managerial reactions. When managers face particular situations, they need to learn the relevant contingencies. There is no 'single best way' (universal approach) (Winardi, 2005; Delery & Doty, 1996).
Organizational Culture and Organizational Strategy
Organizational culture is "a system of shared meaning held by members that distinguishes the organization from other organizations" (Robbins, 2013). The system of values, beliefs, and norms adopted by an organization is generally inherited from one generation to the next and creates a unique difference between one organization and others. If the internalization of values, norms, and assumptions in the organization is held firmly by its members, it will result in feelings of calm, commitment, loyalty, harder work, cohesiveness, alignment toward the same goals, and the capability to control member behavior, which ultimately affect productivity (Kotter & Heskett, 1992). Cameron and Quinn (1999, 2011) and Quinn and Cameron (1988) introduced a model of organizational culture known as the Competing Values Framework (CVF). The CVF covers four quadrants defined by two main dimensions oriented to organizational effectiveness: horizontal and vertical. The horizontal axis has two poles: (1) flexibility, discretion, and dynamism, and (2) stability and control. The vertical axis also has two poles: (1) internal focus and integration, and (2) external focus and differentiation. The four types of culture formed from the interaction of these two dimensions are clan, hierarchy, adhocracy, and market culture.
This research emphasizes two cultural typologies, namely hierarchy vs. adhocracy. If an organization adheres to a hierarchy organizational culture, it follows formal and structured rules. Efficiency is strongly emphasized, and leaders encourage their employees to follow business processes regularly and correctly. Stability, efficiency, and production effectiveness are strictly guarded according to the prevailing rules, so the orientation is toward certainty, security, and convenience of operations. More specifically, hierarchy organizational culture is characterized by leading through rational analysis, leading through information clarity, leading through high reliability, leading through processes, and leading through measurement (Cameron & Quinn, 2011; Quinn & Cameron, 1988; Chen, Huang, Liu, Min & Zhou, 2018).
In contrast, if an organization adheres to an adhocracy organizational culture, the organization is very dynamic, entrepreneurially oriented, innovative, and creative. The people who work in it dare to face the large risks of trying and doing new things, because they are driven by entrepreneurially oriented leaders who are also ready to take high risks. Each member of the organization has the same strategic orientation toward trying new things, new methods, and new innovative management practices. They are committed to being at the forefront of producing unique, high-value new products, because organizational members are given the freedom to work and express themselves, especially when they can create new creative ideas. More specifically, adhocracy organizational culture is characterized by leading innovation and entrepreneurship, leading the future, leading improvement and change, leading creativity, and leading flexibility and agility (Quinn & Cameron, 1988; Chen et al., 2018). These two organizational culture typologies lie at opposite ends of a continuum, which is why the researchers chose these two typologies: hierarchy and adhocracy.
Several research results show that organizational culture is related to organizational strategy and has a significant positive impact on organizational performance (Vestal, Fralicx & Spreier, 1997; Zwaan, 2006; Muafi, 2008; Chen et al., 2018). Organizational culture is one of the factors that can stimulate or provoke innovative behavior among the employees of an organization (Martins & Terblanche, 2003). The adhocracy culture type supports the creation of new product or service innovations, whereas hierarchy culture can hinder product innovation due to its standardized, structured, and rigid systems (Valencia, Valle & Jimenez, 2010). The research of Valencia et al. (2010) also showed that organizations adhering to an adhocracy organizational culture have a positive and significant orientation toward innovation, while organizations embracing a hierarchy organizational culture have a positive and significant orientation toward imitation. This is reinforced by the findings of Muafi (2009) and Priyono (2004) that when a company's Apollo organizational culture is aligned with a defender strategy, the company's organizational performance increases; conversely, organizational performance increases when there is a match between an Athena organizational culture and a prospector strategy. Apollo culture has characteristics identical to hierarchy organizational culture, and Athena culture has characteristics identical to adhocracy organizational culture. Moreover, the findings of Xu and Qianqian (2015) show that hierarchy culture can strengthen an imitation strategy, whereas adhocracy culture is able to strengthen an innovation strategy.
Actually, no one can blame a company for choosing an imitation strategy. Imitation is not a bad thing for a company to pursue. This is proven by companies that imitate and nevertheless succeed, becoming superior to the companies that introduced the innovation to the market early on (Schnaars, 1994). Even so, pioneers still have advantages and can compete with their followers in many respects, giving them greater sustainability in the long run (Robinson, 1988; Schnaars, 1994).
H1. The higher the degree of alignment between imitation strategy and hierarchy culture, the higher the organizational performance.
H2. The higher the degree of alignment between innovation strategy and adhocracy culture, the higher the organizational performance.
Isomorphism and Organizational Strategy
Isomorphism is a concept describing a condition in which organizations face the same set of environmental conditions (DiMaggio & Powell, 1983; Akbar, 2012; Gudono, 2009; Katopol, 2016). Isomorphism is "a constraining process that forces one unit in a population to resemble other units that face the same set of environmental conditions". More specifically, the isomorphism process encourages one group/unit to imitate another group/unit because it faces the same situation and conditions. Isomorphism is divided into two types: competitive isomorphism, which is closely related to inter-group competition, and institutional isomorphism, which relates to competition to gain institutional legitimacy. This happens because organizations do not just compete for resources or consumers, but also for institutional or political legitimacy (Gudono, 2009).
According to Akbar (2012), Gudono (2009) and Katopol (2016), there are three types of isomorphism: (1) coercive isomorphism, which exists because of the pressure of political force on an organization to gain legitimacy. The coercion that arises is not always formal, and coercion can cause the organization to be truly obedient or only pseudo-obedient (merely a ritual), so that there is an impression that the organization is obedient (Gudono, 2009). (2) Mimetic isomorphism, in which the organization reacts to the environmental uncertainty it faces through imitation. This can be done by imitating the learning processes of other companies, by 'benchmarking' (Lawrence, 1999), or by using consultant services, and it is oriented not merely toward technical efficiency but also toward ideological aspects (Gudono, 2009). (3) Normative isomorphism, which exists because of the homogenization of organizations through education and professional associations (Akbar, 2012); pressure to become the same arises from the members of the profession. When companies pay attention to the isomorphism aspect, they will be able to gain competitive advantage in the future (Jang, Lee & Nelson, 2014). Theodoro (2014) found that normative isomorphism has a significant impact on the professionalism of executive managers in designing and implementing policies.
The research of De Simone (2017) and Ashworth, Boyne and Delbridge (2009) showed that isomorphic pressure affects the business model that companies choose and implement. Companies must find ways to increase their profit by improving the capabilities of the services and systems they own. Company size and cultural orientation are significant factors associated with imitation behavior; on the contrary, normative isomorphic pressure, firm ownership, and manager awareness are not related to imitation behavior. This result is reinforced by Yang and Hyland (2012), who stated that understanding mimetic isomorphism requires not only examining the relationship between imitation and isomorphism, but also integrating the role of corporate experience and the degree to which local market competitors influence imitation and isomorphism as a whole. To successfully pursue mimetic isomorphism, an organization should be supported by resources, skills, personnel, and administrative support (Katopol, 2016). Masocha and Fatoki (2018) added that 'trait'-based imitation means imitating a company that has a desired feature, while 'outcome'-based imitation means imitating a company that exhibits a desired outcome; therefore, several studies have focused mostly on frequency- and trait-based imitation. These findings also reinforce DiMaggio and Powell (1983), who argued that mimetic isomorphism directs organizations toward imitation behavior. In facing isomorphic pressure, managers can behave actively or passively. Active managers have great curiosity about the external environment, face a complex environment, and must be open-minded. In contrast, passive managers assume that they operate in a stable environment, think that passive scanning may be sufficient, and only wait to receive signals from outside (Day & Schoemaker, 2006). Hitt, Bierman, Shimizu and Kochhar (2001) stated that in organizations that compete strategically, people look for patterns that can help them understand their external environment. Knowledge of the company's environment helps improve the company's competitive position, increases operational efficiency, and wins battles in the global era. Beal (2000), Barney and Wright (1998) and Day and Schoemaker (2006) stated that companies that do not mobilize and empower their managers and employees to recognize, monitor, predict, and evaluate key external forces may fail to anticipate emerging opportunities and threats. As a consequence, such companies cannot implement effective strategies, lose opportunities, and may ultimately face organizational death. Covin and Slevin (1989) and Robertson and Chetty (2000) showed that organizations facing hostile isomorphic pressure will choose an entrepreneurial strategy (progressive, proactive, and with extensive product innovation), whereas companies facing benign isomorphic pressure will choose a conservative strategy, which is expected to improve organizational performance. A hostile external environment has the following characteristics: the industrial setting is very risky, competition is very tight, the business climate is volatile and difficult to predict, and exploitable business opportunities are lacking.
On the contrary, a benign environment has the following characteristics: easily controlled, safe, munificent, and easily manipulated (Covin & Slevin, 1989; Robertson & Chetty, 2000).
H3. The higher the degree of alignment between imitation strategy and benign isomorphism, the higher the organizational performance.
H4. The higher the degree of alignment between innovation strategy and hostile isomorphism, the higher the organizational performance.
Organizational Strategy (Imitation vs. Innovation)
Strategy is a means to achieve goals, objectives, and policies. The most crucial and strategic issues currently dominating business practice are innovation strategy (Hauser, Tellis & Grifin, 2005) and imitation strategy (Schnaars, 1994; Valencia et al., 2011). These two strategy typologies have their own advantages and disadvantages (Valencia et al., 2011).
Company strategy and innovation strategy can help companies reduce uncertainty through the innovation process (Gupta & Wileman, 1990), help managers allocate limited resources, and help prevent competitors from entering markets the company has already captured (Zahra & Das, 1993; Bloodgood, 2013; Barnett & Hansen, 1996; Tempel, 2017). Meanwhile, an imitation strategy can provide benefits including low research and development spending, avoidance of products with no potential, the ability to sell at lower prices, lower market education costs, and avoidance of product failure (Schnaars, 1994; Dhewanto, Indradewa, Ulfah, Rahmawati, Yoshanti & Zendry, 2015). In marketing warfare strategy, this second typology becomes a reason of its own for companies to use it (Ries & Trout, 1986; Hasnin, 2011). Hasnin (2011) explained that there are four marketing warfare strategies: (1) defensive, if the company is the market leader controlling most of the market; (2) offensive, if the company is the second-largest player after the market leader; (3) flanking, if the company is the third-largest company in the market; and (4) guerrilla, if the company holds the smallest share compared to other companies. Hasnin (2011) distinguished companies that innovate and imitate using two approaches, as shown in Table 1.
Table 1. Entry approaches of innovator and imitator companies (Hasnin, 2011)
Pioneer / Innovator: the first company to enter the market with innovative products.
Pioneer / Imitator: a company that competes to enter the market first by imitating the same product produced by the innovator.
Late entrant / Innovator: innovator companies that compete to enter the market after other innovator companies; the product innovation is different, but the product serves the same function.
Late entrant / Imitator: imitator companies that enter the market later by following the pioneer.

In the pioneer line, companies can enter the market in two ways: as an innovator or as an imitator. In the late entrant line, companies are the second to enter the market after the pioneer has started, either as an innovator or as an imitator. Lieberman and Asaba (2006) noted that, in the competitive process, superior products, managerial systems, and managerial processes are widely recognized as fundamental elements that can be used to attack competitors. Mimicking superior products, managerial systems, and processes is therefore common practice, and is done when companies imitate in order to minimize costs (Katz & Shapiro, 1985). Each typology claims that the chosen strategy can increase company performance. Research findings abroad (Tuan, Nhan, Giang & Ngoc, 2016; Kalay & Lynn, 2014; Vázquez, Santos & Álvarez, 2001) and in Indonesia indicate that innovation strategy is believed by some researchers to increase company performance (Murni, 2000; Prakosa, 2005; Raharso, 2009). On the other hand, some studies have also found that, as an organizational strategy, imitation is likewise capable of increasing company performance (Assavapisitkul & Bukkavesa, 2009; Schnaars, 1994). Companies can profit from creative imitation (the capacity to absorb existing knowledge and produce new knowledge), which can become a unique capability of the company (Assavapisitkul & Bukkavesa, 2009; Gary & Larsen, 2019). The findings of Lee and Zhou (2012) also indicate that creative imitation has a stronger positive effect on financial performance (asset returns) than pure imitation. Furthermore, combining pure and creative imitation with strong marketing capabilities can positively affect market performance, especially market share.
This research focuses on organizational performance measured through operational performance and environmental performance using perceptual measures. Environmental performance is important because it enables companies to produce efficiently, reduces costs caused by environmental damage, improves productivity in line with safety standards, and opens opportunities to enter new markets (Porter & Van Der Linde, 1995). Organizational performance plays a key role in strategy research (Beal, 2000; Muafi, 2008). Perceptual measures are used because, in general, MSMEs face several obstacles: incomplete financial records, rare audits in family companies, unsupportive information technology, and limited human resources, so they tend to rely more on perception (Muafi, 2009).
The Moderating Role of Human Capital
In improving organizational performance, human capital also plays a strategic and important role in the relationship between organizational strategy and organizational performance. MSMEs nowadays are required to deliver superior performance over the long term. Such performance can be driven not only by physical assets, but also by human capital assets. In the 21st century, organizations' awareness of human resources as an asset has become very high, and the assessment of organizational performance should place more emphasis on non-physical (intangible) assets (Mayo, 2000). Mayo (2000) also added that human capital can be measured through individual capability, individual motivation, leadership, organizational climate, and workgroup effectiveness. Each indicator of human capital makes a different contribution, and each can ultimately deliver valuable contributions to the organization. The role of HR has shifted from an administrative to a strategic role (Jac Fitz-enz, 2009; Schuler & Jackson, 1987). Organizations are increasingly aware that human resources are not just an investment, so they cannot rely solely on sophisticated services, good product design, and sophisticated marketing strategies and technology; all of these must be supported by the right work systems, motivation, and well-targeted human resource management (Jac Fitz-enz, 2009). The research of Hitt et al. (2001) showed that human capital moderates the relationship between strategy and company performance, using a resource-strategy contingency alignment approach; Hitt et al. (2001) measured human capital with two indicators, namely the quality of education and experience as a partner in a local firm. Unique and valuable human capital is a strategic asset of the company (Barney & Wright, 1998; Carmeli, 2004). Human capital is the most important part of the achievement of organizational performance (Bontis, 1998; Bontis, Keow & Richardson, 2000; Bontis & Fitz-enz, 2002; Alpkan, Bulut, Gunday, Ulusoy & Kilic, 2010). In the context of Chinese MSMEs, Liu, Liang, Zhang and Zhang (2010) found that human capital moderates the effect of differentiation strategy on company performance, although the effect of low-cost strategy on company performance is not significant; this result shows that human capital plays an important role in the implementation of company strategy (Hayton, 2003). The moderating role of human capital in analyzing the alignment between organizational strategy and contingency variables in improving organizational performance has thus been examined by previous researchers.
Gates and Langevin (2010) suggested that the more advanced companies are, the more they should continue to develop their human capital measurement indicators. Their findings also reinforce other research showing that HR managers prefer differentiation strategies when they use innovation indicators in human capital development; conversely, when companies choose a cost-reduction strategy, they should emphasize efficiency in developing the company's human capital. Crook, Todd, Combs and Woehr (2011) conducted a meta-analysis examining the superior role of human capital in increasing organizational performance; from 66 studies, they found three moderating roles of human capital in strengthening the relationship with company performance. These results are reinforced by findings from industrial companies in Turkey by Ozer and Cam (2017), who concluded that human capital can strengthen the relationship between innovation capital and the market value of the company. They suggested that managers should consider the interaction between human capital and innovation capital because it has a major impact on increasing the market value of the company in the long run.
Research Method
The population of this research is all Batik MSMEs in Pekalongan. The population criteria are Batik MSMEs with assets of 50 million to 2.5 billion and annual sales of 300 million to 50 billion. To determine the sample, the researchers drew on the Batik MSMEs spread across 18 villages, using proportional area random sampling to take a proportional sample of 17 MSMEs in each village.
In the next stage, the researchers selected respondents purposively, with the criterion that the Batik MSMEs included had been established and operating for at least 5 years. The response rate in this research is 68 percent, with 170 respondents returning the questionnaire. The scaling technique for the variables uses a bipolar semantic differential scale ranging from 7 (very strongly appropriate) to 1 (very strongly inappropriate). This research considers two contingencies that play an important role, namely organizational culture (OC) (hierarchy vs. adhocracy) and isomorphism (Isom) (benign vs. hostile); these two variables have typologies that conflict with each other and are anchored by opposing adjectives. For organizational strategy, the variables used are innovation strategy (INOV) and imitation strategy (IMIT). To measure organizational performance, this research uses MSME performance (Perform) covering both business and environmental aspects, compared with competitors, using a Likert scale (1/very low to 7/very high).
Miller (1987) recommended that company performance be measured using self-reports by managers or company leaders, comparing performance with the industry average or with comparable competitors. This was also done by Govindarajan (1988) and Priyono (2004). It is intended to make companies more objective in measuring their performance, given that managers/company leaders are involved in decision making and require relevant and reliable information about similar companies.
The organizational performance indicator is associated with non-financial measures, while the environmental performance indicator captures how far MSMEs can produce efficiently without damaging the environment, increase productivity with reference to environmental safety standards, and gain opportunities to enter new markets without polluting the environment during operations. To ensure that the results represent higher-quality findings, this research is also supported by interviews with several managers and owners of the MSMEs. The statistical technique is Euclidean distance regression, followed by moderated regression. The reasons for using Euclidean distance regression are: (1) this research uses a simple regression model in which the data are not time series; (2) this research uses a perceptual approach in which respondents are only asked to explain phenomena (not to predict behavior), so classical assumption tests are not required; and (3) the Euclidean distance (ED) scores, or deviations between two independent variables, are used to assess the level of alignment between the variables studied. The greater the ED score, the smaller the alignment between the variables, which in turn affects organizational performance (Priyono, 2004; Riyanto, 1999; Muafi, 2009; Muafi, 2008). The results of the validity and reliability tests show that all items and variables are valid and reliable.
The organizational culture questionnaire refers to 5 indicators, namely dominant characteristics, employee management, strong relationships within the organization, strategic emphasis, and success criteria (Valencia et al., 2011). Isomorphism is measured with 5 items (market pressure, government regulations, public pressure, politics and security, and media pressure), where respondents are asked to rate the degree of isomorphic pressure from a very hostile to a very safe (benign) condition. The isomorphism typology refers to Robertson and Chetty (2011) and Covin and Slevin (1991). All of the environmental indicators are then identified by asking respondents to choose answers adapted to the characteristics of hostile and benign environments: a threatening environment belongs to the hostile type and a safe environment to the benign type. For the organizational strategy variable, respondents were asked to indicate the degree of emphasis from imitation strategy to innovation strategy; the questionnaire is sourced and developed from Schnaars (1994) and Perez-Luño, Cabrera and Wiklund (2007). The human capital measure refers to 3 indicators: education, work experience, and competence (Hayton, 2003; Carmeli, 2004).
For the organizational performance (operational and environmental performance) variable, respondents were asked to choose their answers from 7 alternatives: 1 is Very Low (VL); 2 is Low (L); 3 is Rather Low (RL); 4 is Neutral (N); 5 is Rather High (RH); 6 is High (H); and 7 is Very High (VH). The questionnaire is sourced from Muafi (2009), Muafi (2017), Robertson and Chetty (2011) and Porter and Van Der Linde (1995). This research divides organizational strategy into two groups, imitation and innovation, based on the actual average score of organizational strategy: if the score is ≥ 3.5, the MSME belongs to the innovation strategy group, and vice versa. The indicators of organizational strategy are: opportunity to enter new markets, resource exploitation, monitoring the production process and sales of competitors, leading in product variety and design, and product sales at competitive prices. The researched variables, operational definitions, indicators, measurement scales, and reference sources are summarized in Table 2.

Table 2 (fragment). Organizational Strategy (Schnaars, 1994; Pérez-Luño et al., 2007; Naranjo-Valencia, Jiménez-Jiménez & Sanz-Valle, 2011): the strategy agreed upon to be implemented in the organization, including the choice between imitation and innovation strategy; indicators: opportunity to enter new markets, resource exploitation, monitoring the production process and sales of competitors, leading in product variety and design, and product sales at competitive prices; bipolar scale anchored from Imitation (no pressure) to Innovation (pressure). 4. Human Capital (Hayton, 2003; Carmeli, 2004).

Hypotheses are tested through simple linear regression using the Euclidean distance (misfit) score between the independent variable of organizational strategy (imitation strategy vs. innovation strategy) and each contingency variable. The equation used to test hypotheses 1 to 4 is a simple regression of organizational performance on the Euclidean distance score, where Y = organizational performance (dependent variable), B0 = constant, B1 = regression coefficient, Dist X1X2 = Euclidean distance of organizational strategy (X1) and organizational culture (X2), and Dist X1X3 = Euclidean distance of organizational strategy (X1) and isomorphism (X3). The equation used to test hypotheses 5 to 7 adds the moderating variable of human capital. The procedure used to calculate the Euclidean distance score between variables and to test its effect on performance consists of a few steps: 1. Group the samples into two strategy typologies: imitation and innovation.
2. Determine the value of the ideal alignment type between the strategies and the contingency variables of organizational culture and isomorphism.
3. Calculate and sum the deviation or misfit score, called the Euclidean distance (Dist), by computing the difference between the ideal score of each strategy group and the actual scores of the contingency variables. The formula used is: Distance = (Xid − Xac)^2, where Distance is the Euclidean distance from the ideal type, Xid is the ideal score of the imitation or innovation strategy variable, and Xac is the actual score of the contingency variable. 4. Test the hypotheses through simple linear regression (contingency approach). A negative and significant regression coefficient is the result sought: the greater the Euclidean distance score, the smaller the alignment between the variables, so that it negatively affects performance; conversely, the smaller the Euclidean distance score, the greater the alignment between the variables, so that it positively affects performance. A brief illustrative sketch of this misfit-and-regression procedure is given after this list.
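The following hedged Python sketch computes the misfit score and runs the simple regression, assuming the standard linear form Y = B0 + B1*Dist + e implied by the variable legend above; the data values, column names, and ideal-pole scores are illustrative only, not the study's data or exact coding.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def euclidean_misfit(ideal, actual):
        """Misfit score: Distance = (Xid - Xac)**2."""
        return (np.asarray(ideal, dtype=float) - np.asarray(actual, dtype=float)) ** 2

    # Illustrative data: strategy group (1 = imitation, 2 = innovation),
    # actual organizational-culture score on the 1-7 bipolar scale, and performance.
    df = pd.DataFrame({
        "strategy_group": [1, 1, 1, 2, 2, 2],
        "culture_actual": [2.0, 3.5, 1.5, 6.0, 4.5, 6.5],
        "performance":    [5.2, 4.0, 5.6, 5.8, 4.6, 6.0],
    })
    ideal = df["strategy_group"].map({1: 1.0, 2: 7.0})      # assumed ideal poles per group
    df["dist_strategy_culture"] = euclidean_misfit(ideal, df["culture_actual"])

    # Simple regression Y = B0 + B1*Dist + e; a negative, significant B1 means that
    # smaller misfit (better alignment) is associated with higher performance.
    ols = sm.OLS(df["performance"], sm.add_constant(df["dist_strategy_culture"])).fit()
    print(ols.params, ols.pvalues)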
Respondent Characteristics
The respondents' characteristics are grouped by job position, gender, age, years since the MSME was established, types of motifs, number of colors in one motif, and waste management. The majority of the respondents are both owner and manager, male, and more than 40 years old; their MSMEs have been established for at least 3 years, use more than 5 motifs and colors in one batik cloth, and do not have waste management. Table 3 presents the results of the Euclidean distance regression analysis for the two strategy groups: imitation strategy (code 1) and innovation strategy (code 2). Each Euclidean distance variable is tested for its relationship with organizational performance. The statistical test results show that hypotheses 1, 2, 3, and 4 are accepted because the significance levels are less than 0.05.
Hypotheses testing
The moderated regression results show that human capital strengthens the alignment of imitation strategy with hierarchy organizational culture and benign isomorphism in relation to organizational performance, which supports hypotheses 5 and 6. However, for the innovation strategy consensus, the relationships are weakened, so hypotheses 7 and 8 are rejected. This can be seen from the R^2 values, which decrease compared to the condition before moderation.
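As an illustration of this moderation check, the following hedged Python sketch compares R^2 before and after adding human capital and its interaction with the misfit score; the synthetic data and the interaction-term specification are assumptions for illustration, not the authors' exact model.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 40
    df = pd.DataFrame({
        "dist": rng.uniform(0.0, 9.0, n),            # (Xid - Xac)**2 misfit scores
        "human_capital": rng.uniform(1.0, 7.0, n),   # 1-7 scale
    })
    df["performance"] = (6.0 - 0.3 * df["dist"]
                         - 0.05 * df["dist"] * (7.0 - df["human_capital"])
                         + rng.normal(0.0, 0.3, n))  # illustrative synthetic outcome

    base = sm.OLS(df["performance"], sm.add_constant(df[["dist"]])).fit()
    df["dist_x_hc"] = df["dist"] * df["human_capital"]
    moderated = sm.OLS(df["performance"],
                       sm.add_constant(df[["dist", "human_capital", "dist_x_hc"]])).fit()
    print("R^2 without moderation:", round(base.rsquared, 3))
    print("R^2 with moderation:   ", round(moderated.rsquared, 3))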
Discussion
The research results show that there is an alignment of imitation strategy with the contingencies of hierarchy organizational culture and benign isomorphism. This means that hypotheses 1 and 2 are supported: the higher the degree of alignment between imitation strategy and hierarchy culture, the higher the organizational performance, and likewise, the higher the degree of alignment between innovation strategy and adhocracy culture, the higher the organizational performance. These findings support prior research showing that organizational culture can increase organizational performance (Kotter & Heskett, 1992; Hofstede, 1981). When employees are able to internalize the culture that exists in the organization, it is manifested in positive work behavior and at the same time has an impact on organizational performance. Organizational strategy and organizational culture are related, and both can have a significant positive impact on organizational performance (Vestal, Fralicx & Spreier, 1997; Zwaan, 2006; Muafi, 2008). The strategy typologies of Valencia et al. (2010), Schnaars (1994) and Lee and Zhou (2012) and the organizational culture typology of Quinn and Cameron (1988) form an ideal alignment between imitation-innovation and hierarchy-adhocracy. This means that if the organization tends to choose an imitation strategy, it is more concerned with efficiency, low research and development costs, low market education costs, the capability to avoid product failure, and mimicking the processes, methods, or marketing of pioneer companies. This condition also needs the support of a hierarchy-type organizational culture, characterized by prioritizing coordination and monitoring and emphasizing efficiency with strict and timely control, and vice versa. An adhocracy culture usually motivates employees to be creative and innovative, especially in creating new products and services, to be more flexible and less rigid, and always to be first (Martins & Terblanche, 2003; Valencia et al., 2010; Quinn & Cameron, 1988; Chen et al., 2018). These findings support the research results of Valencia et al. (2010) and Xu and Qianqian (2015). If Batik MSMEs in Pekalongan adhere to an adhocracy organizational culture, this supports an innovation strategy and thus contributes significantly to improving organizational performance, and vice versa. These results are strengthened by interviews with the owners of 4 SMEs (respondents R1-R4) in Pekalongan that produce both printed batik, which is faster to make, and handmade batik, which takes longer to produce.
"We usually see other SMEs in terms of selling their products. We tend to imitate and follow the marketing methods. There are certain classes in the products that we sell. If they sell the product cheaply, then we will do the same thing". (R 1).
"Before we carry out the production process, we coordinated with the employees and families involved. If we can produce efficiently, why don't we do it? Because it will help us to sell cheap products and meet consumer demand for the lower middle class, although we also continue to produce batik at expensive prices… it usually needs innovation both from the aspects of color and patterns." (R 3).
"We still need innovation to avoid boredom… because consumers demand that their batik patterns match the current fashion style and change". (R 4).
"The competition demand that we face is very hard…. If we are lucky, even only a little, we will take it. This applies to our batik printing products, which are a lower-class market… even spreads of profit of IDR 10.000 will be taken. Therefore, we will try hard to find cheap and affordable raw materials." (R 2) The relationship of strategy typology from Valencia et al. (2010) and Schnaars (1994) and Lee and Zhou (2012) and isomorphism will form an ideal alignment in the typology of imitation-innovation and benign-hostile isomorphism. Therefore, if the organization tend to choose imitation strategy, it is more emphasizes on low market pressure, support from government regulation, low public and media pressure, and stable politic and safety, and vice versa. Batik MSMEs in Pekalongan tend to imitate strategy, production process, marketing, and other aspects as has been done by other MSMEs. This condition will increasingly have justification if the imitated MSMEs are success from the aspect of market control or sales. This also strengthens the statement and the research result from Gudono (2009) andJang et al., 2014). However, they even discuss to design and implement policies together from the policy and business practice in field aspects. This strengthens the research findings from Theodoro (2014), De Simone (2017), Liu et al. (2010) and Yang and Hyland (2012) that similar business model will be applied together when companies are faced to the same isomorphism pressure. Even so, Batik MSMEs in Pekalongan still have to observe Batik market competitors from outside Pekalongan because it will disrupt their business stability. They are usually quite innovative and creative in terms of patterns, colors, models, and packaging sold to the market. Therefore, when the company chooses imitation strategy, it should emphasize on benign isomorphism type, and conversely, so that it can improve its performance. It should be noted that this research supports hypotheses 3, 4, 5 and 6 but does not support hypotheses 7 and 8.
Organizational performance will increase when the conditions of hierarchy culture, benign isomorphism, and imitation strategy are supported by the moderating role of human capital, which strengthens their relationship with organizational performance. This also reinforces the result of Katopol (2016) that human resource support is very important and strategic in strengthening the relationship between imitation strategy and contingency variables in improving organizational performance. Furthermore, the owners/managers of Batik MSMEs in Pekalongan must be active and proactive in analyzing the external environment, which is usually difficult to control, and the hierarchy organizational culture they apply must support this so that the company can implement the imitation strategy well and on target. The human capital of Batik MSMEs in Pekalongan must become more proactive and agile in dealing with problems, so that these companies can compete in both local and foreign markets and do not fail in the future or lose the opportunity to succeed. The research results also show that human capital is not able to strengthen the relationship between adhocracy culture, hostile isomorphism, and innovation strategy in increasing organizational performance. This is reasonable, considering that the MSMEs are generally more inclined to choose an imitation strategy, given the limited resources and capital owned by each MSME, and they do not yet consider good human capital important. It is also understandable considering that the highest education level of the average MSME owner/manager is senior high school, and they do not consider innovation important because it incurs very high costs.
Human capital competence needs to be improved in terms of the knowledge, skills, and capabilities of employees and company owners. This can be done in several ways, such as training and mentoring from the government or universities, or self-teaching. One of their advantages is a strong work ethic, perseverance, tenacity, and very high morale, which reflects the commitment to maintaining Pekalongan as a city of batik known not only in Indonesia but also around the world. This provides the capital for Pekalongan to continue to grow and develop in the future while maintaining existing cultural values. They have also begun to realize that, armed with science and technology, they can progress rapidly.
Implications
In determining organizational strategy, MSME owners/managers should pay attention to the alignment of the strategy with contingency variables such as organizational culture and isomorphism, in order to support improvement in organizational performance.
MSME owners/managers should regularly attend relevant training and mentoring provided by government and universities on managing a batik business, so that they can develop their abilities in implementing organizational strategies and policies to achieve organizational goals.
Owners/managers should routinely coordinate with all management and staff about the organizational strategies to be implemented, rather than simply imitating all activities from input and process through to the output produced.
The competency of the human capital owned by MSMEs has to be increased, because human capital plays a very significant strategic role in strengthening organizational performance.
Hierarchy organizational culture and benign isomorphism should be aligned with an imitation strategy in order to increase organizational performance; in contrast, adhocracy organizational culture and hostile isomorphism should be aligned with an innovation strategy. This research recommends that companies pay attention not only to internal aspects of the organization (organizational culture, organizational structure, HR practices, and other internal variables) but also to external aspects (isomorphism, market dynamics, politics, inflation, and other external variables). MSME owners/managers should also remain consistent with the existing strategic consensus in order to increase organizational performance, and even achieve sustainable competitive advantage.
Research Limitations
This research used purposive sampling, so the results may not generalize to the whole population. Data collection relied on primary data measured through owner/manager perceptions, which sometimes still requires assistance and good understanding from respondents. Moreover, the MSME performance variable was measured by comparing company performance with that of competing companies over the last 5 years. In addition, this research used a quantitative approach, so it was less able to explore the research variables in depth, and the data were collected cross-sectionally rather than longitudinally. Future work should consider other contingency variables in the model, namely variables representing internal aspects (such as leadership style, managerial role behavior, and time orientation) and external aspects of the organization (environmental uncertainty). It is also worth considering other aspects of organizational performance (social and religious performance), because Pekalongan is known to have problems with waste disposal and is known for its religious community. These limitations also suggest directions for future research to fill the remaining gaps. | 2020-07-02T10:29:38.114Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "b867b2cefcbe47473105a6516b15e15a3d580823",
"oa_license": "CCBY",
"oa_url": "https://www.jiem.org/index.php/jiem/article/download/2480/934",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "da38ec6d56e9d8e47902a6b50f378f91b63cbf40",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
247686057 | pes2o/s2orc | v3-fos-license | A case of feline temporal lobe epilepsy with hippocampal sclerosis and dentate gyrus malformation
A two-month-old, male, mixed-breed cat presented with epileptic seizures. The cat was diagnosed with drug-resistant epilepsy and died at 3 years of age. No gross lesion was found at necropsy. Histopathologically, the dentate gyrus granule cell layer of the hippocampus was irregularly arranged. Granule cells were dispersed and ectopic cells were sporadically observed in the molecular layer. The granule cells had enlarged cytoplasm and swollen nuclei. Immunohistochemistry for NeuN and GFAP confirmed severe neuronal loss and mild gliosis in CA1. Binucleation and ischemic change were observed in the remaining pyramidal cells. This report describes a case of feline temporal lobe epilepsy and hippocampal sclerosis associated with dentate gyrus malformation.
KEYWORDS: dentate gyrus malformation, feline temporal lobe epilepsy, hippocampal sclerosis, histopathology, immunohistochemistry

A two-month-old, male, mixed-breed cat presented with recurrent epileptic seizures. Despite the administration of antiseizure drugs including phenobarbital, diazepam, levetiracetam, gabapentin, pregabalin, flunitrazepam, imepitoin and zonisamide, epileptic seizures persisted (Supplementary Table 1). Epileptic episodes started as focal seizures with facial automatism, including licking, swallowing, facial twitching, head-nodding and head-turning, and evolved into generalized tonic-clonic seizures (Supplementary Movie 1). At 6 months of age, magnetic resonance imaging revealed no significant finding in the brain (Supplementary Fig. 1A). The cat was diagnosed with drug-resistant epilepsy of unknown cause and died at 3 years and 3 months of age. Postmortem magnetic resonance imaging (Supplementary Fig. 1B) showed slight atrophy of the cerebral cortex and hippocampi compared with the previous image taken at 6 months of age (Supplementary Fig. 1A).
At necropsy, no significant changes were found in the visceral organs or the brain. The brain was fixed in 10% neutral buffered formalin and routinely embedded in paraffin for histopathological examination. Tissue sections were stained with hematoxylin and eosin (HE), and with luxol fast blue and HE. Immunohistochemistry was performed using mouse monoclonal anti-NeuN antibody (clone A60, 1:100, Millipore, Temecula, CA, USA) and rabbit polyclonal anti-glial fibrillary acidic protein (GFAP) antibody (1:400, Dako, Carpinteria, CA, USA) as previously described [4]. Immunolabeled antigens were visualized using the Dako Envision+ System (Dako). Age-matched feline brain tissue without any lesions was used as a normal positive control.
Histopathological examination of the brain revealed bilateral lesions in the hippocampus (Fig. 1). No other morphological changes were observed in the brain, aside from slight expansion of the cerebral sulci suggesting mild cerebral atrophy. In the hippocampus, the dentate gyrus granule cell layer was irregularly arranged in a winding line and pyramidal cells of the cornu ammonis (CA) were lost (Fig. 2). The dentate gyrus granule cells were dispersed, and ectopic cells were sporadically observed in the molecular layer, occasionally forming a bi-laminated structure (Fig. 3). The granule cells had enlarged cytoplasm and swollen nuclei. The normal hippocampus and granular layer of the same area in an age-matched cat are shown in Supplementary Fig. 2. Pyramidal cells were almost completely lost in CA1 and CA3, while pyramidal cells of CA2 and CA4 were lost to a lesser degree (Fig. 2). Binucleation and ischemic change were observed in the remaining pyramidal cells (Fig. 4). Luxol fast blue-HE staining revealed no significant change in the cerebral white matter.
The neuronal nuclear antigen NeuN has been used for evaluating aberrant distribution and loss of neuronal cells in human patients and in experimental rodent models of epilepsy [2]. In the present case of feline temporal lobe epilepsy, immunohistochemistry for NeuN confirmed significant loss of pyramidal cells, especially in CA1 (Fig. 5A). Pyramidal cells of CA2-4 were lost to a lesser degree. Dentate gyrus granule cell dispersion and ectopic granule cells were also depicted by immunohistochemistry for NeuN. Immunohistochemistry for GFAP revealed mild gliosis in CA1 and the hilus of the dentate gyrus (Fig. 5B). In CA1, where most pyramidal cells were lost, GFAP-positive astrocytes with enlarged cytoplasm were observed together with GFAP-positive astrocytic processes (Fig. 5C).
Based on the neurological signs and histopathological findings, the cat was diagnosed with feline temporal lobe epilepsy with hippocampal sclerosis and dentate gyrus malformation. Temporal lobe epilepsy in the cat has been studied mostly in experimental models of human mesial temporal lobe epilepsy, which is the most common form of focal epilepsy in humans [9, 13]. Histopathologic findings in the brains of humans with mesial temporal lobe epilepsy include hippocampal sclerosis and malformations, as well as neoplastic, ischemic, and inflammatory lesions. However, malformation of the hippocampus in cats with spontaneous epilepsy is rarely reported. Bilateral dentate gyrus malformation in association with hippocampal sclerosis and intraventricular meningioma was reported in a 13-year-old cat that had a history of seizures for 3 years [10]. In the present case, the cat developed refractory epileptic seizures at 2 months of age and had no other complications besides hippocampal sclerosis and dentate gyrus malformation. Hippocampal sclerosis is characterized pathologically by neuronal loss and gliosis, and is further classified according to the predominantly affected area of the hippocampus [3]. In humans, typical lesions show severe neuronal loss in CA1 and moderate loss in other areas excluding CA2 (type 1, classical type). In atypical cases, neuronal loss is restricted to CA1 (type 2) or CA4 (type 3). According to the International League Against Epilepsy (ILAE), immunohistochemistry for NeuN and GFAP is recommended for classifying the hippocampal pathology of human mesial temporal lobe epilepsy [3]. The International Veterinary Epilepsy Task Force (IVETF) has also recommended the use of NeuN and GFAP for pathological evaluation in animals, although information on NeuN immunohistochemistry in actual animal cases of epilepsy is limited [11]. In the present case, immunohistochemistry for NeuN and GFAP confirmed severe neuronal loss and mild gliosis in CA1, comparable to type 1 hippocampal sclerosis in humans. Pyramidal cell loss was moderate in CA3 and mild in CA2 and CA4. In a retrospective study of cats with a documented history of recurrent seizures and/or status epilepticus, postmortem examination revealed that CA3 was most frequently affected followed by CA4, while CA1 was spared from neuronal loss and gliosis; type 1 hippocampal sclerosis is therefore considered rare in cats [16]. In addition, binucleated pyramidal cells were observed in the present case, which had not been reported in previous cases of temporal lobe epilepsy with hippocampal sclerosis. Recent studies have suggested that multinucleated neurons can be formed by cell-cell fusion during development, injury and repair [7].
Granule cell pathology (GCP) in human mesial temporal lobe epilepsy has been classified into two types [1]: type 1, substantial granule cell loss; type 2, architectural abnormalities including granule cell dispersion, ectopic neurons or clusters of neurons in the molecular layer, or bi-lamination. The granule cell pathology of the present feline case was comparable to type 2 GCP. In humans, granule cell dispersion in infants with sudden unexplained death has been associated with a developmental vulnerability that leads to autonomic/respiratory instability or autonomic seizures, and sleep-related death [8]. Also, mossy fiber sprouting has been associated with reparative and/or mal-adaptive events in the pathogenesis of hippocampal sclerosis. However, evaluation of mossy fiber sprouting is difficult to reproduce between laboratories, and thus the current classification of hippocampal sclerosis relies on the patterns of neuronal loss and gliosis assessed by immunohistochemistry for NeuN and GFAP, respectively [15]. In the present case, the dentate gyrus granule cells were enlarged and dispersed. Enlarged neurons are commonly seen in cortical dysplasia, the most common pathology in pediatric epilepsy patients, and are considered to be deranged radial glia that have failed to degenerate following cortical maturation [2]. Enlarged granule cells with dispersion have also been reported in a subset of human mesial temporal lobe epilepsy patients with hippocampal sclerosis [14]. Studies on seizure models in rats and on human patients with epilepsy have shown increased proliferation of dentate granule cells [12, 15]. The granule cell enlargement and dispersion in the present case may be related to dysplasia and increased proliferation of neurons in the dentate gyrus.
No significant finding was noted on MRI at 6 months of age in the present case. A study of hippocampal MRI in epileptic cats revealed that cats presenting with epileptic seizures with orofacial involvement are more likely to show changes on MRI than epileptic cats without orofacial involvement; however, mild abnormalities were difficult to detect on MRI [5]. Although the etiology of spontaneous cases of feline hippocampal sclerosis is yet to be elucidated, autoimmunity, ischemia, and toxins have been suggested as primary causes. Experimental studies have shown that excessive glutamate release induces hippocampal sclerosis in cats [9]. However, it remains unclear whether the neuronal degeneration and necrosis are the cause or the result of seizure activity, or possibly both, in spontaneous cases [6]. The present case suggests that malformation of the dentate gyrus may be associated with the development of feline hippocampal sclerosis, similar to human mesial temporal lobe epilepsy.
CONFLICTS OF INTEREST. The authors declare no conflicts of interest.
ACKNOWLEDGMENT. This study was partially supported by Grant-in-Aid for Scientific Research (18H02338 and 17H01507). | 2022-03-26T15:09:00.405Z | 2022-03-25T00:00:00.000 | {
"year": 2022,
"sha1": "df02f514019a61e10e3e705afd33f066d725c050",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jvms/advpub/0/advpub_22-0006/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4bbc3244a07f9c313f612804f56abe8c787055e6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
198962878 | pes2o/s2orc | v3-fos-license | Comparison of hepatitis B, hepatitis C, and HIV seropositivity of Syrian and Turkish pregnant women
Objective: In this study, we aimed to compare the seroprevalence of hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV) in Syrian pregnant women and Turkish pregnant women in our hospital. Materials and Methods: A total of 68,169 Turkish pregnant women and 11,015 Syrian pregnant women who underwent HB surface antigen (HBsAg), HB surface antibody (anti-HBs), HCV antibody (anti-HCV), and anti-HIV testing between January 2012 and January 2018 at the University of Health Sciences Kanuni Sultan Süleyman Training and Research Hospital, Obstetrics and Gynecology Clinic, were examined retrospectively. Results: The rates of HBsAg, anti-HCV, and anti-HIV seropositivity between 2012 and 2018 were 1.1%, 0.1%, and 0.03%, respectively, in Syrian pregnant women, and 1.8%, 0.2%, and 0.08%, respectively, in Turkish pregnant women. Conclusion: Although there were no significant differences in the HBsAg, anti-HCV, and anti-HIV results of the two groups, anti-HBs positivity was significantly higher in Turkish pregnant women. The significantly higher anti-HBs positivity in Turkish pregnant women may stem from the fact that women in our country are vaccinated and followed up regularly under national policies.
Introduction
Infection is one of the most important factors increasing perinatal morbidity and mortality. Studies have shown that infections present during the gestational period can infect the fetus by crossing the placenta and increase fetal mortality and morbidity (1). Since 2011, because of the civil war, about 2.5 million Syrian people have been forced to leave their country and live in refugee camps in neighboring countries. Syrians have been granted temporary protection by Turkey and constitute the largest group of asylum seekers in our country. Approximately 2.7 million Syrian refugees are already settled in Turkey, which hosts the world's largest population of Syrian refugees (2). Refugees may face housing, food, medical accessibility, and language barriers when they arrive in temporary or new host countries. The Turkish Government has provided free healthcare for Syrian refugees, and access to health services has been expanded. Pregnancy and birth rates are high among Syrian refugees in our country (3). Limited communication also affects healthcare workers and creates difficulties in delivering health services. For these reasons, adequate measures against infectious diseases cannot always be taken, and the mother, fetus, and health workers are at risk. The failure of Syrian pregnant women to join Turkish screening and vaccination programs, and the fact that most Syrian pregnant women are first seen by physicians at delivery, is a common problem. This study was conducted to compare the hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV) seropositivity of Turkish pregnant women with that of Syrian migrant pregnant women who gave birth in our hospital.
Materials and Methods
Our study was performed retrospectively after approval was obtained from the Local Ethics Committee of the University of Health Sciences Kanuni Sultan Suleyman Training and Research Hospital (approval number: 2018.10.36). A total of 11,015 Syrian pregnant women and 68,169 Turkish pregnant women who presented due to pregnancy and gave birth at the University of Health Sciences Kanuni Sultan Süleyman Training and Research Hospital, Clinic of Obstetrics and Gynecology, İstanbul, between 2012 and 2018 were included in the study. Patients' files were reviewed retrospectively, and their ages and ethnicity (Syrian refugee or Turkish) were recorded. Venous blood samples from all patients were tested for HBsAg, anti-HBs, anti-HCV and anti-HIV using the micro-ELISA method. Suspected positive anti-HIV sera were confirmed using the western blot method.
Statistical Analysis
The Statistical Package for the Social Sciences (SPSS, Chicago, IL, USA) was used to evaluate all collected data. Continuous variables with normal distribution were reported as means. P values less than 0.05 were considered statistically significant.
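As a rough illustration of how group comparisons of seropositivity rates such as those reported below could be reproduced outside SPSS, the sketch here runs a chi-square test of independence on a 2×2 contingency table (group × serostatus). The counts and the choice of test are assumptions for illustration only; they are not taken from the study data and do not reproduce the original SPSS analysis.

```python
# Illustrative sketch only: comparing seropositivity between two groups with a
# chi-square test on a 2x2 table. The counts below are hypothetical.
from scipy.stats import chi2_contingency

#              positive  negative
table = [
    [120, 6_880],   # group A (hypothetical counts)
    [15, 1_085],    # group B (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference in seropositivity is statistically significant at alpha = 0.05")
else:
    print("no statistically significant difference at alpha = 0.05")
```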
Results
In the study, 11,015 Syrian immigrant pregnant women and 68,169 Turkish pregnant women were compared in terms of serology. The serology results of the study and control groups are given in Table 1. A total of 68,169 Turkish and 11,015 Syrian women were tested for HBsAg, 67,760 Turkish and 11,004 Syrian women for anti-HCV, 67,871 Turkish and 11,015 Syrian women for anti-HIV, and 7,130 Turkish and 180 Syrian women for anti-HBs. The average age of the Turkish women (28±6 years) was significantly higher than that of the Syrian migrant women (25±6.02 years) (p<0.001). Anti-HCV was positive in 0.2% of the 67,760 Turkish pregnant women and 0.1% of the 11,004 Syrian pregnant women; the difference was not statistically significant. Anti-HIV was reactive in 57 of the 68,169 Turkish pregnant women; 12 of these were confirmed as negative in our records, and confirmation results could not be obtained for the remaining patients. Anti-HIV was reactive in 4 of the 11,015 Syrian pregnant women, and all four were found to be negative on confirmation. HBsAg was tested in all pregnant women who gave birth in our hospital: HBsAg positivity was found in 1.8% of the 68,169 Turkish pregnant women and in 1.1% of the 11,015 Syrian pregnant women. Although fewer patients were tested for anti-HBs, 26.3% of the 7,130 Turkish pregnant women tested were positive, whereas only 180 Syrian patients were tested and 11% of them were positive; anti-HBs positivity was significantly higher in Turkish patients. Syrian pregnant women were further divided into two groups, those under and those over 35 years of age. When the serologic results were compared between these two groups, both anti-HCV and HBsAg positivity were significantly higher in patients over 35 years of age (p<0.001 and p=0.002, respectively). There was no statistically significant difference in anti-HIV or anti-HBs positivity between the two age groups.
The serologic results of these groups are given in Table 2.
Discussion
Since 2011, Syrians have migrated to many countries due to the civil war between rebels and government forces. The majority of the refugees have chosen to seek refuge in Turkey. According to April 2016 United Nations Refugee Agency data, there were 2,749,140 registered Syrian refugees in Turkey (2). The increasing number of uncontrolled and unregistered Syrian refugees causes many social and health problems. It is therefore important to know the hepatitis and HIV prevalence in both the refugee and the local community.
HBV has no teratogenic effect, causes no congenital anomalies, and does not cross the placenta (4). Mother-to-infant transmission may occur through contact with maternal fluids during or after childbirth, in the vagina during vaginal delivery, by swallowing maternal blood, or after a placental injury (5). HBsAg has been detected in breast milk and can theoretically be considered an infectious agent in breast milk; however, infectivity is decreased by 85-95% with the newborn vaccination program that is routine in our country (6). HBV infection is an important health problem in our country, as in other developing countries. In recent years, screening of donors for hepatitis infection, progress in the sterilization of instruments used in healthcare, and the increased use of disposable materials have reduced transmission to a certain extent. In addition, HBV vaccination of newborns (the 0-year age group) has been practiced routinely in our country since 1998. Given that the HBV vaccine and immunoglobulin administered to newborns significantly prevent vertical transmission, screening pregnant women for HBsAg is clearly important. HCV infection may also be transmitted vertically to the newborn; babies of mothers with high viral loads are at greater risk, although this rate is lower than for HBV (7). Most childhood HIV infections are acquired in the perinatal period, with transmission rates of 13-43%. It is known that infants of pregnant women who are known to be HIV-positive and are treated with zidovudine during pregnancy, followed by 6 weeks of postpartum zidovudine treatment, have a 25% reduction in the risk of mother-to-infant transmission (8).
There are many studies in the literature on the maternal outcomes of refugees from different ethnic groups, but few studies compare HBV, HCV, and HIV infections in the Syrian population in Turkey. İnci et al. (9) investigated vaccination rates and HBsAg positivity before and after vaccination in 4,186 pregnant women, including 2,158 Syrian and 2,028 Turkish women. HBsAg positivity was 1.4% among all pregnancies, 1.8% among Turkish women and 1.1% among Syrian women.
Çift et al. (10) studied 597 women who presented for pregnancy and delivery, comprising 297 Syrian refugees and 300 Turkish women, and found that the anti-HBs immunoglobulin G (IgG) positivity rate was statistically higher in Turkish women (13.9%) than in Syrian women (8.5%). In the same study, the rates of HBsAg and anti-HCV were 0.3% in Syrian pregnant women, and 0.8% and 0%, respectively, in Turkish pregnant women. In our study, HBsAg and anti-HCV positivity were 1.8% and 0.2% in Turkish women and 1.1% and 0.1% in Syrian pregnant women, respectively. The difference between that study and ours is that anti-HBs IgG and IgM data were not available for our patients. In a study by Madendag et al. (11) that investigated only Turkish pregnant women, HBsAg positivity was detected in 1,910 of 90,351 pregnant women (2.11%), anti-HCV positivity in 102 of 60,729 (0.17%), and anti-HIV positivity in 3 of 60,562 (0.004%). In a study by Coppola et al. (12) of HBV infection in 1,212 immigrants in Italy, HBsAg positivity was found in 116 patients (9.6%), and 606 (50%) patients were negative for both HBsAg and anti-HBs antibodies. The authors reported that immigrants should undergo HBV vaccination because, even after an average of 4.5 years of living in Italy, their serology had not been assessed and no vaccination or immunization had been provided. Considering the number of Syrian pregnancies in our country and the fact that this group of patients was not previously included in the screening program, HBV, HCV, and HIV screening is very important both for newborn health and for the safety of our healthcare workers. Vaccination against hepatitis B is possible, and immunity must be ensured both for Syrians and for people born before 1998 who are not covered by the vaccination schedule. No vaccine or prophylactic treatment has been developed for HCV or HIV; therefore, all pregnant women, especially Syrian pregnant women, should be routinely screened, and materials such as gloves, masks, and goggles should be used during examinations or surgery to reduce the risk of infection (10). One review also reported maternal and infant infections (including HIV, toxoplasmosis, sexually transmitted infections and rubella seronegativity) to be worse among migrant women in 63.6% of included studies and better in 9.1%, with the remaining studies showing mixed results, and reported that admission to a neonatal intensive care unit (NICU) or special care was higher among offspring of migrant women (13). In the European Union, a number of communicable diseases have been reported to spread in the refugee population, including acute respiratory tract infections, louse-borne relapsing fever, cutaneous diphtheria, scabies, measles, meningococcal meningitis, shigellosis, typhoid fever, hepatitis A (HAV), tuberculosis, and malaria. Across studies, tuberculosis (particularly latent) and HBV are the most commonly reported diseases. A recent study including only Syrian refugees found leishmaniasis, tuberculosis, hepatitis, and vitamin D insufficiency to be the most prevalent health concerns (14). A study from Italy of 529 asylum seekers found 8.3% to be HBsAg positive and 45.6% to be anti-HBV positive (15). Including Syrian pregnant women and their newborns in the national vaccination and screening program, in addition to providing them with treatment, should become a priority health policy in our country.
Based on our findings, hepatitis and HIV screening should be performed as part of medical care, because Syrian refugees migrate from regions where these diseases are prevalent.
Study Limitations
The main limitation of the present study is its retrospective nature, as a result of which the anti-HBs IgG and IgM results of the patients could not be obtained. However, the strength of the present study is its large number of cases, which provided insight into the comparison of seropositivity.
Conclusion
In conclusion, although there were no differences in HBsAg, anti-HCV, or anti-HIV results in our study, anti-HBs positivity was significantly higher in Turkish pregnant women. This may be explained by the regular and controlled vaccination policies in our country. We believe it is important to organize routine screening and awareness programs on the prevalence of HBV, HCV, and HIV among both Turkish pregnant women and refugees, given the increasing Syrian population living in our country. We also believe that planning interventions such as immunization, immunoglobulin, or zidovudine offers both preventive and therapeutic benefit. In this way, it will be possible to reduce postnatal mortality and morbidity due to infection, and healthcare staff who are at risk will also be protected against infectious diseases.
Authorship Contributions
Surgical | 2019-07-31T13:03:58.240Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "72f99757ac72ff2414b244c9c9fcf13cbd55baf4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4274/tjod.galenos.2019.15564",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72f99757ac72ff2414b244c9c9fcf13cbd55baf4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16712471 | pes2o/s2orc | v3-fos-license | Phenotypic Alterations in Hippocampal NPY- and PV-Expressing Interneurons in a Presymptomatic Transgenic Mouse Model of Alzheimer’s Disease
Interneurons, key regulators of hippocampal neuronal network excitability and synchronization, are lost in advanced stages of Alzheimer’s disease (AD). Given that network changes occur at early (presymptomatic) stages, we explored whether alterations of interneurons also occur before amyloid-beta (Aβ) accumulation. Numbers of neuropeptide Y (NPY) and parvalbumin (PV) immunoreactive (IR) cells were decreased in the hippocampus of 1 month-old TgCRND8 mouse AD model in a sub-regionally specific manner. The most prominent change observed was a decrease in the number of PV-IR cells that selectively affected CA1/2 and subiculum, with the pyramidal layer (PY) of CA1/2 accounting almost entirely for the reduction in number of hippocampal PV-IR cells. As PV neurons were decreased selectively in CA1/2 and subiculum, and given that they are critically involved in the control of hippocampal theta oscillations, we then assessed intrinsic theta oscillations in these regions after a 4-aminopyridine (4AP) challenge. This revealed increased theta power and population bursts in TgCRND8 mice compared to non-transgenic (nTg) controls, suggesting a hyperexcitability network state. Taken together, our results identify for the first time AD-related alterations in hippocampal interneuron function as early as at 1 month of age. These early functional alterations occurring before amyloid deposition may contribute to cognitive dysfunction in AD.
INTRODUCTION
Alzheimer's disease (AD) is an age-related neurodegenerative disorder characterized by progressive loss of cognitive and executive functions. AD develops over decades, and although overt stages are commonly studied, little is known about the mechanisms occurring in the earliest stages (Prince et al., 2014). Understanding these upstream mechanisms is crucial for identifying early diagnostic biomarkers, as well as therapeutic targets that could help modify disease progression more efficiently.
The main histological hallmarks of AD are deposition of amyloid beta (Aβ) peptides into extracellular amyloid plaques and intracellular accumulation of hyperphosphorylated tau protein (neurofibrillary tangles) (Hardy and Selkoe, 2002; Hardy, 2006). The prevailing theory regarding the cause of AD is the amyloid cascade hypothesis, which posits that overproduction of Aβ from amyloid precursor protein (APP) initiates a series of events, including synaptic dysfunction, microglial and astrocytic activation and hyperphosphorylation of tau, which culminates in widespread neuronal death and neurodegeneration (Hardy and Selkoe, 2002). In addition to the production of Aβ, processing of APP along the amyloidogenic pathway liberates beta C-terminal fragment (βCTF), the rate-limiting precursor to Aβ (Chow et al., 2010). βCTF is also known to have neurotoxic properties independent of the Aβ region (Lee et al., 2000) and was associated with cortical atrophy when cells expressing βCTF were transplanted in the brains of newborn mice (Neve et al., 1992). Furthermore, transgenic mice overexpressing βCTF also display extensive hippocampal neuronal degeneration (Oster-Granite et al., 1996), suggesting that Aβ may not be solely responsible for AD pathology. Because βCTF accumulates prior to Aβ and because its deleterious effects such as neurodegeneration and synaptic abnormalities (Oster-Granite et al., 1996) resemble those associated with Aβ, AD-related pathological alterations may arise even earlier (i.e., before Aβ accumulation) than initially thought.
It has been recently proposed that neurodegeneration may stem from neuronal hyperexcitability. The regulatory mechanisms controlling the excitability state of hippocampal neuronal networks and preventing a transition towards a state of excitability (i.e., a network hyperexcitability state) have been examined in the hippocampus, yet the underlying molecular and biochemical mechanisms remain unknown. This functional regulation operates under physiological conditions but is lost under pathological conditions (Niedringhaus et al., 2015). Hyperexcitability, as manifested by epileptiform activity, is observed at the earliest stages of AD in both humans and mice (Palop et al., 2007;Gleichmann et al., 2011). Based on this evidence, it has been hypothesized that neuronal death is a consequence, rather than cause, of epileptic seizures (Palop et al., 2007). Further, the Aβ-mediated increase in hippocampal network excitability is associated with a widespread increase in excitatory activity, with subsequent negative impact on learning and memory (Palop et al., 2007;Palop and Mucke, 2009). A similar increase in excitability was detected after in vivo treatment of 1.5-2 month-old TgCRND8 mice with pentylenetetrazole, a drug that inhibits type-A γ-aminobutyric acid (GABA) receptors (Del Vecchio et al., 2004). This may be due to the AD-related decrease in parvalbumin (PV) neuron activation, which leads to hyperexcitability (Verret et al., 2012). More recently, using the same AD model, we have reported that subtle alterations in synchronization of intrinsic hippocampal gamma and theta oscillations are detected as early as 1 month of age in TgCRND8 mice (Goutagny et al., 2013). Given that hippocampal network activity is coordinated by GABAergic neurons (Cobb et al., 1995;Lawrence and McBain, 2003;Mann and Paulsen, 2007;Amilhon et al., 2015), GABAergic neuronal dysfunction may lead to network over-excitation, and thus underlie an increased susceptibility to seizures. Similarly, soluble Aβ can disrupt excitatory-inhibitory balance, a specific AD-linked (apoE4) genotype has been associated with impairment of GABAergic interneurons, and GABAergic activation-induced hyperpolarization prevents Aβ-related toxicity (Huang and Mucke, 2012;Paula-Lima et al., 2013;Nava-Mesa et al., 2014). However, the dysfunction of GABAergic interneurons in the earliest AD-associated state of hippocampal excitability and network synchronization has not been assessed.
Here, we investigated putative GABAergic interneuronal impairment in 1 month-old TgCRND8 mice. Aβ pathology progresses during the course of aging, and 6 month-old TgCRND8 mice have high levels of Aβ and severe plaque load in many brain regions, including the hippocampus (Chishti et al., 2001). Although Aβ plaques are undetectable before 3 months of age (Chishti et al., 2001), βCTF, the first cleavage product of APP, is expressed already at 1 month (Goutagny et al., 2013). Impaired performance in cognitive tasks is first detectable at 2 months of age, but only when more sensitive tests such as object recognition tasks are used, as 2 month-old TgCRND8 mice are unimpaired on a Morris water maze task (Francis et al., 2012). We chose to examine TgCRND8 mice at 1 month of age because it corresponds to the AD pathogenesis stage at which alterations in hippocampal neuronal activity may be first detectable, as no difference has been detected between 15 day-old TgCRND8 mice and control littermates (Goutagny et al., 2013). We focused on neuropeptide Y (NPY; Palop et al., 2011) and PV (Verret et al., 2012) hippocampal interneurons, as both subtypes have been shown to be particularly affected in AD. Moreover, NPY and PV interneurons are critically involved in the control of hippocampal excitability (Palop et al., 2007) and network synchronization (Verret et al., 2012; Amilhon et al., 2015), respectively.
Animals
All experiments followed the policies and guidelines of the Canadian Council on Animal Care and the animal care regulations of McGill University. TgCRND8 mice bear the Swedish KM670/671NL and Indiana V717F mutations in the human APP-encoding transgene and overexpress human Aβ by 3-4 months. Male TgCRND8 and non-transgenic (nTg) mice were maintained on an outbred C3H/C57BL6 background and kept on a 12 h light/dark cycle with food and water ad libitum.

βCTF ELISA

βCTF content was quantified from hippocampal homogenates from Tg mice, as preliminary examination indicated that it is undetectable in nTg mice (data not shown). This was expected, as the antibody used in the enzyme-linked immunosorbent assay (ELISA) kit (IBL International, Japan) is directed against the human βCTF protein. Hippocampi from male TgCRND8 mice (n = 8 mice) were dissected on ice, snap frozen and stored at −80°C until proteins were extracted. Hippocampal samples were homogenized in radioimmunoprecipitation assay (RIPA) buffer (100 µl per hippocampus) and left on ice for 30 min. The samples were then centrifuged for 10 min at 1,000 g at 4°C. The supernatants were collected and protein concentrations were determined using a BCA assay kit (Pierce, Rockford, IL, USA). The βCTF content was then assessed in triplicate using an ELISA kit (IBL International, Japan) as per the manufacturer's instructions. Briefly, samples were diluted in enzyme immunoassay (EIA) buffer (1:100) and standards were prepared as directed. One hundred microliters of each sample was loaded into the appropriate wells and incubated overnight at 4°C. The plate was then washed seven times using diluted wash buffer. One hundred microliters of labeled antibody solution was loaded into each well, and the plate was incubated for 1 h at 4°C. The plate was subsequently washed nine times and 100 µl of chromogen was added to each well. After 30 min of incubation in the dark at room temperature, 100 µl of stop solution was added to each well, and the plate was analyzed using a plate reader at 450 nm against the reagent blank.
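For readers unfamiliar with how the plate-reader output becomes the pg βCTF/mg protein values reported later, the sketch below illustrates one possible back-calculation: a standard curve is fitted to the kit standards, the sample OD is interpolated, and the result is corrected for the 1:100 dilution and normalized to the BCA protein concentration. All numeric values and the linear-fit choice are hypothetical assumptions; the kit's recommended curve fit (e.g., a 4-parameter logistic) may differ.

```python
# Hypothetical sketch of converting ELISA OD450 readings into pg betaCTF per mg
# protein. Standard concentrations/ODs, sample OD and protein concentration are
# invented for illustration; they are not values from the study.
import numpy as np

std_conc = np.array([0.0, 0.78, 1.56, 3.12, 6.25, 12.5])  # pg/ml kit standards (hypothetical)
std_od = np.array([0.05, 0.11, 0.20, 0.38, 0.74, 1.45])   # blank-corrected OD450 (hypothetical)

slope, intercept = np.polyfit(std_conc, std_od, 1)         # simple linear standard curve

def od_to_conc(od):
    """Interpolate betaCTF concentration (pg/ml) from blank-corrected OD450."""
    return (od - intercept) / slope

sample_od = 0.085           # mean of triplicate wells (hypothetical)
dilution_factor = 100       # samples were diluted 1:100 in EIA buffer
protein_mg_per_ml = 20.0    # homogenate protein from the BCA assay (hypothetical)

conc_pg_per_ml = od_to_conc(sample_od) * dilution_factor   # pg/ml in the neat homogenate
betactf_per_mg = conc_pg_per_ml / protein_mg_per_ml        # pg betaCTF per mg protein
print(f"{betactf_per_mg:.2f} pg betaCTF / mg protein")
```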
Immunohistochemistry
Additional male mice (n = 5 per experimental group) were anesthetized with pentobarbital and transcardially perfused (PBS followed by 4% PFA). Brains were stored in fixative for 24 h at 4°C, then in a sucrose solution (30% in PBS) for 3 days at 4°C, frozen using dimethylbutane and stored at −80°C. Brains were sliced coronally using a freezing microtome at 40 µm for light microscopy, or using a vibrating microtome at 50 µm for fluorescence microscopy, and free-floating sections were stored in a cryopreservative solution (3:3:4 glycerol:ethylene glycol:PBS) at −20°C in preparation for immunohistochemical (IHC) staining. The serial sectioning fraction for IHC was 1/8. Washes in PBS preceded all steps except primary antibody addition. All steps were performed at room temperature unless otherwise specified.
Cell Quantifications
Immunoreactive (IR) cell somata were counted in each hippocampal region and sub-region (from Bregma −1.06 mm to Bregma −3.88 mm; Franklin and Paxinos, 2007) on a Nikon Eclipse E600 (Kanagawa, Japan) microscope with a 20X objective, by an experimenter blind to group identity. Anatomical regions were determined as per Franklin and Paxinos (2007). CA1 and CA2 were combined (CA1/2) due to the relatively amorphous boundary separating these two regions. To confirm counting accuracy, counts made by an additional experimenter on the same subjects (on a Leica DM 2500 microscope) correlated significantly with the original counts (p = 0.0062), with Pearson r and intraclass correlation values >0.5. The serial section fraction was 1/8. Cell quantifications are shown as numbers of cells per section to correct for the number of sections and for consistency between experiments. Pairwise comparisons between nTg and TgCRND8 animals in hippocampal regions and sub-regions were performed using unpaired t-tests, with Welch's correction applied when required.
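The group comparison and inter-rater check described above can be illustrated with a short SciPy sketch: an unpaired t-test with Welch's correction for the genotype comparison, and a Pearson correlation for agreement between the two experimenters' counts. The count values below are hypothetical and serve only to show the mechanics of the tests; they are not the study's data.

```python
# Hypothetical sketch of the statistics described above.
from scipy.stats import ttest_ind, pearsonr

ntg_counts = [14.2, 13.8, 15.1, 14.7, 13.9]   # IR cells/section, nTg mice (hypothetical)
tg_counts = [11.0, 10.4, 11.8, 12.1, 10.9]    # IR cells/section, TgCRND8 mice (hypothetical)

# Unpaired t-test with Welch's correction (variances not assumed equal)
t_stat, p_val = ttest_ind(ntg_counts, tg_counts, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_val:.4f}")

# Inter-rater agreement between two experimenters counting the same sections
rater1 = [14.2, 13.8, 15.1, 11.0, 10.4]
rater2 = [14.5, 13.2, 15.4, 10.6, 10.9]
r, p = pearsonr(rater1, rater2)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
```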
For GABAergic markers, we were able to count each labeled cell for quantification. However, given the large number of NeuN-labeled cells, it was necessary to use stereological estimates to quantify this marker, as in previous studies (Pham et al., 2003). Although this method relies on estimates rather than absolute cell counts, and is not appropriate for quantification of all hippocampal cell types (Noori and Fornal, 2011), it is an optimal quantification method for this larger cellular population. NeuN-IR quantifications were performed by an experimenter blind to group identity on an Olympus BX51 microscope with a motorized stage, using StereoInvestigator software (MBF Bioscience). The stereological parameters (optimized for the NeuN-IR population) were: sampling grid area 28,440 µm², counting frame 25 µm × 25 µm, disector height 10 µm, guard zone 1 µm. Volume was assessed using a Cavalieri estimator (20 µm grid spacing) corrected for overprojection. Average coefficients of error for the Cavalieri probes were 0.025 for CA1/2 and CA3 and 0.033 for the dentate gyrus (DG), and for the optical fractionator (Gundersen m = 1) were 0.051 for CA1/2, 0.060 for CA3 and 0.054 for DG.
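The optical fractionator estimate produced by StereoInvestigator follows a simple sampling-fraction formula, N ≈ ΣQ⁻ × (1/ssf) × (1/asf) × (1/tsf). The sketch below spells out that arithmetic using the sampling parameters quoted above; the counted-object number and the mounted section thickness used for the thickness sampling fraction are assumptions for illustration, not values from the study.

```python
# Sketch of the optical fractionator arithmetic behind the NeuN estimates.
# counted_cells and mounted section thickness are hypothetical; the sampling
# parameters follow the values quoted in the text.
counted_cells = 350                    # sum of objects counted in all disectors (hypothetical)
ssf = 1 / 8                            # section sampling fraction (1-in-8 series)
asf = (25 * 25) / 28_440               # area sampling fraction: counting frame / sampling grid
tsf = 10 / 40                          # thickness sampling fraction: disector height / thickness (assumed 40 um)

estimated_total = counted_cells * (1 / ssf) * (1 / asf) * (1 / tsf)
print(f"Estimated total NeuN-IR neurons: {estimated_total:,.0f}")
```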
Electrophysiology
Tg and nTg mice (aged 30-35 days; n = 6/group) were decapitated, and the brain was rapidly removed and placed in ice-cold high-sucrose artificial CSF (ACSF) solution (in mM: 252 sucrose, 3 KCl, 2 MgSO4, 24 NaHCO3, 1.25 NaH2PO4, 1.2 CaCl2 and 10 glucose) bubbled with carbogen (95% O2 and 5% CO2). The cerebellum and frontal cortex were removed with a razor blade, and the hemispheres were separated and allowed to recover for 2-3 min in the oxygenated sucrose solution. The complete septo-hippocampal isolate was then removed from the remaining hemisection as described previously (Goutagny et al., 2009). After dissection, the complete septo-hippocampal preparation was left at room temperature in ACSF bubbled with carbogen for 60 min. For recording, the preparation was transferred quickly to a custom-made submerged recording chamber. Recordings were performed at 30-32°C after an additional 30 min period in the chamber. The preparation was continuously perfused with ACSF (25 ml/min; in mM: 126 NaCl, 24 NaHCO3, 10 glucose, 4.5 KCl, 2 MgSO4, 1.25 NaH2PO4 and 2 CaCl2, pH 7.4, with 95% O2/5% CO2) via a gravity-fed perfusion system and maintained at 30-32°C. Local field potentials were recorded using glass micropipettes (2-6 MΩ) filled with ACSF. Signals were recorded through a differential AC amplifier (A-M Systems), filtered online (0.1-500 Hz), and sampled at 5 kHz. All drugs came from aliquots of stock solutions (stored at −80°C) and were added to the perfusing ACSF at the concentrations indicated. Baseline recording lasted 20 min, followed by 100 s of pharmacological stimulation (4AP at 150 µM) and 20 min of recovery after stimulation. Changes in theta power were measured in mV²/Hz. Pairwise comparisons were performed with t-tests. For all experiments, a p value of ≤0.05 was considered statistically significant. Bar graphs show experimental means, with error bars indicating the standard error of the mean.
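As a rough sketch of how theta-band power can be extracted from such recordings, the example below estimates a Welch power spectral density on a synthetic 5 kHz trace and integrates it over an assumed 3-12 Hz theta band. The band limits, segment length, and the synthetic signal are assumptions for illustration and do not reproduce the authors' exact analysis pipeline.

```python
# Sketch of quantifying theta-band power from a local field potential trace,
# assuming a 5 kHz sampling rate as in the recordings described above.
import numpy as np
from scipy.signal import welch

fs = 5000                                    # Hz, sampling rate
t = np.arange(0, 20, 1 / fs)                 # 20 s epoch
lfp = 0.2 * np.sin(2 * np.pi * 4.1 * t) + 0.05 * np.random.randn(t.size)  # mV, synthetic

def theta_band_power(signal, fs, band=(3.0, 12.0)):
    """Integrate the Welch PSD (mV^2/Hz) over the theta band -> power in mV^2."""
    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)   # ~0.25 Hz frequency resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])

baseline_power = theta_band_power(lfp, fs)
# A 4AP epoch would be processed identically and compared with a t-test.
print(f"theta power: {baseline_power:.4f} mV^2")
```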
Immunohistochemical Assessment of Aβ and βCTF in the Hippocampus of 1 Month-Old TgCRND8 Mice
Immunohistochemical staining for three neuronal markers (NeuN, NPY and PV) revealed distributional patterns in CA1/2, CA3, DG and subiculum (Figure 1A) that have been described previously (Albuquerque et al., 2015). Assessment of Aβ and amyloid plaques using the FCA3340 antibody, which specifically recognizes human Aβ but not APP (Barelli et al., 1997), indicated no immunolabeling in hippocampus sections obtained from 1 month-old TgCRND8 mice, in contrast to the abundance of plaques present in the hippocampus of 11 month-old TgCRND8 mice, used as a positive control (Chishti et al., 2001; Figure 1B). However, the CT20 antibody, which recognizes both human and murine full-length APP and its C-terminal fragments (CTF) α, β and the amyloid precursor protein intracellular domain (AICD) but not Aβ, revealed IR cells, notably in the pyramidal cell layer of the CA1/CA2 region (Figure 1B). CT20-IR cells displayed punctate cytoplasmic labeling (Figure 1B). Combined, the labeling data obtained with FCA3340 and CT20 indicated that immunoreactivity likely arises from the presence of APP and its first cleavage products (CTFs) in the absence of Aβ. Since the observed putative CTF-IR could arise from different fragments (αCTF, βCTF or AICD) generated along either the amyloidogenic or non-amyloidogenic pathways, we performed a selective βCTF ELISA to assess whether the CT20-IR could be due specifically to βCTF. The presence of βCTF was confirmed and quantified (1.47 ± 0.22 pg βCTF/mg protein, range 0.80-2.43 pg βCTF/mg protein) by ELISA in the hippocampal tissue of TgCRND8 mice aged 1 month.

FIGURE 1 | Expression of neuronal markers and amyloid-beta precursor protein (APP) cleavage products in the hippocampus of 1 month-old TgCRND8 mice. (A) Micrographs of immunohistochemical labeling of hippocampal neurons for NeuN, neuropeptide Y (NPY) and parvalbumin (PV) in the CA1/2, CA3, dentate gyrus (DG) and subiculum (SUB) hippocampal regions. Scale bar = 50 µm. (B) Amyloid-beta (Aβ) and C-terminal fragment/amyloid precursor protein intracellular domain (CTF/AICD) expression as assessed by the FCA3340 and CT20 antibodies, respectively. Anatomically matched hippocampal sections of the CA1/2 sub-region of 11 month-old TgCRND8 mice were used as a positive control. Scale bar = 100 µm.
Global Phenotypic Analysis of the Composition of Selected Hippocampal Neuronal Populations
Phenotypic analysis along the rostro-caudal axis (−0.94 to −2.86 mm from Bregma) using the neuronal marker NeuN indicated that both total number (Figure 2A) and density ( Figure 2C) of neurons were similar between genotypes in all studied regions of the hippocampus (CA1/2, CA3 and DG). Interestingly, structural volume was significantly decreased for TgCRND8 mice in CA3 (p = 0.041) and DG (p = 0.017) but not CA1/2, as compared to controls ( Figure 2B).
However, when assessing specifically NPY (Figure 3A; p = 0.016) or PV (Figure 3B; p = 0.016) subpopulations, a significant decrease in numbers of IR-cells was found in TgCRND8 mice compared to age-matched nTg littermates.
Regional and Sub-Regional Distribution of NPY-and PV-Expressing Neurons
A more detailed analysis indicated that the overall number of NPY neurons was decreased in hippocampal sub-regions CA1/2 (p = 0.021) and DG (p = 0.0033), but not CA3 or subiculum (Figures 4A-D), of TgCRND8 mice as compared to controls. The most affected layers in CA1/2 were the stratum pyramidale (p = 0.021) and stratum oriens (p = 0.026) (Figure 4E). No significant difference was seen between genotypes in the layers of CA3 (Figure 4F). By contrast, in the DG, the polymorphic layer (PO; p = 0.0031) and granule cell layer (GR; p = 0.0051) showed alterations in TgCRND8 mice, whereas the molecular layer (MO) was unaffected (Figure 4G).
An analogous analysis of PV-expressing neuronal sub-populations showed a significant decrease in the number of these neurons in CA1/2 ( Figure 5A; p = 0.017) and subiculum ( Figure 5D; p = 0.030) whereas CA3 ( Figure 5B) and DG ( Figure 5C) were not significantly affected. Among the layers analyzed in these sub-regions (Figures 5E-G), the number of PV-expressing neurons in TgCRND8 mice was significantly decreased only in the pyramidal cell layer of the CA1/2 region ( Figure 5E; p = 0.0004).
These findings led us to focus further on the PV-expressing sub-population of interneurons.
Assessment of Amyloid in PV-Expressing Neurons
To explore the mechanisms of the observed decrease in the number of PV-expressing interneurons in 1 month-old TgCRND8 mice, we used a double-labeling approach to determine whether, in the absence of Aβ (Figure 1B), PV-expressing neurons may co-express βCTF. Using anti-PV and FCA3340 antibodies, we first confirmed the absence of Aβ in the PV-IR neurons located in the pyramidal layer (PY) of CA1 in 1 month-old TgCRND8 mice ( Figure 6A). By contrast, in the anatomically matched hippocampal region from 11 month-old TgCRND8 mice, which was used as a positive control, cytoplasmic Aβ labeling was revealed in pyramidal neurons.
As our ELISA data demonstrated βCTF expression in the hippocampus in 1 month-old TgCRND8 mice, we next asked whether PV-expressing neurons may also express βCTF. Our double-labeling approach indicated that CTF-IR (as revealed by the CT20 antibody) does not co-localize with PV-IR ( Figure 6B). Interestingly, in 1 month-old TgCRND8 mice, APP and CTFs were apparently localized in the upper portion of the pyramidal cell layer where PV-IR neurons were virtually never detected ( Figure 6B). Indeed, PV-expressing neurons were localized in the lower portion of the PY (Figure 6B).
FIGURE 6 | Analysis of the co-expression of APP cleavage products and PV-IR in CA1 hippocampal neurons of 1 month-old TgCRND8 mice. (A) Micrographs of immunohistochemical labeling of hippocampal neurons for PV and Aβ; anatomically matched sections from 11 month-old mice were used as a positive control. Note that in contrast to the absence of Aβ-IR in 1 month-old TgCRND8 mice, intracytoplasmic Aβ is clearly detectable in the pyramidal layer (PY) of the CA1 region of 11 month-old TgCRND8 mice. (B) Analysis of Aβ/CTF expression in CA1 hippocampal PV neurons in 1 month-old TgCRND8 mice. As Aβ was undetectable when assessed with the selective FCA3340 Aβ antibody, and enzyme-linked immunosorbent assay (ELISA) detected a substantial amount of βCTF, this immunoreactivity likely reveals the expression of βCTF. However, no PV-IR neurons co-expressing βCTF were observed. Scale bar = 10 µm.

Functional Assessment of Hippocampal Network Excitability

Transgenic mouse models of AD at 5 months of age are characterized by high levels of Aβ peptides, potentially leading to network hyperexcitability (Palop et al., 2007). However, our recent data examining Tg mice at 1 month of age did not indicate the presence of Aβ peptide (Goutagny et al., 2013); instead, in the current study we observed elevated βCTF and decreased numbers of hippocampal PV-expressing neurons. Aiming to assess whether these alterations could account for network hyperexcitability (changes in theta amplitude, theta frequency and burst (seizure) events), we utilized 4-aminopyridine (4AP) in a complete septo-hippocampal preparation (Goutagny et al., 2009). The hyperexcitable network state is achieved primarily by blocking Kv1 channels, which consequently induces activity reflecting the firing of GABA-releasing cells and is sustained by GABAA receptor signaling (Avoli and de Curtis, 2011). Our hypothesis was that 4AP would reveal a state of hyperexcitability in juvenile TgCRND8 mice.
Treatment with 4AP did not elicit a visible change in the amplitude of theta activity in nTg mice (Figures 7A,B, red square). Conversely, in Tg mice 4AP treatment affected the amplitude of hippocampal theta activity (Figure 7D, red square). Magnification of the raw activity traces confirmed an increase in theta amplitude in Tg mice during 4AP treatment (Figure 7E). As shown in the spectrogram analysis, theta oscillation frequency remained stable (4.10 ± 1.68 Hz) in nTg mice when treated with 4AP (Figure 7C, red square). In contrast, although not statistically significant (p = 0.29), Tg mice showed changes in frequency during 4AP stimulation (Figure 7F, red square). Statistical analysis further confirmed these findings, as no significant changes in frequency were detected either during (p = 0.29) or after 4AP stimulation (p = 0.53; Figures 7G,H). Concomitantly, statistical analysis confirmed a 7.5 ± 2.44-fold increase in theta power during 4AP stimulation in Tg mice (Figure 7I, p = 0.02). The increase was also significant after 4AP stimulation (5.03 ± 1.29-fold, p = 0.01; Figure 7I). The changes in theta peak power were also significant when Tg mice were compared to nTg mice during 4AP (Figure 7J, p = 0.038) and after stimulation (Figure 7J, p = 0.023).
At this point, the data suggested that the network state in TgCRND8 mice is closer to a hyperexcitability-like state when compared to controls. To further explore the excitability network state, we analyzed the number of burst events present in the nTg ( Figure 7K) and Tg mice ( Figure 7L). Our results showed an elevated number of burst events in Tg mice when treated with 4AP ( Figure 7L), further supporting a hyperexcitability-like network state. In contrast, control mice showed a more stable network state ( Figure 7K). Further statistical analysis confirmed a two-fold increase (2.4 ± 0.19-fold, p = 0.012) in relative burst events in Tg mice when compared to nTg mice (Figures 7M,N).
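One simple way to count burst-like events of the kind described above is to threshold the rectified LFP at a multiple of the baseline standard deviation and count threshold crossings separated by a minimum interval. The threshold multiplier, refractory window, and synthetic trace in the sketch below are assumptions for illustration; the published analysis may use different detection criteria.

```python
# Sketch of counting burst-like events in an LFP trace by threshold crossing.
# The threshold (baseline mean + 4 SD of the rectified signal) and the 200 ms
# refractory window are assumptions for illustration only.
import numpy as np

def count_burst_events(lfp, fs, baseline, n_sd=4.0, min_interval_s=0.2):
    """Count upward threshold crossings of |LFP|, merging events closer than min_interval_s."""
    threshold = np.mean(np.abs(baseline)) + n_sd * np.std(np.abs(baseline))
    above = np.abs(lfp) > threshold
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)   # rising edges

    events, last = [], -np.inf
    for idx in onsets:
        if (idx - last) / fs >= min_interval_s:
            events.append(idx)
            last = idx
    return len(events)

# Example usage with synthetic data (5 kHz, as in the recordings):
fs = 5000
baseline = 0.05 * np.random.randn(20 * fs)                 # 20 s of quiet baseline
trace_4ap = np.concatenate([baseline.copy(), baseline.copy()])
trace_4ap[30 * fs : 30 * fs + 500] += 1.0                   # injected burst-like deflection
print(count_burst_events(trace_4ap, fs, baseline))          # expected: 1
```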
FIGURE 7 | Electrophysiological monitoring of CA1/subiculum neurons in response to 4AP challenge. Raw traces of theta oscillations recorded in the CA1/subiculum area using septo-hippocampal preparations from non-transgenic (nTg; A) and transgenic (Tg) mice (D). In nTg mice, 150 µM 4AP treatment did not change theta amplitude (A, red square). Conversely, in TgCRND8 mice 4AP treatment altered theta amplitude (D, red square). Magnification of raw theta activity before, during and after 4AP treatment in nTg (B) and TgCRND8 mice (E). Power spectrum analysis of theta power and frequency in nTg (C) and TgCRND8 mice (F). Theta frequency did not show statistically significant changes in 4AP-treated TgCRND8 animals (G,H; different colors correspond to individual TgCRND8 samples). TgCRND8 mice show a theta power increase during and after 4AP treatment (I,J; red circles denote averages, black circles denote individual TgCRND8 traces). Under 4AP challenge, an elevated number of burst events was observed in TgCRND8 mice (L) compared to nTg (K). The relative change in burst events is higher in TgCRND8 mice (N; representative trace of burst events after 4AP in L) compared to nTg mice (M; representative trace of burst events after 4AP in K). Burst events were significantly increased for TgCRND8 mice (see inset in N). *p < 0.05.

DISCUSSION

Alterations in hippocampal neuronal excitability (Del Vecchio et al., 2004; Palop and Mucke, 2010; Verret et al., 2012) and synchronization of hippocampal oscillatory activity (Goutagny et al., 2013) occur at the first stages of AD pathogenesis, via as yet poorly understood mechanisms. Here, we found that a significant decrease in the number of hippocampal NPY- and PV-IR cells in 1 month-old TgCRND8 mice coincides with the early impairment of neuronal network activity. This decrease was distributed between hippocampal regions CA1/2 and DG for NPY. The decreased number of PV-IR cells selectively affected CA1/2 and subiculum, with the PY of CA1/2 accounting almost entirely for the reduction in the number of hippocampal PV-IR cells. This decrease in NPY- and PV-IR cells was observed in the absence of change in total numbers of hippocampal neurons (NeuN-IR) in the studied principal layers, suggesting that the decreases in numbers of NPY- and PV-IR cells are not the result of neuronal death occurring in the studied hippocampal layers but may be related to loss of expression of particular GABAergic markers (see "Discussion" Section in Uchida et al., 2014), although a contribution of cell death within specific GABAergic populations cannot be conclusively ruled out by the current data. We observed decreased volume of CA3 and DG, potentially representing changes in neuropil. Moreover, the change in GABAergic cells was observed at a stage wherein βCTF, the first cleavage product of APP and direct precursor of Aβ, was abundant. Our work (Goutagny et al., 2013; present study) is the first reporting on AD-related hippocampal dysfunction at such an early stage of pathogenesis. However, cortical hyperexcitability was recently correlated with βCTF (Xu et al., 2015) in 2-4 month-old Tg2576 mice prior to plaque formation (Duffy et al., 2015), thus further pointing to dysregulation of excitability as a general feature of initial AD stages. As a corollary, currently available AD mouse models should be studied at much earlier ages than have been previously examined, specifically in the context of putative early impairments of GABAergic interneurons. Electrophysiological monitoring of theta oscillations in the distal CA1/subiculum of TgCRND8 mice revealed increased power, without change in frequency of theta oscillation, in comparison to nTg littermates. These findings suggest altered control of hippocampal network excitability state as a potential mechanism of the network dysfunction seen during early stages of AD development at the same age (1 month-old) and in the same strain (TgCRND8) of AD mouse model (Goutagny et al., 2013). Furthermore, although 4AP triggered burst activity in neuronal populations of both genotypes, the relative change in burst events was significantly higher in TgCRND8 mice than in controls, suggesting an AD-associated hyperexcitability. Therefore, treatment with 4AP appeared to induce a prodromal network hyperexcitability-like state in the studied 1 month-old TgCRND8 mice, as previously seen in older mice of the same strain (Jolas et al., 2002; Del Vecchio et al., 2004).
Given the recently demonstrated pivotal role of PV-expressing GABAergic neurons in the generation of theta oscillations in the isolated hippocampus (Amilhon et al., 2015), these data suggest that the loss of PV expression in GABAergic neurons may be causal to their functional impairment. Thus, the decrease in the number of functional PV neurons in the absence of their death (as attested by the absence of a difference in the number of hippocampal NeuN-IR neurons between genotypes) may contribute to the loss of inhibitory tone and the subsequent increased excitability of pyramidal cells. As PV neurons are key for the synchronization of pyramidal cells, this loss of PV expression is likely to have a critical impact on CA1 network activity. Furthermore, the observed AD-related hyperexcitability may in turn be related to the previously reported uncoupling of theta/gamma oscillations (Goutagny et al., 2013). In addition to effects on excitability, the loss of PV neurons may also influence hippocampal plasticity. Indeed, in a recent study LTP was enhanced in rat hippocampal slices after application of an α7-nicotinic acetylcholine receptor agonist, in part through an enhancement of a GABAergic receptor subtype (Townsend et al., 2016). The effect of phenotypic changes in GABAergic neurons on synaptic plasticity should also be explored, especially given the implication of PV cells in AD (Verret et al., 2012). Notably, synaptic hyperexcitability in CA1, which has been reported in 5 month-old TgCRND8 mice (Jolas et al., 2002), is present much earlier, as shown here at the age of 1 month, before Aβ overproduction. At this time point βCTF is detectable, and given that this protein has previously been associated with impairments in synaptic function (Nalbantoglu et al., 1997; Tamayev et al., 2012), its presence raises the possibility that this neurotoxic protein may be affecting GABAergic subpopulations. In this light, the decreased numbers of NPY-IR neurons in the stratum oriens (SO) and pyramidale of the CA1 region, combined with the significant loss of PV-IR in the PY of CA1, would likely have major consequences for the control of pyramidal cell excitability. Indeed, cells in the SO project to distal dendrites of CA1 pyramidal cells and provide dendritic inhibition (Maccaferri and Lacaille, 2003). It has to be stressed, however, that because of the very young age (1 month) at which we assessed the putative changes in the hippocampus of TgCRND8 mice, the possibility remains that the observed decreased numbers of PV neurons may be related to incomplete maturation of the GABAergic neurons rather than to an AD-related decrease. This possibility is nevertheless very unlikely, as our previous study demonstrated that at the age of 2 weeks there is no difference in hippocampal network activity between TgCRND8 mice and their age-matched nTg controls (Goutagny et al., 2013).
A previous phenotypic analysis of hippocampal GABAergic neuronal populations at the overt stages of AD pointed to selective loss in number or function of specific sub-types, notably neurons expressing NPY (Ramos et al., 2006;Albuquerque et al., 2015), and PV (Verret et al., 2012;Albuquerque et al., 2015). As with the aforementioned CA1 synaptic hyperexcitability, our results indicate that the change in the composition of certain GABAergic sub-populations may occur much earlier than previously believed. This is particularly striking for NPY-expressing GABAergic neurons that, by the age of 1 month (in the present study), have already decreased substantially in number, similar to 6 month-old TgCRND8 mice (Albuquerque et al., 2015) in all studied hippocampal regions. By contrast, the alteration of PV-expressing neurons appears more complex because the decreased expression of PV found in CA1 and subiculum at 1 month of age is not detectable in 6 month-old animals (Albuquerque et al., 2015). Additionally, in another AD mouse model (APP/PS1), a significant loss of PV neurons was shown in CA1/2 region of 10 month-old mice (Takahashi et al., 2010). A putative loss of PV neurons in older TgCRND8 mice cannot be completely excluded as these mice have not been studied at ages older than 6 months (Albuquerque et al., 2015).
Collectively, our data indicate that before the loss of GABAergic neurons in AD mouse models (Krantic et al., 2012;Albuquerque et al., 2015), GABAergic neurons undergo substantial alterations that potentially increase seizure susceptibility (Palop et al., 2007). These include changes in neurochemical phenotype and composition of GABAergic subpopulations, compatible with the increased excitability revealed here with 4AP. Similarly, the shift from hyperpolarizing towards depolarizing actions (Lagostena et al., 2010) has been proposed to aggravate increased excitability resulting from loss of GABAergic neurons. Moreover, in the J20 line of hAPP FAD mice, PV neurons displayed greater depolarization of the resting membrane potential and reduced action potential amplitude (Verret et al., 2012). However, other alterations, such as increased numbers of NPY-expressing cells and GABAergic sprouting in the DG, may counteract the AD-related hyperexcitability (Palop et al., 2007) through compensatory remodeling of GABAergic neuronal population composition.
Finally, our data show that the decrease in PV-IR and NPY-IR cells is present at 1 month, whereas the number of NeuN-IR cells is unaltered, suggesting that phenotypic alterations of GABAergic neurons occur without neuronal loss at this early AD pathogenic stage. Functional changes manifesting as decreased PV and NPY expression, and increased synaptic excitability in CA1, all occur well before GABAergic cell death, which is not detectable before 6 months in TgCRND8 (Krantic et al., 2012;Albuquerque et al., 2015) and AβPPdE9 (Ramos et al., 2006) mice. This represents additional evidence supporting the hypothesis that hyperexcitability and increased seizure susceptibility are the cause rather than the consequence of AD-related neuronal death (Palop et al., 2007). Moreover, our study suggests that at early stages of AD-related pathology, GABAergic neurons downregulate at least some of their neurochemical markers (PV, NPY) but remain present, as no neuronal loss was detected by stereological quantification of NeuN-IR cells. A similar phenomenon has been observed in schizophrenia, both in animal models (Nullmeier et al., 2011) and patients (Akbarian et al., 1995;Impagnatiello et al., 1998;Guidotti et al., 2000;Hashimoto et al., 2003;Lewis et al., 2005).
In conclusion, this study reveals early alterations in hippocampal neuronal phenotypes that are associated with a functional increase in oscillatory activity and precede Aβ accumulation. These findings suggest that AD etiology may involve hippocampal GABAergic alterations occurring before appearance of symptoms and plaque formation.
ACKNOWLEDGMENTS
This research was supported by Canadian Institutes of Health Research (CIHR) operating grants to RQ and SW, a Fondation pour la Recherche Médicale (FRM DVS20131228910) grant to SK, and Service de Coopération et d'Action Culturelle du Consulat Général de France à Québec (Coopération France-Québec) to SK and SW. NM is a Fonds de Recherche du Québec -Santé (FRQ-S) scholar and CIHR New Investigator. IM was supported by FRQ-S. SM-R was supported by FRQ-S, as well as CONACYT-269021 and was awarded a Catedra support from CONACYT, México. The authors thank all members of the Quirion lab for fruitful discussions and constructive suggestions.
Conflict of Interest Statement:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Copyright © 2017 Mahar, Albuquerque, Mondragon-Rodriguez, Cavanagh, Davoli, Chabot, Williams, Mechawar, Quirion and Krantic. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution and reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | 2017-05-04T06:46:18.058Z | 2017-01-19T00:00:00.000 | {
"year": 2016,
"sha1": "5c7cf0a4ad3a0ffafaf26941dfaae704119abd0d",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2016.00327/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "088a46ef110ca501c9586c8e58568933b5d852c2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
81682427 | pes2o/s2orc | v3-fos-license | Epidemiological Status of Bovine Theileriosis in Uttar Pradesh a state of India
Tropical theileriosis has long been recognized as a hindrance to the development of a sound dairy industry in India and is a cause of major economic losses. Serological surveys indicate that Theileria annulata infection is widespread in the country, but the disease mostly affects exotic dairy breeds and their crosses with indigenous breeds. Theileria infection has also been reported in the tick vector Hyalomma anatolicum collected from healthy animals in the sub-humid regions of India. Tropical theileriosis, caused by Theileria annulata, is an economically important vector-borne haemoprotozoan disease of livestock that can result in the death of affected animals. The disease is endemic in warmer regions; it is seasonal, and the incidence is higher during the summer and rainy seasons when tick activity is higher, although sporadic outbreaks have been recorded year round. It is a potential killer of livestock and causes economic losses in terms of mortality, morbidity, abortion, infertility, reduced milk yield, etc. The disease is underestimated in cattle due to its subclinical nature, and the conventional parasitological techniques are not sensitive enough: more than 80 percent of infections are cryptic and undetectable by direct microscopy. This paper briefly discusses the occurrence, control and economic importance of Theileria spp. in the Western Himalayan region of Uttar Pradesh.
Tick-borne diseases (TBDs) affect livestock, mainly cattle and sheep, in tropical and subtropical regions. Tropical theileriosis is a TBD caused by a protozoon called Theileria annulata and transmitted by several tick species of the genus Hyalomma [1]. Theileriosis and babesiosis are the most important and dangerous blood protozoan diseases of cattle; they are transmitted by ticks, especially in countries with intensive animal industries [2]. Tropical theileriosis is a frequently fatal disease of cattle caused by the protozoan parasite Theileria annulata.
The temperature of the Uttar Pradesh region is favorable for ticks and thus for the transmission of theileriosis in cattle. Uttar Pradesh is situated in the northern part of India and shares an international boundary with Nepal. The Himalayas border the state on the north, but the plains that cover most of the state are distinctly different from those high mountains. Cattle are an important species in these areas because of their dual purpose: they are reared to supply draught power for agriculture and for milk production. The presence of the disease is partly due to the introduction of new cattle from the surrounding states where it is prevalent. Uttar Pradesh has a humid subtropical climate and experiences four seasons. The Gangetic plain varies from semi-arid to sub-humid. The mean annual rainfall ranges from 650 mm in the southwest corner of the state to 1000 mm in the eastern and southeastern parts. The winter in January and February is followed by summer between March and May and the monsoon season between June and September. Summers are extreme, with temperatures fluctuating anywhere between 0°C and 50°C in parts of the state, coupled with dry hot winds called the Loo.
The present paper attempts to present a scenario of theileriosis: its occurrence in Uttar Pradesh, the reasons for its occurrence, its effects and preventive measures, so that the dairy venture becomes more profitable and the mortality rate in crossbred cattle due to theileriosis is reduced.
Theileriosis
The Theileria parasite was first reported by Arnold Theiler, and Dschunkowsky first described the disease theileriosis in 1904. Theileriosis is caused by Theileria annulata and transmitted through the bites of Hyalomma and Rhipicephalus ticks, with a higher incidence in crossbred cows of all age groups, in line with the general epidemiology of the disease in tropical areas [3]. The disease is seasonal: it starts in the second part of April and increases in abundance in June and July. Cases of theileriosis are generally observed during the summer or rainy season when tick activity is higher, although sporadic outbreaks have been recorded year round [4]. Tropical theileriosis caused by T. annulata may result in 80% mortality in susceptible animals.
Acute clinical cases of theileriosis were first recorded in hill bulls on 12 June 1922. In 1930, outbreaks of clinical theileriosis were recorded in imported herds maintained at Lahore, Bangalore, Allahabad and Kirkee. Since then, occasional outbreaks of theileriosis have been recorded, mainly in crossbred and exotic cattle.
Treated cattle turn out to be long-standing carriers, with only a small number of infected erythrocytes, thus making it difficult to demonstrate the parasites in blood smears. In long-standing carrier animals, blood smears are negative on microscopy.
Pathogenesis
The life cycle of T. annulata includes the following stages:
Sporozoite Stage:
When infected adult ticks attach to cattle, the sporozoites develop in the tick salivary gland and are injected with the tick saliva. The sporozoites invade the lymphoid cells and schizonts are detected in 10-13 days. This is the prepatent period of the disease.
Schizont Stage:
The schizonts parasitize lymphocytes, proliferate and invade and damage the lymphoid system and produce lesions in the skin, liver and spleen.
Piroplasm Stage:
The piroplasm parasitizes the erythrocytes and causes destruction of these cells with a decrease in the erythrocyte count and haemoglobin level.
Symptoms: Clinically, a rise of body temperature up to 107°F and enlarged superficial lymph nodes, accompanied by dullness, anorexia, salivation, lacrimation, nasal discharge, tachycardia and decreased milk production, are the symptoms of theileriosis (El-Deeb Younis, 2009).
Microscopic Examination
Theileria annulata infection in cattle is usually based on the detection of macroschizonts in Giemsa's-stained lymph node biopsy smears in live animals and impression smears of lymph node and spleen in dead animals [5].
Serological Examination
Serological tests such as the indirect immunofluorescent antibody test (IFAT) can be used to detect circulating antibodies [7,8]. However, cross-reactivity with antibodies directed against other Theileria species limits the specificity of the IFAT [6].
DNA Based Examination (PCR & LAMP)
PCR: Molecular diagnosis of haemoprotozoan diseases involves several PCR-based procedures, which help in the identification of the parasites up to the species or even strain level [9-12]. With the availability of sequenced parasite genes and PCR, it is possible to detect parasites within blood samples. The PCR-based technique requires only small amounts of material, which is very relevant because large amounts of material cannot be obtained from the different stages of the parasitic life cycle [13]. This technique shows high sensitivity compared to immunological and serological testing. Furthermore, the advent of the polymerase chain reaction (PCR) technique has made it possible to increase the sensitivity of nucleic acid hybridization techniques, through amplification of target DNA sequences of the parasites in the test material by in vitro synthesis of these sequences prior to hybridization with the diagnostic probe. Despite the benefits of PCR-based technologies, such as high specificity and sensitivity for detecting some parasites, the main disadvantage of these methods is that they are very time consuming and do not provide quantitative data [14].
Loop Mediated Isothermal Amplification (LAMP):
It is a sensitive, specific and less time-consuming method [15]. It is characterized by the use of a DNA polymerase with low sensitivity to inhibitors and a set of four primers that recognize six different sequences on the target gene [16]. It can amplify the target to 10⁹ copies within an hour [17]. It is an isothermal technique that requires only a water bath. It has been used for Babesia and Theileria [16,18-20], and it can be used without DNA extraction [21].
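As a rough, idealized illustration of the amplification figure quoted above, the snippet below computes how many doublings an exponential amplification would need to reach 10⁹ copies from a single template and the per-doubling time that implies over a one-hour run. The single starting copy and the 60-minute run time are assumptions made purely for this back-of-the-envelope calculation; LAMP kinetics are not strictly a per-cycle doubling.

```python
# Illustrative arithmetic only: idealized exponential amplification.
# Assumptions: one starting template copy and a 60-minute isothermal run.
import math

target_copies = 1e9
start_copies = 1          # assumed single template
run_minutes = 60          # assumed one-hour run

doublings = math.log2(target_copies / start_copies)   # ~29.9 doublings
doubling_time = run_minutes / doublings               # minutes per doubling

print(f"doublings needed: {doublings:.1f}")
print(f"implied doubling time: {doubling_time:.1f} min per doubling")
```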
Treatment
There are three effective drugs available for the treatment of theileriosis, namely parvaquone, buparvaquone and halofuginone lactate, which are used worldwide [22]. Research on the efficacy of these drugs has shown that buparvaquone, a second-generation hydroxynaphthoquinone, is the most effective so far. Early treatment with buparvaquone was 100% effective in eliminating the protozoan parasites from the blood and lymph nodes and led to an improvement in the clinical state, whereas treatment in the later stages of the disease, whilst eliminating the parasites, failed to improve the clinical condition of the animal [23].
Status of Theileriosis in India
India, being one of the 12 mega-biodiversity countries, contributes significantly to the world's flora and fauna. As a result, India with its tropical climate is a hub of several vector-borne diseases, including bovine tropical theileriosis. T. annulata, the causative agent of tropical theileriosis, has a much wider distribution; it is found in Southern Europe, Northern Africa and Egypt to the Sudan, the Middle East, India, parts of the former Soviet Union and southern China. T. annulata, originating from the Asian water buffalo (Bubalus bubalis) and transmitted by several Hyalomma tick species, is responsible for tropical theileriosis from Southern Europe to China, a vast region in which an estimated 250 million cattle are at risk. Livestock plays a critical role in the welfare of India. Indigenous cattle are resistant to this disease, but crossbred cattle are highly susceptible to Theileria [24]. The Theileria parasites have a detrimental effect on cows, causing high mortality and an irreversible loss of production and reproduction. Theileriosis and babesiosis present similar symptoms, such as high fever, and both are fatal diseases, but in babesiosis blood is passed with the urine, and hence it is also known as red water disease. Medicine for babesiosis is easily available, but for theileriosis it is not, as it is very costly. So smallholder dairy farmers would prefer to run the risk of tropical theileriosis rather than pay for the vaccines.
Status of Theileriosis in Uttar Pradesh
Livestock are an important part of the rural population's livelihood. Uttar Pradesh is specialized in smallholder dairy production systems, and cattle have been the important species in the herd. Due to the moderate climate of the region, the ticks responsible for blood-borne diseases are less active or found in smaller numbers, and no earlier case of theileriosis had been reported from the region. However, to increase milk production, crossbred cattle have been introduced into Uttar Pradesh from neighbouring states such as Haryana, Punjab and Rajasthan, where these diseases are prominent, and many of these animals may be carriers. These animals are the source of infection, which is usually spread by ticks that migrate from carrier animals to uninfected animals. Some cases have now been found positive for theileriosis in a preliminary survey. The reason for the occurrence of theileriosis is the introduction of carrier cattle into herds of healthy animals; stress due to extremes of climate may be a contributory factor.
Infection by Theileria limits the movement of cattle between countries and can result in production losses and high mortality in susceptible animals. Due to the lack of sensitive diagnostic methods for the detection of clinical cases and carrier animals, and the lack of cost-effective treatment, the majority of cattle positive for Theileria are left untreated.
The present status of animal diseases needs serious attention in terms of research. Minjauw and McLeod have estimated the cost of T. annulata in India to be $384.3 million. Vaccination against this disease is not practised due to its higher cost and non-availability. To reduce the chance of introducing the parasite, screening is essential before crossbred cows are introduced into areas where the chance of occurrence of such disease is very low, or animals should be introduced from districts where infection is uncommon. Secondly, cattle should be treated for ticks on arrival and not mixed with home cattle. The aim, therefore, is to monitor cattle for blood protozoa through microscopic examination and by using the polymerase chain reaction (PCR), which can detect parasites that are not visible by microscopic examination and in preclinical cases.
Effect on Milk Production
Theileria annulata infection was diagnosed as the cause of severely depressed milk yields in Friesian cows [25]. It was also found that cows of higher-producing breeds were generally more susceptible to the tick and the effect on milk production appeared to be greater [26].
Effect on Reproduction
Previous studies confirmed that theileriosis has an adverse effect on reproduction. Pregnant animals introduced to endemic bush tick areas are especially at risk and should be monitored carefully after introduction for signs of theileriosis [27].
Effect on Thyroid Hormone
A number of experimental conditions have been used to evaluate hormonal secretion during heat stress including short-term temperature modification using environmental chamber, seasonal comparisons of hormonal-profiles and the use of micro climatic modification during period of heat stress. Johnson and Vanjonack stated that the thyroid function in the lactating animals showed a general depression in the summer months and was normal or elevated during winter months [28].
It has also been reported that thyroid hormones are affected in cases of tropical theileriosis caused by Theileria annulata [29-31]. Thyroid hormones, which affect growth, development, and the energy and efficiency metabolisms necessary for the development and normal functioning of many cells, are closely associated with the regulation of oxygen consumption [32-35]. Sangwan et al. and Garg et al. reported that thyroid hormones decrease in tropical theileriosis [30,31].
Conclusion
A future strategy of dual vaccination with a tick antigen and an associated recombinant T. annulata antigen could help achieve integrated protection in the host. | 2019-03-18T14:03:27.577Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "dddcdd6bf5236100b4753983c7fc025666d08778",
"oa_license": null,
"oa_url": "https://doi.org/10.23880/mjccs-16000130",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d660fcf91a4d6b77ccf3f6552cba84a32146b509",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
200118890 | pes2o/s2orc | v3-fos-license | PERCEIVED PHYSICAL ACTIVITY BENEFITS AND BARRIERS IN SEDENTARY ADULTS
Background. Research aim was to analyse the perceptions of benefits and barriers of physical activity in physically inactive adults. Methods. Research methods were individual interviews and qualitative content analysis. Results . Insufficiently physically active adults have many different health and emotional problems, but this is not a sufficient motive for them to be physically active. Research participants positively evaluated the benefits of physical activity if their physical activity did not exceed their physical fitness; however, it was not optimal and did not comply with the proposed minimum recommendations. Physical activities of adults were mostly related to holidays, cooking, playing with children, and activities in nature. Meanwhile, respondents were more physically inactive – working at home, reading books, and spending time on smart devices. Adults mentioned more reasons to be physically passive than active. Internal causes of physical inactivity were related to lack of time, lack of motivation, antipathy to sport, health problems and internal beliefs, external causes – workload, weather conditions, financial situation, family and household. Conclusions. Physical health problems and psychological ailments of inactive adults were frequent consequences of physical inactivity. Their activities after work were passive (working at home, reading books and time on smart devices) and active (holidays, cooking, playing games with children, activities in nature). Although there were more active activities, most of them were in leisure time and they did not fully meet the minimum recommendations. The participants positively evaluated the benefits of physical activity if their physical activity did not exceed their physical fitness. Internal causes of physical inactivity of the participants were lack of time, lack of motivation, antipathy for sport, health problems, feeling of loneliness and age, external causes – workload, weather conditions, financial situation, family and household.
INTRODUCTION
Most people are conscious of the importance of physical activity for health, but physical inactivity remains a major public health problem worldwide. A healthy lifestyle and exercise have been shown to be an effective tool for adults who want to reduce body weight and improve overall health (Caron et al., 2017). Exercise can strengthen muscle tone and increase physical fitness as well as endurance. Current national recommendations for aerobic exercise suggest that every week adults accumulate at least 150 minutes of moderate-intensity or at least 75 minutes of vigorous-intensity aerobic exercise. The World Health Organization (WHO) recommends that adults do health-enhancing exercise in a variety of activities during their leisure time, in particular outdoor exercise such as cycling, walking or Nordic walking. Aiming at strengthening the heart and respiratory function, muscle strength and endurance, and bone strength, improving the overall functional health of the body, and reducing symptoms of anxiety in this age group, it is recommended to exercise the main (large) muscle groups at least twice a week (WHO, 2016).
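As an illustration of how such a weekly guideline could be checked against an activity log, the short sketch below converts logged minutes of moderate and vigorous activity into moderate-equivalent minutes (counting one vigorous minute as two moderate minutes, a common convention) and compares the total with the 150-minute threshold. The log entries, function name and equivalence rule are illustrative assumptions rather than data or methods from this study.

```python
# Minimal sketch: checking a weekly activity log against the aerobic guideline.
# Assumption: 1 vigorous minute counts as 2 moderate-equivalent minutes.
WEEKLY_TARGET_MIN = 150  # moderate-equivalent minutes per week

# Hypothetical log: (activity, intensity, minutes)
week_log = [
    ("brisk walking", "moderate", 30),
    ("cycling to work", "moderate", 45),
    ("jogging", "vigorous", 20),
]

def moderate_equivalent_minutes(log):
    total = 0
    for _activity, intensity, minutes in log:
        total += minutes * (2 if intensity == "vigorous" else 1)
    return total

total = moderate_equivalent_minutes(week_log)
print(f"{total} moderate-equivalent minutes; "
      f"guideline {'met' if total >= WEEKLY_TARGET_MIN else 'not met'}")
```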
Although society aims to promote physical activity and promote its benefits, it has been observed that physical inactivity is gradually increasing and thus remains a health problem in the society. There are various reasons why many adults refuse active leisure time or other physical activity. The reasons include lack of motivation, time and, of course, self-confidence (Herazo-Beltrán et al., 2017). In many studies, physical inactivity is identified as one of the most important factors contributing to the global increase in health problems in Europe and worldwide. Therefore, there is a high likelihood of developing non-communicable diseases, cancer, cardiovascular diseases, and type II diabetes. Researchers have found that physical activity is a great preventive measure to protect one's health from various diseases (Maheri et al., 2017).
Many people seem to have sufficient knowledge of physical activity and its benefits, but research shows that the vast majority of people in the society do not engage in physical activity, which contributes to an increase in global health risk. The World Health Organization reports that 3.2 million people in the world die each year due to their physical inactivity (WHO, 2016). The prevalence of physical inactivity was established to be highest among 30-49 year-old persons, with women (28.9%) showing higher physical inactivity than men (22.6%) (Thanamee et al., 2017). Therefore, the study aims to answer the problem question: what are the sedentary adults' perceived benefits of and barriers to physical activity. The answer to this question will help motivate sedentary adults and engage them in physical activity. Research objectives include the analysis how sedentary adults describe and evaluate their physical and mental health, what leisure time activities they choose, what benefits of physical activity they see, and what prevents them from being physically active.
METHODS
The qualitative study was carried out following the principles of the grounded theory. Grounded theory is a research methodology concerned with the generation of theory, which is 'grounded' in data that has been systematically collected and analysed. It is used to uncover such things as social relationships and behaviours of groups, known as social processes (Noble & Mitchell, 2016).
Research participants were selected using the criterion sampling method. Criterion sampling involves selecting cases that meet some predetermined criteria of importance (Patton, 2001). The study involved persons who met the following criteria:
• 35 years of age and older (people of this age are expected to have a family, children, and a steady income);
• higher education (having information on the benefits of physical activity and the damage of physical inactivity);
• sedentary occupation (working in an educational institution; they are expected to have income for organized physical activity in a sports club or using a variety of sports inventory);
• persons not physically active enough (not engaged in physical activity every day, not athletes, rarely visiting a sports club).
Thus, the study involved people who work sitting down for long periods, do not change their environment and suffer from routine and physical inactivity.
The sample size was determined based on the saturation principle, i. e. the interviews were stopped when no new responses were received (Francis et al., 2010). Thus, 12 sedentary adults participated in the study. Among them there were 4 men and 8 women, aged 35 to 49 years, living in Vilnius and working at a university of applied sciences. A more detailed description of the participants is given in Table 1, which also includes age, gender, position, and family status.
Participants were interviewed at the workplace in the University Administration Office. During the study, a neutral environment was created, nobody disturbed, and the same questions were given to each interviewee. Non-verbal communication was also taken into account.
The interview guide involved questions about the quality of life, physical activity benefits and obstacles, and related questions. The interviews took place in January-February, 2019. The interview with each person was recorded and later transcribed. The responses were systematized and analysed.
The data were analysed using the method of content analysis. Content analysis is a research technique used to make replicable and valid inferences by interpreting and coding textual material (Duriau, Reger, & Pfarrer, 2007).
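To make the coding step of content analysis concrete, here is a minimal sketch of how coded interview fragments could be tallied into theme frequencies. The theme labels and the pairing of participants with codes are invented placeholders for illustration; they are not the study's actual codes or data.

```python
# Minimal sketch: tallying qualitative codes assigned to interview fragments.
# The codes and pairings below are invented placeholders, not study data.
from collections import Counter

coded_fragments = [
    ("Laima",   "lack of motivation"),
    ("Julius",  "lack of motivation"),
    ("Gintas",  "health problems"),
    ("Evelina", "social support"),
    ("Auksė",   "health problems"),
]

theme_counts = Counter(code for _participant, code in coded_fragments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned by {count} participant(s)")
```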
Research ethics. When investigating human behaviour, the researcher intervenes in a private life, so in qualitative research, first and foremost, the welfare of the research participants is taken care of, and thus the form of informed consent was provided before the study. In order to ensure the confidentiality of research participants, the interview was carried out in accordance with the requirements of the laws of the Republic of Lithuania The data received were analysed according to the principle of impartiality, trying to investigate the phenomenon without any bias. In order to ensure the transparency of the research, the insights made in the study were illustrated by literal quotations from interview transcripts. All research data was stored to avoid negative consequences for the individuals involved and unauthorized access.
RESULTS
Physical and psychological health of research participants. Aiming at assessing the psychological health of sedentary adults, the participants were asked if they often experienced fluctuations of mood and stress. Respondents were confident that they were often confronted with difficulties, especially when performing job functions. They argued that bad mood was inevitable when something went wrong at work or conflicts occurred between colleagues or other employees. Often the interviewed adults got up early in the morning feeling irritable without knowing what caused it. Many of the respondents mentioned that they felt that way 2-3 times a week. It was observed that the participants of the interview suffered from volatile moods. Some respondents said that, in trying to relieve stress, they tried to calm down, think over all decisions or even engage in other activities. Meanwhile, others mentioned that the right way for them to get rid of stress or tension was to snack during work or chew gum ("To avoid stress during work, I snack because it seems that when I chew something, I stave off my problems, and it helps", Viktorija). It is known that chewing or eating under stress is a psychological problem. Stress forces us to activate all our muscle groups, and for this reason everything is done faster – moving, swallowing, chewing, etc. – which further increases tension and anxiety.
The subjects were asked how they evaluated their health. Some respondents admitted to being overweight; others said they suffered from stomach problems, bloating, or simply overeating ("I have stomach problems, I like to eat, I really like to live with my belly and its problems", Julius). Besides, there were interviewees who simply did not want to talk about their health status, believing it to be confidential information. One man admitted having problems with his heart; he had had cardiovascular disease and even a heart attack ("I am a cardiac patient and I had a heart attack", Gintas). It was discovered that after a working day the subjects felt back pain, and some of them even had a spinal hernia from which they had been suffering for a long time ("I have a spinal hernia that has been treated for a few years, I need surgery but I still hesitate…", Auksė). The summarized assessment of the physical and psychological health of the research participants is presented in Table 2.
Summarizing the assessment of the psychological and physical health of the subjects, it is observed that the subjects usually experience bad mood and stress during work, under stressed working conditions; and overweight or backache may affect their physical activity. Activity areas of sedentary adults. The study aimed at analysing key activities of the research participants that can be classified as physically active and passive. Most of the interviewees admitted that they spent their leisure time passively, that is, they read books, worked at home using their smart devices (I eat, watch TV, sit on Facebook and read news online, Laima). The results showed that family people had to take care of their children when they returned home, pay attention to them, play games with them, cook, or engage in other household activities (We are often at home with our family, we cook meals and spend time together playing board games, Ksenija). Meanwhile, other interviewees mentioned that they had to do the work they did not manage to do at the university and they had to prepare for the next day's lectures, check students' independent work or tests (We eat and then I go to check students' assignments or read the materials for my new lectures, Giedrius).
Very few choose to take a walk in the city park or engage in outdoor activities like cycling or fishing. Respondents mentioned that they choose to travel abroad in their free time. They could be attributed to active people as sightseeing includes a lot of walking (Every year we travel, we want to get to know different cultures and customs, Evelina). Table 3 summarizes the non-work activity of adults.
Summarizing the activities of the respondents we assume that people are physically active very rarely, usually only in their leisure time. At home physical activity is also diminished as most sedentary adults continue to work and others spend time watching TV, sitting at the computer or reading books (Table 3).
Evaluation of physical activity of sedentary adults. Analysing the respondents' responses on the benefits of physical activity, it was found out that people who had to engage in physical activity felt more energetic and even more lively the next day (Although I do little sports, but I will not lie, health benefits really are obvious -the next day I feel really more lively, Giedrius).
As for the emotional state, the results are not surprising, as everyone unequivocally agreed that sport had a positive effect on the brain: it allows us to forget bad thoughts and relax ("Active workouts allow you to forget the difficulties, various problems and they also allow you to relax emotionally", Agnė). However, there were also respondents who felt overwhelmed by sport.
Mood changes and volatility
"The mood is bad when I fail or get angry with someone, I think 1-2 times a week…" (Audronė).
"<...> I am tormented by my bad mood in winter and autumn" (Ksenija).
"The bad mood often presses me 2-3 times a week" (Gintas).
During the interviews it turned out that social health was important for the subjects. Adults responded that they did not want to do sports all by themselves as it was sad and they thus lacked motivation ("It is always more fun to work out with a partner than alone, and it is more motivating", Laima). Others said that in a group of people it is always more fun to do sports, and there is a social reason to get engaged in sports because others do so ("I like to play volleyball in a group of friends because it is a team sport; people in a group always have more fun than being alone. There is a social reason to encourage a person to do sport because others are so athletic…", Evelina).
During the study, adults not only appreciated the benefits of well-being, emotional and social health, but also mentioned that they were noticing changes in the body. They pointed out that the body became firmer, muscles became more visible, and they could even lose weight (If I do some exercise or walk more, the weight diminishes, Gintas).
Assessment of physical activity and health is provided in Table 4.
The responses received show that many of the respondents appreciate the benefits of physical activity for well-being and emotional health. Respondents also notice body changes that make them happy.
Effect of working conditions on physical activity in sedentary adults. The study aimed at assessing the impact of working conditions on the physical activity of sedentary adults. Research participants said they were working in a friendly team and did not have anything to complain about, although they had to work overtime. Questions were then asked about hours of rest in order to find out whether adults were passive or active during work breaks or lunch breaks. The vast majority said that the rest area was passive: everyone was sitting and drinking coffee, although some respondents said they would go out for a walk in good weather ("If it is warm, I like to go out for a walk during the break", Evelina). Table 5 provides systematic data on the impact of working conditions on sedentary adult activity.
Summarizing the data in Table 5, it can be assumed that adults are physically inactive and passive at work, moving little because of the sedentary and/or standing nature of their jobs. During their rest time, they rarely go out for a walk.
"After exercise the muscles are bigger, and you can see the abdomen press" (Aušrinė).
"We take advantage of rest, <...>, and we go to the cafe with colleagues" (Laima).
Negative aspects
"Tight work with students, I don't have time to go to the lounge" (Auksė).
"<...> rest area is passive – we sit, drink coffee…" (Agnė).
"There is nothing to do during the rest break – the administration could set up an active space" (Lukas).
Internal and external causes of insufficient physical activity. After analysing the responses of the interviewees, two groups of causes of insufficient physical activity were distinguished – internal and external. It was observed that the internal causes were not only a lack of time, but also a lack of external and internal motivation ("To tell the truth, I have no motivation to do sport, it's hard to make myself go to the gym", Julius). Research participants said they had no time to do sports after returning from work because they had to continue working on the computer, or they simply did not have the willpower to engage in physical activity. Laziness is the main barrier to physical activity. Often, adults do not have any motivation to do sports; they prefer a passive way of spending leisure time in front of TV screens. Also, some feel antipathy for sport and do not like this activity; they argue that they would rather engage in other activities such as reading books. Some respondents said they could not do sports due to health problems such as backache or overweight. Both men and women felt lonely, and some of them were divorced ("I don't have company, I am divorced", Laima). It is noteworthy that the older respondents thought that age was a hindrance to sports; they felt different than before, and therefore they urged younger people to be more physically active. Table 6 summarizes the causes of the respondents' physical inactivity.
As for external causes, it was observed that fatigue and workload had a significant impact on the subjects' physical activity. Respondents said they did not want anything as they were tired after work, only rest at home with their family. They admit that working with students consumes a lot of energy and strength, so fatigue prevents them from being physically active. Very few adults choose a bicycle as a vehicle to get to work because poor weather conditions can prevent riding a bike.
The adults also pointed out the financial situation as one of the reasons for the lack of physical activity because the salaries were not high; others said they did not find a suitable fitness club according to their income. The subjects were influenced by family and household as well. Most of the time is spent on children, cooking or housework. These aforementioned factors limited their physical activity. The causes of physical inactivity are detailed in Table 7.
Summarizing the causes of physical inactivity, it can be suggested that internal causes are related to internal human beliefs, and external causes -to the environment. It can therefore be concluded that internal causes are more difficult to have removed than the external ones. Many internal reasons are influenced by the inner attitudes that hinder the change of lifestyle. External causes of physical inactivity can be overcome easily if the person is highly motivated and wants to engage in physical activity.
DISCUSSION
The study sought to understand the main causes of physical inactivity in adults that prevent their physical activity. Also we wanted to find out what activities respondents were engaged in during their working hours, and how they spent the rest of their day or leisure time. We aimed at investigating whether working conditions and workload affected the physical activity and health of the research participants. The perception of the benefits of physical activity for the well-being, emotional and social state of the subjects was investigated as well. Adults had to indicate how they evaluated their psychological and physical health.
The results of the study revealed that after returning from work some adults continued to work as tutors and teachers, they checked student assignments and tests, constructed new tests, collected materials for lectures. Others sat passively watching TV or surfing the web on social sites. There were also those who wanted to spend time for themselves -to read books, as it helps to get away from family problems and not to think about upcoming jobs. Research by other scientists shows that younger people are more mobile and thus more physically active than adults (Molanorouzi, Khoo, & Morris, 2015).
Not all the respondents were satisfied with their working conditions. They mentioned that they would be happy to have a lifting table or they would like to reduce the duration of their work because after a long working day they felt tired and did not want to do anything and only passively spend the evening. Some of them had to work longer hours. They explained that they were working overtime due to a reduced number of employees. Therefore, working conditions may not only have a negative impact on physical activity but also on health. As a result of the extraordinary impact of work on human life, health and society, work values are considered to be fundamental, because the health of the worker is the most important aspect to ensure the quality of the work performed (Clark, Kolbe-Alexander, Duncan, & Brown, 2017). The study showed that non-standard work schedules were associated with adverse health effects. Working overtime increases fatigue, sleep disturbances, and the risk of cardiovascular diseases (Merkus, Holte, Huysmans, van Mechelen, & van der Beek, 2015). Rich and full-fledged recreation is also important in adult life. Research participants revealed the fact that the lounge was meant for passive rest, most often it was used for drinking coffee and communicating with colleagues. Others would like to have an active rest during the break and offered the employer to install areas where they could play table tennis or similar games. In the clinical population, human health has been determined to depend on the duration of sedentary activity, and it is therefore necessary to move or to take a walk after a long period of work. Physical activity has been shown to be associated with various positive health outcomes, including better mental health, and to reduce the incidence of cardiovascular diseases. It has also been found that physical activity improves health-related quality of life (Kolt et al., 2017). The participants of the study indicated the benefits of physical activity to health, emotional well-being and appearance. Respondents believe that physical activity provides more energy, and the next day they feel like having done a good job for themselves. Researchers also claim that physical activity can prolong life expectancy and quality. Research has shown that regular physical activity is associated with a reduction in premature death and chronic diseases such as coronary heart disease, stroke, type 2 diabetes, depression and some types of cancer. Regular exercise increases the average life expectancy as physical activity slows down the development of chronic diseases (Paul et al., 2015). Speaking of the benefits to emotional well-being, the subjects revealed that they improved their mood when exercising, feel as usual before sports, and after exercising they were emotionally recovering. Active workouts allow them to forget the difficulties and various problems. Research data show a link between physical activity and psychological wellbeing (Vancampfort et al., 2017). Physical activity has been shown to significantly reduce the risk of depression and mental disorders in healthy adults or those with chronic diseases and to reduce the symptoms of people with mental disorders. These data show that physical activity is associated with reduced symptoms and frequency of mental illnesses and increased self-confidence (Bernard et al., 2018). 
Respondents assured that social health was no less important as they could get more motivation to exercise while exercising with friends or partners, and notice mistakes often made or learn how to do the exercises correctly. Research participants say they like to play a variety of team games in a group of friends because team sports promote social life and encourage getting involved in sports. Research shows that social support and sustainable supportive relationships could help to make acquaintances, and physical activity in groups has the added value of social support that can encourage active movement and social identity (Quirk, Crank, Harrop, Hock, & Copeland, 2017). In the opinion of the participants, appearance also contributes to changes in health, it is said that the body becomes firmer due to exercise, the body composition indicators also show changes, for example, in the muscle mass. Respondents noticed that even low physical activity could reduce body weight, and if they were more active, they could see greater changes in their appearance. Research suggests that sedentary lifestyle leads to weight gain while reducing muscle mass. However, regular physical activity prevents obesity and is a natural tool for reducing its prevalence. Also, exercising people do not just lose unnecessary kilograms, but also radically change their body appearance (Sofková & Přidalová, 2015). Research has shown that resistance exercises have a positive effect on reducing weight in people who are overweight or obese (Sigal et al., 2014).
The analysis of the psychological health of subjects showed that people in bad mood tried to snack during work and to calm down in this way, and then to continue their unfinished work. It has been observed that stress usually arises from other people's poor performance and difficulties. Research shows that adults who poorly appreciate their psychological health struggle to cope with the difficulties they experience (Kaseva et al., 2016). It has been found that physical activity reduces the development of depression, makes people feel better and controls their emotional state. Therefore, people feel more satisfied with life and happier. Most importantly, physical work can effectively improve work capacity and productivity. In terms of physical health, respondents experienced health problems such as overweight, backache, stomach and intestinal problems, and cardiovascular diseases. Preventing and controlling lifestyle-related risk factors can reduce the frequency of vascular and metabolic diseases and other disability conditions (Wang, Li, Jones, Bodner, & Dean, 2019).
Despite all the recommendations and the perceived health benefits of physical activity, the physical activity of adults is decreasing. Respondents considered the lack of time to be one of the reasons that hinder exercising, as there is other work that needs to be done for the next day after returning home, so there is not enough time for physical activity. Time constraints are one of the most common obstacles to being physically active, but it has been shown that shorter bouts of physical activity (i.e. ≤ 10 minutes) accumulated over the day can bring similar health benefits (Kolt et al., 2017). The World Health Organization suggests that adults accumulate each week at least 150 minutes of moderate aerobic exercise or at least 75 minutes of intense aerobic exercise, taking the steadily decreasing levels of physical activity into account (WHO, 2016). Our research participants said they could not engage in physical activity due to lack of motivation or lack of interest in sports, and they chose passive leisure time, spending it on smart devices. Internal motivation is identified as one of the most important factors for positive, meaningful and long-term changes in physical activity behaviour (Santos, Ball, Crawford, & Teixeira, 2016). Better health increases self-confidence and improves self-esteem (Mathews et al., 2016).
The external causes of physical inactivity were fatigue, workload, weather conditions, financial situation, family concerns, and household. With the intensification of work and life rhythm, fatigue and workload are the main causes of physical inactivity; respondents say they have no strength for exercising after returning home. Research shows that people often use their positive emotional resources over time; they experience stress and can hardly overcome its consequences. With such a prolonged state, full physical and mental exhaustion is possible, which affects a person's motivation, attitudes and behaviour (Kavaliauskienė & Balčiūnaitė, 2015). The financial situation is also one of the reasons for inactivity. Individuals with lower economic incomes usually overestimate personal barriers, for example, they say they have no financial resources to buy a sports club membership subscription, or they are unable to select a sports venue according to their needs (Herazo-Beltrán et al., 2017). It has been found that women living in socially disadvantaged areas have a higher risk of physical inactivity regardless of their individual social and economic circumstances, which may lead to greater health and social inequalities (Santos et al., 2016). Family concerns and household were identified as one of the reasons for lack of physical activity. Research participants said they had housework when they returned home. They had more family responsibilities such as cooking, playing with children or helping them do their homework. Therefore, these factors are thought to limit the physical activity of adults because of their high occupancy at home. Research testifies that due to their lifestyle problems associated with family responsibilities women are under heavy strain that promotes lack of physical activity. Women are more likely to be tired and have no time, partly because of their marital status, in contrast to men (Santos et al., 2016). This problem is relevant up to nowadays because we need to take care of our health in advance to ensure proper aging. It is important to carry out more similar research to identify the causes of increasing physical inactivity and to introduce activities that would encourage physical activity so that no causes could interfere with it.
The problem is that knowledge of the consequences does not help to prevent the social welfare from being hindered by external and internal barriers. Personal adaptive and thinking powers also provide limitations and uncertainty for this phenomenon.
CONCLUSIONS
1. The results of the qualitative study show that inadequately physically active adults have physical health problems such as overweight, gastrointestinal disorders, cardiovascular diseases and backache. Psychological ailments include constant mood changes, frequent stress and insomnia. These disorders are the most common consequences of physical inactivity.
2. The non-work activities of the adults studied were either passive, such as work at home, reading books and time spent on smart devices, or active, such as holidays, cooking, playing games with children, and activities in nature. Although the active pursuits are more numerous, most of them take place only in leisure time, when the participants happen to be free. The physically active pursuits of these adults do not fully meet the proposed minimum recommendations.
3. Research participants evaluated the benefits of physical activity positively, provided the activity did not exceed their level of physical fitness. After exercising, they feel physically stronger, more energetic and more relaxed. Physical activity also has a positive effect on emotions: they forget bad thoughts faster and experience more positive emotions. Adults also notice changes in appearance as their bodies gradually change. However, if they overdo the exercising, they become fatigued and feel muscle pain, which lowers their motivation to exercise.
4. Internal causes of the participants' physical inactivity include lack of time, lack of motivation, antipathy to sport, health problems, feelings of loneliness and their age. External causes are fatigue and workload, weather conditions, financial situation, family concerns and household duties. There are more internal causes than external ones, and they are more difficult to overcome. | 2019-08-16T22:28:40.155Z | 2019-07-22T00:00:00.000 | {
"year": 2019,
"sha1": "42806b6293a4f2b8d4251fe0a5bbedb85a2346ac",
"oa_license": "CCBY",
"oa_url": "https://journals.lsu.lt/baltic-journal-of-sport-health/article/download/788/752",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "333f29144e1f2ed37db432ea87be6a1e3c4c138c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
249462086 | pes2o/s2orc | v3-fos-license | IoT based SMEs shop management system
The Internet of Things (IoT) is a concept that aims to connect networked information systems to physical objects. IoT has applications in almost every part of modern life, and inventory management is no exception. IoT offers a solution to this problem by making it easier to connect the various businesses in a logistics framework using Wireless Sensor Networks. We present an Interactive Shopping Model and an Automated Intelligent Inventory Management System that use the Internet of Things to provide real-time product tracking, management, and monitoring. A survey and analysis of the prevalence of IoT among manufacturing SMEs is presented, along with the current obstacles and opportunities for enabling predictive analytics. The four research capabilities are described alongside an overview of the IoT enablers. Future trends and challenges in emerging research and development topics are highlighted, for example making IoT technologies accessible to SMEs. The purpose of this paper is to examine how the Internet of Things is changing our lives and workplaces, and to highlight some of the best business practices, statistics, and trends. Given the growing importance of enterprise IoT and the research gap in this field, an IoT architecture and the IoT service industry will be examined. A model is needed to select and deploy IoT services in different organizational settings.
INTRODUCTION
Today's technology is advancing at breakneck speed, particularly in developed nations.
Everyone wants everything to be simpler and faster. The Internet of Things (IoT) is a network that enables physical devices, health services, businesses, vehicles, security systems, and other objects embedded with software, electronics, actuators, sensors, and network connectivity to collect and exchange data. Because the Internet of Things was created to improve efficiency, reduce human intervention, and increase accuracy, an object can be sensed or controlled remotely over the Internet. The user can monitor the system from a distance using the IoT approach, and there is no need for direct contact with the device; everything can be used remotely with a variety of mobile phone applications. This is due to the system's efficiency and new technology. Moreover, because of its low power consumption and ease of use, remote control by phone has always been an option. A shop manager, for instance, can use IoT to control their security system. The objective of this project is to help small retailers maintain their security system while they are away from the store. The system can be used from any phone application to ensure that a shop employee can reach the shop from any location. A smart shopping-cart billing system can also be considered as a main component. The wireless sensor is connected to the air conditioner, ceiling fan, and window, and also has an LCD screen and an IR sensor; the wireless connection links every item available in the shop. The Internet of Things has recently gained broad acceptance in a variety of fields. According to many experts, the Internet of Things will completely change how people interact with their surroundings, giving rise to a billion-dollar industry that will be the new driving force for the expansion of information technology. The Internet of Things refers to the process of connecting objects to the web and allowing them to communicate and exchange information with each other to achieve smart identification, real-time tracking, monitoring, and management. Radio-frequency identification (RFID), electronic product codes (EPC), wireless sensing technology, the global positioning system, and barcode readers are just a few of the information-sensing technologies used in the Internet of Things. The Internet of Things is a concept that refers to an information-sharing ecosystem in which heterogeneous devices (things) are connected through wired or wireless networks. The Internet of Things (IoT) is an evolution of machine-to-machine (M2M) communication that involves the coordination of sensors, actuators, and other embedded devices through an IP-based networking model. IoT enables the use of Wireless Sensor Networks (WSN) to gather data from a variety of sensors, which is then exchanged and acted upon in the physical world with the help of actuators. The Internet of Things comprises countless actors, including sensors, things, sensor networks, actuators, and people, all of whom contribute to its proper functioning. With such advances in technology, IoT combined with cloud computing can be used to create an automated inventory management system, leading to the development of a smart shopping complex.
Many inventory management systems have been proposed over the last few decades, yet they lacked features such as real-time monitoring, traceability, on-the-fly inventory data updating, e-payment, data analytics, and secure user authentication. Conventional systems for counting and managing logistics relied on manual methods or barcode technology, but they could not keep pace with the growing number of customers and increasingly diverse logistics.
The Internet of Things, when used in conjunction with an inventory management system, can support the development and management of all logistics-related activities, thereby increasing efficiency and customer satisfaction. The purpose of this paper is to present an Automated Inventory Management System (AIMS) for improved logistics and a smart shopping complex. The fourth industrial revolution encompasses the industrial Internet of Things (IoT) and artificial intelligence (AI). Small and medium-sized enterprises (SMEs) lack the resources and knowledge to use Industry 4.0's advanced, modern technologies and techniques.
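As a rough illustration of the automated inventory idea, the sketch below shows a minimal stock-update loop with a reorder alert; the item identifiers, the threshold and the notification hook are hypothetical placeholders for the RFID/barcode events and cloud back end discussed in this paper.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Inventory:
    """Toy in-memory inventory; a real AIMS would persist this in a cloud database."""
    stock: Dict[str, int] = field(default_factory=dict)
    reorder_level: int = 5  # hypothetical threshold

    def record_sale(self, item_id: str, quantity: int = 1) -> None:
        # Decrement stock when an RFID/barcode scan reports a sale at checkout.
        if self.stock.get(item_id, 0) < quantity:
            raise ValueError(f"not enough stock of {item_id}")
        self.stock[item_id] -= quantity
        if self.stock[item_id] <= self.reorder_level:
            self.notify_manager(item_id)

    def record_delivery(self, item_id: str, quantity: int) -> None:
        # Increment stock when goods arrive from the supplier.
        self.stock[item_id] = self.stock.get(item_id, 0) + quantity

    def notify_manager(self, item_id: str) -> None:
        # Placeholder for a push notification to the shop manager's mobile app.
        print(f"reorder alert: only {self.stock[item_id]} units of {item_id} left")

inv = Inventory()
inv.record_delivery("SKU-001", 10)
for _ in range(6):
    inv.record_sale("SKU-001")
```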
LITERATURE REVIEW
Components of the Internet of Things environment (technological, physical setting, hardware): various wireless devices are used to connect human and non-human objects to the Internet of Things, permitting communication and interaction between them over a ubiquitous wireless network. [4] There are few studies of the IoT ecosystem and architecture relevant to the development of enterprise IoT. For example, we still do not fully understand what enterprise architecture components are and how they help in the development of specific enterprise IoT services. The enterprise IoT ecosystem described in the previous section provides the technology platforms needed for IoT architecture implementation. While researchers have yet to agree on an IoT architecture, they typically use a layered approach, with each layer devoted to specific functions such as communication/sensing, data handling, and data processing/reasoning. [5] Integrated platforms allow the IoT's different hardware, software, and networking components to work together seamlessly. Standards: various technical and operational standards define the design and interoperability of different IoT components. [4] The following are the significant benefits of the Internet of Things that affect business: Communication - the Internet of Things (IoT) allows connected devices and users to maintain a steady connection and exchange data. The Internet of Things concept, which refers to all connected objects that exchange data, is widely used in all parts of life, including wearables and other devices and sensors that make each item smart. [3] In the social IoT there are different owners/entities interacting through their devices, whereas collaborative enterprise IoT applications are typically run by a single governing body. Collaboration can occur between people, between people and things, and between things in the IoT system.
Entrepreneurs play a critical role in the IoT's development.
These entrepreneurs use their technical knowledge, business experience, and intuition to create new business models in the IoT domain, motivated by a desire for personal gain, self-realization, or community contribution. Bringing these business ideas to life often requires addressing existing technical, regulatory, and legal issues by developing new technologies and new business processes, and by connecting the dots in various other ways related to IoT. [4] The Internet of Things should be integrated according to a specific vision and idea, identify opportunities for using technology, attract business institutions and government, and foster a culture of Internet of Things use. [3]
PROPOSED WORK
The system would allow for real-time asset monitoring, management, and end-to-end traceability of the logistics chain. It would serve as a guide, directing the customer to the locations of desired products. Overall, the system would be automated, resulting in greater efficiency. The goal of this system is to create an interactive shopping environment. It is based on Internet of Things (IoT) wireless technology and cloud computing.
Small and medium-sized enterprises:
At the local, regional, national, and European levels, encouraging innovation in small and medium-sized enterprises remains at the heart of policy initiatives aimed at stimulating economic development. On a theoretical level, innovation has also displaced efficiency as the primary focus.
Infrastructure Layer:
In this section, we will go over all of the actors (physical entities) that make up the system, their roles, basic functionalities, and how they communicate with one another. The workflow of our proposed system for such an interactive shopping model is depicted in the figure below.
Sensors:
Sensors serve as the system's eyes and ears, detecting events and environmental conditions and transmitting the information gathered. The sensors' job is to observe and perceive physical world events or phenomena. Sensors are classified into three types based on three factors: sensor type, methodology, and sensing parameters. The sensor type determines whether the sensor is homogeneous or heterogeneous, as well as whether it is single-dimensional or multidimensional.
Methodology refers to the methods used by a sensor to collect data. It can be active or passive in nature. Sensing parameters are the number of factors that a sensor can detect. A sensor can detect a single parameter, like body temperature, or multiple parameters, like an ECG.
Gateway:
Mobile Device:
A mobile device acts as a customer service representative. Customers' mobile devices enable them to search for and purchase items in real-time. Any notification or special offer relating to a specific product is immediately sent to the mobile device, which then awaits a response.
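To make the interaction between these actors concrete, the following sketch models a sensor pushing readings through a gateway to a stand-in cloud store that a mobile device then queries; all class and method names are hypothetical, and an in-memory list replaces the real cloud service.

```python
import time
import random
from typing import Dict, List

class Sensor:
    """Active, single-parameter sensor (e.g. a temperature probe)."""
    def __init__(self, sensor_id: str, parameter: str):
        self.sensor_id = sensor_id
        self.parameter = parameter

    def read(self) -> Dict:
        # A real sensor would sample hardware; here we fake a value.
        return {"sensor": self.sensor_id,
                "parameter": self.parameter,
                "value": round(random.uniform(18.0, 30.0), 1),
                "timestamp": time.time()}

class Gateway:
    """Aggregates sensor readings and forwards them to the cloud store."""
    def __init__(self, cloud: List[Dict]):
        self.cloud = cloud          # stand-in for a cloud database/API
        self.sensors: List[Sensor] = []

    def register(self, sensor: Sensor) -> None:
        self.sensors.append(sensor)

    def poll(self) -> None:
        for sensor in self.sensors:
            self.cloud.append(sensor.read())

class MobileDevice:
    """Queries the cloud store on behalf of the customer or shop manager."""
    def __init__(self, cloud: List[Dict]):
        self.cloud = cloud

    def latest(self, parameter: str) -> Dict:
        readings = [r for r in self.cloud if r["parameter"] == parameter]
        return readings[-1] if readings else {}

cloud: List[Dict] = []
gw = Gateway(cloud)
gw.register(Sensor("T1", "temperature"))
gw.poll()
print(MobileDevice(cloud).latest("temperature"))
```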
RESEARCH GAP
Data security and data protection, data quality, the use of common standards and protocols, interoperability, legal difficulties, and so on are the key issues and challenges facing the Internet of Things, just as they are for other Internet-based technologies. Other major challenges facing the Internet of Things, as outlined by (6), include establishing a common addressing mechanism for effective device addressing, developing low-cost embedded devices that are more energy-efficient and reliable, establishing governing bodies to oversee device usage, establishing quick and reliable communication, and minimizing the load on servers as well as embedded devices. Increased production and deployment of these devices goes beyond resolving these concerns: all actors, particularly businesses and governments, must work together to resolve them in a timely manner and create policies that ensure the correct and authorized use of the Internet of Things.
One of the most significant concerns is privacy, which specifies the rules under which individual data can be accessed. The identification and tracking of devices, as well as all of their behaviors and the extraction of personal data from application areas, make users feel that they are being monitored and can be tracked at any time (7). On the one hand, it is convenient to locate persons who have become lost, have been abducted, or have been in an accident. On the other hand, it is an inconvenience for anyone who wants to maintain their privacy. One of the major challenges that the Internet of Things must address is security. Affordable, low-cost broadband connections, as well as Wi-Fi capabilities in many gadgets, are needed for simple installations in public spaces, which leaves them vulnerable to cyber-attacks. With a special focus on IoT systems, the Internet of Things permits permanent data sharing between connected objects and defines three main components for maintaining security: authentication, secrecy, and access control (8). Compatibility is a term used to describe how well two things work together. Different manufacturers of smart-device sensors and platform providers use different data transfer protocols, which might lead to communication issues (9). There are numerous organizations and projects aimed at establishing communication standards.
Complexity is an important factor to consider. Complex systems that connect and manage various things are vulnerable to failure. Users should not put too much faith in technology, because defective scanners and sensors can result in incorrect data or no data being sent. Electromagnetic disturbances, vibrations, and human age could all play a role. Eye and finger injuries, for example, can impact scanning the iris of an eye or a fingerprint for biometric data (10).
EXPERIMENTAL METHOD
In today's world, supermarket shopping is becoming increasingly popular. A smart shopping complex based on IoT is now recommended to address the resulting issues. It includes an Android app, Wi-Fi, and the cloud. Customers can use a search tool to learn about the availability and unavailability of products in the mall. Wi-Fi, an LCD, the Android app, the Amazon cloud, and a central billing unit can all be used to wirelessly connect all types of shop equipment. The goal of this proposed system is to provide a system that can be used to address small shop management through the Internet of Things. We have added a simulation image here, in which we can see that all of the equipment is connected via a smartphone and a desktop computer, both of which are connected to the internet via the router. In the shop, the computer is connected to a switch. The smart wireless IoT device here is the home gateway, which is also connected to the switch. Our shop management system's main and core device, the home gateway, is wirelessly connected to all devices. For security purposes, our shop has two CCTV cameras, CCTV 1 and CCTV 2, both of which operate 24 hours a day and are connected to the internet. We also have a motion detector that connects to those cameras to detect people's motion. There are two lights, Light and Light 1; Light 1 operates 24 hours a day and is connected to the door, which acts as a shutter. Two fans, four windows, and an air conditioner are connected to the home gateway, with conditions for using those devices at different temperatures to control the shop's internal environment temperature. We use a temperature monitor to check the temperature here (the temperature monitor shows the shop temperature). A smoke detector, fire sprinkler, fire detector, and siren are installed in the shop for fire and safety reasons; they will notify the customer that there is an emergency in the store. A car serves as the smoke generator, and a heating element serves as the fire. A solar panel is also used here to convert solar energy into electricity. The solar panel is connected to the battery and stores energy in it; this battery provides a source of power for Light 1. All of those devices can be operated and controlled remotely using a smartphone or a computer. IoT devices are used particularly in the business and home sectors. Using IoT devices, we can control fans, air conditioning, and windows in our small shop management system by measuring temperature.
If the temperature drops below 10°C, all windows, fans, and air conditioning will be turned off.
Between 11°C and 15°C, the windows will be open. Fans will be set to low speed between 12°C and 15°C. Fans will be turned on at temperatures ranging from 15°C to 20°C. If the temperature rises above 20°C, the air conditioner will automatically turn on, and the windows will close. Above 22°C, the fan speed drops to a low level.
If the temperature falls below 20°C, the air conditioner will turn off automatically, and the fan speed will be set to high. If the door to our shop is open and unlocked, all lights will be turned on.
In this simulation, we use two lights, one of which is powered by a battery and the other by a direct power source. When electricity is unavailable, the battery will serve as a power source. Here, a small solar panel is used to charge the battery, and the power meter displays the amount of electricity produced by the solar panel. This shop has a smoke generator, fire sprinkler, and fire monitor siren for security and safety. If a smoke generator detects smoke, it will sound the alarm and open all windows. If the fire monitor detects a fire, the fire sprinkler will automatically activate, and the power to all devices will be turned off, but the windows will remain open. In general, all of our devices are intelligently connected to the door. All lights, fans, windows, air conditioning, and a printer will turn off automatically if the door is closed and locked. The CCTV camera is operational 24 hours a day, seven days a week. If any of the cameras are turned off, we can turn them on remotely. When the door is closed and locked, a motion detector helps detect any unusual activity or people. When the door is unlocked and opened, all of the store's lights will automatically turn on. All of these devices can be accessed locally and remotely via Wi-Fi from the shop computer.
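One consistent reading of the temperature thresholds listed above can be written as a small control routine; the band boundaries overlap slightly in the description, so the exact ordering chosen here, and the device and state names, are assumptions.

```python
def climate_actions(temp_c: float) -> dict:
    """Map the shop temperature to window, fan and air-conditioner states.

    Thresholds follow one interpretation of the rules in the text: everything
    off below 10 degC, windows open from 11-15 degC, fans at low speed up to
    15 degC and on up to 20 degC, AC (with closed windows) above 20 degC, and
    fan speed dropped to low again above 22 degC.
    """
    state = {"windows": "closed", "fan": "off", "ac": "off"}
    if temp_c < 10:
        return state                          # all devices off
    if temp_c <= 15:
        state["windows"] = "open"
        if temp_c >= 12:
            state["fan"] = "low"
    elif temp_c <= 20:
        state["windows"] = "open"
        state["fan"] = "high"
    else:
        state["ac"] = "on"                    # windows stay closed above 20 degC
        state["fan"] = "low" if temp_c > 22 else "high"
    return state

for t in (8, 13, 18, 21, 24):
    print(t, climate_actions(t))
```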
DISCUSSION
One of the many areas where IoT can be used is in retail management. Integration of advanced or new marketing technologies, such as IoT, AI, or Bots, in the Romanian market, can be a challenge in terms of specific costs, particularly for SMEs [13]. According to studies, Romania is still not ranked highly in terms of digitization when compared to other European countries, coming in second to last after Bulgaria. Nonetheless, the country's "digital economy" [14] shows strong growth potential. Although the Internet of Things concept is not as popular among SMEs in the IT industry as it has been in other studies [15][16], it has tremendous potential because IoT adoption and integration is a top priority for the future. According to other studies [17], more than half of the analyzed companies (64.2 percent) have already integrated some of the new technologies into their marketing processes. Despite the fact that Artificial Intelligence was the most popular technology among the analyzed companies, two of them had integrated IoT technologies.
Companies that do not expand their operations to new locations use advanced marketing technologies only infrequently, whereas those with more than four locations use them more often. A business that is able to operate in multiple locations is thought to have more financial clout. During the simulation of our proposed system, we attempted to manage it using a Wi-Fi-connected smartphone that allowed remote control. The simulation's results are demonstrated by using smartphone apps to control devices such as fans, lights, fire monitoring, fire sprinklers, air conditioning, CCTV cameras, motion sensors, motion detectors, fire detectors, temperature sensors, windows and doors, and so on. We can manage our shop with a smartphone and ensure its security, protecting it from threats such as theft and fire and addressing safety issues, while also conserving energy. When the shop is empty, the system turns off all lights and closes the door to conserve energy.
FUTURE WORK
In the future, SMEs in management may be required to use robust machine learning methods to ensure better quality control and production monitoring. Machine-level predictive analytics could be enabled by smart IoT devices attached to a single machine. Data would be uploaded to a cloud service, which would store it and make it available for management reports. Small shop management is one of the IoT's smaller components, in which we attempt to demonstrate how we can manage a small shop using IoT. We only put the management system's structure in place here.
We paid no attention to the security and privacy features that could be implemented in the future.
We can also add more equipment if necessary, and in the near future we can create and implement a mobile application to control the entire system. This could be in the form of a ready-to-use product that can be deployed at any point in the manufacturing process without the need for computer scientists to set it up. Choosing an IoT platform is a significant decision that necessitates effectively exploiting the IoT network for the development of many enterprise IoT apps and services. While many applications require standardised, low-performance, and low-power IoT platforms, high-performance platforms are still required to meet the computational demands of data-intensive IoT applications and edge devices. Without a doubt, the rapid development of new technologies has an impact on all aspects of daily life, and the analytics and reports presented show that these trends will continue and grow in the coming years. We are all part of this technological revolution, whether we like it or not, and the most important thing is to learn how to use it properly and wisely. The Internet of Things should be integrated according to a specific vision and idea, identify opportunities for using technology, attract business institutions and government, and create a culture of Internet of Things use. In addition to selecting the appropriate platform, businesses must decide whether they will install their own internal IoT infrastructure, third-party IoT infrastructure, or a hybrid of the two. | 2022-06-09T01:15:50.315Z | 2022-06-07T00:00:00.000 | {
"year": 2022,
"sha1": "0abbe1638a54b4e1b6139b4016b9017a7a8e79c3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0abbe1638a54b4e1b6139b4016b9017a7a8e79c3",
"s2fieldsofstudy": [
"Computer Science",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
215814494 | pes2o/s2orc | v3-fos-license | Wave Triad with Forcings as a Nambu System
The dynamics of an ideal wave triad with real amplitudes has a well-known Nambu representation with energy and enstrophy as conservation laws. Here we derive Nambu representations for systems with constant forcings. These equations have been applied to triads of Rossby-Haurwitz waves in the atmosphere where they are forced with orography. The conservation laws are based on relations for the unforced amplitudes and a Hamiltonian given by the total energy plus terms involving the unforced amplitudes. The forcing of the unstable wavenumber causes a recharge cycle.
Introduction
Nambu [1] has suggested an extension of conservative dynamical systems which is based on the Liouville Theorem. In the simplest nontrivial case this pertains to a system with three degrees of freedom with a second conservation law in addition to a Hamiltonian. The dynamics is given in terms of a Nambu bracket which generalizes Lie-Poisson brackets [2]. Casimirs in this theory are given by the second conservation law in the Nambu bracket. The concept of Nambu mechanics has been extended to continuous hydrodynamic systems with a finite number of conserved integrals [3,4].
In geophysical flows the weakly nonlinear interaction of Rossby-Haurwitz waves is considered to be a main constituent of atmospheric turbulence (see e.g. [5]). Three waves can build a resonant triad with two conservation laws coined as energy and enstrophy [6,7,8]. It is well-known that the triad equations have a Nambu or Lie-Poisson structure [9] with the same bracket as in rigid body dynamics. In the atmosphere Rossby waves are forced by constant orographic inhomogeneities [8,10] and the amplitudes show a typical recharge-discharge cycle.
The aim of this paper is to present Nambu representations for forcings in the real wave triad equations. Harris et al. [11] have determined stability and boundedness properties of the equations in complex form with a forcing applied to the unstable mode. The Hamiltonian for our system is the unforced energy plus functions of the unforced amplitudes. For the second conservation laws, the Casimir functions in Hamiltonian theory, we replace enstrophy by relations obtained from the unforced equations. The Nambu bracket is the same as in the unforced equations. In simulations with an intermediate wavenumber forcing, a typical recharge process is induced. Recharge cycles are common in geophysical fluid dynamics and typically modeled as nonlinear oscillators (see the models for baroclinic storms [12], convection [13], and wave-mean flow interaction [14]). To demonstrate the usefulness of the Nambu representation we derive the corresponding equations by approximating the conservation laws for the recharge cycle.
The paper is organized as follows: In Section 2 the geometric representation of spherical Rossby wave triads without forcing is revisited. In Section 3 triads with different forcings are described as Nambu systems. For the intermediate wavenumber forcing a recharge cycle is obtained and approximated as a canonical Hamiltonian system. In Section 4 the results are summarized and discussed.
Spherical Rossby wave triads without forcing
Large scale atmospheric dynamics is governed by the barotropic vorticity equation. For small amplitudes linear solutions are given by noninteracting Rossby-Haurwitz waves. A triad of these waves is given when they satisfy resonance conditions [6,7,8]. The three waves are decoupled from the rest and energy is only exchanged within this triad. Note that the interaction within a triad is weakly nonlinear and only valid for moderate amplitudes. For higher amplitudes the decoupling breaks down, the waves interact with all others, and the flow becomes turbulent.
Reznik et al. [7] derived the amplitude equations of spherical Rossby wave triads by a multiple time scale analysis of the barotropic vorticity equation (BVEQ) (see also [6], [15]). The equations are real and the phases are disregarded. The amplitudes of the waves in a triad vary slowly compared to the wave frequency. The nonlinearity in the BVEQ requires that three waves form a triad if the meridional wave numbers m and the frequencies ω satisfy the resonance conditions m_1 + m_2 = m_3 and ω_1 + ω_2 = ω_3, where ω_1 = ω_1(n_1, m_1), etc., with the total wave number n and the linear dispersion relation ω(n, m) of the Rossby waves.
The amplitude equations are not determined in the BVEQ directly, but by the condition in the expansion which requires that the perturbations remain bounded for long times. This leads to the three equations (1) for the slow amplitudes A_1, A_2, and A_3 in a triad. The parameters N_i = n_i(n_i + 1) are determined by the total wave numbers of the Rossby waves and Z is the interaction coefficient [7]. Note that the phase space divergence of the equations (1) for the vector A = (A_1, A_2, A_3) vanishes.
Nambu representation
The system (1) has two conservation laws, the energy (2) and the enstrophy (3). Due to the conservation laws the equations are integrable; exact solutions are given in terms of Jacobian elliptic functions. The amplitude equations can be formulated as a Nambu system (4) for the state space, where the ∇-operator represents A-derivatives. The dynamics of an arbitrary function is generated by a bracket which is the rigid-body Nambu bracket up to a constant factor. A Nambu representation is suggested in [9] and interpreted geometrically by [16]. A Lie-Poisson structure is obtained by {F, H}_C = {F, C, H} [2], where C is a Casimir. For the geometric visualization of phase-space dynamics it is helpful that the equations are unchanged for linear combinations of the conservation laws, e.g. ∇C × ∇(H + C).
Standard amplitudes
It is convenient to transform the dynamic equations to standard amplitudes (see e.g. [10]). Here we consider a fixed wave number ordering and standard amplitudes Y_1, Y_2, Y_3, which are all positive. The dynamical equations (10) for the standard variables follow; the unstable mode is the intermediate-wavenumber amplitude Y_2. The conservation laws (2, 3) for the standard variables give the Hamiltonian H (11) and the Casimir C. For the state vector Y = (Y_1, Y_2, Y_3) the Nambu form (4) reads dY/dt = ∇C × ∇H (the interaction coefficient Z is omitted in the following, since it can be included in the time scale), with the ∇-operator representing Y-derivatives.
In the analysis below, where we consider forced equations, we will use Casimir functions based on conservation laws derived from the unforced equations. The advantage of these functions is that they can be derived from a pair of the unforced equations. A well-known conservation law of (10), based on the Manley-Rowe relations, can be derived by integrating Y_1 dY_1 = Y_3 dY_3 in the equations for Y_1 and Y_3. This conservation law allows an alternative Nambu form of the conservative equations, dY/dt = ∇C_2 × ∇H (15), where C_2 is a Casimir function. Here we incorporated the factor 1/Q in (13) into the Casimir. The bracket notation for (15) is dF/dt = {F, C_2, H}, with the rigid-body Nambu bracket (18) for an arbitrary function F.
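The following minimal sketch illustrates the unforced dynamics numerically. Since the displayed equations for the standard amplitudes are not reproduced in the text above, the sketch assumes a rigid-body-type form of the real triad equations, chosen so that the intermediate mode Y_2 is the unstable one and so that the quadratic combinations quoted here (including the relation obtained from Y_1 dY_1 = Y_3 dY_3) are exact invariants; the initial conditions and integration time are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def triad_rhs(t, y):
    """Unforced real triad in standard variables (assumed rigid-body-type form):
    dY1/dt = -Y2*Y3, dY2/dt = Y1*Y3, dY3/dt = -Y1*Y2.
    This choice conserves Y1^2 + Y2^2, Y2^2 + Y3^2 and Y1^2 - Y3^2 (the
    Manley-Rowe-type relation quoted in the text) and makes the intermediate
    amplitude Y2 the unstable mode."""
    y1, y2, y3 = y
    return [-y2 * y3, y1 * y3, -y1 * y2]

y0 = [0.01, 1.0, 0.01]                        # small perturbations of the unstable mode
sol = solve_ivp(triad_rhs, (0.0, 40.0), y0, max_step=0.01)

Y1, Y2, Y3 = sol.y
H = 0.5 * (Y1**2 + Y2**2)                     # quadratic invariant standing in for H
C = 0.5 * (Y2**2 + Y3**2)                     # quadratic invariant standing in for C
print("relative drift of the invariants:", np.ptp(H) / H[0], np.ptp(C) / C[0])
```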
Forced triad
We consider constant forcings f_1, f_2, f_3 in the three equations (19). Since we did not include friction, the phase-space divergence ∇ · Ẏ vanishes. To derive the geometric representation we consider the three forcing terms separately.
Forcing of the amplitude Y 1
First we restrict the forcing to the small-wavenumber amplitude Y_1, while the forcings in Y_2 and Y_3 are disregarded, f_2 = f_3 = 0 in (19). The conservation law derived from the unforced Y_2 and Y_3 equations is used to define the Casimir C_1. The 'forced Hamiltonian' H_f1 consists of H (11) plus a term involving the forcing and the unforced amplitudes. The Nambu representation for the system with a forcing in Y_1 only then follows: for an arbitrary function F(Y_1, Y_2, Y_3) the dynamics is dF/dt = {F, C_1, H_f1} with the rigid-body bracket (18).
Forcing of the amplitude Y 2
Here we consider a constant forcing f_2 in the unstable mode (25). For the Nambu representation we use the Casimir (16) and the corresponding 'forced Hamiltonian' (26).
The Nambu representation for the system with a forcing in Y_2 only is dY/dt = ∇C_2 × ∇H_f2, and for an arbitrary function F the bracket is dF/dt = {F, C_2, H_f2}. The conservation laws for this forcing, (14) and (26), allow wide excursions of Y_1 and Y_3 with opposite sign. Therefore, the dynamics differs from the f_1-forcing, where J_1² = Y_2² + Y_3² (21) remains constant. This leads to a recharge behavior for the f_2-forcing (Fig. 1), which is considered in more detail below.
Forcing of the amplitude Y 3
If the forcing is applied in Y_3 only, the Casimir C_3 used in the Nambu representation is derived from the unforced Y_1, Y_2 equations, and a forced Hamiltonian H_f3 is defined accordingly. The opposite signs in (23) and (30) originate in the definitions (9) with D_3 + D_1 = D_2. The bracket for the forcing f_3 is dF/dt = {F, C_3, H_f3}.
Recharge process in the forced Y 2 -equation
The recharge process for a forcing of the intermediate wavenumber in the triad is demonstrated in a numerical simulation of (25) with the wavenumbers n_1 = 6, n_2 = 7, n_3 = 8. This reveals a recharge cycle with a gradual increase of the forced amplitude, a sudden burst in the unforced waves (denoted as perturbations here), a reversal of the forced wave and a subsequent recovery (see Fig. 1; compare also Figure 4 in [10] with a forcing in the complex equations). Harris et al. [11] underline that this forcing is not a source of energy. The process is characterized by an opposite sign of the perturbations (the ratio of the perturbation amplitudes in Fig. 1 tends to −1 during the recharge). When this relation holds, the perturbations grow at a rate proportional to Y_2. To describe this process we approximate the equations for a small deviation δ from the opposite sign of the perturbations. During the recharge interval, when Y_2 > 0, the sum δ = Y_1 + Y_3 decays (32) and the perturbations align to Y_1 = −Y_3. These equations can be obtained in a Nambu form if we approximate the two conservation laws (16, 26) in the same Nambu operator (18). We approximate the two conservation laws to order O(δ); this yields the approximated Casimir and Hamiltonian. The main equations governing the recharge process are obtained if we ignore δ as a degree of freedom in (32); similar nonlinear oscillators have been suggested by [12] and [13]. This reduced system possesses a canonical Hamiltonian representation in terms of the variables Y_2 and η = (1/2) log(Y_1²/2), which describes the perturbation intensity. We can write it as a symplectic system with the Hamiltonian H_r = (1/2) Y_2² + exp(2η) − f η.
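A minimal numerical illustration of this recharge cycle, under the same assumed rigid-body-type triad form as in the earlier sketch, with a constant forcing f_2 added to the Y_2 equation (the forced equations (25) themselves are not reproduced above, so this is a stand-in rather than the exact system of the paper): the integration shows the slow growth of the forced amplitude, the burst of the perturbations with Y_1 ≈ −Y_3, the reversal and the subsequent recovery.

```python
import numpy as np
from scipy.integrate import solve_ivp

F2 = 0.02   # hypothetical constant forcing of the unstable mode

def forced_triad(t, y, f2=F2):
    """Assumed forced triad: the unforced rigid-body-type equations plus a
    constant forcing f2 in the Y2 equation, dY2/dt = Y1*Y3 + f2."""
    y1, y2, y3 = y
    return [-y2 * y3, y1 * y3 + f2, -y1 * y2]

sol = solve_ivp(forced_triad, (0.0, 400.0), [0.01, 0.0, -0.01], max_step=0.05)
Y1, Y2, Y3 = sol.y

# Diagnostics of the recharge cycle: the perturbations align with opposite sign
# (Y1 ~ -Y3) while the forced amplitude Y2 slowly recharges between bursts.
print("min/max of the forced amplitude Y2:", Y2.min(), Y2.max())
print("Y1 + Y3 at the final time (near 0 when opposite-aligned):", Y1[-1] + Y3[-1])
```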
Summary and Discussion
In this paper we derived Nambu representations for constant forcings in the three wave equations for real amplitudes. A geophysical example is the amplitude equations for resonant Rossby wave triads [6,7]. Without forcing these equations possess two conservation laws, coined as energy and enstrophy. The dynamics can be written in a Nambu form with the canonical Nambu bracket (this is already known from [9]); thus the triads are mathematically equivalent to rigid body dynamics. An alternative Nambu description is given if enstrophy is replaced by a geometric conservation law based on the Manley-Rowe relations. For forcings in the three equations Nambu forms are obtained with the Hamiltonian extended by perturbations given by functions of the unforced amplitudes. The second conservation laws are based on relations obtained in the unforced equations. The forcing of the intermediate (unstable) wavenumber is considered in detail since these equations yield a recharge process. This is characterized by an opposite alignment of the unforced amplitudes. The approximated equations are obtained in a Nambu representation with expanded conservation laws.
The main result is that we could describe a constantly forced system in a Nambu representation. Note that we did not include dissipation associated with phase space convergence which needs to be included as a separate gradient term [17]. A representation of a physical system in terms of its conservation laws in a Nambu form is useful for the following reasons: (i) Time evolution is interpreted as a nondivergent flow in phase space and conservation laws act as stream-functions, (ii) Consistent approximations are obtained by approximating the conservation laws [4], (iii) Conservative numeric codes can be derived by symmetry properties of the Nambu bracket [18]. Further applications of conservation laws are in nonlinear stability by the Energy-Casimir method [19], and statistical mechanics [20]. For a brief review on applications of Nambu mechanics in geophysical fluid dynamics see the corresponding chapter in [21].
As an outlook this finding gives support to a modeling strategy which is purely based on conservation laws. Blender and Badin [22] have demonstrated that the Rayleigh-Bénard equations can be derived based on a bilinear structure of a conservation law (the Casimir) in the canonical Nambu bracket. Kaltsas and Throumoulopoulos [23] could derive new conservative equations in magneto-dynamics based on this idea. Very promising, but less pursued in hydrodynamics, is the parameterization of processes where we know exact conservation laws (see [24] for chemical reactions). | 2020-04-20T01:00:36.038Z | 2020-04-17T00:00:00.000 | {
"year": 2020,
"sha1": "49ef713eb758cbd763def1e715eaae8af3817444",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "49ef713eb758cbd763def1e715eaae8af3817444",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
15937671 | pes2o/s2orc | v3-fos-license | The Parkes quarter-Jansky flat-spectrum sample 3. Space density and evolution of QSOs
We analyze the Parkes quarter-Jansky flat-spectrum sample of QSOs in terms of space density, including the redshift distribution, the radio luminosity function, and the evidence for a redshift cutoff. With regard to the luminosity function, we note the strong evolution in space density from the present day to epochs corresponding to redshifts ~ 1. We draw attention to a selection effect due to spread in spectral shape that may have misled other investigators to consider the apparent similarities in shape of luminosity functions in different redshift shells as evidence for luminosity evolution. To examine the evolution at redshifts beyond 3, we develop a model-independent method based on the V_max test using each object to predict expectation densities beyond z=3. With this we show that a diminution in space density at z>3 is present at a significance level>4 sigma. We identify a severe bias in such determinations from using flux-density measurements at epochs significantly later than that of the finding survey. The form of the diminution is estimated, and is shown to be very similar to that found for QSOs selected in X-ray and optical wavebands. The diminution is also compared with the current estimates of star-formation evolution, with less conclusive results. In summary we suggest that the reionization epoch is little influenced by powerful flat-spectrum QSOs, and that dust obscuration does not play a major role in our view of the QSO population selected at radio, optical or X-ray wavelengths.
Introduction
This is the last in a series of three papers describing the results of a program to search for high-redshift radio-loud QSOs and to study the evolution of the flat-spectrum QSO population.
Paper 1 set out the sample, discussing selection, identification and reconfirmation programmes to determine the optical counterparts to the radio sources. Paper 2 (Hook et al. 2003) presented new spectroscopic observations and redshift determinations. This paper considers the radio-loud QSO space distribution, the epoch-dependent luminosity function, the evidence for a redshift cutoff provided by the sample, and the form of this cutoff.
Paper 1 described how the identification programme for 878 flat-spectrum radio sources selected from the Parkes catalogues yielded a near-complete set of optical counterparts. Indeed for the sub-sample at declinations above −40 • with flux densities above catalogue completeness limits, only one source remains unidentified. Of the 379 QSOs in this sub-sample, 355 have measured redshifts, obtained from earlier observations and the extensive spectroscopy programme described in Paper 2. This relative completeness is ideal for studies of space density, as it becomes possible to map the entire "quasar epoch" with a single homogeneous sample, having no optical magnitude limit and free of obscuration effects. In fact a sub-sample of objects from an earlier analysis was used by Shaver et al. (1996) to study the evolution of QSO space density at high redshifts. The study showed that the space density of high-luminosity radio QSOs decreased significantly at redshifts beyond 3. Preliminary data were also used by Jackson & Wall (1999) in considering a dual-population scheme of space densities for unified models of QSOs and radio galaxies.
General features of the luminosity function and its redshift dependence have long been established for QSOs selected at optical and radio wavelengths (e.g. Longair 1966; Schmidt 1968; Fanti et al. 1973). Powerful evolution is required, similar in magnitude for selection at either waveband; the space density of the more luminous QSOs at redshifts of 1 to 2 is at least 10^2 times that at the present epoch. It has been hotly debated as to whether the form of this change is luminosity evolution (e.g. Boyle et al. 1988) or luminosity-dependent density evolution (e.g. Dunlop & Peacock 1990). It does not matter: physical models are not available that require either form, although it is clear that luminosity evolution results in lifetimes of non-physical length (Haehnelt & Rees 1993, and references therein). The space density of radio-selected QSOs, constituting some 10 per cent of all QSOs, generally appears to parallel that of optically-selected QSOs (e.g. Schmidt et al. 1991; Stern et al. 2000).
There are many reports of a redshift cutoff in the literature: paper after paper speaks of 'the quasar epoch', 'a strongly-evolving population peaking at a redshift of about 2', or 'the quasar redshift cutoff' without specific reference. For optically-selected QSOs, several classic studies demonstrated that such a cutoff does exist (Schmidt et al. 1991; Warren et al. 1994; Kennefick et al. 1995). The Sloan Digital Sky Survey (SDSS) has now found QSOs out to redshifts beyond 6, and analyses of the space density (Fan et al. 2001a,c,b) provide the strongest evidence to date of the drop in space density beyond z = 3. X-ray surveys now appear to show that the X-ray QSO population exhibits a decline at high redshifts similar to that found for optically-selected QSOs (Hasinger 2003; Barger et al. 2003; Silverman et al. 2004). But do all these observations indicate a real diminution or, at least at optical wavelengths, could it be due to a dust screen (Heisler & Ostriker 1988; Fall & Pei 1993)? It is here that radio-selected samples such as the present one can provide a powerful check: if a significant diminution is seen in the radio luminosity function, it cannot be the result of dust obscuration. Dunlop & Peacock (1990) presented some evidence for just such a cutoff of the radio luminosity function (RLF) for flat-spectrum (QSO-dominated) populations; and an earlier analysis of a sub-sample from the present work (Shaver et al. 1996) added confirmation. More recently Vigotti et al. (2003) defined a complete sample of 13 radio QSOs at z ∼ 4, from which they concluded that the space density of radio QSOs is a factor of 1.9 ± 0.7 smaller than that of similar QSOs at z ∼ 2. However, Jarvis & Rawlings (2000) questioned these radio-QSO results, focussing on the possible effects of spectral curvature.
A possible dust screen has serious implications for the interpretation of the Hubble diagram for SN Ia su-pernovae. Assuming no obscuration, current results from the SCP (Supernova Cosmology Project) collaboration (Knop et al. 2003) and the Hi-z team (Tonry et al. 2003) favour an Ω m = 0.3, Ω Λ = 0.7 universe. Two further related issues make delineation of the QSO epoch very important: galaxy formation, and the reionization of the Universe.
Galaxy formation
The dramatic cosmic evolution of radio galaxies and QSOs stood as a curiosity on its own for over 30 years since the birth of the idea (Ryle 1955), clouded as it was in the source-count controversy (Scheuer 1990). It is relatively recently that corresponding evolution has been delineated for the star-formation rate (Lilly et al. 1996;Madau et al. 1996) and for galaxy evolution, particularly blue galaxies (Ellis 1999). The correlation between star-formation rate and AGN space density (Wall 1998) strongly suggested a physical connection (Boyle & Terlevich 1998). Before the emergence of the Lilly-Madau plot of star-formation history, it was recognized that the model of hierarchical galaxy development in a Cold Dark Matter (CDM) Universe would result in a 'quasar epoch' (Haehnelt 1993;Haehnelt & Rees 1993). The issue of 'quasar epoch' and 'redshift cutoff' has therefore assumed particular importance in consideration of galaxy formation in low-density CDM universes. The very existence of any high-redshift QSOs sets constraints on the epoch of formation of the first galaxies. Haehnelt (1993) showed how the then-new COBE normalization (Smoot et al. 1992) together with the QSO luminosity function at high redshifts as measured by Boyle et al. (1991), provided substantial information on the initial fluctuation spectrum and the matter mix. He found that the z = 4 luminosity function excluded an initial-spectrum index of n ≤ 0.75 or a Hot Dark Matter fraction ≥ 25 per cent. Relevant to the current view of the low-matter-density CDM Universe, he found that Ω Λ ≤ 0.75. Haehnelt & Rees (1993) developed a model for the evolution of the QSO population based on the existence of ∼ 100 generations and linking the QSO phenomenon with the hierarchical build-up of structure in the Universe. The evolution of host objects is mirrored in the evolution of the mass of newly formed black holes; only a moderate efficiency for formation of an average black hole is necessary to model the luminosity function. The model suggested that nearly all galaxies are likely to have passed through a QSO phase. Kauffmann & Haehnelt (2000) produced a more sophisticated model by incorporating a simple scheme for the growth of supermassive black holes into the CDM semianalytic models that chart the formation and evolution of galaxies. In addition to reproducing the observed relation between bulge luminosity and black-hole mass in nearby galaxies (Magorrian et al. 1998), the model is able to mimic the enormous increase in the QSO population from redshift 0 to 2, a feature that the Haehnelt-Rees model was able to describe only qualitatively. Their conclusion: "Our results strongly suggest that the evolution of supermassive black holes, quasars and starburst galaxies is inextricably linked to the hierarchical build-up of galaxies."
Reionization
The paradigm of hierarchical structure growth in a CDM universe has long suggested that after the recombination epoch at z ∼ 1500, the reionization of the Universe took place at redshifts between 6 and 20 (e.g. Gnedin & Ostriker 1997). This reionization is predicted to be patchy and gradual (Miralda-Escudé et al. 2000), although some models indicate that it should happen quite rapidly (e.g. Cen & McDonald 2002; Fan et al. 2002). Two major observational advances support the 'patchy and gradual' scenario. Firstly, SDSS discovery of QSOs at redshifts of 6 or more (Fan et al. 2000, 2001a; Becker et al. 2001) has given a glimpse of what may be the end of the epoch of reionization: the first complete Gunn & Peterson (1965) trough has been observed in the z = 6.29 QSO SDSS 1030+0524 (Becker et al. 2001; Pentericci et al. 2002) and a second has been seen in the z = 6.43 QSO SDSS J1148+5251 (Fan et al. 2003). There is disagreement as to whether this marks the true end of reionization (Songaila & Cowie 2002); but the suggestion is that it may be essentially complete by z ∼ 6 − 7. Secondly, the detection of polarized anisotropies with the Wilkinson Microwave Anisotropy Probe (WMAP) has resulted in a measurement of the optical depth τ ∼ 0.17 to Thomson scattering (Bennett et al. 2003; Kogut et al. 2003), implying a reionization redshift of 17 ± 5. The CMB is sensitive to the onset of ionization, while Gunn-Peterson troughs are sensitive to the late stages, the cleanup of remaining HI atoms. Resolving the large uncertainties in these redshifts could yet result in a rapid reionization scenario. Nevertheless several recent papers (see e.g. Haiman & Holder 2003) address the complex and interacting suite of physical mechanisms that may be at play during an extended 'patchy and gradual' reionization epoch 6 < z < 17.
In either a fast or a gradual scenario, identifying the source of this reionization as well as its epoch is of vital importance for such interconnected reasons as:
- The role of reionization in allowing protogalactic objects to cool into stars,
- The small-scale temperature fluctuations in the CMB and how these are influenced by patchiness in the reionization, and
- The epoch of the first generation of stars, or galaxies, or collapsed black-hole systems.
It is most likely that the reionization is via photoionization by UV radiation from stars or QSOs, rather than collisional ionization in e.g. blast waves from the explosive deaths of Population III stars (Madau 2000). It may be possible to detect this reionization epoch directly as a step in the background radiation at radio frequencies between 70 and 240 MHz (redshifted 21-cm HI) or in the infrared, 0.7 to 2.6 µm, from H recombination (Shaver et al. 1999). It may be possible with results from the Planck mission to identify features in the CMB that identify what the predominant mechanisms are; and it may be possible to detect the UV sources responsible for the ionizing flux at z ∼ 10 − 20 with the James Webb Space Telescope (Haiman & Holder 2003).
QSOs have long been prime candidates for this reionization. However the apparent decline in space density (from the evidence summarized above and by Madau et al. 1999), is inconsistent with this interpretation. Madau (2000) showed that in the face of this apparent diminution, UV luminosity functions of Lyman-break galaxies (LBG) provide 4 times the estimated QSO contribution at z = 3. It is now commonly accepted that such objects (or their progenitor components) take on the mantle. The formation of short-lived massive stars in such galaxies provides the UV photons (Haehnelt et al. 2001), although QSOs may supply a significant fraction of the UV background at lower redshifts.
Because the cooling time is long, the low-density IGM retains some memory of when and how it was ionized. Several investigators have found a peak in temperature of the IGM at z ∼ 3 (Schaye et al. 2000; Theuns et al. 2002) close to the peak of the 'quasar epoch'. Moreover, observations of several QSOs at the wavelength of HeII Lyα near z = 3 suggest delayed reionization of HeII, with the process not yet complete by z = 3 (Kriss et al. 2001). The implication is that the QSO ionizing photons coincident with the peak in activity both reionize HeII and dump entropy into the IGM to raise its temperature.
In all of these aspects, it is clear that conclusions on ionizing flux from QSOs are dependent on poorly determined high-power regions of luminosity functions, on apparent cutoffs observed primarily in optically-selected samples, and then only for the most luminous QSOs.
It is a primary purpose of this paper to determine the radio luminosity function using the near-complete data of the present sample, and to examine the evidence for a redshift cutoff. Before this, we discuss the populations involved in the flat-spectrum sample by examining the N(z) relation (§ 2). Subsequent to the RLF determination in § 3, we consider the issue of a redshift cutoff (§ 4), and the form of this cutoff. In § 5 we construct an overall picture of epoch dependence of space density for radio-loud QSOs. We compare this with the parallel results for QSOs selected at optical and X-ray wavelengths, and with the behaviour of star-formation rate with epoch. The final section (§ 6) summarizes results from this paper and our preceding two papers.
The redshift distribution
For a sample of objects complete to some flux-density limit, the redshift distribution, N(z), gives preliminary information on the epoch of the objects, and allows the most direct comparison with other samples. The redshift distribution gives direct information on neither the luminosity function nor its epoch dependence; however it provides essential data for use with other data such as source counts to enable the construction of epoch-dependent luminosity functions. There have been many versions of this. Most are a variant on either the V_max method (Schmidt 1968) or the technique of defining the luminosity distribution (Wall et al. 1980; Wall 1983), obtained when a complete N(z) is available at one flux-density level at least.
Such modelling processes now make use of statistical techniques to incorporate data sets of varying completeness at many frequencies and flux-density levels. The sample described here represents only one such data set, more complete than most. Dunlop & Peacock (1990) carried out the most extensive such modelling. They took as a starting point two populations, 'flat-spectrum' and 'steep-spectrum' radio sources, now broadly considered in the light of unified models as beamed radio sources (radio-loud QSOs and BL Lac objects) and their unbeamed progenitors, or hosts (FRI and FRII radio galaxies). All these objects are deemed to be powered by accretion-disk / rotating black-hole systems from which a pair of opposing relativistic jets feed double radio lobes. The single axis is collimated during the feeding process by rotation of the black-hole system. The beamed objects, QSOs and BL Lacs, beamed because of relativistic ejections of components along axes aligned with the line-of-sight, have radio structures dominated by relativistically-boosted core emission. This core emission shows the effects of synchrotron self-absorption and therefore has a flat or inverted radio spectrum. The radio emission from powerful radio AGN whose axes are not aligned with the line-of-sight is dominated by their steep-spectrum lobe emission, on large scales of tens of kpc up to hundreds of Mpc. The dichotomy between beamed and unbeamed objects as evidenced by their integrated radio spectra is shown in Fig. 1.
The widely-used Dunlop-Peacock models of the luminosity functions may be simply tested against the present data by means of the N (z) distributions that they predict.
We constructed a redshift distribution from the sample of Paper 1 as follows. We selected all sources with S_2.7GHz ≥ 0.25 Jy in regions for which the 2.7-GHz flux-density limit was 0.25 Jy or less, and at declinations 2.5° ≥ δ ≥ −40°. We refer to this as Sample 1; the total area it covers (Fig. 1 of Paper 1) is 2.676 sr. The source composition, identification and redshift data for this sample are shown in Table 1. Choice of the declination limit comes from both identification and radio-spectral completeness; see § 3.
The entries in the identification column, Table 1, refer to (QSO)s, (BL) Lac objects, (G)alaxies, (Obsc)ured fields, and (e) not identified for reasons discussed in Paper 1. As reasonable approximations, the 20 QSOs without measured redshifts were assumed to have the same redshift distribution as those with redshifts; likewise the unmeasured redshifts of the 34 BL Lac objects were assumed to have the same distribution as those measured. Such an approximation is inappropriate for the galaxies, however. A crude Hubble diagram was plotted for the 27 galaxies with redshifts and a simple polynomial was fitted to make rough estimates of the redshifts for the remaining 57 galaxies based on their B magnitudes. Finally the distribution was scaled by (344 + 111 + 3)/(344 + 111) to obtain the final N(z) of Fig. 2.
[Fig. 2 caption, partially recovered (its truncated opening sentence refers to Table 1): The hatched area shows the redshift distribution for beamed objects alone, the QSOs + BL Lac objects, while the clear region represents the galaxies. The 6 dotted lines show the appropriately-scaled distributions predicted by the flat-spectrum (α ≥ −0.5) components of the Dunlop-Peacock (1990) luminosity-function models, with the dashed line distinguishing their pure-luminosity-evolution model. The solid line represents the average of 6 of the models, omitting the model showing the very steep rise to z = 5. The symbol + line systems show the predictions from the dual-population models of Jackson & Wall (1999), red representing all beamed objects (QSOs + BL Lacs), blue for QSOs only.]
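The rough magnitude-based redshift estimation described above can be sketched as a simple polynomial fit; the magnitudes and redshifts below are placeholders, not the Table 1 data.

```python
import numpy as np

# Placeholder data standing in for the 27 galaxies with measured redshifts.
B_known = np.array([17.5, 18.2, 19.0, 19.6, 20.3, 21.0, 21.8])
z_known = np.array([0.05, 0.08, 0.12, 0.17, 0.25, 0.35, 0.50])

# Fit a low-order polynomial to log10(z) as a function of B magnitude.
coeffs = np.polyfit(B_known, np.log10(z_known), deg=2)

def estimate_z(B_mag):
    """Rough redshift estimate for a galaxy of magnitude B_mag."""
    return 10.0 ** np.polyval(coeffs, B_mag)

# Apply to galaxies lacking spectroscopy (again placeholder magnitudes).
for B in (18.8, 20.0, 21.5):
    print(f"B = {B}: z_est ~ {estimate_z(B):.2f}")
```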
Dunlop & Peacock derived luminosity functions for their two-population model, flat-spectrum and steep-spectrum radio sources, representing the luminosity functions as polynomials over the surface (ρ, P_radio, z), and obtaining coefficients by best-fitting to multi-frequency survey data including source counts and redshifts. Different models resulted from different starting points and factorizations of the epoch-dependent luminosity function. Their division between flat-spectrum and steep-spectrum sub-populations took spectral index α = −0.5 as the dividing criterion. The predictions of redshift distributions from the flat-spectrum portions of the Dunlop-Peacock luminosity functions are shown in Fig. 2. In order to scale these to our spectral-selection criterion, we used the spectral-index histogram of Fig. 1; the ratio of objects with α ≥ −0.4 (our selection criterion, Paper 1) to those with α ≥ −0.5 is 1060/1275 = 0.831.
In view of uncertainties in spectral index and of equating the flat-spectrum population of Dunlop & Peacock with compact radio sources, the overall agreement is good. The form of the decline in N(z) to higher redshifts is impressively described by the Dunlop-Peacock models. Two models stand out in Fig. 2. One model with a space-density cutoff at z = 5 predicts a redshift distribution greatly at variance with observations, showing a dominant spike in the distribution at redshifts just below this cutoff. It has been left out of the averaging process. The pure-luminosity-evolution model, shown as the dashed line, is distinct in having a quicker rise and flatter maximum than the others. These two features provide a better representation of the data in the range 0 < z < 1.5 than do the other models.
The good fit of the Dunlop-Peacock models to the total N (z) distribution for flat-spectrum objects does not imply a good description of the N (z) for beamed objects (hatched area, Fig. 2) alone. The Dunlop-Peacock models clearly rely on the presence of low-luminosity flat-spectrum galaxies for the quality of overall fit; the 'flat-spectrum' models describe the beamed objects alone rather poorly.
Models considering populations in terms of beamed and host objects were developed by Jackson and Wall (Wall & Jackson 1997; Jackson & Wall 1999). The N(z) predictions from these models are shown in Fig. 2. Agreement is reasonable; the normalization is correct, and the forms of the curves are similar. This agreement is expected on the basis of the fit of the model to the 5-GHz source count and the incorporation of a redshift cutoff in the model evolution. The models over-predict objects at z > 2, due primarily to a lack of constraint on the evolution of low-luminosity sources.
The Radio Luminosity Function (RLF)
Completeness of identifications enables the radio luminosity function to be constructed in a straightforward way, using the 1/V_obs approach (Schmidt 1968; Felten 1976; Avni & Bahcall 1980). The contribution of each object to space density is calculated as the reciprocal of the observable volume, the volume defined by the redshift range(s) in which the object can be seen. Because the sample is optically complete, only radio data (apart from the redshifts) are relevant in defining this range.
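Since the estimator itself is compact, a brief sketch may help. The fragment below is illustrative only (array names, units and binning are assumptions, not the authors' code); it assumes each source already carries the observable volume(s) derived from its radio data, and accumulates the 1/V_obs contributions into luminosity bins.

import numpy as np

def rlf_from_vobs(log_p, vobs_lists, logp_edges):
    """Binned space density from the 1/V_obs estimator.

    log_p      : log10 of rest-frame 2.7-GHz power for each source
    vobs_lists : per source, a list of observable volumes [Mpc^3], one entry
                 per disjoint redshift interval in which the source meets the
                 flux-density and spectral-index limits (usually 1, sometimes 2)
    logp_edges : edges of the luminosity bins in log10(P)
    """
    rho = np.zeros(len(logp_edges) - 1)
    n_obj = np.zeros(len(rho), dtype=int)
    for lp, vols in zip(log_p, vobs_lists):
        k = np.digitize(lp, logp_edges) - 1
        if 0 <= k < len(rho):
            rho[k] += sum(1.0 / v for v in vols)  # each source adds sum_i 1/V_obs,i
            n_obj[k] += 1
    return rho, n_obj  # space density [Mpc^-3] and raw counts per bin

Dividing further by the fraction of sky surveyed, and quoting per ∆(log P) = 0.4 and ∆z = 0.5, recovers densities in the units used in Table 3.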
An appropriate sub-sample for this calculation is that referred to as Sample 2 in Table 2. Selected from the catalogue of Paper 1, it includes all the QSO identifications with flux densities above survey limits and within the declination range +2.5° to −40°. Defining V_obs requires knowledge of the radio spectrum both above and below the survey frequency. Above 2.7 GHz, there are the 5.0-GHz data of the Parkes catalogues for all sources in the 2.7-GHz surveys, flux densities at 8.4 GHz for many of these sources (Wright et al. 1990), and about 40 8.87-GHz flux densities for some of the brighter sources (Shimmins & Wall 1973). Below 2.7 GHz, flux densities exist for most members of Sample 2 at 365 MHz from the Texas survey (Douglas et al. 1996), and at 1.4 GHz from the NRAO VLA Sky Survey (Condon et al. 1998). The Texas survey covers the sky at declinations down to −35.5° and the NVSS down to −40°. As a compromise between sample size and spectral completeness, the sub-sample chosen for definition of the RLF, Sample 2 of Table 2, was therefore taken to have a southern declination limit of −40°. Most of the area surveyed at 2.7 GHz in this range has a completeness limit of S_2.7 GHz = 0.25 Jy, but some regions have limits of 0.10, 0.20 and 0.60 Jy; see Fig. 1 of Paper 1. The steps to defining V_obs consist of (1) determining P_2.7 GHz, the luminosity of the radio source at 2.7 GHz (rest frame), and (2) 'moving' the source, with its spectrum defined by the measured flux densities, over the range 0 < z < ∞ to determine in which redshift range(s) it is observable. It is observable at a given redshift if (a) its flux density exceeds the survey limit S_2.7 = 0.25 Jy and (b) its redshifted spectrum over the observer's range 2.7 to 5.0 GHz has a spectral index ≥ −0.4. We interpolated between measured spectral points in the log S_ν - log ν plane. Despite the relatively sparse sampling in this plane, the combined luminosity and spectral effects of 'moving' the source are complex, sometimes resulting in a source having two regions of observable volume defined by four redshifts. (These effects are discussed further in the following section.) In calculating the RLF, the contribution of each source is then Σ_i 1/V_obs,i, where i is usually unity but is sometimes two. Throughout the analyses we have used the geometry H_0 = 70 km s^-1 Mpc^-1, with the Λ-dominated cosmology adopted for this paper. Following these precepts, the radio luminosity functions calculated for rest-frame powers at 2.7 GHz for 5 redshift ranges are given in Table 3.
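The 'moving' step lends itself to a short numerical sketch. The code below is a simplified illustration under stated assumptions: a flat cosmology with H0 = 70 km/s/Mpc and an assumed Ωm = 0.3 (the text specifies only H0 explicitly), an observed-frame spectrum interpolated in the log S - log ν plane and held constant outside the measured range, and invented function and variable names. Scanning `observable` over a fine redshift grid and summing the comoving volume of the intervals where it returns True would give V_obs.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # Om0 is an assumption for this sketch

def flux_if_moved(nu_obs_ghz, z_new, z_src, nu_meas_ghz, s_meas_jy):
    """Flux density [Jy] at observer frequency nu_obs_ghz if the source,
    measured at z_src with points (nu_meas_ghz, s_meas_jy), sat at z_new.
    nu_meas_ghz must be sorted ascending; np.interp clamps to the end values
    outside the measured range (a simplification)."""
    # frequency at which the original observer would have sampled the spectrum
    nu_equiv = nu_obs_ghz * (1 + z_new) / (1 + z_src)
    log_s = np.interp(np.log10(nu_equiv), np.log10(nu_meas_ghz), np.log10(s_meas_jy))
    d_src = cosmo.luminosity_distance(z_src).value
    d_new = cosmo.luminosity_distance(z_new).value
    return 10 ** log_s * (d_src / d_new) ** 2 * (1 + z_new) / (1 + z_src)

def observable(z_new, z_src, nu_meas_ghz, s_meas_jy, s_lim=0.25, alpha_lim=-0.4):
    """True if the moved source still satisfies both survey criteria."""
    s27 = flux_if_moved(2.7, z_new, z_src, nu_meas_ghz, s_meas_jy)
    s50 = flux_if_moved(5.0, z_new, z_src, nu_meas_ghz, s_meas_jy)
    alpha = np.log10(s50 / s27) / np.log10(5.0 / 2.7)  # S proportional to nu^alpha
    return (s27 >= s_lim) and (alpha >= alpha_lim)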
These data are displayed in Fig. 3. First impressions are that the curves slide sideways, suggesting simple luminosity evolution, as deduced from similar behaviour in redshift shells for optical luminosity functions (Boyle et al. 1988). However, the transition from the lowest redshift range (z < 0.5) to the next redshift range (0.5 < z < 1.0) is not described by a lateral shift. It is possible that the lower bin is contaminated by unbeamed objects such as Seyfert galaxies and elliptical-galaxy cores; such objects may have entirely different central engines and different luminosity functions as a result. The data of Fig. 3 shown in integral form (Fig. 4) suggest that the RLF changes in form right out to the 1 < z < 2 shell, and it is improbable that contamination by unbeamed objects persists beyond z > 0.5. A closer investigation by both morphology and spectrum is needed to determine if removal of unbeamed objects could 'save' luminosity evolution. However, it is probably beyond saving. For example, from QSOs discovered in the SDSS survey, Fan et al. (2001c) noted that the high-power end of the QSO luminosity function appears flatter than that at lower redshifts.
The impression of luminosity evolution may be misleading in any case. Spectral spread limits the upper power bound of completeness for the RLF in each redshift band. At the maximum redshift of the band, radio sources of the steepest spectra fall below the survey flux-density limit first; the power limit is determined simply from P_lim = 4π D² S_lim (1 + z)^-(1+α_max), where D is the 'luminosity distance', and α_max is the minimum (low-frequency) spectral index, i.e. that effective index corresponding to the source with the 'steepest' radio spectrum in the particular redshift range. At lower powers within the bin, the RLF will be incomplete for such objects, but will remain complete for objects of flatter spectra. (The limit is well defined for our sample; we selected objects of α ≥ −0.4, i.e. the spectral limit was imposed on the 'steep' side, with of course no limit as to how 'flat' or 'inverted' the spectra might be.) This limit may cause RLFs of similar slopes to appear to have a knee at similar space densities, mimicking luminosity evolution. In previous discussions of space densities it is not clear that this limit plus spectral spread have been considered; several such studies appear to ascribe a single canonical spectrum to every QSO.
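In code, and assuming the conventional K-correction for a power-law spectrum S ∝ ν^α (the Ωm value is again an assumption; the text fixes only H0), that limit line is roughly:

import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # Om0 assumed

def p_limit_w_per_hz(z, alpha, s_lim_jy=0.25):
    """Rest-frame 2.7-GHz power (isotropic, W Hz^-1) below which a source of
    spectral index alpha falls under the survey limit at redshift z:
    P_lim = 4 pi D_L^2 S_lim (1+z)^-(1+alpha)."""
    d_l_m = cosmo.luminosity_distance(z).to("m").value
    s_si = s_lim_jy * 1e-26                       # Jy -> W m^-2 Hz^-1
    return 4.0 * np.pi * d_l_m ** 2 * s_si * (1.0 + z) ** (-(1.0 + alpha))

# e.g. the completeness limit at the top of the 1 < z < 2 shell for alpha_max = -0.4
print(f"log10 P_lim = {np.log10(p_limit_w_per_hz(2.0, -0.4)):.2f}")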
One regrettable result of this power limit is that tracing the space densities in the higher-redshift ranges down to low powers is not possible. Composite RLFs (galaxies plus QSOs) extending over many decades show relatively few QSOs at low redshifts, where the RLFs are dominated by low-luminosity (mostly star-forming) radio galaxies (Sadler et al. 2002). The RLFs of QSOs at high redshifts must therefore flatten and drop drastically towards the lower powers. The dual-population models of Jackson & Wall (1999) demonstrate such behaviour. From the present data, the limit-lines show that the only conclusion to be drawn is that the RLFs may reduce in slope towards the lower powers. In §5 we show how a different approach can yield some information throughout the range of redshifts occupied by the present sample.
Table 3 (caption). The radio luminosity function ρ, in units of log(Mpc^-3) per ∆z = 0.5 per ∆(log P) = 0.4, as derived from Sample 2, Table 2. N is the number observed per bin and z the mean redshift of the sources in the bin.
Figure caption fragment (data of Table 3): Sky area and redshift-measurement completeness have been considered in order to plot true space densities per ∆z = 0.5, ∆(log P) = 0.4.
A third presentation of the RLF data is given in Fig. 5, in which space densities are plotted as a function of redshift for 5 ranges of intrinsic power. The initial dramatic increase in space density with redshift is evident, with densities in the redshift range 1.0 − 2.0 some two orders of magnitude above those for objects at redshifts < 0.5. Small numbers at the highest redshifts (see Table 3) and the completeness limits at the lower redshifts constrain the redshift range observable for each luminosity. In particular it is not possible to judge whether the maximum space density is a function of radio luminosity. The curves overlap adequately to show self-consistency, and to demonstrate the increase in space density from small redshifts to z ∼ 1.5. Beyond this redshift, the space densities for each power range decline, although statistical uncertainties are substantial. Fig. 3 also indicates such a decline; these data therefore suggest a redshift cutoff, at some level of significance.
The Redshift Cutoff
Our preliminary analysis (Shaver et al. 1996) indicated a decrease in radio-QSO space density beyond z = 3. Using a well-defined sub-sample from the present study, Shaver et al. considered the space density of QSOs with P_2.7 GHz ≥ 1.1 × 10^27 W Hz^-1 sr^-1. On the basis of uniform space density, the 25 such radio QSOs seen at z ≤ 4 indicate that 15 similar objects would be expected in the range 5 ≤ z ≤ 7. None was found. From Poisson statistics, the difference is significant at the 99.9% level. This preliminary study drew attention to a possible difficulty in the analysis due to the curved nature of some of the radio spectra. Jarvis & Rawlings (2000) examined this in some detail, pointing out the apparently curved nature of many of the radio spectra involved, and indicating how such an effect, a steepening to the high frequencies in particular, might reduce or remove the significance of an apparent redshift cutoff. Their model-dependent analysis used only the highest-power objects and indicated that the apparent cutoff on the basis of such objects might have a significance level as low as that corresponding to 2σ. They suggested that establishing the reality of the cutoff for such objects to a high level of significance might be difficult even with all-sky samples. However, Fig. 6 shows that there is no clear majority of sources with spectra steepening to the higher frequencies. Moreover, we show below that the spectral data in the literature are misleading in terms of the proportion of sources showing spectral steepening to the higher frequencies.
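The statistic in question is straightforward to reproduce; the snippet below is purely illustrative and simply evaluates the Poisson probability of finding no objects when 15 are expected (the raw probability is far smaller than the 0.1% implied by the quoted 99.9% level, so that quote is conservative).

from scipy.stats import poisson

expected = 15.0   # objects predicted at 5 <= z <= 7 for uniform space density
observed = 0      # objects actually found
p_none = poisson.pmf(observed, expected)
print(f"P(N = 0 | mu = {expected}) = {p_none:.2e}")  # ~3e-7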
Subsequently we have considered alternative methods to study space density and redshift distribution, methods to utilize the entire sample which can demonstrate simple attributes of the space distribution without recourse to modelling the luminosity function or its epoch dependence. The first of these uses Sample 3 (Table 2) in a plot of radio luminosity vs. co-moving volume. We need this new sample for such a plot. Recall that Sample 1 (Table 1) included all sources, not just QSOs, while Sample 2 (Table 2), although confined to QSOs, was drawn from regions of the survey with different completeness limits. In order for a plot of luminosity vs. z (or equivalently, co-moving volume) to be interpreted, the sample must have a single survey limit. Sample 3 is therefore composed of all QSOs from our data table of Paper 1 with survey completeness limit at exactly S_2.7 GHz = 0.25 Jy (Fig. 1, Paper 1), and again at declinations above −40° for reasons of radio-spectral completeness. Fig. 7 shows lines of survey completeness corresponding to 0.25 Jy for three different radio spectral indices.
Fig. 5 (caption fragment). Abscissa values are the mean redshifts in each element of (∆P_2.7, ∆z). For each of the two lowest power ranges, the final point with error bar represents incomplete data, as these points fall at powers below the spectral cutoffs shown in Fig. 3. Upper limits, represented by the single dots, were obtained by extrapolating the RLF to this power from higher powers. The actual values therefore lie somewhere along the two dashed lines.
The Power-Volume plane; using the whole sample
The plot with co-moving volume on the abscissa rather than redshift gives direct indication of space density. There is an apparent diminution in the density of points at redshifts above ∼ 2.5. The question is whether this is real and significant. In what follows we test the null hypothesis that the space density of QSOs at high redshifts remains constant and equal to that at 1 < z < 3.
Redshift information is not complete for Sample 3; in order to make comparison with prediction we must estimate the number of possible objects at z > 3. Table 2 presents the summary. The key element is the sub-sample of 16 objects in the sample of 268 for which redshifts are not available.
Fig. 7 (caption). The Luminosity-Volume plane for the 252 QSOs with measured redshifts, in survey areas with completeness limit S_2.7 GHz = 0.25 Jy (Sample 3, Table 2). Vertical grid lines indicate redshifts, as marked along the top border. The curved lines indicate survey completeness limits at 0.25 Jy for three spectral indices. Left: the sources plotted with symbols to indicate different 2.7-5.0 GHz spectral indices: solid dots for −0.4 < α < −0.2; open circles for −0.2 < α < +0.2; crosses for +0.2 < α < +0.6; and stars for the extreme spectral inversions α > +0.6. Rest-frame luminosities (P_2.7) for this plot were calculated assuming a power-law spectrum given by this index. Note that the dots lie above the α = −0.2 limit line; the open circles above the +0.2 line; the crosses above the +0.6 line; and the stars scatter to below the +0.6 line. Right: the effects of considering spectral data at frequencies below the Parkes survey frequency (2.7 GHz). The values of P_2.7 are substantially changed. Crosses indicate the original positions as in the left panel, while dots show the positions revised with improved estimates due to incorporation of lower-frequency data.
The redshift distribution for QSOs is known to be a function of both apparent magnitude and flux density, albeit with huge scatter and only a gentle dependence in each case. Thus in order to estimate redshift proportions for the objects without such data, we treated the identifications made on UKST plates and those from the (deeper) CCD observations separately.
Consider the 11 QSOs without redshifts and identified from UKST plates. Of the 242 objects identified on UKST plates and with measured redshifts, 8 have z > 3. There is no bias in which of these objects have redshift measurements and which lack them, and thus we expect 8/242 × 11 = 0.36 of the 11 objects to have z > 3. The remaining 5 objects may be treated in the same way; the single unidentified source in the sample (PKS 0225-065) escaped the CCD identification programme by being de-identified later on the basis of an improved radio position. Had it been included we can be confident that an identification would have been obtained, as it was in each of the 87 cases we tried. For these 5 objects, then, we use the CCD-identified QSOs with redshifts, totalling 10 in the sample, for which two redshifts exceeded 3. We thus anticipate 2/10 × 5 = 1.0 of the 5 objects will have z > 3. The number of objects in the sample with measured z > 3 is 10. Thus the number with which to compare predictions for z > 3 is 10 (observed) + 1.4 (estimated) = 11.4. The principal point is that redshift incompleteness does not impede our analysis.
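The bookkeeping above amounts to a few lines of arithmetic, reproduced here simply for clarity:

# UKST-identified QSOs lacking redshifts: scale by the measured-redshift fraction
ukst_expected = (8 / 242) * 11      # ~0.36 objects at z > 3
# CCD-identified QSOs (plus the one de-identified source): same procedure
ccd_expected = (2 / 10) * 5         # 1.0 object at z > 3

observed = 10                       # QSOs with measured z > 3
total_for_comparison = observed + ukst_expected + ccd_expected
print(round(total_for_comparison, 1))   # 11.4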
A simple analysis may be carried through on the basis of Fig. 7. If we consider QSOs in specific narrow bins of luminosity, with the survey limit imposed by spectral index and survey flux limit S_lim, then such horizontal stripes in Fig. 7, intersecting the curved survey-cutoff lines, define an area in the figure in which QSOs can be seen by the survey. On the null hypothesis of no redshift diminution, if we now split this area into a region with z < 3 and a region with z > 3, we can use the surface density of QSOs in the low-redshift area to form an expectation value for the higher-redshift area. We chose the prediction region to be 1 < z < 3 to coincide roughly with the plateau of the 'quasar epoch', and we selected the high-redshift region to run out to z = 8, the approximate limit to which we could hope to see QSOs given our survey limits and the known range of luminosity and spectral index. The process described above can be refined by reducing the stripes of radio power to zero width; each source then becomes a predictor, provided of course that the survey limit allows it to be seen beyond a redshift of 3. Table 4 presents results of this analysis under the sub-heading 'single survey cutoff'. The immediate result is the apparent one: a prediction of significantly more QSOs at z > 3 than the 11.4 'seen'.
The results reveal a fundamental flaw of this analysis, namely the question of which limit line to adopt, corresponding to which spectral index. It is apparent from Fig. 7 that adopting α = −0.2 is extreme; but even confining the analysis to narrow bands of spectral index does not define where within each band the survey cutoff or completeness line should be placed. The analysis at this point appears to confirm what the eye sees in Fig. 7, but shows that taking the figure at face value is dangerous. Moreover, here we have used the 2.7-5.0 GHz spectral index, characterizing each spectrum as a single power law; spectral curvature, or indeed any complexity of radio spectrum, has not been considered. (For low-frequency surveys, the spectral-index issue is not so important, because most sources detected in them have power-law spectra characterized by an index close to −0.75. In corresponding P-z or P-V planes, most sources from low-frequency surveys cluster closely along or just to the left of the single limit line given by this spectral index.) The analysis of Shaver et al. (1996) attempted to circumvent the difficulties by sticking to powers so high that the observational cutoff, the survey completeness limit, did not come into play. In doing so, the available sub-sample becomes small and the statistical uncertainties are inevitably larger.
These difficulties suggest the following refinement.
Source-by-source analysis: the 'Single-Source Survey'
There is no need to stick to a single survey-limit line in the P-V plane. Each source can be considered alone, conceptually the result of a survey which found it as a single source. For each such 'single-source survey', a limit line may be drawn in the plane peculiar to that object and incorporating all its radio-spectral information. The prediction of this object for sources at redshifts above 3 may then be added to the predictions from all 'single-source surveys' to derive a prediction total. In effect this is using the V_max method to predict the number of objects in volumes at higher redshift on the hypothesis that space density is uniform; it is doing so using the spectral properties of each source individually. A further advantage of such a process is that there is no longer a need to stick to a sample defined by a single flux-density limit. To improve statistical weight, all zones of the survey can be used, no matter what the flux-density limit, provided of course that the value of the 2.7-GHz flux density is greater than or equal to the completeness limit for the area in which it was detected. (Sources for which this is not the case were marked in the data table of Paper 1.) Each source in this analysis contributes a predicted number of sources given by the ratio of its accessible co-moving volume in the redshift range 3 < z < 8 to that in the range 1 < z < 3. The sum of all such predicted sources, based on all sources observed in the redshift range 1 < z < 3, gives us the total number of 3 < z < 8 sources expected in the survey for a constant comoving space density.
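A minimal sketch of this prediction step follows, assuming a flat cosmology with H0 = 70 and an assumed Ωm = 0.3, and assuming that the accessible redshift intervals of each observed 1 < z < 3 QSO have already been read off its individual limit line; the data structure and names are illustrative only.

from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # Om0 assumed

def shell_volume_mpc3(z_lo, z_hi):
    """Full-sky comoving volume of the shell z_lo < z < z_hi."""
    if z_hi <= z_lo:
        return 0.0
    return (cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)).value

def predicted_high_z(accessible_intervals):
    """accessible_intervals: per observed 1 < z < 3 QSO, a tuple (z1, z2, z3, z4)
    giving the interval over which it is accessible in the 'observed' region
    (z1-z2, inside 1-3) and in the 'predictive' region (z3-z4, inside 3-8)
    of its own single-source survey limit."""
    total = 0.0
    for z1, z2, z3, z4 in accessible_intervals:
        v_obs = shell_volume_mpc3(z1, z2)
        v_pred = shell_volume_mpc3(z3, z4)
        if v_obs > 0.0:
            total += v_pred / v_obs   # expected further count on uniform density
    return total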
A sample appropriate to this analysis is Sample 2 of Table 2, giving a total of 379 radio QSOs, 355 with measured redshifts. From an analysis analogous to that carried out for Sample 3, we estimate that complete identification and redshift data would add 1.8 sources to the 16 members of this sample observed to have z > 3.
As a basic analysis of this type, when individual limits are applied as described, using the 2.7-5.0 GHz spectral index appropriate to each source, a prediction of 51.5 sources in the redshift range 3 < z < 8 is obtained (Table 4), c.f. the 17.8 sources 'observed'. However, a particularly important feature of the approach is that it enables incorporation of the further spectral information available for each source, as follows. 1. If the spectrum decreases to low frequencies, perhaps having a low-frequency cutoff due to synchrotron or free-free absorption, the effective power at the rest-frame survey frequency is reduced. The result may be a substantially lower position in the P-V plane. This reduces the 'headroom' the object has to predict a significant contribution at higher redshift; and it may remove it from the 'prediction contributors' list entirely. Significant steepening to the lower frequencies of course has the opposite effect, raising its rest-frame power and its position in the P-V plane, and increasing its prediction. We have incorporated spectral data at 1.4 GHz from the NVSS survey (Condon et al. 1998) and at 0.365 GHz from the Texas Survey (Douglas et al. 1996) to define the low-frequency spectra for the majority of sources in Samples 2 and 3. The results may be seen in Fig. 7. The extreme-power objects have convex spectra and drop down into the pack; but in addition, a number of less-luminous objects rise by virtue of steep low-frequency spectra. When the prediction is made incorporating the low-frequency data (Table 4), these effects approximately cancel out and the result differs little from the previous estimate: 48.5 objects should be seen in the sample at 3 < z < 8, c.f. the 'observed' number of 17.8. 2. A more substantial difference is produced by the incorporation of spectral data at frequencies higher than 5.0 GHz. Shaver et al. (1996) pointed out that high-frequency flux densities measured quasi-simultaneously by Gear et al. (1994) indicated little spectral steepening, and certainly not enough to exclude very-high-redshift objects. However, the Gear et al. (1994) sample may not be representative.
For PKS sources a set of flux densities at 8.4 GHz was measured by Wright et al. (1990), including a large fraction of the sources in both Sample 2 and Sample 3. These data suggest that spectral steepening is more common than indicated by the Gear et al. (1994) measurements, although as Fig. 6 shows, it is not a feature of the majority of sources. Spectral steepening beyond 5 GHz in the observer frame has two effects. First it moves the cutoff line upward (Fig. 7) so that the object in question drops from the sample at relatively lower redshifts. Secondly when 'moving' the object to some redshift above the observed redshift, the spectrum becomes steeper than the apparent 2.7 -5.0-GHz spectral limit of −0.4 used to define the original sample; 'flat-spectrum' objects whose spectra steepen beyond 5.0 GHz (observer frame) become undetected as such at higher redshifts.
To consider the first of these two effects, the cutoff (survey limit) line for each source was calculated using each 'segment' of the spectrum as redshift is changed. This simple interpolation in the log S_ν - log ν plane results in segmented cutoff lines for each object in the P-V plane, as shown in Fig. 8. The second of these two effects is illustrated by PKS 2210-257 in the left-most panel of Fig. 8. As the effective spectrum steepens with increasing redshift, the accessible region in the redshift range 3 < z < 8 is reduced substantially from that given by the power cutoff line. The prediction from such an object reduces correspondingly. 3. There is a third effect due to spectral measurement affecting the 'observed' region 1 < z < 3, if the spectrum steepens at frequencies below the survey frequency. The effect is illustrated in the right panel of Fig. 8: objects such as PKS 0017−307 are invisible to us as 'flat-spectrum' sources at redshifts below a cutoff point at which the upward curvature makes them appear 'steep-spectrum'. This affects only a few sources, but for these, it means an increase in prediction contribution.
Fig. 8 (caption). Two sources to illustrate redshift limits in the single-source analysis. For each object, the upper panel shows the spectrum in the rest frame, while the lower panel shows the object in the P-V plane. In each case the individual cutoff lines are shown as the segmented black curves in the P-V plane, while the smooth coloured curves represent completeness limits at S_2.7 GHz = 0.25 Jy for sources whose spectra are described by single power laws with indices as shown. Light grey bars represent the predictive region 1 ≤ z ≤ 3, dark grey the region 3 ≤ z ≤ 8 in which the object might be visible; the width of these bars is irrelevant. In the case of PKS 2210-257, the individual survey limit suggests that the object should be visible out to z = 4.54. However, it is not visible to this redshift as an object with an observed 2.7-5.0 GHz spectrum flatter than −0.4; the spectrum (shown blue) steepens to an effective index of ≤ −0.4 at a redshift of 3.22, for which the 'observed' frequencies of 2.7 and 5.0 GHz are shown in the upper panel by the red vertical lines. The object therefore drops from the sample at this redshift. PKS 0017-307 does not enter the observed region of the diagram as an object with spectral index ≥ −0.4 until a redshift of 1.48 is reached. Again in its upper panel, the red vertical lines indicate rest frequencies at 2.7 and 5.0 GHz for the critical redshift of 1.48, at which point the spectrum 'flattens' to have α ≥ −0.4, denoted by the blue line. The object runs into its observable limit at z = 3.60 in the predictive region as shown. The smaller accessible volume in the 'observed' region results in a scaling up of its prediction via the ratio of accessible volumes.
When the available spectral measurements, relatively complete for Sample 2 at five frequencies, are considered for each source, the prediction (Table 4) is 28.8 sources, differing now from the 'observed' number 17.8 by just 2.9σ. The raised level of the power cutoff does most of the damage. It is this use of the data for the Parkes 0.5-Jy sample which we believe yields the relatively low level of significance for a redshift cutoff found by Jarvis & Rawlings (2000).
However, there is a fundamental problem with using the 8.40-GHz data. This can be shown by using a set of 8.87-GHz flux densities measured in 1972 (Shimmins & Wall 1973), roughly contemporaneous with the 2.7-GHz surveys. There are 40 sources in Sample 2 with these 'old' measurements, one of which, PKS 1532+016 at S_8.87 GHz = 1.16 Jy, was not measured by Wright et al. (1990). There are clearly large flux-density variations at 8 GHz, the wildest being for PKS 1402−012: 0.67 Jy in 1972, 0.15 Jy in 1989. If the 1972 8.87-GHz measurements are used in preference to the 1989 8.40-GHz measurements, the prediction is 38.6 QSOs (57 contributors) in the range 3 < z < 8. This is very significantly higher than the prediction of 28.8 (53) sources using only 8.40-GHz data, exceeding the 17.8 sources 'observed' by 4.9σ.
Fig. 9 (caption). The predicted numbers of QSOs at high redshifts. The histograms are the results of 2000 trials of end-to-end bootstrap testing. The left-most histogram was compiled using high-frequency spectral data at 8.40 GHz and 8.87 GHz, with the 8.87-GHz (near-contemporary) flux densities used in preference if both were available. The right histogram represents 2000 results from bootstrap testing when only the 8.87-GHz flux densities were used. The vertical line to the left indicates the observed number of QSOs at high redshifts, as discussed in the text. These bootstrap tests were carried out in the simple geometry Ω_m = Ω_tot = 1 but are closely indicative of results for the Λ-dominated geometry adopted here (see Table 4 and text).
The problem is a simple one. Measuring high-frequency flux densities some time after the original survey gives a biased estimate of the spectrum. Any flux-limited survey preferentially selects variable sources in an up-state, whereas flux-density measurements many years later reflect sources in a mean state. The result is that the spectra are artificially steepened. In the present case the result is an underestimate of numbers of objects predicted at high redshifts. It is the variations at frequencies above the survey frequency which matter in this; variations at the lower frequencies are small to insignificant in comparison.
The result emphasizes how responsive the predictions are to flux measurements, and how crucial it is to use contemporaneous measurements. If this much change comes about from replacing the 8-GHz flux densities of just 40 sources with near-contemporary measurements, it is certain that the prediction of 38.6 sources based on using all the remaining (non-contemporary) 8.40-GHz flux densities represents an underestimate or lower limit. If the 8.40-GHz flux densities are ignored and only the 8.87-GHz data used as flux densities at frequencies above 5.0 GHz, the result is a prediction of 50.5 sources. This must be an overestimate. The 8.87-GHz flux densities were measured preferentially for bright sources at high frequencies, and thus favour objects well above survey-limit lines. We conclude that on the hypothesis of uniform space distribution, somewhere between 38.6 and 50.5 sources are predicted to have redshifts between 3 and 8 for Sample 2.
To assess the uncertainties, an end-to-end bootstrap experiment was run for the two possibilities: (i) using as high-frequency data only the 8.87-GHz (1972) measurements, and (ii) using the combination of 8.87- and 8.40-GHz measurements, with the former taking precedence if measurements at both frequencies were available. Because of computing-time constraints we had to run this experiment using the simple geometry of Ω_tot = Ω_m = 1. However, as Table 4 shows, the predictions in this geometry are very similar to the predictions of the Λ-dominated cosmology, the numbers in question being 38.6 and 52.9 for the simple geometry vs. 38.6 and 50.5 for the Λ-dominated geometry. The uncertainties should be representative. In the bootstrap sampling, random redshifts were assigned to each source from the total sample of redshifts. The flux densities for the source were then 'corrected' to that particular redshift making use of the measured redshift of the object. The results are shown in Fig. 9. Some 2000 trials for each of the two possibilities produced no prediction as low as the 'observed' number of 17.8 QSOs at z > 3.
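An outline of such a bootstrap is sketched below, kept generic so that the single-source machinery sketched earlier can be plugged in through the `predict_one` callable; everything here is illustrative rather than the authors' code.

import numpy as np

def bootstrap_predictions(sources, redshift_pool, predict_one, n_trials=2000, seed=1):
    """predict_one(source, z) should return the number of 3 < z < 8 QSOs the
    source predicts once assigned redshift z (its flux densities 'corrected'
    to that redshift and its limit line recomputed)."""
    rng = np.random.default_rng(seed)
    totals = np.empty(n_trials)
    for t in range(n_trials):
        z_draw = rng.choice(redshift_pool, size=len(sources), replace=True)
        totals[t] = sum(predict_one(src, z) for src, z in zip(sources, z_draw))
    return totals

# e.g. the fraction of trials at or below the 'observed' 17.8 approximates the
# chance of the data under uniform space density:
# frac = np.mean(bootstrap_predictions(sources, z_pool, predict_one) <= 17.8)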
There are two results from this analysis: 1. The true prediction of numbers of QSOs at 3 < z < 8 for a uniformly-filled universe lies between 38.6 and 50.5 objects for Sample 2. This is to be compared to the 'observed' number of 17.8. An end-to-end bootstrap test indicates that the difference is highly significant. 2. High-frequency flux-density measurements that are non-contemporary are dangerous. They bias the spectral statistics of any variable-flux sample, because surveys pick out variable sources in their high state, and not in their average state.
The form of the evolution
Figs. 3 and 5 indicate that the form of the evolution, and in particular the shape of the decline at high redshifts, cannot be inferred directly. As an indirect route, we used Sample 2 and proceeded as follows: 1. Ten redshift limits were set up, from z_l = 1.0 to z_l = 5.5 in steps of ∆z = 0.5. We then determined the combination of P_2.7 and effective spectral index yielding the maximum number of contributors to the RLF for the sub-sample complete to each of these redshift limits. The numbers of RLF contributors in these complete sub-samples ranged from 285 at z = 1.0 down to just 9 at z = 5.5; the numbers decrease since higher luminosities are needed to be complete to the larger redshift limits. 2. For each of these 10 sub-samples, we computed the RLF using the 1/V_max contributions already calculated for previous estimates of luminosity functions. For the two samples with redshift limits at 5.0 and 5.5, there are no sources in the upper bins, and to use this observation we assigned an upper limit of one source to each of them. The results are shown in Figure 10, upper panel. 3. Although these RLFs are each now complete from z = 0 to z = z_l, individually they are inadequate to trace the whole QSO epoch. Those with smaller values of z_l by definition cannot reach large redshifts, while those with the larger z_l are severely noise limited, particularly at the low-redshift ends. To combine these results to define the overall space behaviour, we normalized each curve to agree statistically over the range 1.0 < z < 2.5. We then fitted a least-squares polynomial through the points, with the results shown in the lower panel of Fig. 10. The heavy black line in this diagram is log ρ = −12.49 + 2.704z − 1.145z^2 + 0.1796z^3 − 0.01019z^4. 4. Finally a constrained bootstrap experiment of 1000 runs was used to give an approximation to the uncertainty; the grey area in the figure is the result.
Fig. 10 (caption fragment). RLFs complete from z = 1.0 to z = 5.5 in steps of ∆z = 0.5, shown in the order purple, brown, orange, dark red, light blue, turquoise, blue, green, red, grey. Below: these RLFs normalized to agree over the range 1.0 < z < 2.5. The bold black line is a least-squares fit with a polynomial of fifth order, given in the text. The grey lines represent 1000 bootstrap trials. In this process, fits which resulted in lines of positive slope beyond z = 5 were rejected.
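For reference, the quoted fit from step 3 is easy to evaluate numerically. Note that the printed expression contains terms only up to z^4 even though the figure caption describes the fit as fifth order; the snippet simply evaluates the coefficients as printed.

import numpy as np

# coefficients of the printed fit, highest power first (np.polyval convention)
coeffs = [-0.01019, 0.1796, -1.145, 2.704, -12.49]

for z in np.arange(0.0, 5.01, 0.5):
    print(f"z = {z:3.1f}   log rho = {np.polyval(coeffs, z):6.2f}")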
Normalizing as described is only valid if the form of evolution is independent of radio luminosity. There is some indication in the data (Fig. 10, upper panel) that the turnovers set in at redshifts increasing with luminosity. If so, then normalizing as described would broaden the maximum, and the overall curve would be representative of the overall evolution form, but in no sense formally accurate. Moreover, the complete individual pieces of luminosity function are not statistically independent, so that the shaded area is indicative of uncertainty, but again is not accurate in a formal sense.
These results enable comparison with other high-redshift observations. Fig. 11 shows the shaded area of Fig. 10 in the background, with data from recent compilations of space density as a function of redshift for AGNs selected at X-ray and optical wavelengths (left panel), and star-formation rate (SFR) as a function of redshift (right panel).
Agreement with the form of the X-ray-selected QSO evolution is remarkably good. Silverman et al. (2004) found the X-ray decline to agree in form with the optical decline determined by Fan and co-workers (Fan et al. 2001a,c,b) from SDSS. Silverman et al. also showed that the COMBO-17 survey results of Wolf et al. (2003) follow the X-ray data closely. The Hasinger et al. (2004) X-ray AGN results are again in very good agreement with the current determination. There is thus general accord between the dependence of space density for QSOs found at radio, optical and X-ray wavelengths, all showing a rapid rise in co-moving space density to z ∼ 1.5 followed by declining space densities at z > 3. However, there are strong dependencies of evolution form on luminosity, certainly for the optical and X-ray samples as noted by Hasinger et al. (2004) and Silverman et al. (2004); and there may be such dependence for the current radio-selected sample. The dependence on luminosity is well illustrated by the fact that the rising curve of (lower-optical-luminosity) QSOs selected from the 2dF survey (Croom et al. 2004) is displaced to higher redshifts than the X-ray or radio-selected QSOs. The current agreements are illustrative only; analysis of the significance must await larger samples providing better definition of space density evolution as a function of luminosity in each wavelength band.
The relation between QSO space evolution and star-formation-rate history is not so clear. The general similarity was first noted in 1997 (Wall 1998); the rise to redshifts of ∼ 1.5 appears to be of the same form. But at this redshift it appears that determinations of the SFR from UV, optical and near-IR measures produce a different form of epoch dependence, with an abrupt transition to a law almost constant (or diminishing gradually) with increasing redshift out to z > 6. There are substantial uncertainties in what extinction correction to apply; but this form appears to hold whether or not the data are extinction-corrected (provided, as claimed, that the correction is not strongly dependent on redshift). The open squares of Fig. 11, uncorrected for extinction, show the gradual decline, while the band (Chapman et al. 2004), representing a fit to extinction-corrected data, shows a star formation rate essentially independent of redshift to z > 6. The data from sub-mm observations (dark blue and red circles, filled and open; Chapman et al. 2004) represent estimates from radio-identified sub-mm galaxies (red circles), and from these galaxies and sub-mm galaxies combined (blue circles). Chapman et al. point out that the similarity of star-formation-rate contributions at z ∼ 2 suggests that the total SFR from all populations may exceed the current estimates significantly.
Fig. 11 (caption fragment). (Silverman et al. 2004). Space density behaviour of optically-selected QSOs is given by the set of dark red triangles; the data are from Schmidt et al. (1995), Fan et al. (2001b) and Fan et al. (2004). The point at z ∼ 6 is taken from Fan et al. (2004) and due to conversion between geometries there is uncertainty in the ordinate of 0.1. The X-ray and optical QSO data were scaled vertically to match the current determination of space density at redshifts 2 to 2.5. Right: Star formation rate density (SFRD; units adopted by Blain et al. 2002), with the shaded area and black line again showing the current estimate of radio QSO space density. Data in optical and near-IR bands are distinguished as squares: orange (Lilly et al. 1996), grey (Connolly et al. 1997), green (Steidel et al. 1999), light blue (Bouwens et al. 2004a), black (Giavalisco et al. 2004), purple (Bunker et al. 2004) and red (Bouwens et al. 2004b). In general these data have not been corrected for extinction. Green dots show results of the extinction correction of 4.7 suggested by Steidel et al. (1999) for their data; the light blue band represents an estimate of the SFR determined from these and other points as analyzed by Bouwens et al. (2004b). Measurements from Far-IR and sub-mm observations are shown as circles: FIR as orange filled circles from Flores et al. (1999), and sub-mm as dark red filled and open circles and blue filled circles from Bouwens et al. (2004b), discussed in the text. The current space density determination (black curve) was scaled to match the Lilly et al. (1996) points, the orange open squares.
It remains somewhat puzzling that the sub-mm galaxy star-formation rate appears to drop beyond z = 3, and that it therefore resembles the AGN space-density law rather than that of the galaxies detected in the optical and near-IR. This may be superficial, in that the points are lower limits, and additional components may be found. It is perhaps less puzzling that the AGN space-density law differs from the overall SFRD in the sense observed. On current hierarchical pictures, redshifts beyond 3 represent the era of rapid galaxy assembly; there may be a delay before galaxies large enough to host massive black holes have developed, or before the galaxy-building process provides orbit organization appropriate to fuel such black holes.
Summary
We summarize the results of this and the preceding two papers (Papers 1 and 2; Hook et al. 2003).
1. (Paper 1) The initial goal of the project was to search for high-redshift QSOs without dust bias. Optical counterparts for essentially all flat-spectrum objects in the sample were obtained. No QSOs at redshifts greater than 5 were found.
2. At the fainter flux densities (S_2.7 < 0.4 Jy) and optical magnitudes (B_J > 20.0), substantial numbers of flat-spectrum radio galaxies are present (Paper 1, Fig. 7). These may have influenced previous claims for a hidden population of heavily-reddened QSOs; 'red' QSOs (Rieke et al. 1979) do not appear to constitute a major fraction of the total population sampled here. Amongst the reddest of the stellar identifications, about one-quarter are BL Lac objects, compared to 9 per cent with no colour selection, supporting the synchrotron interpretation for many of the red objects originally proposed by Rieke et al. (1979). However, the discovery of molecular line emission from a sample of the redder QSOs by Carilli et al. (1998) indicates the likely presence of dust in some systems, possibly concentrated in dense nuclear tori.
3. (Paper 2) A composite optical spectrum for flat-spectrum QSOs derived from the present sample shows clear qualitative differences in comparison with radio-quiet composite spectra: the Ly-α (1216 Å) and CIV (1549 Å) lines are stronger in the current sample. There appears to be no significant difference in the Ly-α decrement between radio-loud and radio-quiet QSOs.
4. (Present paper) The redshift distribution has been derived for the quarter-Jansky sample, with consideration given to the different constituents - red galaxies, BL Lac objects and QSOs. Comparison with the space-density models of Dunlop & Peacock (1990) and Jackson & Wall (1999) indicates substantial agreement, although refinement of space-density modelling is clearly possible using the current data.
5. Radio luminosity functions for flat-spectrum QSOs have been calculated using the 1/V_obs method. These show the rapid increase in space density between z = 0 and z = 1, a flattening between z = 1 and z = 2.5, and evidence of declining space density to yet higher redshifts. Attention is drawn to a selection effect producing an apparent flattening of the luminosity functions towards lower powers in each redshift shell. This effect, due to intrinsic spread in radio spectra, may have gone unrecognized in previous analyses, leading to over-interpretation of similarities in form of the RLF at each redshift.
6. The reality of a redshift cutoff has been verified using the 'single-source survey' V_max method, in which each QSO in the sample is used to predict the number of sources observable at higher redshifts on the assumption of uniform space density. The technique is robust and model-free, and it makes use of all QSOs in the sample, rather than limiting the statistics to the few objects of highest radio luminosity.
7. The result of this analysis has been to demonstrate that a space-density diminution exists at z > 3, at significance levels of > 4σ. Precise significance cannot be evaluated, because it is shown that spectral data at high frequencies are critical, and that non-contemporary flux-density measurements seriously bias the results towards reducing the apparent redshift diminution.
8. In the light of this high-redshift diminution, an estimate has been made of the overall evolution for the radio-loud QSO population. The model-free approach has been retained, with the form mapped simply by fitting a least-squares polynomial to the data. This was followed by a bootstrap analysis - 1000 more polynomial fits - to provide an indication of the uncertainty.
9. The form of this evolution was compared with that determined for X-ray QSOs (Chandra, XMM-Newton and ROSAT results) and for optically-selected QSOs (primarily SDSS results). Agreement is excellent. It was further compared with the evolution of star-formation rate. Current best estimates of this from optical and near-IR data show that although the initial rise may be of similar form, there is divergence at redshifts beyond 1.5. While the AGN space density dives down at z > 3, the SFR appears to show relatively little dependence on redshift out to z > 6. The picture from sub-mm measurements is less clear. Uncertainties are large, but the SFR determined from sub-mm data appears curiously to follow the AGN relation more closely than it follows the SFR law found from galaxies detected with optical and near-IR measurements.
10. In view of the diminution in space density to high redshifts, radio-loud QSOs would appear to have little role to play in the reionization epoch 6 < z < 17.
The reality of the redshift cutoff for radio-selected QSOs and its similarity with that observed for optically-selected QSOs leads to the conclusion that the apparent cutoff for the latter is real and is not due to dust. This conclusion is consistent with the results from the CORALS survey (Ellison et al. 2001, 2002), in which a complete sub-sample of radio-selected QSOs from the Parkes quarter-Jansky sample was searched for damped Ly-α (DLA) systems. Little significant difference in the comoving mass density of neutral gas was found between the estimate from the CORALS sample and those from previous (optically selected) samples. No major population of high-column-density absorbers has been missed. Our view of the high-redshift diminution of the QSO population does not appear to be dominated either by intrinsically dusty systems or by a major Universal dust screen.
Evaluation of Costs and Efficiencies of Urban Low Impact Development (LID) Practices on Stormwater Runoff and Soil Erosion in an Urban Watershed Using the Water Erosion Prediction Project (WEPP) Model
Storm events and soil erosion can adversely impact flood control, soil conservation, water quality, the recreation economy, and ecosystem biodiversity in urban systems. Urban Low Impact Development practices (LIDs) can manage stormwater runoff, control soil losses, and improve water quality. The Water Erosion Prediction Project (WEPP) model has been widely applied to assess the responses of hydrology and soil losses to conservation practices in agricultural and forested areas. This research study is the first to calibrate the WEPP model to simulate streamflow discharge in the Brentwood watershed in Austin, Texas and to apply the calibrated WEPP model to assess the impacts of LIDs. The costs and impacts of various LID scenarios on annual water balance, monthly average and daily runoff volumes, and sediment losses at hillslopes and at the watershed outlet were quantified and compared. The identified LID scenarios of native planting in Critically Eroding Areas (CEAs), native planting in all suitable areas, native planting in CEAs with detention ponds, and native planting in all suitable areas with detention ponds could reduce the predicted average annual stormwater runoff by 20–24% and sediment losses by 86–94% at the watershed outlet, and reduce the average annual soil loss rates on hillslope profiles in sub-watersheds by 86–87%, at the lowest costs (USD 2991/yr to USD 5257/yr). Watershed/field characteristics, locations, areas, costs, and the effectiveness of the LID practices were essential in choosing the LID scenarios. These research results can help guide decision-making on the selection and implementation of the most economical and suitable LID practices to strengthen the climate resilience and environmental sustainability of urban systems.
Introduction
Urbanization has led to less green space and more paved and impermeable surfaces. Excess impervious surfaces have altered the natural hydrologic cycle with a reduction in infiltration and an increase in runoff volume and peak discharge [1,2]. With more frequent extreme rainfall events, conventional urban stormwater management practices have increased the risk of flooding, caused a lack of groundwater recharge, and elevated soil erosion and contaminant concentrations in stormwater runoff [3][4][5][6]. Furthermore, in locations with combined sewer systems, the occurrence of combined sewer overflows has increased [7]. Small-scale stormwater control techniques, such as Low Impact Development practices (LIDs), have been promoted for the evapotranspiration, infiltration, detention, retention, drainage, and exfiltration of stormwater. The LIDs can be integrated into urban landscapes to act as temporary storage for stormwater that infiltrates and treats stormwater to alleviate sediment and pollutant losses in stormwater runoff. Practices include point LIDs such as rain gardens and wet detention ponds, linear LIDs such as grassed swales and infiltration trenches, as well as area LIDs such as green roofs and permeable pavement [8]. For example, detention ponds, which are basins that capture and store runoff from the watershed, allow for the settling of suspended solids [9]. Moreover, native plantings may be used in place of impervious surfaces or a conventional lawn. Native species are well adapted to the hydrologic and climatic conditions of regions and often exhibit deep roots that aid in infiltration under flooded conditions and allow water uptake during drought conditions [10]. A rain garden is an engineered depression with vegetation that uses grading to route water for storage and infiltration while also allowing sedimentation. A porous soil or gravel layer can be utilized in the subsurface of depressions or trenches to facilitate accelerated infiltration [11]. Native species should be chosen as the vegetation planted within the rain garden. In addition, permeable pavements allow for infiltration through the pavement surface to in situ soil, reducing runoff generated by storm events and allowing peak flow attenuation and restoration of the natural hydrologic cycle [7].
The selection of suitable LIDs and the evaluation of their potential environmental impacts before implementation are critically important. Hydrologic and soil erosion models have been widely used to project the responses of hydrology and water quality to the representation of best management practices (BMPs, including LIDs) in agricultural, forest, and urban systems [2,7,11-15]. For instance, the Water Erosion Prediction Project (WEPP) model has been applied to assess the effects of BMPs on water balance, runoff, and soil losses in various regions [16][17][18][19]. Some users would like to apply WEPP in larger regions consisting of various land use types. Thus, it is important to assess the model's capabilities in evaluating BMP effectiveness in both rural and urban areas.
It is challenging to identify the most suitable LIDs and assess their efficiencies in controlling stormwater and pollutants. The selection of LIDs should be watershed- and location-specific. Moreover, economics is a major driver in adopting soil conservation LIDs. Few studies have evaluated the costs of urban LIDs and their impacts on stormwater runoff and soil losses. Various combinations of single urban LIDs in different locations in a watershed can impact hydrology and soil losses in different ways. Various locations and bundled LIDs need to be considered when implementing practices. This study was designed to: (1) evaluate and compare the impacts of various urban LID scenarios on water balance, stormwater runoff and soil erosion control in an urban watershed; (2) calculate the costs of urban LID scenarios; (3) evaluate and compare the performance of LIDs in Critically Eroding Areas (CEAs) versus in all suitable areas, and single LIDs versus bundled LIDs in different locations (on-site in sub-watersheds and in channels) in the watershed; and (4) recommend the selection of both cost-efficient and environmentally sound urban LIDs.
The Selected Watershed
The highly urbanized Brentwood watershed (141.34 ha) located in Austin, Texas (Figure 1) was selected for this study. The longitude and latitude of the watershed outlet are −97.73° and 30.33°, respectively. Around 56% of the watershed is residential area (lawn land use in Figure 1), and 44% of the watershed is open space [11] (open space is a land use type representing any undeveloped areas, including green space, public space, etc.). The watershed is dominated by moderately eroded Austin-Whitewright complex soils. The permeable area of the watershed is mainly covered by undeveloped tall grass (e.g., Bermuda). Annual precipitation, annual minimum temperature, and annual maximum temperature from 2005 to 2019 in the watershed ranged from 441 to 1514 mm, from −9.4 to −2.1 °C, and from 37.2 to 42.8 °C, respectively. Daily streamflow discharge was monitored at the watershed outlet.
Figure 1. Land use types, elevation, and channel network of the Brentwood watershed.
WEPP Model Setup, Calibration, Validation, and Evaluation
The WEPPcloud (https://wepp.cloud/weppcloud/, accessed on 27 July 2021) web interface was used to build the WEPP project for the Brentwood watershed. The built-in 10-m elevation layer from the USGS National Elevation Dataset, 30-m land cover data from the USGS National Land Cover Database, and Soil Survey Geographic Database (SSURGO) soil data from the U.S. Department of Agriculture (USDA) Web Soil Survey were used as model inputs. The daily climate inputs of precipitation amount and maximum and minimum temperatures were obtained from the 4-km GRIDMET climate product (http://www.climatologylab.org/gridmet.html, accessed on 27 July 2021). Precipitation characteristics (duration of storm, time to peak intensity, and peak intensity) and other climate variables including dew point temperature, wind velocity and direction, and solar radiation were stochastically generated using a CLImate GENerator, CLIGEN (v5.32), based on the nearest National Climate Data Center climate station of Austin WB Airport, Texas with long-term climate data [17,20,21]. Watershed delineation details are shown in Table S1 and Figure S1.
The description of methods and algorithms for the simulation of water balance components in WEPP is in Data S1. The parameter estimation tool PEST [22] was used to complete model calibration and validation. The soil parameters of effective hydraulic conductivity (K_e) and saturated hydraulic conductivity of the restrictive layer underlying the soil profile (K_sat) were modified by PEST for model calibration of monthly runoff depth at the watershed outlet. The least-square error between monitored and simulated monthly runoff depth at the watershed outlet was minimized by PEST during the model calibration process. Monitored daily runoff depth values at the watershed outlet were aggregated into monthly data. Monitored monthly runoff depths from January 2008 to December 2013 and from January 2014 to December 2019 were used for model calibration and validation, respectively.
Three statistical metrics, the Nash-Sutcliffe model efficiency coefficient (NSE), the coefficient of determination (R²), and the percent bias (PBIAS), were calculated to assess model performance. Simulation results having NSE and R² values greater than 0.5 and PBIAS within ±25% for monthly runoff depth were considered acceptable [15,23]. The annual water balance plot and the monitored and simulated monthly runoff depth plot in the watershed were also used to evaluate model performance.
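To make the acceptance thresholds concrete, the following Python sketch computes NSE, R², and PBIAS from paired monthly observed and simulated runoff depths and applies the NSE > 0.5, R² > 0.5, |PBIAS| ≤ 25% criteria. The example arrays are hypothetical, not the Brentwood monitoring data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination (squared Pearson correlation)."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation under this convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Hypothetical monthly runoff depths (mm) at the watershed outlet
observed  = [12.0, 30.5, 8.2, 45.1, 60.3, 5.0, 2.1, 14.8, 22.4, 33.0, 9.7, 11.2]
simulated = [10.5, 28.0, 9.9, 50.2, 55.7, 6.3, 3.0, 12.1, 25.0, 30.8, 8.5, 13.4]

metrics = {"NSE": nse(observed, simulated),
           "R2": r_squared(observed, simulated),
           "PBIAS": pbias(observed, simulated)}
acceptable = metrics["NSE"] > 0.5 and metrics["R2"] > 0.5 and abs(metrics["PBIAS"]) <= 25.0
print(metrics, "acceptable:", acceptable)
```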
Identification of Locations for LID Implementation
The suitable areas to implement LIDs were identified based on simulated average annual soil losses for hillslopes in the watershed from the calibrated WEPP model ( Figure 2, Table S2). Hillslopes were ranked in descending order of sediment yield. Hillslopes with greater soil losses were identified as the suitable areas (red areas, Figure 2, Table S2) as they would produce disproportionately more sediment and should be given top priority to implement LIDs. We classified the hillslopes with the highest soil losses as Critically Eroding Areas (CEAs) (striped, black lines on red areas, Figure 2, Table S2). The areas, slopes, land uses, runoff volumes, and sediment yields of hillslopes for the baseline are summarized in Table S2. The CEAs (e.g., hillslopes 10,17,16,14,12), 17% of watershed area, generated 90% of the sediment yields in the watershed. Detention ponds were designed in each of nine channels and near the watershed outlet. The channel length, width, and slopes are described in Table S1.
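The ranking step can be summarized with a short sketch: hillslopes are sorted by simulated average annual sediment yield and accumulated until a target share of the watershed total is reached. The hillslope IDs, areas, and yields below are invented placeholders; the real values come from the calibrated WEPP run summarized in Table S2.

```python
# Hypothetical per-hillslope results: (hillslope id, area in ha, sediment yield in Mg/yr)
hillslopes = [(10, 5.1, 42.0), (17, 6.3, 31.5), (16, 4.8, 18.2),
              (14, 3.9, 12.4), (12, 3.2, 9.6), (9, 7.5, 3.1),
              (13, 6.1, 2.4), (1, 104.0, 1.0)]

target_share = 0.90                      # CEAs should explain ~90% of total sediment yield
total_yield = sum(y for _, _, y in hillslopes)

ceas, cumulative = [], 0.0
for hs_id, area, sed_yield in sorted(hillslopes, key=lambda h: h[2], reverse=True):
    if cumulative / total_yield >= target_share:
        break
    ceas.append(hs_id)
    cumulative += sed_yield

cea_area = sum(a for hs_id, a, _ in hillslopes if hs_id in ceas)
print(f"CEAs: {ceas}, {cumulative / total_yield:.0%} of sediment "
      f"from {cea_area / sum(a for _, a, _ in hillslopes):.0%} of the area")
```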
Figure 2. Suitable areas and Critically Eroding Areas (CEAs) for implementation of (a) native planting, (b) permeable pavement, and (c) rain garden. Detention ponds were designed in the channels.
LID Design and Representation in the Model
Detention ponds, native plantings, permeable pavement, and rain gardens were selected in this study to evaluate their costs and impacts on hydrology and soil losses. These LIDs were chosen as they are common urban LIDs suitable for the Brentwood watershed. Wet detention ponds, designed in this study, can reduce peak discharge and improve water quality [24]. The native planting species in this study is Big Bluestem grass, which is native to Texas and is recommended by the Texas Department of Transportation for erosion control within Austin [25]. Gravel layers were utilized below the permeable pavement to increase void space and provide temporary storage of stormwater [11]. The approaches to representing these four selected LIDs in the WEPP model are shown in Table S3.
The current structure, land use, and soil condition of the Brentwood watershed were considered as the baseline in this study. The 13 scenarios, consisting of the aforementioned four LIDs, were designed in the Brentwood watershed (Table 1), with modifications from the baseline conditions (Tables S3 and S4). For single LIDs in all suitable areas (scenarios 4-7), the details of the LID design, the relevant operations, and the corresponding ID number in the Natural Resources Conservation Service (NRCS) Texas payment schedules (https://www.nrcs.usda.gov/wps/portal/nrcs/detail/national/programs/financial/ ?cid=nrcseprd1328414, accessed on 27 July 2021) are described in Table S4.
Cost Calculation of LIDs
A BMP cost estimation tool developed, improved, and applied in previous research [2,3] was used to calculate the costs of LIDs. The LID total costs (Tc) were calculated based on costs of construction (Cc), maintenance (Rmc, the ratio of annual maintenance costs to construction costs), LID design life (dl), interest rate (s), annual cost of applying LIDs (Ca), annual cost caused by land loss (Cl 1 ), and total land loss cost (Cl 2 ) (Equation (1)) [2,26]. i is the year during the design life.
The design life of all LIDs was 15 years (2005-2019) in this study. The details of cost calculations for single LIDs in all suitable areas (scenarios 4-7) are shown in Table S4. The costs of single LIDs in CEAs (scenarios 1-3) were calculated based on the costs of corresponding single LIDs in all suitable areas and area differences between all suitable areas and CEAs. The costs of bundled LIDs (scenarios 8-13) were calculated based on aggregated values of single practices.
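Equation (1) itself is not reproduced in this excerpt, so the helper below is only a generic annualization sketch built from the cost terms listed above (construction, maintenance ratio, design life, interest rate, application and land-loss costs). The capital-recovery treatment of one-time costs and the example figures are assumptions, not the published cost tool.

```python
def annualized_lid_cost(Cc, Rmc, dl, s, Ca=0.0, Cl1=0.0, Cl2=0.0):
    """Approximate average annual LID cost (USD/yr).

    Cc  : construction cost (USD)
    Rmc : annual maintenance cost as a fraction of construction cost
    dl  : design life (years)
    s   : annual interest rate (fraction)
    Ca  : annual cost of applying the LID (USD/yr)
    Cl1 : annual cost caused by land loss (USD/yr)
    Cl2 : one-time total land loss cost (USD)

    NOTE: this is not the exact Equation (1) from the cited cost tool (not
    reproduced in this excerpt); it assumes a standard capital-recovery
    annualization of one-time costs plus simple addition of recurring costs.
    """
    crf = s * (1 + s) ** dl / ((1 + s) ** dl - 1) if s > 0 else 1.0 / dl
    return (Cc + Cl2) * crf + Rmc * Cc + Ca + Cl1

# Hypothetical example: USD 40,000 construction cost, 5% annual maintenance,
# 15-year design life, 4% interest rate.
print(round(annualized_lid_cost(Cc=40_000, Rmc=0.05, dl=15, s=0.04), 2))
```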
Evaluations and Comparison of LIDs
The costs of the LID scenarios and their effects on the average annual water balance, average annual runoff depths and sediment losses at the watershed outlet, and soil losses on hillslope profiles during 2005-2019 were evaluated and compared. The LID scenarios with lower costs and greater reductions in average annual runoff depths were identified. Moreover, the responses of the annual and monthly average of runoff depths and soil losses to LID scenarios were assessed and compared. Additionally, a representative hillslope (16) suitable for all three LIDs that were implemented on hillslope profiles (native planting, NP; permeable pavement, PP; and rain garden, RG) was selected to investigate the effects of LIDs on hydrology and soil losses on hillslopes. Specifically, we examined the impacts of the three LIDs (NP, PP, and RG) on daily runoff rates and depths and sediment delivery under the five extreme storm events during the simulation period (2005-2019).
Comparisons between LIDs in CEAs and LIDs in all suitable areas and between single LIDs and bundled LIDs were also performed. The impacts of locations and areas for LID implementation on LID potential efficiencies in stormwater runoff and soil loss control were discussed. How the LID scenario design affected LID effectiveness and the physical processes of treating stormwater runoff and reducing soil losses were also covered. Recommendations for the most cost-effective and environmentally sound LID scenarios for the Brentwood watershed were provided. Further research on model improvements, data collection, model application, and LID implementation was also discussed.
Baseline Water Balance and Runoff Depths
The watershed had high evapotranspiration (ET) and minimal percolation from 2005 to 2019 based on annual water balance results ( Figure S1). Specifically, the average annual ET (718 mm), surface runoff (142 mm), and percolation (54 mm) within the watershed during 2005-2019 were 77%, 16%, and 6% of average annual precipitation (908 mm), respectively ( Figure S1). The simulated ET was high, and percolation was minimal, which is common for a small urban watershed in Austin, Texas, with a humid subtropical climate. The flow partitioning estimation methods adopted in the model provided reasonable water balance simulations in the watershed from 2005 to 2019. For example, daily climate inputs were used with the Penman method to estimate ET, and the storage routing techniques were used in percolation estimation [15,27,28].
The simulated monthly runoff depth at the watershed outlet was satisfactory during calibration and validation periods (Figure 3a,b, 0.50 ≤ NSE ≤ 0.71, R 2 = 0.77, −7% ≤ PBIAS ≤ 20%). Generally, the simulated monthly runoff matched well with the observed data during calibration and validation (Figure 3a,b). Effective hydraulic conductivity (K e ) values for the soil on hillslopes were modified in the range of 24-46 mm/h for the watershed to adjust the soil profile infiltration rate and then change the cumulative infiltration. This range for K e parameter value was reasonable compared to the previous model calibration research in this watershed [11]. The Green-Ampt Mein-Larson method based on rainfall intensities and duration [29][30][31] and the Muskingum-Cunge channel routing method incorporating water storage and transport concepts and solving the kinematic wave equation [16,32,33] provided reasonable water infiltration and peak runoff rate estimation, respectively.
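To illustrate how the calibrated K_e values feed the infiltration calculation, the sketch below solves the classic Green-Ampt relation for cumulative infiltration under continuously ponded conditions. WEPP actually uses the Green-Ampt Mein-Larson formulation for unsteady rainfall, so this is a simplification, and the wetting-front suction head and moisture deficit values are assumed rather than taken from the paper.

```python
import math

def green_ampt_cumulative_infiltration(K, psi, delta_theta, t_hours, tol=1e-8):
    """Cumulative infiltration F (mm) after t_hours of ponded infiltration, from
    the Green-Ampt relation  F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t,
    solved by fixed-point iteration.
    K           : effective hydraulic conductivity (mm/h)
    psi         : wetting-front suction head (mm)
    delta_theta : soil moisture deficit (-)
    """
    pd = psi * delta_theta
    F = K * t_hours  # initial guess
    for _ in range(200):
        F_new = K * t_hours + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

# Hypothetical storm: 3 h of ponding, K_e = 30 mm/h (within the calibrated
# 24-46 mm/h range), suction head 110 mm, moisture deficit 0.25.
print(round(green_ampt_cumulative_infiltration(30.0, 110.0, 0.25, 3.0), 1), "mm")
```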
LID Impacts on Water Balance
The impacts of the 13 scenarios on average annual surface runoff, percolation, and evapotranspiration for 2005-2019 in the watershed are shown in Figure S3. Except for detention ponds (scenario 7, DP), other LID scenarios increased average annual ET by 4-10% and reduced average annual surface runoff and percolation by 15-56% and 1-11%, respectively (Figure S3). Wet detention ponds were designed in channels to store stormwater for a period and release stormwater through channels, during which the impacts on surface runoff or percolation could be minimal [7]. Compared to the baseline, the native planting, permeable grass pavement, and rain garden involved more tall grass planting, which had the potential to result in higher ET, especially in Central Texas with a humid subtropical climate [11]. Moreover, these LIDs also could intercept storm rainfall, reduce peak runoff and volume, and enhance infiltration and groundwater recharge [7,11]. These LID scenarios were able to control stormwater runoff effectively from 2005 to 2019. Scenarios 8, 11, 1, and 4 (NP(C)_DP, native planting in CEAs with detention ponds; NP_DP, native planting in all suitable areas with detention ponds; NP(C), native planting in CEAs; NP, native planting in all suitable areas) reduced average annual runoff depths by 54-56%, which were more than the reductions for other scenarios (0-30%) (Figure S3).
Figure 3. The simulated and observed runoff depths at the watershed outlet during the model calibration (a) and validation (b) periods.
Costs of LIDs and Impacts on Average Annual Runoff Depths and Soil Losses
The 13 LID scenarios were ranked based on their costs from least to greatest (USD 1824/yr to USD 807,614/yr), and LID scenario efficiencies in reducing average annual runoff depths (Figure 4a) and sediment losses (Figure 4b) at the watershed outlet, and soil losses from hillslope profiles (Figure 4c) compared to the baseline were plotted. The LID scenarios with lower costs (8, 11, 1, and 4) and greater reductions in runoff depths and soil losses were identified. The average annual costs of scenarios 8, 11, 1, and 4 ranged from USD 2991/yr to USD 5257/yr, which were less than other scenarios where average annual costs ranged from USD 30,014/yr to USD 807,614/yr, except for scenario 7 (USD 1824/yr). Even though scenario 7 (DP) had the lowest cost, it could only reduce average annual runoff depth and sediment yields at the watershed outlet by 4% and 59%, respectively, but could not affect soil losses from hillslopes (Figure 4). Scenarios 8, 11, 1, and 4 reduced average annual runoff depths at the watershed outlet by 20-24%, sediment losses at the watershed outlet by 86-94%, and soil losses from hillslopes by 86-87% (Figure 4). The reductions in average annual runoff depths and soil losses caused by scenarios 8, 11, 1, and 4 were greater than scenarios 3, 10, 6, and 13 (RG(C), rain gardens in CEAs; RG(C)_DP, rain gardens in CEAs with detention ponds; RG, rain gardens in all suitable areas; RG_DP, rain gardens in all suitable areas with detention ponds). The costs of scenarios 3, 10, 6, and 13 ranged from USD 30,013/yr to USD 45,158/yr (Figure 4). Moreover, the reductions in average annual runoff depths and soil losses from scenarios 9 (PP(C)_DP, permeable pavements in CEAs with detention ponds) and 12 (permeable pavements in all suitable areas with detention ponds) were similar to those from scenarios 8, 11, 1, and 4 (Figure 4). However, the average annual costs of scenarios 9 and 12 were USD 651,920/yr and USD 807,614/yr, respectively, much greater than those of scenarios 8, 11, 1, and 4 (USD 2991/yr to USD 5257/yr). Specifically, both scenarios 9 and 12 reduced average annual runoff depths at the watershed outlet by 22%, sediment losses at the watershed outlet by 86%, and soil losses from hillslopes by 64% (Figure 4).
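One way to read Figure 4 is as a simple cost-effectiveness screen. The sketch below divides each scenario's reported average annual cost by its approximate reduction in average annual runoff depth; the reduction percentages are rounded from the ranges quoted above, so the resulting ratios are illustrative only.

```python
# Average annual cost (USD/yr) and approximate reduction in average annual
# runoff depth (%), rounded from the values quoted in the text.
scenarios = {
    "7 (DP)":       (1_824, 4),
    "8 (NP(C)_DP)": (4_815, 24),
    "11 (NP_DP)":   (5_257, 24),
    "9 (PP(C)_DP)": (651_920, 22),
    "12 (PP_DP)":   (807_614, 22),
}

# Rank scenarios by cost per percentage point of runoff reduction
ranked = sorted(scenarios.items(), key=lambda kv: kv[1][0] / kv[1][1])
for name, (cost, reduction) in ranked:
    print(f"Scenario {name}: {cost / reduction:,.0f} USD/yr per % runoff reduction")
```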
LID Impacts on Average Annual and Monthly Runoff Depths and Soil Losses
LID scenario reduction efficiencies on the annual and monthly average runoff depths and sediment losses at the watershed outlet varied across years and months (Figures 5 and 6). The abilities of scenarios to reduce annual and monthly average runoff depths and sediment losses were consistent with their capabilities to reduce the average annual values. In general, scenarios (8, 11, 1, and 4) that yielded greater reductions in average annual runoff depths and sediment losses also reduced annual and monthly average values more than others (Figures 5 and 6). For example, scenario 11 (NP_DP) reduced annual runoff depths by 24-93 mm (12-100%) and reduced annual sediment losses by 0.14-100.61 Mg (86-100%) at the watershed outlet from 2005 to 2019 (Figure 5a,b). Scenario 8 reduced monthly average runoff depths by 4-6 mm (13-44%) and reduced monthly average sediment losses by 0.28-6.59 Mg (85-100%) from January to December (Figure 6a,b). Reductions in monthly average sediment losses were generally larger in March, May, and October, with monthly average precipitation depths ranging from 81 to 145 mm. On the contrary, averages of monthly average sediment loss reductions for the 13 scenarios were 0.22-0.75 Mg in February, November, and December, when monthly average precipitation depths ranged from 34 to 56 mm (Figure 6b).
Hillslope Runoff Rates and Depths and Soil Losses under Storm Events
The impacts of three single LIDs (NP, RG, and PP) at hillslope 16 on the daily peak runoff rate, runoff depth, and sediment delivery under the five largest storm events with daily precipitation depths ranging from 106 to 175 mm were plotted (Figure 7). Each LID was able to reduce the daily runoff peak rate, reduce runoff depth, and eliminate soil losses from hillslope 16 under extreme events (Figure 7). The reductions to daily peak runoff rates, runoff depths, and sediment delivery at hillslope 16 varied for the different LID types (Figure 7). The capacities of LID types to control peak runoff rates, runoff depths, and sediment delivery were ranked: PP > RG > NP. For example, daily peak runoff rates were reduced by 79-94% for the five storm events with PP but were reduced by 54-87% with NP ( Figure 7a). Moreover, PP reduced daily runoff depth and sediment delivery by 30% and 100%, respectively, on October 13, 2013, from 175 mm of rainfall. Native planting (NP) reduced runoff depth and sediment delivery by 7% and 87%, respectively, on the same day.
For these three LIDs on hillslopes, their capabilities in reducing peak runoff rates were consistent with their potential to reduce runoff depths and sediment deliveries. For instance, PP led to greater reductions in peak runoff rates than RG and NP, and PP's reductions to runoff depths and sediment deliveries were also the greatest. Similar to their impacts on runoff and soil losses for average monthly, annual, and average annual levels, LIDs have the potential to reduce runoff and soil losses more during time periods with greater precipitation depths.
Peak Runoff Depth Simulation and Potential Improvements
The model overestimated runoff peaks in October 2013, May and October 2015, and August 2017 in the watershed with monthly precipitation depths of 359, 477, 341, and 307 mm, respectively (Figure 3a,b). There was at least one storm event during each of these months having a large precipitation depth. For example, precipitation depths were 175, 116, 117, 79, and 142 mm, with storm durations of 4.77, 2.59, 3.06, 2.60, and 2.75 h, on 13 October 2013, 25 May 2015, 23 and 24 October 2015, and 26 August 2017, respectively. The unreasonable representation of daily flow partitioning dynamics during these storm events might have caused overestimated daily runoff volumes and resulted in overestimated monthly runoff depths in the watershed [15,16,18]. Specifically, the simulated runoff depths from hillslope 1 under these storm events were 147, 90, 58, 64, and 112 mm, respectively. These storm events occurred on days with average daily air temperatures of 23-24 °C and dew point temperatures of 7-18 °C, and the peak runoff rates ranged from 2-6 mm/h and effective hydraulic conductivity values ranged from 85-188 mm/h. The Austin-Whitewright complex soils in the watershed became saturated, and daily total soil water contents were estimated as 357, 418, 345, 357, and 351 mm, respectively. The Penman equation and percolation algorithms simulated low daily ET (3-6 mm) and percolation (0.00-1.38 mm) values at hillslope 1 [15]. The model might have underestimated daily ET and percolation and overestimated daily runoff depths under these conditions and storm events. The uniform soil texture and depth used in these model simulations might have led to underestimated infiltration and total soil water, and overestimated runoff depths [15,34]. More detailed soil depth and texture information in the model inputs could better represent spatial-temporal changes in soil characteristics across the watershed and improve the model's performance in capturing dynamics between daily flow partitioning and runoff depths [15,35].
LIDs in CEAs vs. LIDs in All Suitable Areas
Implementing LIDs in CEAs was more cost-effective than implementing LIDs in all suitable areas, as LIDs in CEAs could achieve similar reductions in runoff and soil loss control to those of LIDs placed in all suitable areas, which were more expensive. From 2005 to 2019, the efficiencies in reducing average annual runoff depths and soil and sediment losses for scenarios in all suitable areas were close to those of the corresponding scenarios in just the CEAs [2,7]. The average annual costs of scenarios in CEAs were lower than the corresponding scenarios in all suitable areas [2,7]. For instance, the reductions to average annual runoff depths and sediment losses at the watershed outlet and soil losses from hillslopes from both scenarios 11 (NP_DP) and 8 (NP(C)_DP) were 24%, 94%, and 87%, respectively. The average annual cost of scenario 8 was USD 4815/yr, lower than that of scenario 11, USD 5257/yr (Figure 4). Similarly, scenarios 1 (NP(C)) and 4 (NP)'s reductions in average annual runoff depths and sediment losses at the watershed outlet and soil losses from hillslopes were similar (Figure 4).
The cost difference between single LIDs (in CEAs or in all suitable areas) and the corresponding bundled LIDs (single LIDs with detention ponds) was the cost of the detention ponds, USD 1824/yr. For controlling soil losses from hillslopes, the efficiencies of the single LIDs were the same as the corresponding bundled LIDs, as wet detention ponds were implemented in channels, and did not affect upstream hillslope water movement and soil erosion (scenario 7, DP on Figure 4c) [7].
Effectiveness of LIDs
The details of the LID scenario design impacted their effectiveness [2,7,12,26]. For example, when installed with drivable grass, the PP relevant scenarios designed in this study were able to allow rainfall to infiltrate into the soil and reduce flooding and pollutant losses (e.g., total suspended solids, nutrients, heavy metals) that might be carried by stormwater [8,9]. By changing bare soil to bromegrass and increasing soil hydraulic conductivity, PP led to greater reductions in the peak runoff rate, runoff depth, and sediment delivery at a single hillslope than those reduced by RG and NP [8][9][10][11]. Unlike PP, stormwater that entered RG was from upstream drainage areas and storm events could result in too much runoff for RG to treat if the upstream drainage areas were large. This may have contributed to poorer performance with RG in reducing runoff peaks and depths and sediment delivery from single hillslopes compared to those from PP [7,9,10]. With their extensive and deep plant roots, NP could trap and absorb localized stormwater on-site, control soil erosion, increase water percolation, improve nutrient filtration, and enhance soil water replenishment [8,9]. However, NP did not increase soil hydraulic conductivity like PP nor create a depressional area to store water like RG [8,9]. Thus, NP was less effective in reducing runoff peaks and depths and soil losses for a single hillslope than PP and RG under extreme storm events (Figure 7). Being designed in the largest areas, the NP scenarios could yield greater reductions in average annual runoff depths and soil losses and annual and monthly average soil losses than PP and RG scenarios (Figures 4, 5b and 6b). However, the NP scenarios could not obtain the greatest runoff depth reductions for every year or month (Figures 5a and 6b). Instead of NP(C)_DP and NP_DP, the PP scenarios (PP(C)_DP and PP_DP) were more effective in reducing runoff depths for extremely wet months, May and October, with monthly average precipitation depths of 145 and 114 mm (Figure 6b).
Selection of LIDs
The selections of the suitable LIDs for a specific region can be based on field characteristics (soil, climate, land use), locations with specific stormwater runoff and pollutants of concern, the costs to implement and maintain LIDs, etc. [2,7,12,26]. Locations, areas to implement LIDs, and LID scenario characteristics can impact how efficiently different LID practices can control stormwater runoff and sediment losses. Besides, other potential impacts (soil moisture and temperature, downstream water quality, wildlife habitat, etc.) and funding for installing LID practices are essential considerations when choosing the most suitable ones [2,26,36,37].
The scenarios that were designed on the hillslopes can reduce runoff and sediment losses from hillslopes and at the watershed outlet. However, DPs, designed in the channel or at the end of drainage areas, do not impact rates and the amount of runoff and soil that leave the hillslopes. Instead, DPs were useful in reducing runoff rates in the channel and treating stormwater and reducing sediment losses at the watershed outlet in the Brentwood watershed. Compared to PP and RG, NP was less effective in reducing daily runoff depths and sediment losses from hillslope 16 for large storm events. However, the NP scenarios (1, 4, 8, and 11) were able to achieve greater reductions in average annual and annual runoff depths and sediment losses from the entire watershed because NP was applied across larger areas than PP and RG. Specifically, all suitable areas for NP were hillslopes 9, 10, 12, 13, 14, 16, and 17, with an area of 19.64 ha. All suitable areas designed for PP and RG were 17.32 ha (hillslopes 9, 10, 13, 14, 16, 17) and 8.6 ha (hillslopes 13,14,16,17), respectively. The average annual cost for NP was USD 73/ha/yr, much lower than those for RG and PP (USD 2186/ha/yr and USD 20,540/ha/yr).
Future Research
Instead of uniform soil inputs, variable soil inputs based on measured soil properties across different areas in the watershed have the potential to enhance simulations of daily dynamics between water balance and peak runoff depths [15]. Moreover, there is a need to improve the representation of more LIDs in the WEPP model [29]. For example, adding features of implementing impoundments on hillslopes can allow for the representation of rain barrels and cisterns in the model. Further, more soil erosion monitoring data for on-site LIDs can support model validation of water quality variables for designed LID scenarios and better demonstrate LID effectiveness [15]. Moreover, there are potential opportunities to choose and implement various LID scenarios in different locations in the watershed to explore LID scenarios with the maximum water quantity and quality benefits with minimum cost [2,7].
Conclusions
The WEPP model was applied to evaluate the implementation of LID scenarios, including permeable pavement, rain gardens, native plantings, and detention ponds, in the Brentwood, Texas watershed from 2005 to 2019. WEPP was calibrated and validated for monthly runoff depth at the watershed outlet from 2008 to 2019. The costs of LID scenarios and their impacts on runoff and sediment losses were evaluated and compared. The LID scenarios with native plantings were identified as the most cost-effective scenarios with the greatest reductions in average annual runoff depths and sediment losses and annual and monthly average sediment losses at the watershed outlet, and average annual soil losses from the hillslopes. Permeable pavement was more effective in reducing daily runoff rates, runoff depths, and sediment delivery than rain gardens or native planting from extreme storm events for hillslopes.
The LID scenarios applied to CEAs were better choices than the corresponding scenarios applied to all suitable areas. The reductions in average annual runoff depths and sediment losses from LID scenarios applied to all suitable areas were close to those from LID scenarios applied only in the CEAs, which were more cost-effective. Scenarios with a single LID bundled with detention ponds were more effective in controlling stormwater runoff and sediment losses than using the single LID alone. The selections of suitable LIDs for a specific region can be based on field characteristics, specific pollutants of concern, LID costs, and potential funding support for LID implementation. These research results can support decision-making on the selection of the most cost-effective LIDs with the greatest stormwater and soil erosion control benefits.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/w13152076/s1, Table S1: Delineation of the watershed by the online WEPPcloud interface, Figure S1: Watershed delineation for the Brentwood watershed from the WEPPcloud interface, Table S2: Areas, slopes, land uses, runoff volumes, and sediment yields at hillslopes for the baseline, Table S3: The representation of LIDs in the WEPP model, Table S4: Design and cost calculations of single LIDs in all suitable areas, Figure S2
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "ff1c4fdbcef4fdd51c7268bf52e3a067dcfbc8cf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/13/15/2076/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "84e05abce11728c1f1b0edec1fdf7051b7b6895c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Generation of T-cell receptors targeting a genetically stable and immunodominant cytotoxic T-lymphocyte epitope within hepatitis C virus non-structural protein 3
Hepatitis C virus (HCV) is a major cause of severe liver disease, and one major contributing factor is thought to involve a dysfunction of virus-specific T-cells. T-cell receptor (TCR) gene therapy with HCV-specific TCRs would increase the number of effector T-cells to promote virus clearance. We therefore took advantage of HLA-A2 transgenic mice to generate multiple TCR candidates against HCV using DNA vaccination followed by generation of stable T-cell–BW (T-BW) tumour hybrid cells. Using this approach, large numbers of non-structural protein 3 (NS3)-specific functional T-BW hybrids can be generated efficiently. These predominantly target the genetically stable HCV genotype 1 NS31073–1081 CTL epitope, frequently associated with clearance of HCV in humans. These T-BW hybrid clones recognized the NS31073 peptide with a high avidity. The hybridoma effectively recognized virus variants and targeted cells with low HLA-A2 expression, which has not been reported previously. Importantly, high-avidity murine TCRs effectively redirected human non-HCV-specific T-lymphocytes to recognize human hepatoma cells with HCV RNA replication driven by a subgenomic HCV replicon. Taken together, TCR candidates with a range of functional avidities, which can be used to study immune recognition of HCV-positive targets, have been generated. This has implications for TCR-related immunotherapy against HCV.
INTRODUCTION
It is estimated that 180 million individuals are infected with hepatitis C virus (HCV) worldwide today, of which 130 million are chronic carriers at risk of developing liver cirrhosis and liver cancer. Being a small, enveloped RNA virus, HCV is one of the most persistent viruses in humans, and spontaneous resolution of HCV infection only occurs in a minority of the infected (Alter, 2006). Considering the steady increase of reported incidence in Europe (Rantala & van de Laar, 2008), a reduction of HCV prevalence is not anticipated in the near future.
Analyses of individuals who have resolved their HCV infection indicate that successful immunity requires effector T-cells. Early CD4+ and CD8+ lymphocyte activation during infection is strongly associated with eradication of HCV (Cucchiarini et al., 2000; Diepolder et al., 1996; Lechner et al., 2000; Pape et al., 1999; Schulze zur Wiesch et al., 2005), and early development of polyfunctional T-cells further predicts spontaneous resolution of HCV infection (Badr et al., 2008). However, chronically infected patients frequently show a failure to sustain virus-specific CD8+ lymphocytes (Lechner et al., 2005), tolerance induction (Cabrera et al., 2004; Sugimoto et al., 2003) and impairment in proliferative capacity and effector function (Wedemeyer et al., 2002), which are thought to be a consequence of continued antigen stimulation and/or viral factors affecting antigen-presenting cells (Bain et al., 2001; Goutagny et al., 2003; Nattermann et al., 2006; Ulsenheimer et al., 2005; Wertheimer et al., 2007).
A successful immune response in people infected with HCV is characterized by strong and persistent CTL responses. One well-defined CTL epitope is the HCV non-structural protein 3 (NS3) 1073-1081 peptide (NS3 1073 ), and responses against this epitope are often found in spontaneous resolution of HCV infection, which are well-maintained after spontaneous recovery of HCV (Wertheimer et al., 2003). Several vaccines that target induction of NS3-specific T-cell immunity are currently being tested in clinical trials (Thompson & McHutchison, 2009). Recently, a novel strategy for engineering new effector T-cells against NS3 1073 via gene transfer of a T-cell receptor (TCR) in vitro was described (Zhang et al., 2010). Given the observation that the NS3 1073 epitope can be associated with cross-reactivity with an influenza A virus (IAV) neuraminidase (NA) epitope, and that cross-reactive CD8 + T-cells have been found in fulminant acute hepatitis patients (Urbani et al., 2005), access to T-cells that differ in functional avidity would be desirable for comparison.
To date, only one NS3 1073 -specific human TCR (Zhang et al., 2010) has been identified and no murine alternatives are available. Murine TCRs present several advantages over human TCRs in gene modification of human T-cells, including an enhanced and more sustained level of surface expression and improved anti-tumour activity (Kieback & Uckert, 2010). Moreover, a recent murine TCR gene-therapy trial that targeted tumour antigens demonstrated a therapeutic effect in cancer patients, although some immune responses to murine TCR variable regions were found in a subset of patients (Davis et al., 2010). Given these facts, we therefore chose to explore the HLA-A2 transgenic (HHD) mouse model to establish more HCV TCR gene candidates. Here a large number of functional NS3-specific T-cell-BW (T-BW) hybrid clones were obtained by fusing the BW5147 cell line and activated splenocytes cells from HHD mice immunized with an electroporation-enhanced HCV NS3 DNA vaccine. The results indicate that the T-BW hybrid clones are highly specific to the NS3 1073 of HCV genotype 1a and 1b variants. Differences were found regarding their functional avidity and affinity to the NS3 1073 /HLA complex pentamer, as well as their ability to respond to human hepatoma cells harbouring HCV subgenomic RNA.
During the screening process, the hybrid clones were regularly subcloned and checked for CD3 expression to ensure that they maintained stable CD3 expression and IL-2 activity (Supplementary Fig. S1, available in JGV Online). Although the clone I8A4 demonstrated a repetitively low response against NS3 1073, the percentage of live CD3-expressing cells and the cell-surface CD3 expression of this clone were similar to the others at the time of testing. None of the clones showed CD8 surface expression (data not shown), which is analogous to previously reported BW-derived T-cell clones (Rock et al., 1990); thus, the cytokine release detected here is independent of the CD8 cofactor.
TCR gene usage of NS3 1073-specific T-BW cell hybrid clones
Examination of the TCR usage of these T-BW hybrid clones was done in order to assess the TCR responsible for recognizing NS3 1073. RNA was isolated from each of these clones and at least five TCR α and β gene products were sequenced. Sequence analysis showed that six of the nine clones were sister clones (Fig. 2a); AV9 and BV2 gene usage and an identical third complementarity-determining region (CDR3) sequence were found in I4F8, I2B11, I4E9, I6B3, I7B7 and I4F9. The AV9 gene was also used by I4G7, which has a unique CDR3 coding sequence using AJ27. The AV16 and AV2 genes were used by I8H4 and I8A4, respectively, and have their own CDR3 with different AJ genes. The TCR β genes were likewise determined for all clones. Three clones utilized VB13 with various subfamilies: BV13-1 (I8H4) and BV13-2 (I8A4), while BV13-3 and BV13-1 were found in I4G7, and they all used different BD and BJ genes. The I4F8 and its sister clones all used the BV2 gene and shared identical CDR3 nucleotide sequences.
Here, the CDR3α and CDR3β loops were between 10-13 and 11-14 aa residues, respectively. They exhibit a type II pattern bias (Turner et al., 2006) that is characterized by the preference of the motifs 'KLTFG' (CDR3α loops) and 'GG' (CDR3β loops). A commonly found motif in the BJ segments here was 'FGPGTR', which is reported only in three out of 13 murine and four out of 14 human TRBJ genes (IMGT Repertoire IG and TR database; http://www.imgt.org/IMGTrepertoire/). Interestingly, the same three motifs also exist in a recently reported human TCR that is specific to the NS3 1073 peptide (Zhang et al., 2010). This result suggests that highly conserved CDR3 motifs are shared by the murine and human TCRs that are specific to the NS3 1073 peptide.
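A motif screen of this kind is straightforward to express programmatically. The CDR3 amino-acid sequences below are invented placeholders (only the three motifs are taken from the text, and the clone names are reused for illustration); real sequences would come from the cloned and sequenced TCR transcripts.

```python
# Hypothetical CDR3 sequences keyed by clone name; placeholders only.
cdr3_alpha = {"I4F8": "CALSDKLTFGKGT", "I4G7": "CAASGNKLTFGAGT", "I8H4": "CATDAGAKLTFGGGT"}
cdr3_beta  = {"I4F8": "CASGGGQDTQYFGPGTR", "I4G7": "CASSLGGAYEQYFGPGTR", "I8H4": "CASSDGGELFFGPGTR"}

motifs = {"alpha": "KLTFG", "beta": "GG", "beta_joining": "FGPGTR"}

for clone in cdr3_alpha:
    has_alpha_motif = motifs["alpha"] in cdr3_alpha[clone]
    has_beta_motifs = (motifs["beta"] in cdr3_beta[clone]
                       and motifs["beta_joining"] in cdr3_beta[clone])
    print(f"{clone}: alpha motif {has_alpha_motif}, beta motifs {has_beta_motifs}")
```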
Affinity to NS3 1073 /HLA-A2 pentamer
Since the T-BW hybrid clones lack the CD8 coreceptor, which might be required for the stabilization of the TCR-peptide/HLA complex (Roszkowski et al., 2003), we next asked whether any of these clones could bind the NS3 1073/HLA-A2 pentamer. Following incubation with an R-phycoerythrin (R-PE)-labelled NS3 1073/HLA-A2 pentamer, the fluorescence intensity of the staining was quantified by flow cytometry. Compared with the negative-control pentamer HBVcore 18-27/HLA-A2, it was found that I8H4, one of the high-avidity clones, had an increased fluorescence in NS3 1073/HLA-A2 pentamer staining (Fig. 2b).
Comparison of the functional avidity
We next asked whether the functional avidity of these T-cell hybrid clones might differ. The clones were regularly subcloned and checked by FACS and were tested when the TCR/CD3 expression was at least 50 % at the time of the experiment. They were tested with T2 target cells pulsed with diluted amounts of NS3 1073 peptide, and the results showed that, while I8A4 required at least 400 ng peptide ml−1 to obtain a half-maximum response of IL-2 release, about 10 ng peptide ml−1 was sufficient for stimulating I8H4, I4G7 and I4F8 to a half-maximum response of IL-2 release (Fig. 3). The TCR/CD3 surface expression was 51 % (I8H4), 82 % (I8A4), 88 % (I4G7) and 93 % (I4F8) at the time of this experiment. The experiment was performed three times with similar results. Based on these data, I8A4 is ranked as a low-avidity T-cell clone, while I8H4, I4G7 and I4F8 resemble moderate/high-avidity T-cells described previously (McKee et al., 2005).
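The half-maximum comparison can be reproduced from a titration curve with a simple interpolation. The sketch below uses log-linear interpolation between bracketing points on invented IL-2 values (a sigmoidal fit would normally be preferred), so the printed EM50 is illustrative only.

```python
import numpy as np

def em50(peptide_ng_ml, il2_response):
    """Estimate the peptide concentration giving half-maximal IL-2 release
    by linear interpolation on a log10 concentration scale.
    Assumes the response increases monotonically with concentration."""
    conc = np.log10(np.asarray(peptide_ng_ml, float))
    resp = np.asarray(il2_response, float)
    half_max = resp.max() / 2.0
    order = np.argsort(conc)
    # np.interp needs an increasing x-grid, here the sorted responses
    return 10 ** np.interp(half_max, resp[order], conc[order])

# Hypothetical titration (ng peptide/ml vs. IL-2 in pg/ml) for a high-avidity clone
concentrations = [0.1, 1, 10, 100, 1000, 10000]
il2 = [5, 40, 480, 820, 950, 1000]
print(f"EM50 ~ {em50(concentrations, il2):.0f} ng/ml")
```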
Cross-genotype reactivity
We next asked whether these clones cross-reacted with other viral peptides, in particular the naturally occurring genotype variants of HCV NS3 1073 and the IAV NA peptide, Flu-NA 231, to which cross-reactivity of HCV NS3 1073 of the genotype 1b strain has been described (Wedemeyer et al., 2001). As shown in Fig. 4, these T-cell clones were activated only when cultured with the genotype 1a and 1b peptides of HCV NS3 1073. None of the other HCV genotype peptide variants tested was sufficiently stimulatory to induce substantial IL-2 production, although all variants displayed significant binding affinity to the HLA-A2 molecule (Fytili et al., 2008). No IL-2 production was found against the Flu-NA peptide (Fig. 4) or other viral peptides, including the HCV non-structural protein 5 (NS5) peptides NS5 2221 and NS5 1992 (SPDADLIEANL and VLTDFKTWL), and human cytomegalovirus (HCMV) pp65 495 (NLVPMVATV) (data not shown). The TCR/CD3 surface expression was 97 % (I8H4), 70 % (I8A4), 99 % (I4G7) and 95 % (I4F8) at the time of this experiment. The results suggest that these T-cell clones are highly specific for genotype 1 of HCV NS3.
(Depicted fold induction is given as the ratio of cytokine concentration in co-cultures containing peptide-loaded targets (10 µg indicated peptide ml−1) over the unloaded control (0 µg indicated peptide ml−1); mean values of duplicate co-cultures from one experiment are shown; the amount of cytokine released (pg ml−1) is shown on the top of each bar; *, not detected.)
Relevance of each amino acid position of the NS3 1073 peptide for hybridoma reactivity
Since amino acid positions 2, 7 and 9 and positions 3, 4 and 5 of NS3 1073 are key positions in HLA binding and interaction with human TCR (Söderholm et al., 2006), respectively, we next tested whether they have relevance for the four murine T-BW hybrid clones I8H4, I8A4, I4G7 and I4F8. Alanine-substituted peptide analogues of NS3 1073 loaded onto T2 cells in descending concentrations were tested against these T-cell clones. As shown in Fig. 5, alanine substitutions at positions 3, 4, 5 and 7 completely abolished the IL-2 release in all four T-cell clones, which is in line with that observed for human T-cells. Complete or partial IL-2 release in response to alanine substitutions at positions 1, 2, 6, 8 and 9 was, however, obtained in high-avidity clones, and was also observed at reduced peptide concentrations. This was unfortunately not observed for I8A4 (low avidity), which responded only to alanine substitution at position 1 and none of the other substitutions. The TCR/CD3 surface expression was 51 % (I8H4), 82 % (I8A4), 88 % (I4G7) and 93 % (I4F8) at the time of this experiment.
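The panel of analogues used in such a scan is easy to enumerate programmatically. In the sketch below the genotype 1a NS3 1073-1081 sequence is assumed to be CINGVCWTV, which is taken from the wider literature rather than from this text, and each non-alanine position is replaced in turn.

```python
def alanine_scan(peptide):
    """Return the single-alanine-substituted analogues of an epitope,
    keyed by the substituted position (1-based)."""
    analogues = {}
    for i, residue in enumerate(peptide):
        if residue != "A":
            analogues[i + 1] = peptide[:i] + "A" + peptide[i + 1:]
    return analogues

# Assumed genotype 1a NS3 1073-1081 sequence (not restated in this text)
for pos, analogue in alanine_scan("CINGVCWTV").items():
    print(pos, analogue)
```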
Recognition of target cells with low HLA-A2 expression
T2 cells are a lymphoblastic cell line commonly used as antigen-presenting cells for loading exogenous peptide and, due to their constitutive HLA-A2 expression and TAP (transporter associated with antigen processing) deficiency, the density of these target molecules is often saturated. We next asked whether our T-cell hybrid clones recognize target cells that have low HLA-A2 expression. The hepatoblastoma Huh-6 and C1R-A2 cell lines were chosen for this experiment. Cell-surface expression of HLA-A2, assessed by staining with an anti-HLA-A2 mAb (Fig. 6a) after NS3 1073 peptide stabilization, showed that both cell lines have lower HLA-A2 expression than T2 cells. Huh-6 cells showed the lowest HLA-A2 expression, as only a fraction of the cells were positive for staining compared with the negative-control C1R-null cell line (Fig. 6b). We then assessed IL-2 production in the T-cell clones co-cultured with these target cells. As shown in Fig. 6(c), NS3 1073 peptide-loaded C1R-A2 cells could stimulate significant IL-2 production in all four T-cell lines, at a similar magnitude to that observed for the T2 target cells. Significant IL-2 release against the peptide-loaded Huh-6 cells was, however, only detected in the high-avidity clones, especially the I8H4 clone (Fig. 6d). TCR/CD3 surface expression was 90 % (I8H4), 80 % (I8A4), 89 % (I4G7) and 90 % (I4F8) at the time of this experiment.
Fig. 4. Cross-reactivity against other viral peptides that share similarity with the NS3 1073 peptide sequence encoded by the DNA vaccine (genotype 1a). Overnight IL-2 production in T-BW hybrid clones against T2 cells loaded with the indicated viral peptide (10 µg peptide ml−1) was measured for each T-cell clone and is shown as a percentage of IL-2 production by genotype 1a of the NS3 1073 peptide. Amino acids that differ from genotype 1a are indicated in bold. Mean values and SD of triplicate co-cultures are shown. Flu, Flu-NA 231; gt, genotype.
Reactivity against HCV RNA replicon hepatoma cells
We next analysed the ability of the T-cell hybrids to recognize Huh-7/Lunet HCV replicon cells that harbour the subgenomic HCV replicon. Huh-7/Lunet hepatoma replicon cells maintain constant levels of HCV replication over several years and represent an excellent model of persistent HCV infection (Bartenschlager & Sparacio, 2007). As Huh-7 cells lack HLA-A2 expression, they were stably transduced with lentiviral vectors expressing HLA-A2 and a selectable marker encoding the blasticidin-resistance gene, and transfected with the HCV genotype 1b Con1-ET subgenomic replicon and a selection marker conferring the neomycin-resistance gene, using an approach described previously (Ahlén et al., 2007). As shown in Fig. 7(b), antigen-specific IL-2 production by the T-cell clones was detected after co-culture at different ratios with peptide (NS3 1073 genotype 1a)-loaded HCV replicon cells harbouring both the Con1-ET replicon and HLA-A2 (R-neo/A2) or control cells expressing only HLA-A2 (A2). In these cultures both IL-2 and IFN-γ were detected, particularly from the high-avidity I8H4, I4F8 and I4G7 T-cell clones, and this indicates that the lentivirally transferred HLA expression is functional. The TCR/CD3 surface expression was 90 % (I8H4), 80 % (I8A4), 89 % (I4G7) and 90 % (I4F8) at the time of the experiment. In the same experiment, we investigated whether these T-cell clones recognized the replicon cells directly without addition of exogenous peptide. As shown in Fig. 7(a), IL-2 production was detected in the co-culture of I8H4 and HLA-A2-positive HCV replicon cells (R-neo/A2), and this IL-2 production was dependent on the ratio of the added cells. No IFN-γ was detected (data not shown). Stimulation with HCV replicon cells without HLA-A2 expression or cells with HLA-A2 expression only did not stimulate any IL-2 release from I8H4. This result concurs with previous results that I8H4 is the only clone with affinity to the NS3 1073/HLA-A2 pentamer (Fig. 2) and has an increased response to the genotype 1b sequence (Fig. 4) and the Huh-6 target cells (Fig. 6).
Transfer of TCR genes into naïve non-HCV specific human T-cells
Although BW hybridomas are a good model to assess TCR functions, they have limited clinical application in humans. To determine whether the murine TCRs identified here can redirect non-HCV-specific human T-lymphocytes against HCV+ cells, peripheral blood lymphocytes (PBLs) from healthy human donors were transduced with retroviral vectors packaged with genes encoding the I8H4 TCR and the I4F8 TCR, which reacted strongly against the NS3 1073 peptide in previous experiments. Surface expression of the respective TCRs was detected following retroviral transduction in 10-20 % of CD3+ T-cells, and not noticeably in mock, untransduced PBL controls (<0.15 %), stained with the mouse Vβ antibodies. Transduced T-cells, here defined as mouse Vβ+ human CD8+ cells, were found to produce significant amounts of IFN-γ when stimulated with NS3 1073 1a or 1b peptide-loaded T2 cells (Fig. 8a). Tumour necrosis factor alpha (TNF-α) was found frequently in this IFN-γ+ population, and a 'triple'-positive population (IL-2+ TNF-α+ IFN-γ+) was clearly demonstrated (Fig. 8b). The CTL function was studied on luciferase-producing bioluminescent Huh-7/Lunet HCV replicon cells (HLA-A2+ Luc-ubi-neo Con1+) that had been co-incubated with transduced T-cells, using a charge-coupled device (CCD) camera. Images were taken at 20 h after co-incubation and analysed with Living Image software (version 4.2; Caliper Life Sciences). The analysis showed an efficient elimination of bioluminescent HCV replicon cells that had been co-incubated with I4F8 and I8H4 TCR-transduced T-cells (Fig. 8c, d), and a significant increase of hepatocellular aspartate transaminase was also detected in these co-culture supernatants when compared with those co-incubated with mock-transduced T-cells (Fig. 8e). In conclusion, our data indicate that murine TCRs identified by the current approach can provide antigen specificity to naïve non-HCV-specific T-cells to generate new effector T-cells that are polyfunctional and can eliminate HCV RNA-replicating hepatoma cells.
In this study, multiple NS3-specific T-cell clones were obtained in the HLA-A2 transgenic (HHD) mouse model, generated by DNA vaccination with electroporation and stable immortalization of these T-cell clones via somatic-cell hybridization with the BW5147 cell line.
These T-cell clones recognize the NS3 1073 epitope, which is one of the most important CTL epitopes in hepatitis C, often associated with spontaneous resolution of HCV infection (Wertheimer et al., 2003). As shown here, they are highly specific to NS3 1073 and cross-recognition is restricted within genotype 1a and 1b virus variants and not to other related viral sequences tested. This is interesting because genotype 1a and 1b viruses are found in up to 70 % of infected patients in the USA and Europe. Moreover, these clones differ from each other with regard to their functional avidity and affinity. This resembles the discrepancy in T-cell avidity observed in humans between donors who have recovered from HCV infection and chronically infected patients (Neveu et al., 2008). In this regard, the high-avidity clones I8H4, I4G7 and I4F8 generated in this study have similar EM50 (effective molarity) values (approx. 10 ng ml−1) to those found in CD8+ T-cells in individuals who have recovered from HCV infection, while clone I8A4, which has a 10-100-fold-higher EM50 value (400 ng ml−1), resembles the T-cells found in chronically infected patients (Neveu et al., 2008). Furthermore, there has been an elegant study by Kasprowicz et al. (2008) in which it was demonstrated that NS3 1073-specific T-cells that are CD8-independent might represent cells of higher functions, i.e. require substantially less viral peptide as well as fewer cells to achieve the effector function. This might be of relevance for the mouse TCRs reported here, as they were raised in the absence of human CD8 and are capable of recognizing the target cells in a CD8-independent manner.
To our knowledge, this is the first study that has examined the TCR repertoire that is induced by an HCV vaccine. The TCR repertoire induced by natural infection of HCV has so far described the b chain (Miles et al., 2011), but it is interesting to find that our murine TCRs not only share similar CDR3 lengths to those found there, but also the same sequence motifs found in a human NS3 1073 TCR (Zhang et al., 2010). Given the conserved nature of these motifs, it is likely that they play a key role in the interface between TCR and antigen contacts, tickling the T-cell to respond.
Moreover, our report is the first to demonstrate that HCV-specific T-cell hybridomas of different functional avidities towards a human HCV CTL target can be generated rapidly by DNA vaccination of HHD mice. Studies of HCV antigen presentation often require substantial amounts of HCV-specific lymphocytes/primary human T-cells (Accapezzato et al., 2005; Barth et al., 2005, 2008; Lapenta et al., 2006) but, because of the short lifespan and the requirement of repetitive antigen stimulation with primary T-cell lines, an alternative approach to obtaining large amounts of HCV-specific T-cells would be of interest.
Fig. 8. Multiple effector functions in the TCR-expressing naïve HCV-non-specific CD8 T-cells co-cultured with peptide-loaded T2 cells or the endogenous viral peptide in HCV Con1 genotype 1b replicon cells. (a) Intracellular IFN-γ staining on mouse (mu)Vβ+ human CD8+ lymphocytes in transduced T-cells co-cultured overnight with peptide (NS3 1073 1a or 1b)-loaded T2 cells or controls, and (b) intracellular TNF-α and IL-2 staining in the IFN-γ+ population in (a). (c) Bioluminescence of HCV replicon-encoded luciferase activity in Lunet-HlaA2-Luc-ubi-neo Con1 replicon cells that have been co-incubated with human T-cells transduced with I4F8 or I8H4 TCR, as assessment of antiviral inhibition specific to the endogenous NS3 1073 viral peptide (1b). (d) Anti-HCV effect on Lunet-HlaA2-Luc-ubi-neo Con1 replicon cells, and (e) hepatocellular injury caused by TCR-transduced T-cells in quadruple co-cultures. Mean values and SD are given and expressed as percentage relative light units for luciferase (where mock corresponds to 100 %), and international units (I.U.) per litre for aspartate transaminase. * Indicates P value <0.01 (Student's t-test) compared with mock. Dotted line indicates the cut-off value of background bioluminescence in empty wells.
The T-cell hybrid clones presented here could be useful reporter T-cells with mouse cytokines as reporter proteins; unlike primary T-cell clones, they do not require maintenance through antigen stimulation or cytokine growth factors, but grow vigorously in simple cell-culture medium without special supplements. Their activity does not fluctuate in a cyclical manner, as occurs with some antigen-stimulated T-cell clones. Freezing and recovery are easy and, moreover, good viability and activity are usually observed. Some limitations need to be considered here. One is that avidity is determined in peptide-titration assays and not by the off-rate of pentamers after binding to the TCR. The second is that BW-derived hybrids lack CTL function because CD8 expression is suppressed (Rock et al., 1990).
Despite the fact that only one hybridoma (I8H4) here recognized endogenously processed antigen from human hepatoma cells that replicate HCV RNA, we note that functional properties were further improved when I8H4 and I4F8, the latter also a high-avidity TCR, were genetically transferred and expressed on naïve non-HCV-specific human peripheral T-lymphocytes. Both murine TCRs were highly functional in human CD8 T-cells to give rise to antigen-specific polyfunctional human CD8 + T-cells capable of demonstrating at least four effector functions, including eliminating HCV genotype 1b replicon cells and causing aspartate aminotransferase (AST) enzyme release. This is encouraging, as polyfunctional effector T-cells capable of IL-2 production are associated with effective control of HCV (Ciuffreda et al., 2008) and represent an important feature in protective T-cell memory induced by highly efficacious human vaccines (Ahmed & Akondy, 2011). Because HCV genotype 1 is in general associated with drug resistance, the notion that our TCRs can crossrecognize genotype 1a and genotype 1b viral peptides makes them interesting as immunotherapy candidates.
It is true that we have failed to induce NS3 1406 T-cells in this study, because in HLA transgenic mice they are better induced by peptide vaccination (Brinster et al., 2001). However, with the current approach we have now generated new TCRs that target NS5 CTL epitopes that are associated with control of acute self-limited hepatitis C (A. Pasetto and others, unpublished data). With the bioluminescent HCV replicon cells as an indicator for CTL efficiency, it would be interesting to compare the antiviral potentials in TCRmodified effector T-cells with various peptide specificities.
In summary, the current study has demonstrated an efficient way to generate TCR candidates that range in different functional avidities, with specificity against an important HCV CTL target. It has implications on the development of HCV immunotherapy and the understanding of T-cell recognition of HCV-infected cells.
METHODS
Human PBMCs. PBMCs from healthy blood donors were collected at the Karolinska University Hospital under informed consent, and isolated using Ficoll-Hypaque density-gradient centrifugation. Ethical permission was obtained from the Regional Ethical Review Board (EPN) of the Karolinska Institutet.
Generation of HCV NS3-specific T-BW hybridoma clones.
Splenocytes and lymph node cells were isolated from coNS3/4A-DNA-immunized HHD-C57BL/6 mice as described previously (Ahlén et al., 2007). Following 5 days of stimulation with HCV NS3 peptides, CD8+ T-cells were purified using CD8 MACS MicroBeads (Miltenyi Biotec) and fused to BW TCR-negative cells. Briefly, the CD8+ cells were mixed with BW TCR-negative cells in a 1 : 5 ratio and PEG1450 (Sigma) was added; thereafter, the cells were resuspended in minimal essential medium (MEM) and incubated for 5 min at 37 °C, and cultured for 48 h before selection in HAT (Sigma Aldrich) and HT (HAT without aminopterin; Gibco, Invitrogen) media. Hybridized cell clones were maintained in complete BW medium. Positive clones were retested and regularly subcloned and checked for CD3 expression.
HCV replicon cells. Hepatoma Huh-7-Lunet cells designated Lunet-HlaA2-neoET, Lunet-blr/neo ET or Lunet-HlaA2 (neo) or Lunet-HLA-A2-Luc-ubi-neo Con1 were generated by a protocol similar to that described previously (Ahlén et al., 2007). The Lunet-HlaA2-neoET has ectopic HLA-A2 expression and a selectable HCV subgenomic RNA replicon of genotype 1b, harbouring replication-enhancing mutations in NS3 and NS4B (Con1-ET). This is the same for Lunet-HLA-A2-Luc-ubi-neo Con1, but it also co-expresses the firefly luciferase gene. The control replicon cell line Lunet-blr/neo ET was transduced with an empty viral vector without the HLA-A2 gene. These cells were maintained in complete DMEM with addition of blasticidin S hydrochloride (3 mg ml⁻¹) and G418 (1 mg ml⁻¹). Lunet-HlaA2, the other control cell line (expressing HLA-A2 under blasticidin selection but without the HCV replicon), was maintained in the same DMEM as above but with 3 mg blasticidin S hydrochloride ml⁻¹ and no G418. For the co-culture experiments, the Lunet cells were washed and reseeded 1 day before in antibiotic-free medium. All medium and supplements were purchased from Invitrogen.
Measurement of cytokine release. Target cells (2 × 10⁵) were cultured for 24 or 48 h with equal numbers or the indicated number of each BW-T cell hybrid clone in duplicate, with or without different viral peptides. Concentrations of mouse IL-2 and IFN-γ in the supernatant were measured using mouse IL-2 or IFN-γ ELISA (Mabtech) and calculated against standard curves generated with a cytokine standard.
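The back-calculation of cytokine concentrations from the ELISA standard curve is not detailed in the protocol; the Python sketch below illustrates one common way to do it, fitting a four-parameter logistic (4PL) curve to the standard dilutions and inverting it for the sample readings. The standard concentrations, absorbance values, and the choice of a 4PL model are illustrative assumptions, not values from the original study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic model relating concentration to absorbance."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

def inverse_four_pl(od, bottom, top, ec50, hill):
    """Invert the 4PL curve to back-calculate concentration from an absorbance reading."""
    return ec50 * ((top - bottom) / (od - bottom) - 1.0) ** (-1.0 / hill)

# Hypothetical IL-2 standard series (pg/ml) and duplicate-averaged absorbances.
std_conc = np.array([1000, 500, 250, 125, 62.5, 31.25, 15.6])
std_od = np.array([2.10, 1.75, 1.30, 0.85, 0.52, 0.30, 0.18])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.5, 100.0, 1.0], maxfev=10000)

sample_od = np.array([1.10, 0.45])            # duplicate supernatant readings
print(inverse_four_pl(sample_od, *params))    # estimated pg/ml for each well
```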
TCR gene typing and sequencing, and retroviral TCR constructs. Total RNA was reverse-transcribed to cDNA using SuperScript III RT enzyme (Invitrogen). TCR variable alpha (VA) and variable beta (VB) chain typing was done by PCR with Platinum Taq polymerase (Invitrogen) and primer sets covering the entire murine TCR VA and VB repertoire. PCR products were cloned (pCR-4 TOPO system, Invitrogen) and the plasmids were sequenced by Eurofins MWG (Ebersberg). Sequences were analysed and classified according to the nomenclature given in the Immunogenetics database (European Bioinformatics Institute, Cambridge, UK). Full-length TCR genes were amplified and, after sequence confirmation, synthetic genes linked with the autoprotease 2A sequence were made (GeneArt, Life Technologies) and assembled into the pMP-71-G-Pre retroviral plasmid (kindly provided by Wolfgang Uckert, Max-Delbrück-Center for Molecular Medicine). The Phoenix amphotropic packaging line (Nolan laboratory, Stanford University) was used to package the expression plasmids for expression in primary human T-cells. Calcium phosphate transfection was done with 20 mg of each vector (pMP71-NS3-H4, pMP71-NS3-F8, pMP71-NS5-19, pMP71-NS5-69 and pMP71-EGFP) and 12.5 ml of 50 mM chloroquine.
Retrovirus transduction. Human PBMCs were stimulated with 600 or 300 U ml⁻¹ IL-2 (R&D Systems or PeproTech) and 50 ng ml⁻¹ anti-CD3 (OKT-3; eBioscience). Lymphocytes were harvested and transduced by spinoculation on retronectin-coated wells with polybrene (Millipore). Spinoculation was repeated the next day and TCR surface expression was analysed by FACS 72 h after the first spinoculation.
Bioluminescence cell imaging and transaminase measurement. Either 50 000 or 100 000 Lunet-HLA-A2-Luc-ubi-neo Con1 cells were co-cultured with transduced or mock-transduced T-cells in a ratio of 2 : 1. Following 20 h of co-incubation, the medium was replaced with luciferin solution prior to imaging with a charge-coupled device camera. Signals from bioluminescent Lunet-HLA-A2-Luc-ubi-neo Con1 cells were analysed with the Living Image Software version 4.2 and the IVIS Spectrum instrument (Caliper Life Sciences). The AST level in supernatants was quantified by a validated AST assay at the Clinical Chemistry Laboratory at the Karolinska University Hospital. | 2018-04-03T02:46:30.442Z | 2012-02-01T00:00:00.000 | {
"year": 2012,
"sha1": "38ce9c93af596ae334b107723c618ea695c72c6f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1099/vir.0.037903-0",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "12af5844603e3fb509e44d2a5642efac22350f6e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
231754811 | pes2o/s2orc | v3-fos-license | Factors Affecting a Short-Term Response to Anti-VEGF Therapy in Diabetic Macular Edema
Diabetic macular edema (DME) is a common cause of visual impairment in patients with diabetes. Although intravitreal anti-vascular endothelial growth factor (VEGF) injections were efficacious in clinical trials, several patients exhibited a poor response. This study aimed to compare clinical features between patients who were susceptible to intravitreal anti-VEGF injections for DME and those who were not. A single-center, retrospective study of 102 such patients was conducted (123 eyes; mean ± standard deviation age, 63.4 ± 10.8 years; 57.8% males). Systemic and ocular data, assessed at baseline and after a month, were compared between good (>20% decrease in central macular thickness (CMT)) and poor (≤20% decrease in CMT) responders using the Mann–Whitney U test/Fisher’s exact test. Eighty-one eyes (65.9%) were good responders. The glycosylated hemoglobin level was higher (p = 0.011) in poor (7.5% ± 0.94%) than in good (7.04% ± 1.19%) responders. The foveal avascular zone was larger (p = 0.0003) in poor (0.67 ± 0.33 μm2) than in good (0.47 ± 0.23 μm2) responders. The number of microaneurysms in the pericapillary network was higher (p = 0.0007) in poor (2.7 ± 2.2) than in good (1.4 ± 2.0) responders. Baseline glycemic control and macular ischemia may be associated with the short-term response to intravitreal anti-VEGF injections.
Introduction
Diabetic macular edema (DME) is a common cause of visual impairment in patients with diabetic retinopathy. This microvascular complication is estimated to affect one in 15 patients with diabetes; thus, there are more than 20 million cases worldwide [1,2]. Vascular endothelial growth factors (VEGFs) are known to play an important role in increasing vascular permeability in patients with diabetic retinopathy [3]. Intravitreal anti-VEGF injections are recognized to improve visual outcomes and decrease macular fluid in patients with DME [4,5]. Such agents are the current gold standard in the treatment of DME, and their safety and efficacy have been proven in large randomized clinical trials, as well as in real-world studies [6]. However, the pathogenesis of DME is complex, with multiple factors contributing to its pathophysiology, including angiogenic, inflammatory, hypoxic, and hemodynamic processes that lead to the breakdown of the blood-retinal barrier and leakage of the intraretinal fluid [7]. This may be why certain patients respond moderately or even poorly to anti-VEGF therapy. For example, in two landmark clinical trials, 14.4% (RIDE) and 15.2% (RISE) of patients experienced no improvement or decreased visual acuity at the primary endpoint, even though patients with DME in clinical trials receive far more injections than patients in clinical practice [5,8].
A number of studies have been conducted to examine which factors influence the clinical outcomes of DME treatment with anti-VEGF agents [9][10][11][12][13][14][15][16]. However, the results were inconsistent and most focused on systemic factors, visual acuity, or macular anatomical factors using optical coherence tomography (OCT), but not fluorescein angiography (FA) [9][10][11][12][13][14]. The purpose of this study was to elucidate which clinical features affect the anatomical response to intravitreal anti-VEGF therapy in patients with DME; we discovered that FA findings were highly associated with that response.
Patients
The medical records of consecutive patients who received their first intravitreal anti-VEGF injections for center-involving DME at Juntendo University Urayasu Hospital from March 2014 to October 2015 were evaluated retrospectively. All patients underwent systemic and ophthalmological examination before their first injection. Systemic examination included measurements of blood pressure, serum glycosylated hemoglobin (HbA1c), serum creatinine, and urine protein. Ophthalmological examination included best-corrected visual acuity (BCVA) measurement, intraocular pressure measurement, fundus color photography, OCT (Cirrus HD-OCT; Carl Zeiss Meditec AG, Jena, Germany), and FA (TRC-50DX; Topcon Corp., Tokyo, Japan). BCVA was determined using a Landolt C chart and converted from the decimal system to the logarithm of the minimum angle of resolution (logMAR).
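The decimal-to-logMAR conversion mentioned above is a simple logarithmic relation; the short sketch below shows it explicitly. The example acuity values are ours and are included purely for illustration.

```python
import math

def decimal_to_logmar(decimal_va: float) -> float:
    """Convert decimal visual acuity to logMAR: logMAR = -log10(decimal acuity)."""
    return -math.log10(decimal_va)

# Example: decimal acuity 0.5 corresponds to logMAR ~0.30; 1.0 corresponds to logMAR 0.0.
print(round(decimal_to_logmar(0.5), 2), decimal_to_logmar(1.0))
```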
The exclusion criteria were as follows: 1. prior vitreoretinal surgery; 2. any other treatment for DME (such as previous anti-VEGF therapy, topical steroid therapy, or focal/grid laser photocoagulation for DME) within 6 months before the first injection; 3. insufficient quality of OCT or FA; 4. the presence of any retinal diseases other than diabetic retinopathy (such as macular degeneration or retinal vascular occlusions); 5. substantial vitreomacular traction or the presence of an epiretinal membrane. All the procedures and measurements adhered to the tenets of the Declaration of Helsinki of 1975, revised in 2013, and the study was approved by the ethics committee at the Juntendo University Urayasu Hospital.
Intravitreal Anti-VEGF Injections
All intravitreal anti-VEGF injections were performed in the operating room after obtaining patients' written informed consent. The medications used were either aflibercept (2 mg/0.05 mL) or ranibizumab (0.5 mg/0.05 mL). A single introductory intravitreal injection of aflibercept or ranibizumab was administered, followed by pro re nata injections. OCT was performed one month after injection. The response to anti-VEGF injections was defined as the reduction in central macular thickness (CMT) at one month after injection compared to the CMT before injection. Cases in which CMT reduction was more than 20% were designated to the "good response" group and the others were designated to the "poor response" group.
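As a minimal illustration of this classification rule, the sketch below computes the percentage CMT reduction at one month and assigns each eye to the good- or poor-response group. The function and variable names are ours; the 20% threshold is the one defined above.

```python
def classify_response(cmt_baseline_um: float, cmt_one_month_um: float,
                      threshold_pct: float = 20.0) -> str:
    """Label an eye as a 'good' or 'poor' responder from the change in CMT.

    A reduction in central macular thickness (CMT) greater than the threshold
    (20% by default, as in the study definition) counts as a good response.
    """
    reduction_pct = (cmt_baseline_um - cmt_one_month_um) / cmt_baseline_um * 100.0
    return "good" if reduction_pct > threshold_pct else "poor"

# Example: an eye going from 520 um to 380 um is a ~26.9% reduction, i.e. a good responder.
print(classify_response(520.0, 380.0))
```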
Imaging
Cases were classified into four groups, using OCT, based on DME morphology, according to previous reports [17,18]: cystoid macular edema (CME), sponge-like diffuse retinal thickening (SDRT), serous retinal detachment (SRD), and all three factors combined (FULL) (Figure 1a). The foveal avascular zone (FAZ) was determined from the FA images by identifying the innermost capillaries around the fovea (the perifoveal capillary network (PCN)); calculations were performed using ImageJ, as detailed in a previous report [19] (Figure 1b). Microaneurysms (MAs) in the PCN were counted using both early- and late-phase FA images, as previously reported [19] (Figure 1b).
Statistical Analysis
Data are expressed as means ± standard deviation and were analyzed using the Mann-Whitney U-test or Fisher's exact test. The intensity of correlation between FAZ size and number of MAs in PCN was evaluated by Pearson's correlation coefficient r. FAZ size and number of MAs in PCN between four groups of DME morphology were evaluated using one-way ANOVA with Tukey's post hoc test. A p-value less than 0.05 was considered statistically significant. All statistical analyses were performed using Prism 6 (GraphPad Software, Inc., San Diego, CA, USA).
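The statistical comparisons described above can be reproduced with standard SciPy and statsmodels calls; the snippet below is a schematic pipeline run on synthetic arrays rather than the study data, and the sample sizes and variable names are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical continuous variable (e.g., HbA1c) in good vs. poor responders.
good = rng.normal(7.0, 1.2, 81)
poor = rng.normal(7.5, 0.9, 42)
u_stat, p_mw = stats.mannwhitneyu(good, poor, alternative="two-sided")

# Hypothetical 2x2 contingency table (e.g., sex vs. response group).
odds, p_fisher = stats.fisher_exact([[50, 31], [24, 18]])

# Correlation between FAZ size and number of MAs in the PCN (synthetic values).
faz = rng.normal(0.55, 0.25, 123)
mas = rng.poisson(2.0, 123)
r, p_corr = stats.pearsonr(faz, mas)

# One-way ANOVA with Tukey's post hoc test across four DME morphology groups.
values = rng.normal(0.5, 0.2, 120)
groups = np.repeat(["CME", "SDRT", "SRD", "FULL"], 30)
f_stat, p_anova = stats.f_oneway(*[values[groups == g] for g in np.unique(groups)])
print(p_mw, p_fisher, r, p_anova)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```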
Baseline Characteristics
In total, 102 patients (123 eyes) were included in this study. The average age of the patients was 63.4 ± 10.8 years (range from 29 to 87). Fifty-nine (57.8%) were male and 43 (42.2%) were female. All patients were diagnosed with type 2 diabetes. Baseline characteristics, according to response to anti-VEGF therapy, are summarized in Table 1. The level of HbA1c was significantly lower in good than in poor responders (7.04% ± 1.19% vs. 7.50% ± 0.94%; p = 0.011).
Baseline Ocular Characteristics
We divided patients into four groups based on DME morphology using OCT according to previous reports [17,18]; a representative OCT image for each group is presented in Figure 1a. FAZ size and the number of MAs were determined from the FA images according to a previous report [19]; representative FA images are presented in Figure 1b. Baseline ocular characteristics according to response to anti-VEGF therapy are summarized in Table 2. The FAZ in poor responders was significantly larger than that in good responders (0.67 ± 0.33 µm² vs. 0.47 ± 0.23 µm²; p = 0.0003). The number of MAs in the PCN in poor responders was also significantly higher than that in good responders (2.7 ± 2.2 vs. 1.4 ± 2.0; p = 0.0007). There was a significant correlation between the number of MAs in the PCN and FAZ size (r = 0.42, p < 0.0001) (Figure 2a). Among poor responders, CME was the most common type of macular edema (47.6%) (Table 3). On the other hand, SDRT was the most common type among good responders (39.5%) (p = 0.060). Eyes with CME had a significantly higher number of MAs in the PCN (p = 0.0024) and a significantly larger FAZ size (p = 0.0003) than eyes with other types of macular edema (Figure 2b,c). Moreover, among eyes with CME, poor responders had significantly larger FAZs and a significantly higher number of MAs in the PCN than good responders (p = 0.039 and p = 0.024) (Figure 2d,e).
Discussion
We investigated the factors affecting the anatomical outcome in patients with DME treated with ranibizumab or aflibercept. Similar studies have been performed previously [9][10][11][12][13][14][15][16]; however, to our knowledge, no detailed examination of FAZ size and the number of MAs, using FA images, has been reported for such a study.
In this study, we discovered that a high baseline HbA1c level, a large baseline FAZ, and a high baseline number of MAs in the PCN were associated with a poor response to anti-VEGF injections. The influence of baseline HbA1c on the outcome of anti-VEGF therapy for patients with DME was controversial in previous studies [10,11,16,[20][21][22]. Chen et al. [16] discovered that HbA1c was a prognostic factor for visual outcome only in eyes that responded to intravitreal ranibizumab injection. In addition, Matsuda et al. [11] observed a statistically significant improvement in visual acuity in patients with HbA1c ≤7.0% after anti-VEGF therapy, whereas a statistically significant but smaller improvement in visual acuity was observed in patients with HbA1c >7.0%. On the other hand, Singh et al. [20] revealed that vision improvement upon ranibizumab injection was not affected by systemic factors such as HbA1c, renal function, or blood pressure. A post-hoc analysis of the RIDE/RISE trials demonstrated that improvement in visual acuity, remission of macular edema, and improvement in the severity of diabetic retinopathy following ranibizumab treatment appeared to be independent of baseline HbA1c levels [10]. In our study, the HbA1c level, but not the presence of diabetic nephropathy and hypertension, was associated with the response to anti-VEGF therapy. Patients with a lower HbA1c level tended to be more susceptible to anti-VEGF therapy than those with a higher HbA1c level were, suggesting that blood-sugar control may play a role in the anatomical outcome of anti-VEGF treatment for patients with DME.
In previous reports [23,24], macular ischemia (a FAZ larger than 1000 µm²) was statistically significantly associated with a worse visual, but not anatomical, outcome. However, in those studies, those without macular ischemia were categorized into one group. In our study, most of the subjects (117/123, 95.1%) did not have macular ischemia according to that definition; we considered the whole range of FAZ sizes, without categorization, and revealed that FAZ size was associated with the anatomical response to anti-VEGF therapy. We also demonstrated that the number of MAs was associated with the response to anti-VEGF therapy. Murakami et al. [19] reported that, compared with eyes with SRD or SDRT, those with CME had more MAs in the PCN and a larger FAZ upon FA. In our study, CME was the most common of the four DME types among poor responders (47.6%), while SDRT was the most common type among good responders (39.5%). We have also demonstrated that FAZ size and the number of MAs in the PCN were higher in eyes with CME than those in eyes with SDRT, SRD, or FULL. Moreover, in our study, there was a correlation between the size of the FAZ and the number of MAs in the PCN.
In terms of OCT-based morphologic findings, bevacizumab appears less effective in the SRD type of DME than in the others, although Kim et al. reported that changes in CMT and BCVA from baseline were not significantly different between groups at 12 months [17,25]. Seo et al. [14] divided their patients into only three groups based on DME type: SDRT, CME, and SRD. They discovered that ranibizumab was most effective for treatment of the SDRT type of DME, and the fewest injections were needed for that type. In our study, SDRT was also the most common DME type among good responders. Shimura et al. [17] reported that the effectiveness of bevacizumab in reducing macular edema was greater in the SDRT and CME groups than in the FULL and SRD groups. However, they also mentioned that two subgroups of CME could be distinguished according to their response to anti-VEGF therapy. Taken together, we suggest that eyes with CME are the least susceptible to anti-VEGF therapy, corresponding to their large number of MAs in the PCN and their large FAZs. Indeed, in the eyes with CME, poor responders had a larger FAZ and a higher number of MAs in the PCN than good responders did in our study. However, it is unclear how FAZ size and the number of MAs in the PCN interact with CME. It has been shown in several publications that the histopathology of CME consists of liquefactive necrosis of Müller cells, which may lead to cystoid spaces [19,26,27]. Murakami et al. [19] hypothesized that cystoid spaces, in which neuroglial cells have been necrotized, would not produce the growth factors necessary for the maintenance of the inner blood-retinal barrier. They also hypothesized that the imbalance between intra- and extramural pressure in the capillaries around cystoid spaces would result in weak points in the capillary wall, leading to the development of MAs. From that perspective, we hypothesize that CME characterized by MAs in the PCN is caused by the necrotization of neuroglia, and that this type of CME is less susceptible to anti-VEGF therapy, which does not rescue neuroglial necrosis. Further investigation to verify this hypothesis is warranted.
We separated patients into good and poor responders according to the percentage of CMT reduction between baseline and one month after primary intravitreal anti-VEGF injection. Several previous studies have also defined the treatment response according to anatomical outcomes after intravitreal anti-VEGF therapy [12,28,29]. For example, Bressler et al. [12] divided subjects into four categories. Similar to that in our study, they set the CMT reduction threshold to 20%, but they analyzed the reduction at three different time points during the first treatment year. Koyanagi et al. [29] defined "immediate responders" as those with a more than 25% decrease in CMT at three months after treatment, as opposed to "delayed responders," who did not exhibit such a decrease at the same time point. We defined "good responders" as those with a more than 20% decrease in CMT at one month after initial treatment. Shah et al. [30] revealed that anatomical outcomes after one injection were predictive of OCT findings at three months. This suggests that anatomical outcomes at one month after injection may indicate whether anti-VEGF therapy is effective, allowing for an early switch, if necessary, to e.g., corticosteroid treatment. Indeed, Cho et al. [31] evaluated the short-term efficacy of intravitreal bevacizumab and posterior sub-Tenon triamcinolone injections in eyes with different DME types. They discovered that the reduction of intraretinal edema, such as CME, was greater with triamcinolone than with bevacizumab. As anti-VEGF therapy is expensive, it would be helpful to be able to predict whether a patient will respond thereto in the early phases of treatment.
OCT angiography (OCTA) is a relatively new technology and a useful tool to evaluate microvasculature. Although it is a noninvasive modality and can be used to produce three-dimensional images of the retinal microvasculature, it does not allow the detection of all MAs; MAs are typically easier to visualize using FA [32,33]. On the other hand, in terms of the FAZ size, OCTA may be preferable to FA as FAZ edges are easier to delineate using OCTA than FA, and FAZ size appears larger using FA than OCTA [34]. In this study, we measured the FAZ size and the number of MAs in the PCN using FA rather than OCTA to improve the detection of MAs. Lee et al. [35] investigated the structural integrity of the superficial and deep capillary plexuses using OCTA in patients with DME, as well as their association with the response to anti-VEGF treatment. They observed that poor responders tended to exhibit more MAs in the deep capillary plexus and a larger FAZ size. Our FA results are consistent with the OCTA results of Lee et al. [35] in terms of the number of MAs and the FAZ size, despite using different modalities and despite the fact that we evaluated MAs only in the PCN whereas Lee et al. evaluated MAs using macular images. Moreover, we analyzed the interaction between FA findings and DME morphology associated with the response to anti-VEGF therapy, which distinguishes our study from previous studies. Further analyses using both FA and OCTA are needed.
The limitations of this study are inherent to its retrospective and short-term nature. The small sample size of our study may have attenuated the statistical power for detecting differences between the groups. To determine the efficacy of intravitreal anti-VEGF treatment for patients with DME, prospective studies with larger sample sizes are needed. Additionally, as the response to anti-VEGF treatment has been defined in many ways in different studies, it may be difficult to compare our study to those using different anatomic and visual criteria.
In conclusion, this study revealed that glycemic control and the level of ischemia in the macula at baseline may be associated with the short-term response to the first intravitreal anti-VEGF injection, and may help predict which patients with DME should be considered for a change to anti-inflammatory treatment.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Juntendo University Urayasu Hospital (protocol code 2020-065).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy. | 2021-02-03T06:17:16.176Z | 2021-01-25T00:00:00.000 | {
"year": 2021,
"sha1": "4bcb20c2d5dfa94307051c759b336c59c0087bb6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1729/11/2/83/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "634f79bb03662b602ab4702fcbc13dabcfb61e41",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210248920 | pes2o/s2orc | v3-fos-license | Microstructure and mechanical properties of Ag and Ge multi-micro alloyed Al-(3.2) Cu-(2) Li-(0.6) Mg alloys
This work represents an attempt to investigate the possible improvement obtained by partially replacing Cu with Li in Al-Cu alloys used in aerospace industries. It is well established that Al-Li alloys are attractive for military and aerospace applications because their properties are superior to those of conventional Al alloys. Li reduces the weight of the aircraft; hence, higher speeds can be attained. Several other improvements are also introduced. Alloys were prepared in this work by casting; they include Al-Li-Cu-Mg-Zr, Al-Li-Cu-Mg-Zr-Ag, and Al-Li-Cu-Mg-Zr-Ag-Ge alloys. Results showed that the addition of 0.35% Ag to the Al-Li-Cu-Mg-Zr alloy improved the strength from 42 MPa to 127 MPa and the hardness from 156.64 HV to 173.74 HV when aged at 175 °C. In contrast, the addition of 0.2 Ge to the Al-Li-Cu-Mg-Zr alloy lowered the strength from 127 MPa to 26 MPa while improving the hardness from 156.64 HV to 171.76 HV at the same temperature. Optical microscopy (OM) and X-ray diffraction (XRD) were used to study the structure, surface features, and chemical composition of the alloys. Many phases were observed that influence the mechanical and chemical properties, such as Al3Li, Al3Zr, Al2MgLi, omega (Ω) precipitates, Ag2CuLi, and Ge4Zr5. The pin-on-disc testing technique was used to investigate sliding wear. A significant increase in wear resistance of the alloy was found; the weight loss was only 60% of that of the other alloys.
Introduction
Aluminum-lithium alloys feature unique properties such as low density, high elastic modulus, high tensile strength, corrosion resistance, weight savings, and high hardness compared with traditional aluminum alloys, so they are suitable for space use [1,2]. Space industries always require materials with low weight, high strength, fatigue and corrosion resistance, and thermal stability. Aluminum alloys, for example Al-Cu-Mg and Al-Zn-Mg, have good mechanical properties in addition to their low density. However, the development of this industry has created a constant need for lighter alloys, for several reasons: increased aircraft operating efficiency, reduced fuel consumption, and improved flight characteristics; this has increased interest in aluminum-lithium alloys [1,3]. Lithium is the lightest metallic element, with a density of about 0.53 g·cm⁻³, and is the third element in the periodic table (Li). It is used together with heavy hydrogen in thermonuclear applications because of its light weight and the binding energy of its nucleus. When lithium is alloyed with aluminum, whose density is 2.7 g·cm⁻³, the density of the resulting alloy is reduced below that of pure aluminum.
Lithium is the only metal that, when added to aluminum in an alloy, increases the modulus of elasticity while reducing the density; each percent of lithium added to the aluminum decreases the density by about 0.08 g·cm⁻³ [1,2,4]. These alloys have very high resistance to chemical corrosion compared to other aluminum alloys, and they have higher thermal stability than other aluminum alloys [2,3]. The main problem with the use of these alloys lies in the difficulties and constraints of their preparation. Lithium is a very reactive element and burns rapidly when exposed to air. Several approaches have been investigated to solve this problem, such as partially protecting the lithium with aluminum foil to prevent it from being exposed to air, or adding it under vacuum or under an atmosphere of an inert gas such as argon in order to prevent its oxidation; such methods add treatment costs and may leave oxides or other inclusions in the fused alloy after solidification.
The first use of lithium as a casting element with aluminum took place more than half a century ago, and these alloys have undergone many stages of development; much research has been carried out for this purpose in universities and industrial companies. They were not used commercially until 1958, partly because of their high sensitivity [1,3,4].
Experimental procedure
Preparation of samples
The chemical compositions of the Al-Li-Cu-Mg alloys examined in this investigation are shown in Table 1. The alloys were melted and cast in a vacuum induction furnace under an Ar atmosphere; the ingots were then homogenized and hot rolled to a thickness of 4 mm for heat treatment. The samples obtained were solution treated at 560 °C for 1 h, quenched in ice water, and then aged at 175 °C for different time periods.
XRD
The X-ray diffraction (XRD) measurements were carried out in an X'Pert PRO MPD diffractometer using Cu Kα radiation at 40 kV and 35 mA over the 2θ range from 20° to 120°.
Microstructure examination
Samples of the alloys were prepared for microscopic examination using an OLYMPUS BH-2 microscope. Grinding was carried out on a KNUTH-ROTOR machine, and polishing was done on a mechanical polishing machine (CREICHRT DP-20). Etching was performed with Keller's reagent of the following composition (1 cm³ HF, 1.5 cm³ HCl, 2.5 cm³ HNO₃, 95 cm³ H₂O), with an etching time ranging from 10 to 15 s.
Hardness Tests
The microhardness was measured for all alloys used in this work after aging; for each test, the average of three readings was recorded. A digital micro Vickers hardness tester (HV-1000) was used with a load of 1000 g and a dwell time of 10 s. Microhardness values were obtained using the equation HV = 1.8544 P/d², where HV is the Vickers hardness (kg/mm²), P is the applied load (kg), and d is the average diagonal of the indentation (mm).
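As a worked illustration of the Vickers relation quoted above, the short sketch below converts the applied load and the averaged indentation diagonals into a hardness number; the three diagonal readings used are invented for the example.

```python
def vickers_hardness(load_kgf: float, diagonals_mm: list[float]) -> float:
    """Compute Vickers hardness HV = 1.8544 * P / d^2.

    load_kgf     : applied load P in kilograms-force (the 1000 g load equals 1 kgf).
    diagonals_mm : individual indentation diagonal readings in mm, averaged before use,
                   as done for the three readings recorded per test.
    """
    d = sum(diagonals_mm) / len(diagonals_mm)
    return 1.8544 * load_kgf / d ** 2

# Example: a 1 kgf load and an average diagonal of ~0.109 mm gives roughly 156 HV,
# close to the peak hardness reported for alloy A.
print(round(vickers_hardness(1.0, [0.108, 0.109, 0.110]), 1))
```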
Wear Tests
Wear samples with dimensions of 20 × 29 × 5 mm were weighed before and after the wear test on a sensitive balance (±0.0001 g). A rotational pin-on-disc micro-tester (MT4003, version 10) was adopted; sliding wear was determined at 300 rpm with a constant track radius of 6 mm against a martensitic steel disc (Ra = 0.265 µm, HV = 852). The wear tests were carried out under a load of 15 N on samples ground up to 2500 grit, for durations of 10, 20, 30, 40, 50, and 60 min, at room temperature (25 °C).
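The sliding distance and a mass wear rate implied by these test parameters can be computed directly; the sketch below does so for the stated 300 rpm, 6 mm track radius, and test durations. The mass-loss value used at the end is invented purely for illustration.

```python
import math

def sliding_distance_m(rpm: float, radius_mm: float, minutes: float) -> float:
    """Total sliding distance for a pin-on-disc test: 2*pi*r per revolution."""
    revolutions = rpm * minutes
    return 2.0 * math.pi * (radius_mm / 1000.0) * revolutions

def wear_rate_g_per_m(mass_loss_g: float, distance_m: float) -> float:
    """Mass wear rate normalised by sliding distance."""
    return mass_loss_g / distance_m

# Test conditions from the text: 300 rpm, 6 mm radius, durations of 10-60 min, 15 N load.
for t in (10, 20, 30, 40, 50, 60):
    d = sliding_distance_m(300.0, 6.0, t)
    print(f"{t:2d} min -> {d:7.1f} m sliding distance")

# Example wear rate for a hypothetical 0.0045 g loss over the 60 min test.
print(wear_rate_g_per_m(0.0045, sliding_distance_m(300.0, 6.0, 60.0)))
```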
Tensile Tests
Standard tensile specimens of the alloys were prepared with the dimensions shown in the corresponding figure.
XRD
Alloys D, E, and G were heated at 560 °C for about 1 h, at which temperature they were entirely single-phase α (supersaturated solid solution), and were then quenched in ice water.
For alloy A, it was found that aging heat treatment for 12 h led to precipitation of the intermetallic compounds Al4Li9, Al3Zr, Al3Zr5, and Li6.46Mg, accompanied by decomposition of the Al3Zr phase. Precipitation of fine Al3Zr particles helps control recrystallization, as shown in Figure 3. For alloy C, aging heat treatment for 8 h led to precipitation of the intermetallic compounds Al2.94Ge, AlGe0.064, Ge4Zr5, and AlGe0.333, accompanied by decomposition of the Ge4Zr5 phase. Precipitation of fine Ge4Zr5 particles leads to increased strength, as shown in Figure 5. Some expected phases did not appear in the patterns, possibly because of the age of the instrument.
Microstructure
All three alloys in the hot-rolled and aged condition were examined using an optical microscope. Alloy A contains the Al3Zr phase, which is fully coherent with the aluminum matrix and is the main source of strength. In alloy B, the AgAlLi2 phase was found, while in alloy C the precipitated Ge4Zr5 phase plays the main hardening role, as shown in Figure 6 (A, B, C).
Hardness
The aging treatment of this alloy was carried out in the usual manner; Figure 7 shows the relationship between hardness and aging time for alloy A at 175 °C. The peak hardness, reached at 12 h, is 156.6 HV. The superior hardness of this alloy arises because it contains several solute elements (Mg, Cu, Li, Zr), which form the strengthening phases (T1, S, δ´). The difference in the time required to reach peak hardness is attributed to the slow growth of the main δ´ phase in alloy D, owing to its low lattice mismatch, so that peak aging does not occur until the equilibrium δ´ and the other phases become apparent. Alloy D is superior to the conventional Al-Li alloy.
Wear
All alloys were subjected to wear tests under a load of 15 N for different times (10, 20, 30, 40, 50, and 60 min) at room temperature. From a tribological point of view, the sharp rise in weight loss within the initial cycles is attributed to asperity smoothing in the initial stage of contact; the results are shown in the figures below. In this case the material responds to stress by brittle fracture rather than plastic deformation, so the load is concentrated over a small contact area on the flat sample surface. Hence, the surface asperities fracture in a brittle manner at the contact positions, and the occurrence of fracture leads to increased friction; this provides an additional mechanism for the dissipation of energy at the sliding contact. As the time (sliding distance) increases, the asperities of the disc and of the sample surface are worn down, leading to a more conformal (favorable) contact.
In other words, the mean contact pressure decreases slowly (the contact area increases), so the scale of fracture, when it occurs, is small, leading to a reduction in the frictional force and the weight loss. It is well established that the wear rate depends on the normal load and on the hardness or yield strength of the material. Figures 10-15 show the wear behavior of the tested samples. The addition of Zr reduces the weight loss (by 7.8) owing to the formation of the β´ (Al3Zr) phase. The addition of Ag reduces the weight loss (by about 4.6) because it gives more of the compounds T1 (Al2CuLi), θ´ (Al2Cu), δ´ (Al3Li), and Ω. The addition of Ge reduces the weight loss (by 16.6) because it gives more of the compound Ge4Zr5.
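The stated dependence of wear rate on normal load and hardness corresponds to the classical Archard relation, reproduced below for reference; the equation is standard tribology background and is not given explicitly by the authors.

```latex
% Archard wear relation (classical form):
% V = wear volume, K = dimensionless wear coefficient,
% W = normal load, s = sliding distance, H = hardness of the softer surface.
V = \frac{K \, W \, s}{H}
\qquad\Longrightarrow\qquad
\frac{V}{s} \propto \frac{W}{H}
```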
Tensile properties
The results of the tensile tests and the mechanical properties (tensile load, percentage elongation, and ultimate tensile strength) of the prepared alloys are listed in Table 2 and shown in Figures 16-18. In alloy A, a low tensile strength of 42 MPa was observed. This is attributed to the zirconium addition, which refines the grains and forms the less stable Al3Zr phase that deposits on the grain boundaries. Alloy B has a tensile strength of 127 MPa; the reason is the presence of silver, which promotes the formation of the AgAl2Li, Al2MgLi, and Al3Li phases. In addition to solid-solution strengthening, the presence of copper together with silver produces precipitation-strengthening zones that increase tensile strength and toughness. In alloy C, the tensile strength decreased to 26 MPa owing to the presence of germanium; when added to the aluminum-lithium alloy, germanium increases the solubility of lithium in aluminum and thus inhibits precipitation of the Ge4Zr5 and Al3Li phases as long as they remain in solid solution. After aging, the precipitates form as nearly coherent rods that are not cut by the moving dislocations, thereby reducing the tensile strength.
Conclusions
1) Alloy B is superior to alloys A and C in terms of mechanical properties.
2) The use of rolling, thermal treatment, and aging at a temperature of 175 °C improves strength and hardness. | 2019-10-31T08:57:59.621Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "a3d17ea8be7bab22b029530e54e3f3de854a3696",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1294/2/022018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "677110fe2122607c3f0aaf2e71915c5d9231740f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
268179521 | pes2o/s2orc | v3-fos-license | Recent Advances in Thermal Management Strategies for Lithium-Ion Batteries: A Comprehensive Review
: Effective thermal management is essential for ensuring the safety, performance, and longevity of lithium-ion batteries across diverse applications, from electric vehicles to energy storage systems. This paper presents a thorough review of thermal management strategies, emphasizing recent advancements and future prospects. The analysis begins with an evaluation of industry-standard practices and their limitations, followed by a detailed examination of single-phase and multi-phase cooling approaches. Successful implementations and challenges are discussed through relevant examples. The exploration extends to innovative materials and structures that augment thermal efficiency, along with advanced sensors and thermal control systems for real-time monitoring. The paper addresses strategies for mitigating the risks of overheating and propagation. Furthermore, it highlights the significance of advanced models and numerical simulations in comprehending long-term thermal degradation. The integration of machine learning algorithms is explored to enhance precision in detecting and predicting thermal issues. The review concludes with an analysis of challenges and solutions in thermal management under extreme conditions, including ultra-fast charging and low temperatures. In summary, this comprehensive review offers insights into current and future strategies for lithium-ion battery thermal management, with a dedicated focus on improving the safety, performance, and durability of these vital energy sources.
Introduction
In the current landscape of sustainable mobility, the thermal management of lithium-ion batteries (LIBs) in electric vehicles (EVs) has established itself as an essential field of research, crucial to improving the efficiency and ensuring the safety of these energy systems. Battery thermal management systems (BTMSs) play a key role in this context, as they are decisive in keeping LIBs within an optimal temperature range, thus contributing to optimizing their performance and prolonging their lifetime. Recent research [1] highlights that advances in innovative materials and advanced designs in BTMSs are key for the effective management of the heat generated in the charging and discharging processes, which is especially relevant in high-demand applications such as EVs. However, the field faces significant challenges, mainly related to overheating and temperature variations in LIBs. These problems can compromise both the safety and performance of batteries, accelerating their aging and reducing their energy storage capacity. In extreme cases, these problems can even trigger fire or explosion risks due to thermal runaway (TR) [2]. This situation underlines the imperative need for efficient BTMSs that mitigate these risks, maintaining uniformity in battery temperature and avoiding extreme conditions [3,4]. Studies such as those presented in [5] emphasize the critical importance of integrity in BTMS studies, focusing on battery thermal safety as a key element to prevent overheating-related incidents in advanced battery technologies.
• The primary contribution of this work lies in its comprehensive approach, addressing not only thermal efficiency to enhance battery performance but also placing significant emphasis on safety. This is achieved through innovative strategies in the design of BTMSs that tackle both overheating and temperature variations, thereby mitigating risks of accelerated aging and potential fire hazards.
• We contribute to the scientific literature by highlighting the essential role of advanced materials and innovative designs in BTMSs. This work provides a thorough review of recent advancements in this regard, emphasizing how these innovations can be crucial for effective thermal management during charging and discharging processes, especially in high-demand applications such as electric vehicles.
• A significant novelty of this review is the emphasis on researching internal heat generation in lithium-ion batteries. Through a detailed analysis of thermo-electrochemical processes and the impact of variable battery properties on heat generation, this work contributes to a better understanding of the fundamentals underlying battery efficiency and safety.
• This paper highlights a comprehensive evaluation of various thermal management strategies used in EVs. From pulsed operations to hybrid systems combining liquid cooling with PCMs, we provide a complete overview of the advantages and disadvantages of each approach, identifying best practices to optimize thermal efficiency and minimize pressure loss.
• We present specific results from a recent hybrid system that combines liquid cooling channels with PCMs. This work not only highlights the theory behind this innovation but also demonstrates its practical application, optimizing thermal efficiency and addressing pressure loss, which is crucial for successful implementation in EVs.
The rest of this study is structured as follows: Section 2 delves into the fundamentals of thermal management of LIBs and the shortcomings of existing systems. Section 3 focuses on advances in cooling methods, while Section 4 discusses sophisticated thermal management models. Section 5, meanwhile, investigates novel technologies in thermal monitoring and regulation. The aim is to highlight the critical role of BTMSs in the sustainable development of EVs and to make valuable contributions to the field of thermal management research, presenting findings relevant to both specialists and researchers in related fields. Finally, Section 6 concludes the paper.
Thermal Management in Lithium-Ion Batteries
Thermal management in LIBs is critical to their efficient and safe operation, especially in applications such as EVs and energy storage systems. Maintaining these batteries within an optimal temperature range, typically between 20 °C and 40 °C, is essential to prevent reliability problems [12,13]. There are three types of approaches to thermal management: active, passive, and hybrid systems, each with distinctive characteristics and suitable for different applications and requirements [14]. Active systems employ mechanical or electrical means, such as pumps and fans, to regulate the temperature of the batteries [15,16]. These methods, which include air and liquid cooling, are highly effective in dissipating heat but have the disadvantage of increasing system power consumption and thus reducing the overall efficiency of the battery [17]. Despite their effectiveness, these systems require a more complex design and are usually more expensive [18].
Passive systems, which use technologies such as PCMs and heat pipes, rely on natural processes such as conduction and convection for heat transfer [19]. These methods do not require additional energy, which makes them more efficient in terms of energy consumption and simpler in design. However, they may face challenges in their ability to handle high thermal loads or in situations of extreme temperatures [20]. In addition, certain materials such as PCMs may have issues with low thermal conductivity and risk of leakage after melting [21]. Hybrid systems, which combine aspects of active and passive approaches, seek to balance the advantages of both. For example, the integration of PCMs with air or liquid cooling systems can improve temperature control compared to purely passive methods, without reaching the high energy consumption of fully active systems [22]. These hybrid systems offer a promising solution but require careful design and advanced engineering to achieve an optimal balance between energy efficiency and thermal management effectiveness [23]. The proper choice of thermal management system is essential for LIBs, considering factors such as battery size, lifespan, and charge and discharge rates. Advances in new materials, such as nanometer PCMs, and advanced cooling and heating techniques are improving the efficiency and safety of these systems. These innovations are contributing to the increased adoption of batteries in a variety of applications, reducing costs and encouraging the use of cleaner, more sustainable energies [12]. In addition, the integration and compatibility of these systems with the overall EV or storage system design is a challenge [24]. Numerous research studies have been conducted that have proposed various design improvements to increase the efficiency of BTMSs, as detailed in Table 1.
Table 1. Thermal management systems in batteries: comparative analysis.

Ref. [24]. BTMS method: Active. Operating principle: uses forced air flow to cool the batteries in a rectangular container. Key findings: modifications to outlet size and shape significantly decrease system temperature and improve cooling uniformity. Advantages: requires no moving parts, improves temperature uniformity. Disadvantages: limited heat transfer capacity, less effective for high thermal loads.

Ref. [25]. BTMS method: Active. Operating principle: circulates water around the battery pack to dissipate heat. Key findings: more effective for thermal management at low cycling rates. Advantages: effective for thermal management at low cycling rates, improves thermal performance. Disadvantages: not as effective at high cycling rates, may require combination with other systems.

Ref. [26]. BTMS method: Passive. Operating principle: uses PCMs with applied pressure to enhance heat dissipation. Key findings: PCMs show the most promising performance compared to traditional active air/liquid cooling methods. Advantages: maintains stable temperatures without energy consumption, improved performance with pressure. Disadvantages: increased mechanical complexity and costs due to pressure application.

Ref. [27]. BTMS method: Passive. Operating principle: uses PCMs, such as paraffin, to absorb and release heat during phase change. Key findings: provides more uniform temperature distribution compared to air cooling and liquid cooling. Advantages: effective thermal management, uniform temperature distribution; paraffin is resistant and safe. Disadvantages: low thermal conductivity of paraffin, slow thermal response.

Ref. [28]. BTMS method: Passive. Operating principle: proposes a passive BTMS using a tetrahedral lattice porous plate for drone batteries.

Notable challenges are identified in BTMSs, especially under harsh operating conditions. A key limitation lies in the low thermal conductivity of PCMs, which leads to uneven temperature distribution within the battery cells, adversely affecting the performance and efficiency of LIBs [33]. This problem is intensified in extreme situations, such as discharge rates higher than 1 C or environments with ambient temperatures higher than 35 °C, where temperature differences of less than 3 °C can be observed between individual cells, significantly impacting the performance and durability of LIBs [35]. Also, substantial limitations are observed in current BTMSs, particularly in fast-charging scenarios and high ambient temperatures, which can result in inefficient thermal management and increase the risk of TR [36]. In addition, air-cooled and passive cooling systems show a limited ability to adapt quickly to variations in thermal load, highlighting the importance of developing more dynamic and adaptive BTMSs to improve battery thermal stability over a wider range of operating conditions [37]. This situation is further complicated by the inadequacy of air-cooled and PCM-based systems in contexts of high ambient temperatures or high charge/discharge rates, where the poor thermal conductivity of these systems compromises the long-term performance of the batteries [38].
Innovations in Cooling Approaches for Battery Management Systems
Advances in refrigeration techniques, both single-phase and multiphase, have been significant. Single-phase refrigeration, although simpler in design, faces limitations in its heat transfer capacity compared to multiphase techniques [39,40]. Furthermore, one must not only understand the differences in their effectiveness and where they can be best applied, but also consider how improvements in materials, system design, and implementation strategies can help overcome current obstacles. For example, in single-phase cooling, new nanofluids are being explored as a possible solution to improve thermal conductivity and make heat transfer more efficient. Several studies have examined nanofluids with different combinations of base fluids (such as water, ethylene glycol, and engine oil) and nanoparticles (such as alumina (Al2O3), iron oxide (Fe2O3), copper oxide (CuO), and titanium dioxide (TiO2)). Although the potential of engine oil nanofluids with Fe2O3 has not yet been fully investigated, a significant improvement in the cooling of the battery model has been observed by increasing the Fe2O3 concentration from 2% to 5%, which resulted in a decrease in its temperature. The inclusion of Fe2O3 as a nanoparticle not only increased the thermal capacity and heat transfer efficiency but also improved the overall thermal performance of the system compared to using engine oil alone [38].
Another nanofluid containing multi-walled carbon nanotubes (MWCNTs) has also shown promise due to its high thermal conductivity and low density. The thermal uniformity of the battery improves with the use of nanofluids, especially at higher MWCNT concentrations. The maximum thermal deviation drops significantly using the 0.45-0.5% MWCNT nanofluid. Another innovative design is the immersion cooling system, which uses a single fluid in a static state (Novec-7200) and shows remarkable efficiency in thermal regulation. This system keeps the maximum cell temperature below 40 °C and ensures that the temperature gradient is maintained within a range of 3 °C [39]. Within the field of multiphase refrigeration, traditional refrigerants such as HFCs (hydrofluorocarbons) and HCFCs (hydrochlorofluorocarbons) are two groups of widely used gases that play a crucial role in refrigeration, proving to be vital for efficient performance in environments with extreme temperatures. However, despite being fundamental components in air-conditioning systems, their effect on the environment has prompted the search for more sustainable options [40].
Therefore, new dielectric fluids with lower boiling points are being explored that promise significant improvements in thermal management under various operating conditions. Among these innovative solutions, the use of fluids such as SF33 stands out for the ability to maintain battery temperatures below 34 °C, representing a notable advance toward more sustainable and efficient cooling practices [41]. In addition, minichannel cold plate cooling systems containing a two-phase fluid are effective in extracting heat through boiling [42]. The use of specific refrigerants, such as R410a, R134a, and R600a, in direct contact with the battery cells allows a significant improvement in temperature distribution. For example, the use of R600a has achieved impressive heat transfer coefficients, keeping module temperatures within safe and much lower ranges compared to systems without refrigeration [43]. To extend battery lifespan and improve battery safety by effectively optimizing the cooling design, the incorporation of thermal management models becomes indispensable. Further research highlights that choosing the right cooling fluids can markedly increase thermal efficiency, underscoring the vital importance of these models in improving cooling systems [44]. For example, in [45], the authors propose a multi-scale and multiphase model, pioneering in comprehensively simulating the venting process in LIBs during a TR event, from heat and gas generation to particle accumulation. In [46], the authors comment on the NTGK (Newman-Tiedemann-Gu-Kim) model, which stands out for its accuracy in simulating the internal electrochemical processes of the batteries, offering realistic results that have been confirmed experimentally, differentiating it from more basic models. Two-phase immersion cooling is the model studied in [47]. This model uses the phase change of the coolant to achieve efficient heat transfer, offering rapid cooling, thermal stability, and energy efficiency, ideal for critical systems such as EV batteries, and maintains the battery temperature below 34 °C.
Another approach analyzed is the pseudo-two-dimensional (P2D) model, which reduces the complexity of the lateral structure in LIBs by focusing on electrode thickness to study ion diffusion and electrochemical dynamics. This method allows an accurate representation of variations in concentrations and electric potentials. Although it is fundamental to understand in detail the behavior of batteries, the intensity of the computational calculations required limits its implementation in the instantaneous simulations required by battery management systems [48]. According to other research [39], the electrochemical-thermal (ECT) model is the most complete model for simulating batteries, but its high computational demand limits its practical use. The electrical-thermal (ET) model is preferred in thermal management because of its cost efficiency and accuracy. For thermal degradation studies, the Arrhenius method is used. Equivalent circuit models are valued for their effective approximation of electrochemical behavior with low computational cost. However, thermal management focuses on two strategies for temperature estimation: the first involves the use of artificial intelligence (AI), especially neural networks, and the second relies on models describing the internal electrochemical dynamics. These strategies allow the temperature of batteries to be predicted with high accuracy using power control modules. Artificial neural networks (ANNs), trained with real or simulated data, offer estimates with a minimum error of 1.38% compared to conventional methods, eliminating the need for complex simulations and positioning themselves as an effective tool in thermal optimization [39,40]. Figure 1 shows several types of ANN that can be used to predict battery temperature.
Within these types, Elman-NN networks are suitable to simulate the dynamic thermal behavior of the battery, where the temperature at each instant depends on the previous values. Elman-NNs analyze the variables of mathematical models using hidden layers and a contextual layer, learning the relationship between inputs and outputs with training data [40]. In contrast, LSTMs are key to predicting battery temperature by processing data sequences and addressing thermal complexities. Deep neural networks (DNNs) analyze complex data to predict thermal changes, while convolutional neural networks (CNNs) specialize in detecting and predicting areas of heat from visual data. This set of technologies offers an accurate and advanced solution for efficient thermal management of batteries [41]. The multilayer perceptron (MLP) network is effective in modeling nonlinear interactions, making it ideally suited to address challenges in thermal problems that feature nonlinear physical complexities. It uses meaningful inputs such as heat generation and temperature measurements, providing a rich contextual basis for analysis. MLP excels at predicting temperatures with a margin of error of only 0.8 °C using a single sensor, demonstrating its high efficiency in data-constrained contexts [42].
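As a minimal sketch of the kind of MLP temperature estimator described above, the snippet below trains a small scikit-learn regressor on synthetic current, voltage, and ambient-temperature data. The architecture, features, and data-generation rule are illustrative assumptions, not the models used in the cited studies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic operating data: current (A), terminal voltage (V), ambient temperature (degC).
n = 5000
current = rng.uniform(0.0, 100.0, n)
voltage = rng.uniform(3.0, 4.2, n)
t_amb = rng.uniform(10.0, 40.0, n)

# Invented "ground truth": cell temperature rises above ambient roughly with I^2 (ohmic heating).
t_cell = t_amb + 0.004 * current**2 + 2.0 * (4.2 - voltage) + rng.normal(0.0, 0.5, n)

X = np.column_stack([current, voltage, t_amb])
X_tr, X_te, y_tr, y_te = train_test_split(X, t_cell, random_state=0)

# Small MLP regressor with two hidden layers; inputs are standardized before training.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

print(f"MAE on held-out data: {mean_absolute_error(y_te, model.predict(X_te)):.2f} degC")
```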
On the other hand, model-based approaches delve into the internal reactions and properties to simulate battery behavior and wear using tools such as the Kalman filter (KF), extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filtering (PF), among others. These methods seek to pinpoint internal conditions, e.g., the load level. Hybrid models, which combine several techniques, are particularly effective. However, challenges such as accuracy in the representation of deterioration and parameter calibration with limited data remain complex areas [43]. The KF optimizes estimates of noisy dynamical systems by a recursive process that minimizes the prediction error. It is versatile for linear and nonlinear systems, with its effectiveness depending on the accuracy of the model and noise analysis [44]. There are studies in the literature that address this issue. In [45], the authors compare the KF, EKF, and UKF for estimating the battery state of charge. They demonstrated that the UKF was the most accurate, with an error of less than 0.3%, outperforming the EKF, which reduced the error to less than 0.5% by accounting for nonlinearities, and the KF, whose error was reduced from 2% to 1.5%. The success of the UKF is due to its effective handling of nonlinearities across sampling points, highlighting its superiority in the accuracy of SOC estimation in lithium-ion batteries. The authors of [46] employed the EKF together with a simple resistance-capacitance (1-RC) model that facilitated the determination of the state of charge (SOC) in batteries. This technique proved to be effective in estimating the overall SOC with an error margin of less than 2%. However, it failed to identify specific variations between individual cells, resulting in notable discrepancies between them. This situation points to the need to dynamically adapt and optimize the model for each cell to obtain an individualized and accurate SOC estimation. The authors of [47] use PF for SOC estimation in batteries through an innovative approach using weighted particles to represent possible states. This method simplifies nonlinear calculations, optimizes performance, and employs the radial simplex sphere principle for efficient sampling, achieving more accurate SOC predictions and corrections.
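The following is a minimal, illustrative EKF for SOC estimation built on a simple open-circuit-voltage plus ohmic-resistance model, in the spirit of the filtering approaches discussed above. The capacity, resistance, OCV curve, noise covariances, and measurements are invented placeholders, not parameters from the cited studies.

```python
# Illustrative EKF for SOC estimation (toy parameters, not from the cited works).
# State x = [SOC, v_bias]; process model: coulomb counting for SOC, constant bias.
# Measurement: terminal voltage v = OCV(SOC) + v_bias - R0 * current.
import numpy as np

dt, Q_Ah, R0 = 1.0, 2.5, 0.05          # time step [s], capacity [Ah], ohmic resistance [ohm]
def ocv(soc):                          # assumed open-circuit-voltage curve
    return 3.0 + 1.2 * soc
def docv_dsoc(soc):                    # its derivative, needed for the EKF Jacobian
    return 1.2

F = np.eye(2)                          # state-transition Jacobian (SOC and bias ~constant per step)
Q = np.diag([1e-7, 1e-8])              # process noise
R = np.array([[1e-3]])                 # measurement noise (voltage)

x = np.array([0.9, 0.0])               # initial guess: SOC = 90 %, zero bias
P = np.diag([0.05, 0.01])

def ekf_step(x, P, current, v_meas):
    # Prediction: coulomb counting (discharge current positive)
    x_pred = x.copy()
    x_pred[0] -= current * dt / (Q_Ah * 3600.0)
    P_pred = F @ P @ F.T + Q
    # Correction: linearize the measurement function around the prediction
    H = np.array([[docv_dsoc(x_pred[0]), 1.0]])
    v_pred = ocv(x_pred[0]) + x_pred[1] - R0 * current
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ np.array([v_meas - v_pred])).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# One update with a made-up measurement: 1 A discharge, 4.02 V at the terminals
x, P = ekf_step(x, P, current=1.0, v_meas=4.02)
print(f"estimated SOC: {x[0]:.3f}, bias: {x[1]*1000:.1f} mV")
```

A UKF or particle filter would replace the explicit Jacobian with sigma points or weighted particles, which is what gives those methods their advantage on strongly nonlinear OCV curves.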
The application of the EKF as an essential component of the algorithm designed for real-time estimation of SOC in the LIBs of EVs is therefore validated. The prediction and measurement functions for applying the battery model within the EKF are described as follows. The estimation process involves estimating the SOC and battery bias voltage in advance, applying a discrete model that incorporates both the prior state and recent inputs, and adding a random error component to reflect uncertainty. In parallel, the measurement function bridges the current voltage and current observations to the theoretical battery state, adapting to the nonlinear complexity of the battery model and introducing a random error into the measurements. The KF comes into play by continuously adjusting these initial estimates based on the observed differences between predictions and actual measurements, which effectively refines the accuracy of the battery state estimation [48].

Recent innovations in materials and structures are revolutionizing thermal efficiency, especially in the field of LIBs, a key technology in sectors such as EVs. One of the main innovations is the use of PCMs, which keep the temperature of batteries within a safe and constant range by harnessing the latent heat during their phase transitions [35,49,50]. However, these materials often present the challenge of low thermal conductivity, which is being addressed by incorporating high-conductivity metal matrices and adding metal nanoparticles or porous materials [48,49].
Another area of significant advancement is the development of miniature channel cooling plates, which have been shown to be effective in managing battery pack temperature [51,52]. These plates, often made of aluminum, allow for better heat distribution and more effective temperature control. However, they face challenges related to the complexity and cost of production [51]. Hybrid structures combining passive thermal management and active cooling systems have also emerged as promising solutions, offering greater efficiency, but at the cost of increased weight and complexity. In addition, the use of innovative materials such as graphene has been explored to improve heat dissipation due to their high thermal conductivity [49,53]. Advances in the design of airflow structures and the use of fins on cooling plates have also shown improvements in thermal efficiency [52,54]. However, these solutions may increase airflow resistance and require higher energy consumption. One of the most significant advantages of these innovations is their ability to improve the safety and efficiency of batteries, which is critical in high-demand applications such as EVs [55,56]. However, these technologies still face significant challenges, including the trade-off between improvements in thermal conductivity and production cost, as well as the need to effectively integrate these solutions into large-scale battery designs [50,57]. Although significant progress has been made in improving thermal efficiency through various innovations in materials and structures, challenges remain. These include cost optimization, simplification of manufacturing processes, and effective integration of these solutions into large-scale battery systems. Continued research and development are key to addressing these challenges and taking full advantage of the benefits of these advanced technologies. A summary of innovations in battery thermal management is presented in Table 2.
Table 2. Innovations in thermal efficiency (reference; innovation; advantages; disadvantages; challenges).

[35] Use of PCMs. Advantages: they absorb latent heat during the phase transition, keeping the battery temperature within a safe range. Disadvantages: low thermal conductivity, which limits the ability to dissipate heat evenly. Challenges: development of materials with higher thermal conductivity and longer life cycle.

[51] Use of miniature channel cooling plates. Advantages: increased contact area between the coolant and the cells, which improves heat transfer. Disadvantages: higher complexity and manufacturing cost. Challenges: optimize channel distribution to reduce pressure loss.

[49] Phase change composite materials. Advantages: they combine the advantages of PCMs with those of conductive materials, improving thermal conductivity. Disadvantages: higher production cost. Challenges: develop composite materials with higher energy density.

[55] Hybrid system combining heat pipes with evaporative cooling. Advantages: improves thermal efficiency in high-current applications. Disadvantages: increased complexity and manufacturing cost. Challenges: optimize system design to reduce pressure losses.

[58] Graphene composite structures. Advantages: excellent thermal conductivity, which improves heat distribution within the battery. Disadvantages: high production cost. Challenges: develop more efficient production methods.

[56] Miniature channel design with tilt angles. Advantages: reduces pressure losses, improving heat transfer. Disadvantages: excessively high tilt angles can cause leakage problems. Challenges: optimize the tilt angle for maximum thermal efficiency.

[50] Passive interfacial thermal regulator based on shape memory alloy. Advantages: it changes its thermal conductance reversibly, improving battery performance in hot and cold climates. Disadvantages: challenges related to the development of shape memory alloys with increased thermal cycling and long-term stability. Challenges: optimize device design to facilitate integration into modules and battery packs.

[59] Hybrid system combining heat pipes with evaporative cooling. Advantages: improves thermal efficiency in high-current applications. Disadvantages: increased complexity and manufacturing cost. Challenges: optimize system design to reduce pressure losses.

[53] Modular cooling plate design. Advantages: greater versatility and adaptability to variable configurations. Disadvantages: modular designs require joints and connections that can increase the overall thermal resistance. Challenges: achieve large-scale manufacturing of these modular systems in a cost-effective manner.

[57] System based on liquid cooling of a honeycomb structure and phase-change materials. Advantages: significantly reduces the maximum temperature and temperature difference in the batteries. Disadvantages: structural and cooling complexity leads to higher manufacturing costs. Challenges: evaluation under extreme conditions such as actual loading and unloading cycles or thermal packaging situations.

[60] Modular liquid cooling system. Advantages: greater versatility and adaptability to variable configurations. Disadvantages: modular designs require joints and connections that can increase the overall thermal resistance. Challenges: achieve large-scale manufacturing of these modular systems in a cost-effective manner.

[61] System based on liquid cooling of a honeycomb structure and phase-change materials. Advantages: significantly reduces the maximum temperature and temperature difference in the batteries; compact structure and uniform heat dissipation. Challenges: evaluate its performance under extreme conditions such as real loading and unloading cycles or thermal packaging situations.

[54] Phase change composite materials. Advantages: improve thermal uniformity within the battery modules. Disadvantages: composite materials tend to be more expensive to produce. Challenges: develop composite materials with higher effective thermal conductivity.

[52] Mini-channel cooling plates with spine-shaped fins. Advantages: they improve heat transfer performance and reduce thermal gradients. Disadvantages: horizontal fins cause a significantly higher pressure loss. Challenges: optimize the geometry and arrangement of the fins to achieve the optimum balance between heat transfer and pressure loss.
Emerging Technologies in Thermal Monitoring and Control
In recent years, the growing demand for EVs and energy storage systems has driven intense research and development in the field of lithium-ion batteries. A key aspect of this technological evolution is the efficient and safe management of battery temperature, a complex challenge that involves the fusion of advanced sensors, control systems, and risk mitigation, as well as the application of smart technologies and machine learning. The importance of advanced sensors in this area cannot be underestimated. With the incorporation of distributed optical fibers and nanosensors in battery cells, thermal monitoring with high spatial and temporal resolution has been achieved. These sensors provide critical data that are essential to understanding and managing the internal conditions of batteries, enabling the detection of significant temperature variations that could lead to failures [50,62]. In parallel, control systems have significantly evolved thanks to the integration of AI and ML algorithms. These systems not only process the data collected by advanced sensors but also learn from them, continuously improving their ability to predict and mitigate risks. For example, by analyzing thermal behavior patterns, these systems can anticipate and prevent TR incidents, one of the main risks in LIBs [63]. Risk mitigation is enhanced by the development of intelligent technologies and ML. These tools offer an unprecedented ability to analyze and predict battery behavior, enabling more efficient and safer thermal management. The integration of predictive models based on historical data and techniques such as ANNs and reinforcement learning has resulted in more advanced and reliable battery management systems [64,65]. In short, the convergence of these innovative technologies is transforming the way LIBs are monitored, controlled, and safely maintained. This advancement is not only crucial to improve the efficiency and lifetime of these batteries, but also to ensure their safe use in critical applications such as EVs and large-scale energy storage systems [66,67].
Advanced Sensors
In the field of LIBs, the incorporation of advanced sensors is revolutionizing the way safety and efficiency are monitored and improved. Thin-film RTD (TFRTD) sensors, specifically copper-nickel alloy sensors, have been noted for their ability to be integrated into current collectors, providing fast and accurate internal temperature measurement. These sensors exhibit 82% faster response speeds and 33% higher accuracy compared to external RTDs, which is crucial to avoid overheating and TR [68]. In external short-circuit detection, non-contact magnetoelectric composite sensors, which combine piezoelectric elastomers and magnetostrictive ferrite, have shown high current sensitivity, with an accuracy greater than 99% and a current sensitivity of 0.346–5.975 mV/A. These sensors can distinguish between short circuits and mechanical vibrations, which makes them suitable for applications in EVs [69]. On the other hand, OFDR-based fiber optic sensors offer distributed temperature measurements with a spatial resolution of up to 3 mm. These sensors can identify heat accumulations around positive current tabs during high-rate discharges, revealing non-uniform heat generation even in small cylindrical cells [70]. In smart batteries, the fusion of multiple internal sensors provides a more accurate way to estimate the SOC. For example, expansion force (EF) sensors have been shown to have a more sensitive relationship with SOC compared to voltage and are independent of dynamic current. These sensors can also provide information on battery health status [71].
FBG sensors are noted for their low invasiveness and resistance to electromagnetic interference, with a temperature sensitivity of approximately 10 pm/°C and a strain sensitivity of 1–2 pm/µε. They are capable of quasi-distributed sensing and thermal mapping within battery packs, making them suitable for estimating the state of charge and state of health, and predicting battery capacity [72]. Finally, fiber optic sensors have been used to monitor strain and temperature variations in individual cells, employing algorithms such as the fast-recursive algorithm to establish nonlinear correlation models between strain signals and key electrical parameters, allowing accurate estimation of the SOC in battery packs [73]. These advances represent major progress in the monitoring and management of lithium-ion batteries, paving the way for safer and more efficient applications, especially in EVs and large-scale energy storage systems. Figure 2 shows the classification of advanced sensors in BTMSs.
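As a quick illustration of how the quoted FBG sensitivities translate wavelength shifts into physical quantities, the back-of-the-envelope sketch below uses the ~10 pm/°C and ~1.2 pm/µε figures above; the readings and the two-grating compensation scheme are assumptions made for the example, not a description of the cited setups.

```python
# Toy conversion of FBG wavelength shifts into temperature and strain changes,
# using the sensitivities quoted above (readings are illustrative).
K_T = 10.0       # pm/°C, temperature sensitivity (from the text)
K_EPS = 1.2      # pm/µε, strain sensitivity (within the 1-2 pm/µε range quoted)

# With two gratings - one strain-isolated reference and one bonded to the cell -
# the temperature and strain contributions can be separated:
shift_reference = 55.0   # pm, strain-isolated FBG -> temperature only (assumed reading)
shift_bonded = 115.0     # pm, bonded FBG -> temperature + strain (assumed reading)

delta_T = shift_reference / K_T                      # °C
delta_eps = (shift_bonded - K_T * delta_T) / K_EPS   # µε
print(f"ΔT ≈ {delta_T:.1f} °C, Δε ≈ {delta_eps:.0f} µε")
```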
At the heart of this evolution are advanced sensors, whose development continues apace. Emerging technologies such as distributed optical fiber sensors, electric field sensors, and nanosensors offer high spatial and temporal resolution, facilitating detailed thermal mapping of batteries. In particular, nanosensor-based sensors, composed of materials such as metal oxides and carbon, are capable of measuring temperature in a distributed manner throughout the battery module. Their small size does not affect battery performance and they provide real-time thermal data with high spatial resolution, which is key in smart thermal management [50]. In addition, the use of optical fibers distributed internally in the battery cells allows real-time monitoring of thermal distribution, a significant improvement over conventional sensors. This is vital for detecting temperature variations within the battery and along its length, preventing failures due to temperature differences [62]. The integration of these optical sensors with electric/magnetic field sensors facilitates detailed thermal mapping and allows the detection of variations associated with thermal changes. These dense arrays of sensors collect large volumes of data, essential for accurate and efficient monitoring. Finally, battery monitoring systems include distributed networks of multiple sensors that evaluate thermal and electrical parameters at the cell, module, and pack level, providing a complete picture of the battery status and enhancing the safety and efficiency of battery use [66].
Application of Intelligent Technologies and Machine Learning
The application of intelligent technologies and ML in battery and power distribution systems has emerged as a critical field in the evolution toward more efficient and safer energy management. In the context of fast charging of lithium-ion batteries, the development of the MSCC-DRL (multi-stage constant current based on deep reinforcement learning) model demonstrates a significant advance in reducing charging times while maintaining safety and minimizing battery degradation. In parallel, a systematic review of ML applications in smart distribution systems highlights how these technologies are revolutionizing the planning and operation of power grids, improving the efficiency and effectiveness of these systems [74,75]. In the EV arena, battery technologies and battery management systems have seen remarkable advances. Developments in ANNs for the health management of lithium-ion energy storage batteries, as well as hybrid ML models for thermal modeling and battery diagnostics, are clear examples of how ML is improving the safety, efficiency, and durability of these batteries [76][77][78][79]. In addition, the use of deep learning to estimate the state of charge, health, and remaining life of batteries indicates significant progress in intelligent battery management, enabling more accurate and safer operation of battery systems in EVs [79]. The integration of ML models into BMSs has significantly transformed lithium-ion battery management, especially in the context of EVs. The innovative approach presented in [74] illustrates the application of deep reinforcement learning to optimize charging efficiency. This model interacts with the battery environment, allowing the trained agent to autonomously determine the optimal charging profile, thus maximizing battery lifetime. The results show that this approach allows charging batteries in as little as 6-14 minutes, obtaining charging times up to four times shorter than traditional methods. In addition, the agent demonstrates the ability to adapt to variations in parameters such as electrode thickness, optimizing charging autonomously in different conditions.

Study [76] emphasizes how advanced battery and BMS technologies, enriched with ML, have improved safety and efficiency in EVs. Intelligent BMSs, which use ML models, are critical for vital functions such as SOC estimation and cell balancing, thus improving the actual autonomy and safety of EVs. State-of-health (SOH) management of LIBs is another area where ML has had a significant impact. According to [77], ANNs have been successfully applied to predict and monitor SOH, a crucial factor for the safe and efficient operation of energy storage systems. These models can capture complex nonlinear relationships between multiple factors, such as voltage, current, and temperature, and the battery health state. Furthermore, [78] introduces a hybrid ML model for thermal modeling and battery diagnostics. This approach combines mechanistic models with data-driven components, providing a powerful tool to prevent overheating and ensure safety during battery operation. Deep learning has proven to be a powerful tool for health status estimation thanks to its ability to learn complex relationships between input data and health indicators. Architectures such as feedforward, convolutional, and recurrent neural networks (LSTMs and GRUs), as well as Transformers, have been applied with promising results in data-driven models [80][81][82]. The authors of [79] demonstrate how deep learning is used to accurately estimate the SOC, SOH, and remaining useful life (RUL) of batteries. This approach enables more accurate and effective battery management, which is crucial for the safe and efficient operation of EVs [76,82].

Deep learning (DL) allows these states to be estimated from large sets of historical battery operation data, without the need for complex electrochemical models. Different architectures such as DNNs, recurrent networks (RNNs), LSTMs, and convolutional networks (CNNs) have been successfully applied [78,79]. Other works have applied LSTM networks, GRUs, and autoencoders to estimate the SOH and predict the RUL. For example, the model proposed by [83], based on an autoencoder with a particle filter, achieved a mean square error of 12.1457 for the CALCE database, better than the particle filter and Kalman models. Finally, [84] analyzes different AI strategies, including SOH and SOC estimation. These strategies highlight the versatility and effectiveness of ML in various applications within BMSs, thus improving the performance and safety of EVs. Six ML algorithms are analyzed in that study: linear regression, random forest, gradient boosting, light gradient boosting machine, extreme gradient boosting, and support vector machines. The input data include current, temperature, and SOC, while voltage is used as the output [83,85]. The results show that the random forest provides superior performance, with an R² of 0.999 and minimal errors. This shows that ML can accurately estimate the state of lithium-ion batteries.
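As an illustration of the kind of data-driven estimation compared in [84], the sketch below fits a random-forest regressor that predicts terminal voltage from current, temperature, and SOC. The synthetic data generator and hyperparameters are placeholders, not the setup or dataset used in the cited study.

```python
# Illustrative random-forest voltage model (synthetic data, not the cited dataset).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 5000
current = rng.uniform(-2.0, 2.0, n)        # A (negative = charge)
temperature = rng.uniform(10.0, 45.0, n)   # °C
soc = rng.uniform(0.05, 0.95, n)

# Toy voltage model: OCV rises with SOC, ohmic drop with current, mild temperature effect
voltage = (3.0 + 1.2 * soc - 0.05 * current
           + 0.002 * (temperature - 25.0) + rng.normal(0, 0.005, n))

X = np.column_stack([current, temperature, soc])
X_tr, X_te, y_tr, y_te = train_test_split(X, voltage, test_size=0.2, random_state=1)

rf = RandomForestRegressor(n_estimators=200, random_state=1)
rf.fit(X_tr, y_tr)
print(f"R² on held-out data: {r2_score(y_te, rf.predict(X_te)):.3f}")
```

On real drive-cycle data the features would be measured rather than generated, and the near-perfect R² reported in [84] should be read in the context of that study's data split.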
Therefore, the adoption of ML models in BMSs has opened a path towards more advanced and sustainable energy storage and EV systems. These advances are critical to improving the efficiency, safety, and lifespan of LIBs, marking a milestone in the transition to more efficient and environmentally friendly mobility. Studies [86,87] highlight the importance of these technologies in the future of battery management and electric mobility. Moreover, the integration of AI in LIB management is a growing area of research, with significant applications in EV and energy storage systems. Lithium-ion battery health management, especially in energy storage systems, has gained importance due to the need to manage SOH, SOC, and RUL accurately. ANN models are emerging as effective tools to address these challenges, leveraging their ability to decipher complex and nonlinear relationships between input data and battery health indicators [76,88,89]. On the other hand, accurate monitoring of the internal temperature distribution is crucial for the safety of LIBs. A novel approach to this is a hybrid thermal-neural network (LTNN) model that combines a mechanism-based distributed thermal model with machine learning-based axial thermal gradient compensation. This hybrid LTNN model has been shown to be highly compatible with common state observation methods, providing accurate and spatially resolved internal thermal monitoring and diagnostics for LIBs [78].

In the context of battery management systems (BMSs) in EVs, DL has emerged as a key technique to address battery-related algorithms and operational issues. The use of DL in BMSs enables accurate estimation of SOC, SOH, and RUL, which is critical for EV reliability, safety, and performance [82,90]. The role of AI in solving battery management problems also extends to estimating the state of Li-ion batteries. Methods such as random forests, support vector machines (SVMs), and gradient momentum algorithms have demonstrated superior performance in discharge prediction, suggesting that integrating these methods with BMSs can significantly improve the performance of EVs [84,91]. In addition, battery safety is a critical issue, especially in EVs and grid-scale storage. Fire incidents have highlighted the importance of battery safety, particularly regarding unpredictable thermal runaway. Machine learning approaches offer new opportunities to predict and prevent battery failures in practical applications, addressing multi-sector and multi-scale challenges [84,90]. AI and ML are playing a crucial role in improving the health and safety management of Li-ion batteries. These technologies offer promising solutions for accurate SOH, SOC, and RUL estimation, advanced thermal management, and failure and safety risk prevention in a variety of applications, including EVs and energy storage systems.
Challenges and Solutions in Extreme Conditions
Lithium-ion batteries, crucial in the era of electric mobility, face notable challenges in extreme temperature conditions. These conditions, defined as lying outside the optimal operating range (298.15 K to 323.15 K), significantly impact battery efficiency and safety. At elevated temperatures, the acceleration of electrochemical degradation and the risk of thermal decomposition are primary concerns, while temperatures below the optimal range compromise battery capacity and power [92,93]. The challenge is intensified when considering that heat flow directly affects ionic and electronic conductivity, altering the redox processes at the electrodes and shortening the battery lifetime [94]. To address these difficulties, hybrid thermal management systems have been developed, combining liquid cooling with PCMs. These systems act as thermal buffers, passively absorbing and releasing heat during exothermic and endothermic reactions, thus keeping the temperature within a safe range and extending the battery lifetime [95]. Regulations such as ISO 12405-4 [96] and UN38.3 [97] play a crucial role, establishing maximum temperature limits of 55 °C and 60 °C. In addition, temperatures below 0 °C are considered extreme, although manufacturers often specify stricter ranges to optimize performance and durability. The importance of effective thermal management cannot be underestimated, especially when considering the wide variety of applications for these batteries, from EVs to energy storage in harsh environments [98,99].
In the future, thermal management systems will need to balance high capacity and fast charging with thermal efficiency, which is especially challenging over a wide temperature range. In addition, fast charging presents further challenges, where optimization is key to controlling heat and temperature gradients while maintaining safety and performance [100,101]. Energy storage systems in harsh environments require advanced thermal management approaches, such as phase change cooling, to maintain stable performance under extreme conditions [102]. In addition, AI-based controllers, such as ANNs and fuzzy logic, are emerging as solutions to optimize battery safety and lifetime in EVs by dynamically adapting to temperature variations [103]. In conclusion, the effectiveness of thermal management in LIBs is vital to overcome the challenges posed by extreme conditions. Continued development of advanced materials, techniques, and regulations is critical to ensure the performance, safety, and long-term viability of these essential technologies [94,98]. Table 3 provides a comprehensive examination of various thermal management technologies employed in BTMSs. The detailed analysis encompasses each technology's description, its relation to thermal runaway, specific benefits, and associated limitations and challenges. The technologies covered include hybrid systems, liquid cooling, PCM active heating systems, phase shift cooling, thermal management with AI, and thermotolerant separators. Each entry sheds light on the unique features, advantages, and considerations of the respective thermal management technology within the realm of BTMSs.
Phase Shift Cooling (Ref. [106])
It uses the evaporation and condensation of a refrigerant fluid to effectively absorb and dissipate heat. This method is based on the latent heat of phase change of the refrigerant, offering high heat dissipation capacity. It efficiently controls temperature under peak load and unload, preventing overheating and excessive thermal runaway, offers precise thermal control, and is capable of handling high thermal loads, making it suitable for energy-intensive applications. However, it requires careful design to ensure the efficiency of the phase change system and can present challenges in refrigerant replenishment.

Thermal Management with AI (Refs. [95,105])
It implements AI algorithms to monitor and adjust thermal management in real time, based on usage patterns and environmental conditions. It enables a fast and accurate response to temperature variations, optimizing thermal management to reduce TR and improve efficiency, and it maximizes battery life and performance by continuously adapting to changing conditions, improving safety and efficiency. However, it depends on the accuracy of the algorithms and data collection, and may require constant updates and maintenance.

Thermotolerant Separators (Ref. [107])
Advanced separators designed to withstand high temperatures without losing functionality, improving battery stability and safety in extreme heat conditions. They prevent overheating and reduce TR by maintaining structural and functional integrity at high temperatures, avoiding internal short circuits, and they significantly increase safety in extreme conditions by resisting high temperatures without degrading. However, they can increase the cost of battery manufacturing and present challenges in integration with other components.
Conclusions
This study on the thermal management of LIBs focuses on fundamental aspects of their sustainable and safe development, particularly in critical applications such as EVs and energy storage systems. This work highlights the crucial role of BTMSs in keeping LIBs within an optimal temperature range, optimizing their performance, and prolonging their lifetime. Current challenges include overheating and temperature variations, which can compromise the safety and performance of batteries, accelerating their aging and reducing their energy storage capacity. Significant innovations in materials and structures have been made that are revolutionizing thermal efficiency in LIBs. The use of PCMs is one such innovation, which helps to maintain battery temperature within a safe and constant range. However, these materials present the challenge of low thermal conductivity, which is being addressed by incorporating high-conductivity metal matrices and adding metal nanoparticles or porous materials.
The integration of smart technologies and ML into battery and power distribution systems has emerged as a critical field. Models such as MSCC-DRL, which uses deep reinforcement learning, are making progress in optimizing charging efficiency and estimating the state of charge, health, and remaining life of batteries, indicating significant progress in intelligent battery management. Looking ahead, thermal management systems will need to balance high capacity and fast charging with thermal efficiency over a wide temperature range. Energy storage systems in harsh environments will require advanced thermal management approaches, and AI-based controllers are emerging as key solutions to optimize EV battery safety and lifetime by dynamically adapting to temperature variations. Despite significant advances, challenges remain, including cost optimization, simplification of manufacturing processes, and effective integration of these solutions into large-scale battery systems. Continued research and development are critical to address these challenges and maximize the benefits of these advanced technologies.
The significant contribution of this research lies in its innovative approach to the thermal management of LIBs, especially highlighting the adoption of ML models in BMSs. This has paved the way towards more advanced and sustainable energy storage and EV systems, marking a milestone in the transition towards more efficient and environmentally friendly mobility. In summary, this study represents a crucial breakthrough in improving the efficiency, safety, and longevity of lithium-ion batteries, contributing significantly to the future of battery management and electric mobility.
Data Availability Statement: Data will be made available on request.
Figure 1. Core structure of machine learning (ML) systems.
Table 2. Innovations in thermal efficiency.
Table 3. Challenges and solutions in BTMSs.
"year": 2024,
"sha1": "eb2368c353257e679725bcd88abeca633979a903",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-0105/10/3/83/pdf?version=1709284828",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b0a1082d5da9c609417d2d9ca29381b12c73aeb1",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
THE FACTORS INFLUENCING COLLEGE STUDENTS’ ACCEPTANCE OF MOBILE PAYMENT IN MALAYSIA
Mobile payment refers to a payment method by which a consumer pays a bill for goods or services through a mobile terminal. Mobile payment users can send payment instructions directly or indirectly to a bank or financial enterprise via mobile devices or proximity sensing devices, thereby enabling currency payments and funds transfers. It realizes the integration of terminal equipment, the Internet, application providers, and financial institutions, and completes financial business such as currency payment. However, the adoption rates of this payment method are relatively low in Malaysia. This paper analyzes the factors that affect the willingness of Malaysian college students to use mobile payment. Three well-established theories, the Unified Theory of Acceptance and Use of Technology (UTAUT), the theory of Perceived Risk (PR), and the Diffusion of Innovations Theory (DIT), are applied to investigate user acceptance of mobile payments. An empirical model for acceptance of mobile payment in Malaysia is established in this paper. Survey data from mobile payment users are used to test the proposed hypotheses and the model. The result of the data analysis shows that Malaysian consumers' perceptions of cost have no statistically significant relationship with acceptance of mobile payment. On the other hand, the factors of perceived risk, personal innovativeness, performance expectancy, subjective norm, and effort expectancy play important roles. JEL: M10
Introduction
The payment method used by consumers will have a significant impact on the future of a country's financial system and business model. Mobile payment systems are becoming more and more popular in the financial field. They have great potential to replace cash and become the most popular means of completing transactions in the near future (OECD, 2012; Cocosila & Trabelsi, 2016). The Malaysian government has also issued some policies to actively encourage Malaysians to move towards a cashless society. Bank Negara Malaysia (BNM) has launched a blueprint for the financial sector (2011-2020) to develop and manage the future of Malaysia's financial markets and payment systems (Ooi Widjaja, 2016).
However, the development of mobile payment is still in its infancy in Malaysia, and its uptake remains limited (Qasim & Abu-Shanab, 2016). Statistics show that the number of mobile payment users in Malaysia is only 3.2 million, with a penetration of only 9.9% (Statista-MCMC, 2018).
Though there are many advantages of mobile shopping and payment, usage in Malaysia is still very low, and the attitude towards the mobile shopping channel remains unclear (Ghazali, 2018). Due to the rapid rise of communication technologies, the mobile payment system has emerged as a popular method to facilitate payment transactions. Notwithstanding its widespread use, what affects mobile users' intention to pay through mobile phones, and why, remains largely unanswered in the context of developing markets (Ting et al., 2016). Thus, the purpose of this paper is to study the factors behind acceptance of mobile payment in Malaysia. This paper builds a model of the factors that affect college students' acceptance of mobile payment in Malaysia, makes a forecast on the long-term development of mobile payment in Malaysia, and puts forward some useful suggestions.
Literature Review
This paper's theoretical constructs are based on the Unified Theory of Acceptance and Use of Technology (UTAUT), the Theory of Perceived Risk (PR), and the Diffusion of Innovations Theory (DIT). These well-established theories help to build a solid theoretical foundation for this research. They are among the most influential theories for clarifying and predicting users' acceptance and adoption of a new system.
UTAUT integrates eight adoption theories and proposes the critical elements with regard to new information technology acceptance. The four independent variables in UTAUT are Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), and Facilitating Conditions (FC). UTAUT posits that these four variables are direct determinants of usage intention and behavior. Gender, age, experience, and voluntariness of use are posited to moderate the impact of the four key constructs on usage intention and behavior (Venkatesh, 2003).
Perceived risk was originally a concept in the field of psychology; it was introduced into the field of marketing in 1960 by the Harvard University scholar Bauer and used in the study of consumer behavior (Bauer, 1960). The Theory of Perceived Risk proposes that any consumer behavior may entail a certain risk; in other words, consumer behavior can be considered as risk taking. In this research, perceived risk captures the idea that consumers may be influenced during the mobile payment process by feelings such as anxiety, concern, discomfort, uncertainty, and cognitive dissonance. Diffusion of Innovations Theory (DIT) is one of the classic theories of communication effect research, put forward by Everett M. Rogers (1962) to explain how people accept new ideas, new things, and new products. DIT proposes that the spread of an innovation is always slower at the beginning. When the number of adopters reaches a certain amount, the diffusion process suddenly accelerates and enters the take-off stage. The process continues until most of the people who are likely to adopt the innovation in the system have adopted it, and the speed of diffusion gradually slows down. When the number of adopters no longer increases, it reaches the saturation point. Adopters of a new technology can be divided into several types: innovators, early adopters, early majority, late majority, and laggards. The central idea of DIT is that, confronted with a new technology, some people will be more open-minded and willing to adopt the innovation than others. In other words, personal innovativeness is an important factor in determining whether a new technology is accepted.
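To illustrate the S-shaped diffusion process that DIT describes, the following is a minimal logistic sketch of cumulative adoption over time. It is an illustration only: the saturation level, growth rate, and inflection point are arbitrary values, not estimates for mobile payment adoption in Malaysia.

```python
# Logistic S-curve as a simple stand-in for the DIT diffusion process
# (parameters are illustrative, not fitted to any real adoption data).
import numpy as np

M = 1.0      # saturation level (fraction of potential adopters)
r = 0.9      # growth rate: how quickly diffusion accelerates
t0 = 6.0     # inflection point (years): where adoption speed peaks

def cumulative_adoption(t):
    return M / (1.0 + np.exp(-r * (t - t0)))

for year in range(0, 13, 2):
    share = cumulative_adoption(year)
    print(f"year {year:2d}: {share:5.1%} of potential adopters")
# The output shows slow early uptake, a take-off around the inflection point,
# and a gradual slowdown as the saturation level is approached.
```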
Research Model and Hypotheses
Based on these theories, the hypotheses are developed and the research model is then proposed. The independent variables are Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), Perceived Risk (PR), Perceived Cost (PC), and Personal Innovativeness (PI), and Acceptance is the dependent variable.
Performance Expectancy (PE)
Performance Expectancy refers to the extent to which individuals believe that the use of a new technology can help them improve their performance. With mobile applications, users can complete finance-related tasks more quickly, and it is relatively safe to use mobile payment applications. As mobile payment can save a lot of time, it has a highly positive effect on the performance of its users. With the convenience provided by such applications, many activities, including mobile banking, can be completed in a short time. Furthermore, mobile payments help many people to gain benefits, including making it easier to carry out financial transaction activities, so that many people adopt the system in their daily activities (Oliveira et al., 2016). From the discussion above, the following hypothesis was drawn: H1: Performance Expectancy of mobile payment services has a positive effect upon college students' acceptance of mobile payment services.
Effort Expectancy (EE)
Effort Expectancy was defined as the level of ease associated with using a system (Venkatesh et al., 2003). In other words, it refers to the degree to which a new technology provides ease of use, which can reduce the effort users must expend in their work. Generally, if a payment system can be used easily, many people are willing to use it. To some extent, mobile payment has such characteristics. Mobile payment applications have a very positive impact on effort expectancy due to their ease of use. Mobile payment applications are also classified as user-friendly, so they may encourage many people to use them because of their ease of use and accuracy. Because mobile payment uses a system easily understood by many people, it has a positive impact on business expectations (Slade, 2015). This leads to the following hypothesis.
H2: Effort Expectancy of mobile payment services has a positive effect upon college students' acceptance of mobile payment services.
Social Influence (SI)
Social Influence is defined as the extent to which individuals think that people around them, such as colleagues, family, and friends, influence their use of a new technology. It relates to a person's beliefs about whether peers and people of importance to that person think he or she should engage in the behavior. Some scholars point out that the adoption of a new technology is significantly affected by social institutional pressure (Bass, 1969). A technology may have a more positive impact when social networks are utilized, because users will then have stronger intentions to adopt it (Musa, 2015). This study defines SI as a measure of the influence of important peers and other social groups, including friends, parents, and colleagues, on acceptance of mobile payment in Malaysia. As Malaysia is a country with a strong social division of labor and cooperation, consumers tend to seek opinions from their families and friends on mobile payment experience, and this social influence may have a significant impact on consumers' willingness to use mobile payment. Therefore, the hypothesis for this study with regard to SI is proposed as: H3: Social Influence has a positive effect upon college students' acceptance of mobile payment services.
Perceived Risk (PR)
In this paper, perceived risk refers to the psychological expectation of, and willingness to bear, possible adverse consequences when using mobile payment services. When any new technology comes out, users may have a sense of risk when using it. Consumers' risk perception comes from their feelings of uncertainty or anxiety about their behaviors and the final results of those behaviors. The characteristics of mobile payment indicate that consumers may encounter potential risks, such as concerns about the vulnerability of wireless communication security. Specifically, users may worry about whether their personal privacy may be disclosed and whether account funds may be stolen (Slade et al., 2013). In addition, new users may not be familiar with the steps for using mobile payment, as well as the operational differences among various technical products, which may confuse consumers and ultimately heighten their perception of technical risks (Gaur et al., 2012). In light of these aspects, perceived risk is considered a major barrier to using mobile payments. Therefore, the hypothesis for this study with regard to PR is proposed as: H4: Perceived Risk has a negative effect upon college students' acceptance of using mobile payment services.
Perceived Cost (PC)
In the Compass Acceptance Model for the analysis and evaluation of mobile services, Perceived Cost (PC) was considered a factor affecting the attitude toward a mobile service (Amberg et al., 2004). Customer perceived cost refers to the sum of the expenditures perceived by the customer during the actual consumption process: the time, money, physical effort, energy, and psychological costs involved in the customer's entire process of consuming the product or service, not just the price of the commodity actually paid by the customer. Many empirical studies have discussed how perceived cost affects acceptance of network products (Soane et al., 2010; Benazić et al., 2015). This paper only considers the expenditure involved in using mobile payment, and perceived cost is defined as the extent to which an individual believes that using mobile payment services will cost them extra money. Several studies suggest that PC could be a major barrier to the adoption of new technologies in mobile phone services such as mobile banking and mobile commerce (Luarn & Lin, 2005; Wei et al., 2009; Phonthanukitithaworn, 2015). Given this importance of PC, it has been suggested that PC could be incorporated as an extended construct in TAM when investigating mobile payment. Therefore, the hypothesis for this study with regard to PC is proposed as: H5: Perceived Cost of mobile payment services has a negative effect upon college students' acceptance of mobile payment services.
Personal Innovativeness (PI)
Personal innovativeness (PI) is defined as "the willingness of an individual to try out any new information technology" (Agarwal & Prasad, 1998). Much research has revealed that personal innovativeness is related to adoption decisions for various technologies (Citrin et al., 2000; Bauer et al., 2005). Innovative individuals are found to be communicative, curious, dynamic, venturesome, and information seekers, and therefore innovativeness can play a critical role in the intention to adopt new mobile technologies (Kim et al., 2010; Sadi, 2011).
This construct was included in this study because it was expected to influence young consumers' intention to adopt mobile payment. Given the relative infancy of mobile payment, it is appropriate to test personal innovativeness as an influencing variable under new circumstances. Therefore, the hypothesis for this study with regard to PI is proposed as: H6: Personal Innovativeness of mobile payment users has a positive effect upon college students' acceptance of mobile payment services. A model for acceptance of mobile payment is now established; it is illustrated in Figure 1.
Research Methodology
A survey was selected as the central research methodology in this work. A multi-item scale measure was applied in order to test the proposed research model. Statements were written for each item, and the participants were required to indicate whether they agreed or disagreed with the statements on a Likert scale. Many measures can be used in UTAUT-based research. For example, performance, productivity, effectiveness, usefulness, and time saving can be used to measure perceived usefulness. Moreover, ease of learning, ease of control, ease of understanding, ease of use, and flexibility of use can be used to measure perceived ease of use. Some of the items in the survey were taken from previously published scales with appropriate psychometric properties. The items were adopted or adapted to fit the context of mobile payments. After an extensive literature review on the topic, new items were also developed for this research.
College students under the age of 35 who use mobile payment were the target participants for this survey, which does not necessarily imply that the participants had adopted the services. Respondents were invited to participate in the survey by answering a paper questionnaire.
The questionnaire collects two major types of information. One part concerns participants' demographic information, and the other part concerns participants' perceptions of each of the constructs in the proposed model. The demographic information includes gender, age, and level of education. The rest of the questionnaire asks for participants' opinions on each item.
A total of about 500 questionnaires were distributed in colleges and universities in Kuala Lumpur, and 423 questionnaires were collected, of which 386 were valid.
Data Analysis
Following the responses from the survey, the proposed hypotheses were tested. SEM-based analysis techniques were used to analyze the data. First, Confirmatory Factor Analysis (CFA) was employed to assess the validity of the measurement model; then the proposed model was tested using Structural Equation Modeling (SEM), so that the causal structure of the model could be evaluated. The research used AMOS to analyse the measurement model and the structural model. The proposed research model was evaluated using structural equation modelling, employing a two-step modelling approach that included the assessment of the measurement model and the assessment of the structural model (Byrne, 2010). The assessment of the measurement model ensures that observed variables load appropriately on the factors they belong to, with no significant cross-loading onto an item of another factor. The assessment of the structural model determines the relationships between the independent and dependent variables.
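The two-step SEM workflow described above can be sketched in code. The snippet below is a hypothetical illustration using the Python semopy package as a stand-in for AMOS; the construct names follow the proposed model, but the item names (PE1, EE1, ..., ACC1), the data file, and the number of indicators per construct are placeholders, not the study's actual instrument.

```python
# Hypothetical two-step SEM sketch with semopy (stand-in for the AMOS workflow).
# Item names and the CSV file are illustrative placeholders, not the real survey data.
import pandas as pd
import semopy

# Measurement model (CFA) followed by the structural paths to Acceptance (ACC)
model_desc = """
PE =~ PE1 + PE2 + PE3
EE =~ EE1 + EE2 + EE3
SI =~ SI1 + SI2 + SI3
PR =~ PR1 + PR2 + PR3
PC =~ PC1 + PC2 + PC3
PI =~ PI1 + PI2 + PI3
ACC =~ ACC1 + ACC2 + ACC3
ACC ~ PE + EE + SI + PR + PC + PI
"""

data = pd.read_csv("survey_responses.csv")   # placeholder file of Likert-scale item scores
model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())            # factor loadings and path coefficients with p-values
print(semopy.calc_stats(model))   # fit indices such as CFI, TLI, RMSEA, GFI, AGFI
```

In practice the measurement model would be examined first (loadings, cross-loadings, fit indices) before interpreting the structural paths, mirroring the two-step approach described above.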
Assessment of Measurement Model
A confirmatory factor analysis using AMOS (v20) was conducted on all the items simultaneously to evaluate the validity of the items and the constructs in the measurement model. The overall fit of the hypothesized model was assessed using eight common model-fit measures: goodness-of-fit (GFI), standardized root mean-square residual (SRMR), root mean-square error of approximation (RMSEA), comparative fit index (CFI), Tucker-Lewis index (TLI), normalized fit index (NFI), adjusted goodness-of-fit (AGFI), and normalized χ²/df. Table 1 summarizes the results for the model-fit indices, which show that the measurement model exhibits a good fit with the data collected. The psychometric properties of the measurement model could then be evaluated in terms of reliability and construct validity. Construct validity was examined through tests of convergent and discriminant validity. Convergent validity was evaluated using the attributes of factor loading, average variance extracted (AVE), and construct reliability (CR). The values are provided in Table 2. It shows that all the scale items load highly on their constructs, as all factor loadings are above the threshold value of 0.70, and the item reliability of each indicator, including CR, had scores above 0.70, indicating good reliability as well as good convergent validity. Notably, the CR values of the five constructs in the model are all above 0.8, which provides evidence that these measures consistently represent the same latent construct.
The AVE values were compared with the squared estimate of the correlation estimates for assessing discriminant validity. The correlation matrix in Table 3 shows that the AVE values are all larger than the squared correlation estimates, confirming that a satisfactory level of discriminant validity has been achieved. This evidence indicates that the measured variables have more in common with the construct they are associated with rather than other constructs in the model. Further, it indicates that all the constructs in the measurement model are significantly different from each other.
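For reference, the convergent and discriminant validity checks reported above follow standard formulas. The sketch below computes composite reliability (CR), average variance extracted (AVE), and the Fornell–Larcker comparison from standardized loadings; the loadings and the correlation value are made-up examples, not the study's data.

```python
# Standard CR / AVE formulas applied to made-up standardized loadings.
import numpy as np

def composite_reliability(loadings):
    loadings = np.asarray(loadings)
    num = loadings.sum() ** 2
    return num / (num + (1.0 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    loadings = np.asarray(loadings)
    return (loadings ** 2).mean()

pe_loadings = [0.82, 0.79, 0.85]      # illustrative values only
ee_loadings = [0.78, 0.81, 0.76]

cr_pe, ave_pe = composite_reliability(pe_loadings), average_variance_extracted(pe_loadings)
cr_ee, ave_ee = composite_reliability(ee_loadings), average_variance_extracted(ee_loadings)
print(f"PE: CR = {cr_pe:.3f}, AVE = {ave_pe:.3f}")
print(f"EE: CR = {cr_ee:.3f}, AVE = {ave_ee:.3f}")

# Fornell-Larcker check: each construct's AVE should exceed its squared
# correlation with every other construct (correlation value is illustrative).
corr_pe_ee = 0.45
print("Discriminant validity (PE vs EE):",
      ave_pe > corr_pe_ee ** 2 and ave_ee > corr_pe_ee ** 2)
```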
Assessment of Structural Model and Hypotheses Testing
Structural model analysis was undertaken to determine the relationships among the constructs in the proposed model, and subsequently to determine the direction of the significant paths between the constructs. The overall fit of the structural model was shown to be satisfactory, which is indicative of good model performance. Table 4 shows that almost all the indexes of the structural equation basically reach their ideal values, which indicates that the structural equation model is acceptable. The path parameters are shown in Table 4 and Figure 4, which provide a basis for the following hypothesis verification. The results of the structural path analysis presented in Figure 6 indicate the model's structural paths. Five of the model's six paths are statistically significant at the 0.05 level of significance. After cross-matching the results of the structural path analysis with the hypotheses, five hypotheses (H1, H2, H3, H4, and H6) were supported and one (H5) was rejected.
All the factors together explained 75 percent of the variance in young Malaysians' acceptance of mobile payment. Of these five influencing factors, PR had the greatest influence, with the highest weight of -0.374, followed by PI (0.277), PE (0.216), SI (0.14), and EE (0.094).
Research Findings and Their Implications
The results from the testing of the hypotheses derived from the research model reveals that Malaysian young consumers' perceptions of cost have no statistically significant relationship with attitude to adopt mobile payment. On the other hand, the factor of consumer perceived risk plays the most important role, followed by personal innovativeness, performance expectancy, subjective norm and effort expectancy.
According to the data analysis, perceived risk proved to be the leading factor affecting people's attitude to mobile payment services, and many studies support this result (Martins et al., 2014; Yang et al., 2015; Phonthanukitithaworn, 2016; Wong, 2019). The results not only verify the effect of perceived risk on mobile payment satisfaction, but also confirm that it is the most significant factor influencing college students' intention to use mobile payment services in Malaysia. Consumers' perceived risk may lead them to refuse to provide personal information to mobile payment providers, and consumers may feel more insecure about conducting wireless transactions when money, products, and recipients are not visible. The impact of perceived risk on behavioral intention means that avoiding and reducing risk is an important issue in promoting the adoption of mobile payment. If young consumers think that mobile payment involves high risk, they are unlikely to participate in mobile payment transactions. In other words, if consumers find that there is risk or uncertainty in using mobile payment services, they are likely to refuse them.
The results also show that the two constructs of UTAUT, PE and EE, may influence college students' acceptance of mobile payment. As to performance expectancy, the results show that there was a significant positive correlation between performance expectancy and acceptance. This conclusion is consistent with those of some previous studies (Martins et al., 2014; Ramon et al., 2019). As to effort expectancy (EE), if a mobile payment service provider offers a user-friendly payment process, consumers will be more willing to accept the mobile payment service. Many researchers have observed the same result in mobile payment research (Oye, 2014; Martins et al., 2014; Ramon et al., 2019).
Social influence has a significant impact on consumers' willingness to use mobile payment services. Previous scholars have come to similar conclusions (Oliveira, 2016; Mun, 2017; Andre, 2019). This factor measures the influence of important peers and social groups on individual behavior. The influence of friends, parents, and colleagues may become a key determinant in the decision-making process for potential users adopting mobile payment services. In the early stages of mobile payment adoption, information about the services among potential adopters is limited. People who have not previously used the service may rely heavily on other people's opinions to help them make decisions, while for consumers who have already adopted mobile payment, the importance of maintaining their social relations also encourages them to continue using the service.
According to the data analysis, PI is also a factor affecting the adoption of mobile payment services by college students in Malaysia. This result is consistent with some previous studies (Yang et al., 2012; Thakur, 2014; Rakhi & Mala, 2014; Gbongli, 2019). Personal innovativeness plays an important role in shaping the intention to use technological innovations. Young people typically have a higher level of innovativeness and greater boldness, and they are more willing to try new technologies. Therefore, they may form more positive intentions than individuals with lower innovativeness.
The results of this study show that there is no statistically significant correlation between PC and college students' acceptance of mobile payment. The reason is probably that the cost of mobile payment in Malaysia is currently low, involving only a small amount of mobile data and a low fee for transferring money from the mobile payment platform to a bank account. Most people consider these costs reasonable.
Conclusion
This paper reports a study on young people's acceptance of mobile payment in Malaysia. It proposes a theoretical model that includes the relevant psychological factors affecting college students' acceptance of mobile payment and empirically tests the model. It is found that perceived risk, personal innovativeness, performance expectancy, social influence, and effort expectancy affect college students' acceptance of mobile payment.
The results of this study have some implications for mobile payment service providers in Malaysia, as they help them to understand consumer behavior in the decision-making process of adopting mobile payment and point out the areas they can focus on to encourage people to adopt and use the service.
SI4: "If many colleagues of mine use m-payment, I will consider using it."
SI5: "If the frequency of appearance on the media is high, I will consider using mobile payments."
SI6: "If most vendors or merchants accept consumers to use mobile payments, then I will also consider using mobile payments."
D. Perceived Cost
PC1: "I'm worried the cost of opening a mobile account will be high."
PC2: "I'm worried the transaction fees for using m-payment will be high."
PC3: "I'm worried the cost of transferring money from a mobile account to a bank account will be high."
PC4: "I am worried that using mobile payments will consume a lot of mobile phone data and increase my spending." | 2022-01-23T16:44:17.890Z | 2021-12-15T00:00:00.000 | {
"year": 2021,
"sha1": "68dd0f0cc94f4d97247eae79b990207b7e25a410",
"oa_license": "CCBY",
"oa_url": "https://oapub.org/soc/index.php/EJMMS/article/download/1192/1777",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b188a58e5f041f61c305da0b0f7936ea0e0ea3c3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
198273661 | pes2o/s2orc | v3-fos-license | Pythiogeton manoomin, a new species causing root and basal stalk rot of wild rice in the United States
ABSTRACT A new species of Pythiogeton isolated from wild rice exhibiting rot and dieback of roots and stalks in California is described. Pythiogeton manoomin sp. nov. is characterized by coenocytic hyphae, club-like appressorium, and terminal or intercalary sporangia, which are often a short distance from the end of supporting hyphae. The protoplasm is discharged through a discharge tube into an elongate transient vesicle, which soon disappears, leaving the naked protoplasm to differentiate into reniform zoospores. Pythiogeton manoomin also produces thick-walled pigmented chlamydospores, not found in other Pythiogeton species. In greenhouse trials, Pg. manoomin did not infect economically important crops such as rice, bean, chard, corn, carrot, lettuce, oat, radish, sweet pepper, tomato, or wheat. Phylogenetic analysis based on ITS data supports the conclusion that this organism is a new species that is most closely related to Pg. ramosum. In this paper, we describe morphological characteristics, temperature–growth relationships, pathogenicity, and phylogenetic relationships that support the description of this taxon as a new species, Pythiogeton manoomin sp. nov. urn:lsid:zoobank.org:pub:4C63AAA4-4D4A-4679-A344-79B75121A5C6
Introduction
The genus Pythiogeton (Pg.), together with Pythium and Phytophthora, belongs to the family Pythiaceae of the Oomycota. Minden first described members of the genus in 1916, and since then, Pythiogeton species have been reported in the USA (Drechsler 1932), Canada (Sparrow 1932), Germany and Denmark (Lund 1934), Japan (Ito and Tokunaga 1935), England (Sparrow 1936), China (Shen and Siang 1948), Africa (Gaertner 1954), India (Dayal 1968), Brazil (Beneke and Rogers 1970;Rocha et al. 2014), Taiwan (Watanabe 1974), Poland (Batko 1971), Pakistan (Lodhi et al. 2006), and Australia (Le et al. 2014). Pythiogeton species are characterized by production of zoospores outside of sporangia from a naked mass of protoplasm, not from a vesicle, which is characteristic of Pythium species. Members of the genus are facultative, anaerobic, ubiquitous inhabitants of soil and water. Pythiogeton can be readily isolated from plant materials submerged in stagnant water, and prefers anaerobic conditions (Huang et al. 2013).
Wild Rice (Zizania spp.) or manoomin, as it is called in the Ojibwe language, is a semi-aquatic grass species endemic to North America and China. Three species of wild rice, Zizania palustris, Z. aquatica, and Z. texana, are native to North America, and one species, Z. latifolia, is native to China. Wild rice is one of America's oldest indigenous cultivated grains, with over 90 percent of cultivated wild rice grown in the United States produced in California and Minnesota (Hayes et al. 1989;Oelke 1993;Steeves 1952). Wild rice-growing regions in California include Shasta, Lake, Modoc, Lassen, Butte, Colusa, Yuba, Yolo and Sutter counties. In the summer of 2012, an outbreak of a newly discovered root and basal stalk rot of wild rice (Zizania palustris L.) cv. Franklin was observed in a 16-ha field in Big Valley, Lassen County, California. Infected plants exhibiting rot and dieback of roots and stalks were in various stages of decline, including death. A pythiaceous fungus was consistently isolated from the diseased plant and was identified as Pythiogeton sp. (Doan et al. 2014). The purpose of this study is to characterize the new Pythiogeton species isolated from wild rice and to confirm its pathogenicity.
Isolation and cultivation
Plants of wild rice (Zizania palustris L.) cv. Franklin exhibiting rot and dieback of roots and stalks were collected from Big Valley, Lassen County, California (GPS coordinates 41°08′41.93″ N 121°10′07.49″ W). Symptomatic stem and root tissues from affected plants were placed on 9 cm Petri dishes (four pieces per plate) of PARP agar (17 g Difco corn meal agar containing 5 ppm pimaricin, 250 ppm ampicillin, 10 ppm rifampicin, and 100 ppm pentachloronitrobenzene), which were then incubated at 25°C in the dark for 1 week. Agar blocks 8 mm in diameter containing hyphal tips were transferred onto 9 cm Petri dishes (four pieces per plate) of potato dextrose agar (PDA) and modified Rye B agar (RBA) containing 60 g rye-grain extract, 20 g of Difco Bacto Agar, and 20 g of sucrose, as described by Caten and Jinks (1968).
Colonial morphology and mycelial growth
The effect of temperature on mycelial growth was evaluated on Rye B agar for three Pg. manoomin isolates (BV1, BV2, and BV3) from wild rice. A 3-day-old, 8 mm-diameter Rye B agar plug of each isolate was transferred onto Rye B agar plates for each of the nine temperatures evaluated. The plates were incubated at 4, 12, 15, 18, 21, 25, 28, 31, or 34°C in the dark, and the growth rate was determined daily by measuring the linear colony diameter in three places for up to 5 d at each temperature. Three plates were used for each treatment, and the experiment was repeated three times with at least 24 h between experiments. The average growth rate of each isolate from the three replications was calculated for each temperature. The average growth rates of the three isolates were compared using analysis of variance (ANOVA) in R version 3.3.2 (The R Foundation).
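As a complement to the description above, the short sketch below illustrates how the mean growth rates of the three isolates could be compared with a one-way ANOVA. It is written in Python with SciPy rather than the R environment actually used in the study, and the replicate growth-rate values are hypothetical placeholders, not measured data.

# Illustrative one-way ANOVA on mycelial growth rates (mm/day) at one
# temperature; the values below are hypothetical placeholders, not study data.
from scipy.stats import f_oneway

bv1 = [6.1, 6.3, 5.9]  # replicate growth rates for isolate BV1 (hypothetical)
bv2 = [6.0, 6.4, 6.2]  # isolate BV2 (hypothetical)
bv3 = [5.8, 6.1, 6.0]  # isolate BV3 (hypothetical)

f_stat, p_value = f_oneway(bv1, bv2, bv3)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would indicate no significant difference in mean
# growth rate among the isolates at this temperature.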
Production of sporangia and release of zoospores
The morphology and development of sporangia were observed on Rye B agar using the methods described by Chang (1988) and Huang et al. (2013). Hyphal tips were transferred onto modified Rye B agar. Agar blocks (5 x 5 x 3 mm) from 7-day-old Rye B agar cultures were placed in 9 cm Petri dishes (one piece per plate) containing 20 mL of 10% clarified V-8 juice or deionized water and incubated in the dark at 24°C for 72 hours. The V-8 juice was then decanted and the mycelial mats were rinsed three times, at 20 min intervals, with sterile distilled water. For the development and release of zoospores, the mycelial mats with sporangia were incubated at 24°C for 48 h. Photographs were taken with a Leica DM5000 B microscope (Leica Microsystems, Wetzlar, Germany).
Pathogenicity tests
Pathogenicity of Pg. manoomin to Avena sativa L. cv. Cat Grass (Lake Valley Seed, Boulder, CO), Zea mays L. cv. Golden Bantam Improved (Seed Savers Exchange, Decorah, IA), Triticum aestivum cv. Liquid Sunshine (Botanical Interests, Broomfield, CO), Lactuca sativa L. cv. Great Lakes Iceberg (Lake Valley Seed), Beta vulgaris subsp. cicla L. cv. Fordhook Giant (Burpee Garden Product Co., Warminster, PA), Capsicum annuum L. cv. California Wonder (Burpee Garden Product Co.), Daucus carota subsp. sativus L. cv. Short 'n Sweet (Burpee Garden Product Co.), Solanum lycopersicum L. cv. Moneymaker (Everwilde Farms, Sand Creek, WI), Phaseolus vulgaris L. cv. Ejote silvestre Contender (Burpee Garden Product Co.), Raphanus sativus L. cv. Cherry Belle (Burpee Garden Product Co.), Oryza sativa L. cv. Calrose (USDA # C.I. 8988), and Zizania palustris L. cv. Franklin was determined in the greenhouse. Ten seeds of each were planted in sterilized sand in plastic pots measuring 10 cm in diameter (750 mL). After 5 days, six 8 mm-diameter agar discs from the margin of a 7-day-old culture growing on modified Rye B agar were placed in each pot. Pots inoculated with six 8 mm-diameter agar discs from a modified Rye B agar plate were used as controls. At 20 d post inoculation, root and crown tissues were examined for symptoms and disease was recorded. The experimental design was a randomized block with ten replications (blocks) in the greenhouse on a 13-h photoperiod provided by high-pressure sodium bulbs, with daytime temperatures ranging from 80-85°F and nighttime temperatures ranging from 65-70°F. The pathogenicity test was repeated three times. Re-isolation of the pathogen from crown and root tissues was attempted for all inoculated plants, as described above. A damping-off assay was performed as above, with the exception that 10 seeds were planted into sterilized sand that was pre-inoculated with twelve 8 mm-diameter agar discs from the margin of a 7-day-old culture grown on modified Rye B agar. At 10 d post inoculation, germinated seeds were counted, and re-isolation of the pathogen from seeds was attempted, as described above.
DNA extraction, PCR amplification, and sequencing of ITS rDNA
Total genomic DNA from three Pythiogeton manoomin isolates (BV1, BV2, and BV3) was extracted from mycelia using the Qiagen® DNeasy Plant Mini Kit™ (Valencia, CA) according to the manufacturer's protocol. The internal transcribed spacer (ITS) region was amplified by PCR and sequenced using universal ITS5 and ITS4 primers (White et al. 1990). Three μL of genomic DNA were added directly to a 50 μL reaction that consisted of: 10 μL 5× Colorless GoTaq Reaction Buffer (Promega Corp., Madison, Wisconsin), 5 μL 25 mM MgCl2, 4 μL containing 2.5 mM each dNTP, 0.2 μL GoTaq Taq polymerase (Promega Corp.), 0.1 μL each of a 50 μM concentration of primers, and 27.6 μL sterile deionized water. The thermal cycling parameters were an initial denaturation at 94°C for 5 min followed by 34 cycles consisting of denaturation at 94°C for 1 min, annealing at 55°C for 2 min, and extension at 72°C for 2.5 min. A final extension at 72°C for 10 min was performed at the end of the amplification, followed by a hold at 4°C. The PCR products were purified with the GeneJet PCR Purification Kit (Cat. No. K0702, Thermo Fisher Scientific, Waltham, MA). Sequencing of the forward and reverse strands was performed by the College of Biological Sciences UCDNA Sequencing Facility (University of California, Davis, CA).
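The per-reaction volumes listed above translate directly into a master-mix calculation when several isolates are processed in one run. The sketch below (Python; the three-reaction example and the 10% pipetting overage are assumptions for illustration, not part of the published protocol) scales the 50 μL recipe, excluding the 3 μL of template DNA that is added to each tube individually.

# Scale the published per-reaction PCR recipe (excluding template DNA) to a
# master mix for n reactions, with a pipetting overage.
RECIPE_UL = {
    "5x Colorless GoTaq buffer": 10.0,
    "25 mM MgCl2": 5.0,
    "2.5 mM each dNTP": 4.0,
    "GoTaq polymerase": 0.2,
    "ITS5 primer (50 uM)": 0.1,
    "ITS4 primer (50 uM)": 0.1,
    "sterile deionized water": 27.6,
}

def master_mix(n_reactions: int, overage: float = 0.10) -> dict:
    factor = n_reactions * (1.0 + overage)
    return {name: round(vol * factor, 2) for name, vol in RECIPE_UL.items()}

for reagent, volume in master_mix(3).items():  # e.g., isolates BV1, BV2, BV3
    print(f"{reagent}: {volume} uL")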
Sequence alignment and phylogenetic analyses
DNA sequences were edited and assembled using the BioEdit software (http://www.mbio.ncsu.edu/BioEdit/bioedit.html). The ITS sequences of all three Pythiogeton manoomin isolates were identical. Therefore, the sequence of Pythiogeton manoomin isolate BV1 was submitted to GenBank (KF719169). The ITS sequence (JQ610201) of Pythium aphanidermatum was selected as an outgroup. Sequences were aligned using ClustalW, and a phylogenetic tree was constructed with MEGA 5.03 (Tamura et al. 2011), using the maximum parsimony method with 1000 bootstrap replications, and evolutionary distances were analyzed according to the Tamura-Nei model (Tamura et al. 2011). The maximum parsimony tree was obtained using the Tree-Bisection-Regrafting algorithm with search level 5, in which the initial trees were obtained by the random addition of sequences (Nei and Kumar 2000). All positions containing gaps and missing data were eliminated. The aligned sequence data set was deposited in TreeBASE (No. 24,156).
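As a rough illustration of how the percent-identity and nucleotide-difference figures reported below can be derived once an alignment is available, the following Python sketch compares two already-aligned ITS sequences position by position. The sequences shown are short placeholders, not the deposited GenBank records.

# Compare two pre-aligned ITS sequences (equal length, '-' for gaps) and
# report percent identity and the number of differing positions.
def pairwise_identity(seq_a: str, seq_b: str):
    assert len(seq_a) == len(seq_b), "sequences must come from the same alignment"
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":          # skip gapped columns
            continue
        compared += 1
        matches += int(a == b)
    return 100.0 * matches / compared, compared - matches

# Placeholder sequences; in practice these would be read from the ClustalW
# alignment (e.g., an exported FASTA file).
bv1_its = "ACCTGCGGA-AGGATCATTA"
pg_ramosum_its = "ACCTGCGGAAAGGATCGTTA"
identity, differences = pairwise_identity(bv1_its, pg_ramosum_its)
print(f"identity = {identity:.1f}%, differing positions = {differences}")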
Pathogenicity tests
Pathogenicity of Pg. manoomin to wild rice was confirmed by inoculation followed by re-isolation.
After 14 days, 84% of inoculated wild rice plants in all tests developed root and basal stalk rot, consistent with the symptoms observed in diseased wild rice in the field. No symptoms were observed on the other inoculated plant species. In addition, Pg. manoomin caused damping-off of 71% of the wild rice seedlings tested. Pythiogeton manoomin was consistently re-isolated on PARP from symptomatic wild rice plants and seeds, but not from other inoculated plants or seeds or from control plants, thus fulfilling Koch's postulates. Pythiogeton manoomin did not infect the seedlings of Asian rice, bean, chard, corn, carrot, lettuce, oat, radish, sweet pepper, tomato, or wheat. All control plants grown in noninoculated soil remained healthy at the end of the experiment.
Sequence analysis
The internal transcribed spacers (ITS) 1 and 2 flanking the 5.8S rRNA region were amplified by PCR and sequenced using universal ITS5 and ITS4 primers. A BLAST search of the 855 bp sequence revealed 98% similarity with a sequence of Pg. ramosum isolate Pg-164 (GenBank Accession No. JQ610190.1). The 21 nucleotide differences suggest that the isolate from wild rice may be an unreported species. Based on previously published phylogenies, all Pythiogeton species belong to the same clade, separated from the Pythium, Phytophthora, and downy mildew clades. Within the Pythiogeton clade, 10 subgroups (A1-A10) have been reported (Huang et al. 2013). The 21 nucleotide differences in the ITS region place Pythiogeton manoomin isolate BV1 into a new subgroup with a high bootstrap value (Figure 2).
Discussion
The genus Pythiogeton was erected in Germany by Minden in 1916. Since then, many species have been reported around the world (Jee et al. 2000). Of the described species, only three, Pg. zeae, Pg. zizaniae, and Pg. ramosum, are known to be plant pathogens (Jee et al. 2000; Ann et al. 2006; Le et al. 2014). Here, we characterized a new species of Pythiogeton capable of causing root and basal stalk rot of wild rice (Zizania palustris L.). Pythiogeton manoomin can be readily isolated from plant materials (in this case, wild rice) submerged in stagnant water during crop cultivation. Host range tests indicate that the pathogenicity of Pg. manoomin may be limited to wild rice, as it does not infect seedlings of economically important crops such as rice, bean, carrot, chard, corn, lettuce, oat, radish, sweet pepper, tomato, or wheat. However, Pg. manoomin may possibly infect some wild grasses or weeds not tested here. Morphologically, Pythiogeton manoomin is characterized by ovoid to ellipsoid sporangia in terminal or intercalary positions. Like all members of the genus Pythiogeton, Pg. manoomin sporangia release undifferentiated protoplasm through a discharge tube into an elongate transient vesicle, which soon disappears, leaving the naked protoplasm to differentiate into reniform zoospores. Appressoria form sparsely in contact with hard surfaces (i.e., the bottom surface of a Petri dish). Pythiogeton manoomin is morphologically and genetically related to Pg. ramosum. Initially, descriptions of most Pythiogeton species were based exclusively on uncultured material (Minden 1916; Ito and Tokunaga 1935; Batko 1971; Jee et al. 2000). Isolation of some Pythiogeton species is difficult because other soilborne oomycetes outcompete them in culture. Recently, some species have been successfully grown in laboratory cultures (Jee et al. 2000; Ann et al. 2006; Huang et al. 2013). Pythiogeton manoomin and Pg. ramosum can be distinguished from other members of the genus by their readily culturable characteristics, as they can be cultured on a number of media and have nonspecific requirements for growth media (Jee et al. 2000; Ann et al. 2006; Huang et al. 2013). In addition, both species remain viable after subculturing and under storage conditions for up to two years (Jee et al. 2000; Le et al. 2015). The morphology of both species is variable and dependent on the type of water and medium used, so identification can be problematic if based on morphology alone (Le et al. 2015). However, Pythiogeton manoomin can be distinguished from Pg. ramosum based on pathogenicity and optimal growth temperature. Pg. ramosum, like other described Pythiogeton species, grows optimally at temperatures above 30°C and does not grow at lower temperatures of 10-12°C (Huang et al. 2013; Le et al. 2015). In contrast, Pg. manoomin can grow at 10-12°C, with optimal growth at 25°C. Unlike other described Pythiogeton species, Pg. manoomin produces chlamydospores, which may allow for long-term survival (Jee et al. 2000). Chlamydospores are important survival structures of many members of the family Pythiaceae, such as Pythium and Phytophthora species (Hendrix and Campbell 1973; Mitchell 1978). Pythiogeton manoomin can also be distinguished from other Pythiogeton species based on pathogenicity tests. Pg. ramosum is pathogenic on ginger, bean, cauliflower, pepper, and lettuce (Le et al. 2015), Pg. zeae is pathogenic on corn and carrot (Jee et al. 2000), and Pg.
zizaniae is pathogenic specifically to water bamboo and does not infect seedlings of corn, rice, wheat, sorghum, cucumber, tomato, soybean, or water spinach (Ann et al. 2006). Pythiogeton manoomin causes disease only on wild rice among the crops tested here, but may be able to infect some wild grasses or weeds not tested here. Although Pg. manoomin is phylogenetically similar to Pg. ramosum, a complete sequence comparison of the ITS revealed 98% similarity with a sequence of Pg. ramosum isolate Pg-164 (GenBank Accession No. JQ610190.1). However, the 21 nucleotide differences suggest that the isolate from wild rice is an unreported species.
Wild rice, the only cultivated cereal native to North America, is a staple food of indigenous peoples (Hayes et al. 1989; Oelke 1993). Today, wild rice is cultivated and commercialized for its unique flavor, texture, and nutritional value (Oelke et al. 1997). Several fungal diseases are considered important factors that limit wild rice production. For instance, fungal brown spots caused by Bipolaris oryzae (Bean and Schwartz 1961; Malvick and Percich 1993), sheath and stem rots caused by Sclerotium hydrophilum and S. oryzae (Punter et al. 1984), Fusarium head blight caused by Fusarium graminearum (Nyvall et al. 1999), crown and root rot caused by Phytophthora erythroseptica (Gunnell and Webster 1988), and damping-off caused by Pythium torulosum (Marcum and Davis 2006) have been associated with 100% losses in fields where disease was especially severe. This is the first report of the pathogenicity of a Pythiogeton species on wild rice. Infected plants exhibiting rot and dieback of roots and stalks were in various stages of decline, including death. Depending on disease severity, losses caused by Pg. manoomin in individual fields can vary from slight to near-total stand failure. Since Pg. manoomin produces chlamydospores, management may be difficult. In conclusion, based on pathogenicity tests and evaluation of its morphological and molecular taxonomy, we confirm that Pg. manoomin (H. K. Doan and R. M. Davis), isolated from wild rice plants exhibiting rot and dieback of roots and stalks in California, is a distinct new Pythiogeton species.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was partially supported by the Henry A. Jastro Research Award (University of California, Davis). | 2019-07-26T08:07:55.576Z | 2019-06-30T00:00:00.000 | {
"year": 2019,
"sha1": "2a4660336bc826d918d83d8c2cbf25df23d41bf8",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21501203.2019.1635216?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "349407a895aea24376508b5be4d9c05e620e5e19",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
43804469 | pes2o/s2orc | v3-fos-license | Comparison of clinical performance between trifocal and bifocal intraocular lenses: A meta-analysis
Purpose To compare the clinical performance between trifocal and bifocal intraocular lenses in bilateral cataract and/or refractive lens exchange (RLE) surgery. Methods A comprehensive literature search of PubMed, EMBASE, Cochrane Controlled Trials Register and Web of Science was performed through October 2016 to identify randomized, controlled trials (RCTs) and comparative cohort studies. The primary outcomes were uncorrected distance visual acuity (UDVA), uncorrected intermediate visual acuity (UIVA), uncorrected near visual acuity (UNVA), defocus curve, spectacle independence, patient satisfaction and contrast sensitivity. The secondary outcomes were residual sphere, spherical equivalent (SE), cylinder and complications. Results Six RCTs and 2 cohort studies including 568 eyes (278 in the trifocal group and 290 in the bifocal group) were identified. There was a statistically significant difference between the two groups in UDVA (WMD: -0.03, 95% CI: -0.05 to -0.01, P = 0.005), but the difference (0.03 logMAR) is not clinically significant. Intermediate visual acuity was better in the trifocal IOL group judging from UIVA and defocus curves. There was a statistically significant difference between the two groups in residual cylinder (WMD: 0.11, 95% CI: 0.02 to 0.20, P = 0.02), and the AT Lisa tri 839MP trifocal subgroup also showed significantly better UNVA than bifocal IOLs (WMD: -0.13, 95% CI: -0.17 to -0.08, P<0.00001). However, no significant differences were observed in UNVA (WMD: -0.04, 95% CI: -0.11 to 0.02, P = 0.19), spectacle independence (WMD: 1.27, 95% CI: 0.89 to 18.15, P = 0.07), patient satisfaction (WMD: 4.01, 95% CI: 0.07 to 22.72, P = 0.87), residual sphere (WMD: -0.03, 95% CI: -0.18 to 0.13, P = 0.74), SE (WMD: 0.04, 95% CI: -0.09 to 0.16, P = 0.55) or complications (WMD: 2.08, 95% CI: 0.35 to 12.43, P = 0.42). Conclusions Trifocal IOL technology (especially the AT Lisa tri 839MP trifocal) had a clear advantage over bifocal IOLs in intermediate visual acuity, while both trifocal IOLs and bifocal IOLs showed excellent performance in distance visual acuity. The AT Lisa tri 839MP trifocal could provide better uncorrected near visual acuity than bifocal IOLs. However, more evidence is needed to compare their spectacle independence, patient satisfaction, and photic phenomena.
Introduction
The gradual loss of lens accommodation as a person ages (resulting in presbyopia) or as a result of a surgical procedure (mainly cataract surgery) is a reason why patients seek treatment. [1] A range of surgical procedures is available to restore accommodation, such as the application of lasers and corneal inlays [2,3], the implantation of accommodating intraocular lenses (IOLs) [4], etc. Among these procedures, multifocal IOLs constitute the first choice [5-8] for many surgeons because of their ability to provide functional uncorrected vision over a range of distances, with the expectation of achieving spectacle independence. However, multifocal IOLs (both bifocal and trifocal) are still associated with some drawbacks. On the one hand, with traditional bifocal IOLs, the intermediate distance range is still penalized compared with the far and near distance ranges; on the other hand, because multifocal IOL designs divide the incoming light into more than one focus, the light in out-of-focus images reduces the contrast of in-focus images. This effect also reduces contrast sensitivity and the modulation transfer function (MTF) [5], and unwanted visual phenomena, including glare and halos [9], can occur.
One question has inevitably arisen with the advent of trifocal IOLs: whether two foci, distance and near, adequately address visual needs, or whether an intermediate focus is required. [5,40] It should be noted that an added intermediate focus results in better intermediate VA, but it also causes two permanent defocused images instead of one [20], as well as greater complexity of the manufacturing process, both of which could degrade the optical quality of the lens.
The answer to this question relies mostly on which type of IOL achieves better visual acuity, patient satisfaction, and spectacle independence. The evidence, however, has not always been consistent; thus, we conducted a systematic review and meta-analysis of the published randomized, controlled trials and comparative cohort studies to compare the clinical performance metrics mentioned above following bilateral implantation of trifocal IOLs and bifocal IOLs after cataract and/or refractive lens exchange (RLE) surgery.
Sources and methods
This meta-analysis was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement [41,42] and was registered at the International Prospective Register of Systematic Reviews (number CRD 42016048566).
Search strategy
To conduct a systematic search, PubMed, EMBASE, the Cochrane Controlled Trials Register, and Web of Science were searched for articles dated through October 2016. We used combined free-text and MeSH terms, beginning with (((((trifocal[Title/Abstract]) AND ((((("Lenses. No restrictions were placed on the language of publication. In addition, we also performed a manual search for potential trials that might have been missed in the primary searches.
Inclusion and exclusion criteria
The inclusion criteria were studies that reported the clinical outcomes of trials implanting trifocal IOLs and bifocal IOLs involving patients who underwent cataract and/or RLE surgery.
The exclusion criteria were studies involving patients with coexisting pathology, amblyopia, or previous IOL implantation or laser refractive surgery.If identified, studies reporting double data were excluded to retain only one.
Screening process
Two independent reviewers (Ze-quan Xu and Dan-min Cao) respectively conducted a preliminary review of the titles and abstracts.Subsequently, the full articles were carefully analyzed to select the studies that met the criteria mentioned above.
Disagreement between Xu and Cao was resolved through careful discussion-resorting to a third reviewer (Xu Chen) when necessary-until a consensus was reached.
Quality assessment
The Cochrane Collaboration's tool for risk of bias [43] was used to evaluate the quality of RCTs.In short, all of the parameters (including sequence generation, allocation concealment, etc.) were graded as having a low risk of bias, high risk of bias, or unclear risk of bias.
For the included cohort studies, the Newcastle-Ottawa Scale (NOS) [44] was used for quality assessment. The NOS awards a maximum of nine stars, and a score of at least 7 stars indicates good quality. On this scale, a total of four stars are given for patient selection, three for outcome assessment, and two for comparability.
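As a small illustration of how the scoring described above can be tallied, the sketch below sums the stars awarded in the three NOS domains and applies the 7-star threshold for good quality. The per-study star counts are invented examples, not the actual assessments reported in Table 1.

# Tally Newcastle-Ottawa Scale stars (selection: max 4, comparability: max 2,
# outcome: max 3) and flag studies reaching the 7-star "good quality" threshold.
EXAMPLE_SCORES = {            # invented example values, not the real Table 1
    "Study A": {"selection": 4, "comparability": 2, "outcome": 3},
    "Study B": {"selection": 3, "comparability": 2, "outcome": 3},
    "Study C": {"selection": 2, "comparability": 2, "outcome": 3},
}

for study, domains in EXAMPLE_SCORES.items():
    total = sum(domains.values())
    quality = "good" if total >= 7 else "lower"
    print(f"{study}: {total}/9 stars -> {quality} quality")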
Data collection
The available data included study design, IOLs implanted, whether the study group underwent cataract or RLE surgery, the number of patients, UDVA, UIVA and UNVA, defocus curve, spectacle independence, patient satisfaction, contrast sensitivity, residual sphere, SE and cylinder, and postoperative complications.
Outcome measurements
Binocular UDVA, UIVA, and UNVA were recorded in the following manner: using a 100% contrast chart and without spectacles, UDVA was recorded at 4 or 6 m, UIVA at 63, 66, or 70 cm, and UNVA at 33 or 40 cm. All VA values were recorded as logMAR values, and UDVA, UIVA, and UNVA could also be represented in defocus curves.
Spectacle independence was described as having functional vision in different circumstances and at different distances with no use of spectacles.
Patient satisfaction and photic phenomena (side effects) were reported by a validated questionnaire, such as the NEI-RQL, NEI VFQ, and VF-14.
Contrast sensitivity testing was performed under photopic conditions with a luminance of 85 cd/m², as well as under mesopic conditions with a luminance of 3 cd/m².
In addition, data on the postoperative residual sphere, SE, and cylinder, and on postoperative complications, were also collected if provided.
Statistical analysis
All of the statistical analyses were performed using RevMan software (version 5.3; Cochrane Collaboration, Oxford, United Kingdom). Statistical heterogeneity was tested using the chi-square test and the I² statistic. When significant heterogeneity was present (I² > 50%), a random-effects meta-analysis was used; otherwise, fixed-effects models were used [45]. The weighted mean difference (WMD) with a 95% CI was calculated. A p-value of less than 0.05 was considered statistically significant.
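For readers who wish to reproduce this kind of pooling outside RevMan, the sketch below illustrates inverse-variance pooling of mean differences, the DerSimonian-Laird random-effects adjustment, and the I² heterogeneity statistic. It is written in Python with hypothetical per-study values and is not the analysis code used in this meta-analysis.

# Fixed-effect (inverse-variance) and DerSimonian-Laird random-effects pooling
# of mean differences, plus Cochran's Q and the I^2 heterogeneity statistic.
import math

mean_diffs = [-0.05, -0.02, -0.04]   # hypothetical per-study mean differences
std_errs = [0.020, 0.030, 0.025]     # hypothetical standard errors

weights = [1.0 / se ** 2 for se in std_errs]
pooled_fixed = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)

q = sum(w * (d - pooled_fixed) ** 2 for w, d in zip(weights, mean_diffs))
df = len(mean_diffs) - 1
i_squared = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0            # between-study variance
re_weights = [1.0 / (se ** 2 + tau2) for se in std_errs]
pooled_random = sum(w * d for w, d in zip(re_weights, mean_diffs)) / sum(re_weights)
se_random = math.sqrt(1.0 / sum(re_weights))
ci_low, ci_high = pooled_random - 1.96 * se_random, pooled_random + 1.96 * se_random

print(f"fixed-effect WMD   = {pooled_fixed:.3f}")
print(f"I^2                = {i_squared:.1f}%")
print(f"random-effects WMD = {pooled_random:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")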
Literature search
The search identified a total of 76 abstracts after removal of duplicates, of which 68 met the above inclusion criteria. Full versions of all 68 papers were scrutinized against the exclusion criteria; 59 papers were excluded for the following reasons: non-comparative studies, n = 32; experimental studies, n = 15; different research question, n = 8; and no available data, n = 4. Of the nine studies included in the qualitative synthesis, one study was excluded because it and another study [46,47] reported duplicate data. Finally, eight studies [33,47-53] were included in the quantitative meta-analysis. The trial selection process is shown in Fig 1. Two entirely different studies were performed by Gundersen in 2016 and two studies were performed by Plaza in 2016, and part of the data (the data on the trifocal IOL groups) from the two studies is duplicated.
Risk of bias assessment
None of the included RCTs [48-53] described the specific methods of random sequence generation or blinding of outcome assessments, except for one RCT that mentioned the blinding of patients [48], and only one study performed randomization using the software provided by www.random.org. [50] In practice, in IOL exchange surgery, masking of the surgeon is impossible, and persuading patients to choose randomly between trifocal and bifocal IOLs, which differ sharply in price, is difficult. Thus, randomization is quite difficult to realize in these studies. After discussion within our team, we finally downgraded these RCTs to non-randomized comparative studies in our meta-analysis, and their quality was assessed by the NOS as in the other cohort studies [33,46], as shown in Table 1. All of the studies described missing patients, but no study had missing cases, and all of the studies reported all of their main results; thus, all of them received two stars for comparability (two stars maximum) and three stars for outcome assessment (three stars maximum). However, some studies had flaws in patient selection (four stars maximum): one study did not match preoperative near visual acuity, [48] one study did not match preoperative uncorrected visual acuity, [33] one study did not match preoperative corrected visual acuity, [51] and three studies did not discuss preoperative VA at all. [49,52,53]
Characteristics of included studies
In the present meta-analysis, Table 1 shows the characteristics of the 8 included studies, and Table 2 shows the summary of outcomes (including the overall quality of evidence judged by GRADE/GDT).
Residual SE: the intervention group was 0.05 higher (0.07 lower to 0.16 higher); 185 eyes (5 studies).
Residual cylinder: the intervention group was 0.11 higher (0.02 to 0.20 higher).
Subgroup analysis according to the type of trifocal IOL involved was also conducted. These studies were divided into three subgroups: Fine Vision (subgroup 1); AT Lisa tri 839MP trifocal (subgroup 2); and mixed Fine Vision and AT Lisa tri 839MP trifocal (subgroup 3). The results from subgroup 2 revealed that the AT Lisa tri 839MP trifocal obtained significantly (both statistically and clinically) better UNVA than bifocal IOLs (WMD: -0.13, 95% CI: -0.17 to -0.08, P<0.00001) (Fig 3).
Defocus curve. There were seven studies [45,47-52] reporting defocus curves (Table 3). One study (Plaza et al., 2016) [46] reported on both trifocal IOL subgroups (the Fine Vision group and the AT Lisa tri 839MP group), so we compared the trifocal IOLs in the two subgroups with the bifocal group separately. The general results showed that trifocal IOLs had better performance from -2.5 D to 0 D, and the AT Lisa tri 839MP group had significantly better performance from -0.5 D to -1.5 D.
Spectacle independence. There were only two studies [48,50] reporting spectacle independence. Spectacle independence in the trifocal group was insignificantly better than that in the bifocal group (WMD: 1.27, 95% CI: 0.89 to 18.15, P = 0.07) (Fig 4).
Contrast sensitivity. Under photopic conditions, contrast sensitivity was better at 6 c/d and 12 c/d in the trifocal group; there was a study reporting contrast sensitivity under mesopic conditions [48], with contrast sensitivity significantly worse at 6 c/d in the trifocal group. There was also a study reporting contrast sensitivity under both photopic and glare conditions [50], in which contrast sensitivity was insignificantly better at 6 c/d and 12 c/d and insignificantly worse at 1.5 c/d in the trifocal group.
Phenomena (side effects). The occurrence of side effects with multifocal IOLs, mainly glare and halos, was also reported in two studies. For glare, Jonker [48] reported a score of 55 in the trifocal group and 61 in the bifocal group on the NEI-RQL questionnaire (a score of 100 refers to the best quality of life); however, Cochener [50] reported that 92% of patients suffered from glare in the trifocal group, compared to only 67% in the bifocal group. Cochener [50] also reported that 58% of patients suffered from halos in the trifocal group, compared to only 50% in the bifocal group.
Secondary outcomes
The secondary outcomes were residual sphere, spherical equivalent (SE) and cylinder, and post-operative complications.
Discussion
To our knowledge, this study is the first meta-analysis to compare clinical performance between bifocal and trifocal IOLs, and it shows the realistic benefits of the recently developed trifocal IOL technology. As we mentioned above, the answer to the question of whether an intermediate focus (the essence of the trifocal IOL) is required relies mostly on the visual acuity performance (far, intermediate, and near), patient satisfaction, and spectacle independence of the trifocal IOL. [5,40] We found significantly better UDVA following trifocal IOL implantation; however, the difference was not clinically significant. This is a very encouraging result, since UDVA following bifocal IOL implantation was already satisfactory: the mean UDVA following bifocal IOL implantation was 0.04±0.001 logMAR based on 37 studies of 6334 patients in a previous meta-analysis [7]; our counterpart was 0.057±0.129 logMAR following bifocal IOL implantation, and -0.002±0.107 logMAR following trifocal IOL implantation, based on 6 studies of 102 patients. To conclude, we are totally convinced that both trifocal and bifocal IOLs have excellent performance (approximately 0 logMAR) in distance visual acuity.
The demand for intermediate vision, which is important for daily activities at arm's length or longer, is high for most patients, especially those younger than 65 years old [6]. There were only 3 studies reporting the mean value and SD of UIVA. To complicate matters, different tests and variable distances (66 cm [50,51] or 70 cm [48]) were used for intermediate visual acuity measurements, and the quality of evidence from the meta-analysis of these results was very low. However, the binocular defocus curves of the seven included studies showed that the trifocal group had statistically significantly better VA at -1.5 D (corresponding to the vision of a target at 67 cm) compared to the bifocal group, especially in the AT Lisa tri 839MP trifocal subgroup. It is worth noting that, in our meta-analysis, the AT Lisa tri 839MP trifocal had statistically significantly better UIVA than bifocal IOLs, and the difference was of clinical significance (0.23 logMAR). Actually, better intermediate visual acuity is entirely expected with trifocal IOLs, since the essence of a trifocal IOL is a true intermediate focal point. A few previous benchmark studies [20,21,38,54] have found that the trifocal lens provided a true third intermediate focal point not found with the bifocal lens. Further, in Carson et al.'s study, [20] both the AT LISA tri and Fine Vision trifocal IOLs demonstrated an intermediate focal point at approximately 80 cm, while the ReSTOR demonstrated no intermediate focal point. The intermediate-focus MTF values of the AT LISA tri and Fine Vision at 20/20 visual acuity were similar to each other (14.0% and 13.6% for the Fine Vision and AT LISA tri, respectively), while the counterpart values of the ReSTOR IOL were much lower (5.0%-7.2%). Similar results were also observed in other studies. [55] Thus, we are totally convinced that the introduction of a third focal point could effectively enhance the VA at the intermediate distance, especially with the AT LISA tri 839MP trifocal IOL. We found significantly better UNVA following trifocal IOL implantation in one subgroup (the AT Lisa tri 839MP group) than after bifocal implants, but no significant differences between the trifocal and bifocal groups were found in the other subgroups. In Carson et al.'s study [20], the near foci of the AT Lisa tri 839MP were in the range of 38 to 40 cm, while the near foci of bifocal IOLs were in the range of 44-53 cm, which could partially explain why some trifocal IOLs had better UNVA performance, since UNVA in this meta-analysis was measured at 33 cm [33,50] or 40 cm. [46,48,51] The defocus outcomes were consistent with UDVA, UIVA, and UNVA measured with the standard tests. A continuous defocus curve was found with trifocal IOLs: all of the bifocal IOLs had a V-pattern defocus curve with a sharp gap for intermediate vision, while trifocal IOLs had a minimal decrease in VA at the intermediate range (at a vergence of -1.5 D), consistent with a previous study under experimental conditions. [22] Thus, we can conclude that the additional intermediate focal point in the trifocal lens was effective and did not seem to negatively impact distance or near vision.
Spectacle independence in our meta-analysis did not show a significant improvement with trifocal IOLs. There are two probable reasons for this outcome: first, spectacle independence with bifocal IOLs was already very high (a ceiling effect), both in our meta-analysis (mean value of 71%) and in a previous meta-analysis (mean value of 80.1% reported across 63 studies) [7]; second, the number of included studies was still too limited (only two). To further compare spectacle independence between the two types of IOL, more studies are necessary, especially studies reporting spectacle use at the intermediate distance specifically. [56] Until then, it is too early to conclude that trifocal IOLs achieve spectacle independence more frequently.
Patient satisfaction is, without a doubt, a very significant issue in determining which multifocal IOL option becomes more prevalent. [1,57,58] In our meta-analysis, no significant difference in satisfaction was observed between the IOLs. However, only two studies made direct comparisons of patient satisfaction. The limited number could be one reason that we did not obtain significant results. Another reason could be that patient satisfaction with bifocal IOLs was already very good (a ceiling effect), both in our results (93% to 100%) and in a previous meta-analysis (61.8% to 100%). [7] Patient satisfaction is a very complex issue, and many aspects beyond the technological characteristics of the IOLs, such as the characteristics of the patients and the role of the surgeon, could affect it. [55,59,60] Thus, it is not an easy task to compare patient satisfaction between different IOLs.
Contrast sensitivity is a test required by the Food and Drug Administration (FDA) in evaluating multifocal IOLs, [58] and the FDA has placed a warning label on the ReSTOR and ReZoom IOLs advising caution when driving at night or in other poor-visibility conditions because of the decrease in contrast sensitivity. A decrease in contrast sensitivity is a well-known side effect of bifocal IOLs. [47,61] Further, bifocal IOLs can cause up to a 50% reduction in contrast sensitivity, [62] with two-thirds of 31 studies in a previous meta-analysis [7] reporting reduced contrast sensitivity with bifocal IOLs compared with monofocal IOLs. In our meta-analysis, under photopic conditions (with a luminance of 85 cd/m²), the trifocal IOL group had significantly better performance at 3 cpd in Mojzis et al.'s study [51]. Meanwhile, all trifocal IOL groups had better performance at 6 cpd and 12 cpd, but the differences were not statistically significant. However, under mesopic conditions (with a luminance of 3 cd/m²), performance in the bifocal group was statistically significantly better at 6 cpd (P < 0.01) in Jonker et al.'s study. [48] In Plaza et al.'s study, [47] also under mesopic conditions, no significant differences were observed between the trifocal and bifocal groups at frequencies of 1.5, 3.0, 6.0, and 12.0 cpd. More studies are needed to compare contrast sensitivity performance between trifocal and bifocal IOLs, especially under mesopic conditions.
Only two studies in our meta-analysis made direct comparisons regarding the occurrence of photic phenomena (mainly glare and halos). Both the trifocal and bifocal groups experienced halos and glare. In Carson et al.'s study, [20] halos surrounding the simulated headlight targets were smaller with the bifocal IOL than with the trifocal IOLs, and trifocal IOLs were associated with increased background halos. Photic phenomena were usually reported with a validated questionnaire, such as the NEI-RQL, [48] NEI VFQ, [49,52] or VF-14 [50]; this varied methodology, together with the different postoperative follow-up times (which varied in the included studies from 3 to 12 months), makes it difficult to compare photic phenomenon rates directly. The longer the follow-up, the fewer patients complained about halos and glare, and patients tend to become more tolerant of photic phenomena over approximately 6 months [7]. We did not obtain conclusive results about which type of IOL performs better with respect to photic phenomena. Further studies using quantitative instruments, such as halometry [63] or light distortion analysis [64], are needed.
Residual sphere and spherical equivalent (SE) did not show significant differences between the trifocal and bifocal groups, but residual cylinder was significantly lower in the trifocal group. However, we found that preoperative cylinder was significantly lower in the trifocal IOL group in Gundersen et al.'s study [49] and in Plaza et al.'s study (AT LISA tri 839MP IOL subgroup), [46] and it was insignificantly lower in the trifocal group in two studies. [33] At the same time, two studies did not report the preoperative cylinder. [50,52] Only one study reported an insignificantly lower preoperative cylinder in the trifocal group. [48] It is possible that surgeons chose patients with less preoperative cylinder as trifocal candidates, especially in non-randomized studies.
Serious postoperative complications were rare, with most studies reporting no adverse events regardless of IOL type; decentration was the only complication found in our meta-analysis.Current studies have not routinely included adverse complications in their outcome measurements [7]; thus, a larger population study with routine measurement of postoperative complications is needed.
Inevitably, the present meta-analysis had several limitations. First, publication bias can occur; to minimize it, we conducted an electronic search and a manual search of the references of the relevant studies to identify all potentially relevant articles, while we excluded special reports and unpublished data, which can introduce publication bias. Second, it should be noted that not all of the trials were randomized, which can be explained by certain ethical constraints. [5] Even for RCTs, the studies seldom had adequate sequence generation or adequate methods to achieve allocation concealment, and none of the studies mentioned masking. RCTs that evaluate the visual outcomes after implantation of multifocal IOLs are still very rare; thus, more high-quality RCTs, multicenter RCTs in particular, are expected. Third, one study received grants from Alcon Laboratories and Carl Zeiss Meditec, [48] one study was funded by FineVision, [49] and one study received grants from Alcon Laboratories. [52]
Conclusions
In this meta-analysis, the results demonstrated the efficacy of the trifocal approach compared with the bifocal IOL. In terms of intermediate visual acuity performance, trifocal IOL technology (especially the AT LISA tri 839MP IOL) had a clear advantage over the bifocal IOL, while both had excellent performance in distance visual acuity. Near visual acuity was not negatively impacted by trifocal IOLs compared with bifocal IOLs and could actually be enhanced by the AT LISA tri 839MP trifocal. However, more evidence is needed to decide which type of IOL has better performance in spectacle independence, patient satisfaction, and photic phenomena. Postoperative complications following implantation of both trifocal and bifocal IOLs are rare and are often amenable to treatment.
Fig 5. Meta-analysis of postoperative residual sphere and spherical equivalent (SE). SD = standard deviation; CI = confidence interval. https://doi.org/10.1371/journal.pone.0186522.g005 | 2018-04-03T05:50:30.483Z | 2017-10-26T00:00:00.000 | {
"year": 2017,
"sha1": "0ab32d2c1dfd9ce406a9afe130c75d0efb16bd23",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0186522&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a21d4d155191280d8dd7e3053c7794e3dae619a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19104316 | pes2o/s2orc | v3-fos-license | Past Results, Present Trends, and Future Challenges in Intrabody Communication
Intrabody communication (IBC) is a wireless communication technology that uses the human body to develop body area networks (BANs) for remote and ubiquitous monitoring. IBC uses living tissues as a transmission medium, enabling power-saving and miniaturized transceivers and making communications more robust against external interference and attacks on the privacy of transmitted data. Due to these advantages, IBC has been included as a third physical layer in the IEEE 802.15.6 standard for wireless body area networks (WBANs), designated as Human Body Communication (HBC). Further research is needed to compare both methods depending on the characteristics of the IBC application. Challenges remain for an optimal deployment of IBC technology, such as the effect of long-term use in the human body, communication optimization through more realistic models, the influence of both anthropometric characteristics and the subject's movement on transmission performance, standardization of communications, and the development of small-size and energy-efficient prototypes with increased data rates. The purpose of this work is to provide an in-depth overview of recent advances and future challenges in human body/intrabody communication for wireless communications and mobile computing.
Introduction
Intrabody communication (IBC) techniques make use of the human body as a transmission medium for electrical signals to interconnect devices in wireless body area networks (WBANs). These devices can be both on-body and implanted (in-body) and communicate with each other and with a central device through low power consumption and low data rate body channels [1]. These links connect two on-body devices, two in-body devices, and an on-body with an in-body device [2], as described in Figure 1. These devices are both sensors that record and process physiological variables of interest and actuators that receive information and generate some type of response, for example, electronic drug delivering pumps. The central device, which coordinates and manages the operation of the network, sends data through an external long-distance link to a medical decision center for evaluation. This link is usually implemented with Wireless Local Area Network (WLAN) standards, which support larger amounts of data [3]. This way, this central device acting as an external gateway supports the higher transmission power in the network, thus minimizing the power consumption of the rest of the sensors and actuators [4].
The IEEE 802.15.6 standard for WBANs includes IBC as a physical layer under the designation Human Body Communication (HBC); first proposed by Zimmerman in 1996 [5], this technique is specified in the standard with a central frequency of 21 MHz and data rates in the range from 164 to 1312.5 kb/s [6]. In the IBC architecture originally proposed by Zimmerman, a transmitter placed on the skin modulates an electric field, which is driven by small currents through the human body towards the receiver, which demodulates the signal and recovers the encoded information [7]. The environment provides a signal return path towards the transmitter by electrostatic close-field coupling. In general terms, the IBC communication defined by Zimmerman is based on a transmitter that generates an electric potential, thus disturbing its immediate external environment, and a receiver that detects such a disturbance. The carrier frequencies used are sufficiently low to be associated with near-field components over the range of distances covered through the human body, thus minimizing energy consumption and interference with nearby devices. Also, voltage levels are well below the maximum thresholds allowed, to guarantee safety. In near-field coupling, the power decreases with the cube of the distance; however, in the BAN scenario, this represents a double advantage. On the one hand, the distances to be covered in the human body are small enough to guarantee signal reception, and, on the other hand, the signal is not radiated out to the environment, therefore improving the energy efficiency and the robustness of the system.
Generally, IBC communication is established through the human body; nevertheless the signal paths may vary depending on the configuration of the electrodes and the different signal coupling modes. In the literature, in addition to the electrostatic or capacitive coupling technique, the galvanic coupling technique has also been proposed. The first establishes two signal paths, a first path through the human body and a second return path through the air by electrostatic coupling. This feature allows the interconnection of devices that are both deployed on the same body surface or close to it, without the need for direct contact with the skin.
Alternatively, the galvanic coupling technique establishes a single signal differential path through a current flow that penetrates into internal tissues. Due to this characteristic, galvanic coupling has frequently been proposed as a viable alternative for the communication between implanted sensors.
The purpose of this paper is to provide a complete overview of recent advances and future challenges in human body/intrabody communication for wireless communications and mobile computing. The content of the survey is organized in the following sections and subsections:
(i) Section 2: IBC physics and modeling
(1) Section 2.1: capacitive and galvanic coupling;
(2) Section 2.2: bioelectric basis of signal transmission through human tissues;
(3) Section 2.3: safety requirements and regulations in IBC;
(4) Section 2.4: modeling of IBC channels.
IBC Physics and Modeling
In this section an overview of underlying physics, bioelectric mechanisms, and channel models for IBC is presented. First, the main two coupling techniques proposed in the literature, galvanic and capacitive coupling, are reviewed, compared, and discussed. Then, the bioelectric principles of signal transmission through human tissues, which allow the IBC phenomenon to be explained, are described in this section. Finally, the major safety requirements in IBC, as well as the main channel modeling approaches, are shown.
Capacitive and Galvanic Coupling.
In this subsection, a detailed explanation of the characteristics of the two previously mentioned techniques, capacitive and galvanic coupling, is provided.
Capacitive Coupling.
In the capacitive coupling technique, an electrical potential is generated between the signal and ground electrodes of the transmitter device (TX). This potential is sensed by a receiver device (RX) which also has a pair of signal and ground electrodes. While the signal electrode can be placed directly on the surface of the human body or very close to it, a floating ground electrode is directed towards the external environment. Figure 2 shows a schematic of the distribution of electric fields in the capacitive coupling technique. The electric field induced by the TX signal electrode is coupled to the human body and passes through it establishing a first direct signal path. Part of this field is lost through the feet towards the external ground. In turn, due to the fact that the grounds of the TX and the RX are not physically connected, a second return path through the air appears. In this way, the electric field received at the RX is very weak but sufficient enough to be detected [17]. Finally, the transmission loop is closed through the return path through the air between the TX and RX grounds. Although the human body behaves like a conductor, the couplings between the electrodes, air, and external ground can be modeled as capacitances, hence the name of this technique [18]. The values of these capacitances are very small, often making the signals obtained at the receiving end very weak due to the poor return paths being established through the external ground [19]. The main advantage of this technique compared with other radio frequency techniques is that much of the signal is confined to the body surface as the human body behaves like an electric conductor, thus minimizing the transmission power required.
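To give a feel for the orders of magnitude involved, the short sketch below evaluates the impedance magnitude, |Z| = 1/(2πfC), of a coupling capacitance at the 21 MHz HBC carrier. The capacitance values are illustrative assumptions in the picofarad range typical of body-to-environment coupling, not parameters measured for any particular setup.

# Impedance magnitude of an assumed coupling capacitance at the 21 MHz HBC
# carrier, for a few illustrative picofarad-range values.
import math

F_CARRIER_HZ = 21e6  # IEEE 802.15.6 HBC central frequency

def capacitive_impedance_ohm(c_farad: float, f_hz: float = F_CARRIER_HZ) -> float:
    return 1.0 / (2.0 * math.pi * f_hz * c_farad)

for c_pf in (1.0, 10.0, 100.0):       # assumed coupling capacitances
    z_ohm = capacitive_impedance_ohm(c_pf * 1e-12)
    print(f"C = {c_pf:5.1f} pF -> |Z| ~ {z_ohm:8.1f} Ohm")
# Capacitances of only a few pF present kilo-ohm impedances at 21 MHz, which
# helps explain why the signal returned through the environment is weak.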
In summary, the prevalent signal channel is the capacitive return path established through the air, which makes it strongly dependent on the external environmental conditions [20]. In addition, as the frequency increases, the signal radiation into the air is no longer negligible. Some of these disadvantages are minimized with the use of the galvanic coupling mode, which is presented in the following section.
Figure 3: Schematic of the galvanic coupling technique. This technique is based on a differential configuration scheme in which an electric current is applied into the skin, thus leading to two different current pathways: a primary current flow through the TX electrodes and a secondary flow through the inner tissues towards the RX electrodes.
Galvanic Coupling.
In the galvanic coupling technique, four electrodes are attached to the human body, two at the TX site for the injection of an electric current and two at the RX end. Figure 3 shows a schematic of the distribution of electric currents in the galvanic coupling technique. Since electric current is injected into the human body, galvanic IBC devices must comply strictly with electromagnetic compatibility regulations, avoiding interference with other biomedical devices. In addition, in order to avoid any adverse health problem, transmitted signals must be well below the limits specified by international guideline institutions [21][22][23]. Transmission frequencies are selected above 10 kHz so as not to interfere with endogenous physiological signals. In addition, the typical injected current amplitudes are in the range of 1 to 2 mA [24]. The experimental results demonstrate that the galvanic coupling technique is an appropriate method for the transmission of signals between devices placed on the chest, or at short distances on the limbs, in the range between 10 kHz and 1 MHz [25]. In addition, since the injected currents are very weak, this mode requires a lower transmitted signal level than other forms of communication based on radio frequency standards.
Comparison and Discussion.
Each of these techniques presents its own advantages and disadvantages and is more or less viable depending on the application, particular design requirements, frequency range, and distances to be covered in the human body. Capacitive coupling has been extensively used in the literature, leading to transceivers with data rates of up to 10 Mbps [26]. However, since this method requires an external ground reference, the viability of the communication sometimes depends on the influence of environmental conditions, the isolation strategies used, and so forth, and it is very sensitive to interference and subject movements [27,28]. On the other hand, galvanic coupling has proved to be more robust against interference, since the signal is differentially applied through two electrodes without a common reference [29]. Since the signal is mainly confined to the body channel, the transmission depends to a greater extent on the frequency-dependent dielectric properties of tissues [30]. However, galvanic coupling presents the limitation of a reduced bandwidth and a lower data rate. Table 1 presents a brief comparison of the main characteristics of both techniques.
Bioelectric Basis of Signal Transmission through Human Tissues.
The problem of the characterization of the human body as a communication channel has been addressed in the literature from a theoretical perspective, with the proposal of different models to analyze the main mechanisms of IBC transmission, and from an experimental viewpoint, with the use of electronic setups specifically designed for the measurement of human body attenuation. As highlighted in previous sections, IBC techniques are based on the transmission of electric signals through the human body as a transmission medium, which presents bioelectric properties that must necessarily be taken into account as the fundamental basis of the theoretical models and experimental setups developed. The study of the interaction of electric fields with the human body in IBC transmission implies the need for a consistent electrical characterization of the dielectric properties of biological tissues, that is, conductivity and permittivity, which, respectively, determine the ohmic currents and the magnitude of the polarization effects, and which are in turn frequency dependent. In living tissues, this dependency is intimately linked to the interaction between the electromagnetic fields and the ions and molecules contained in both the intra- and extracellular media. The main dielectric properties of biological tissues were first reported by Foster and Schwan in 1989 [123], where it was shown that the dispersions observed in living tissues are fundamentally four (three main dispersions, $\alpha$, $\beta$, and $\gamma$, at low, medium, and high frequencies, and a minor dispersion, $\delta$, at very high frequencies). Each of these dispersive regions is the manifestation of a polarization mechanism governed by a time constant ($\tau$), which in a first approximation leads to the following expression for the complex electrical permittivity ($\varepsilon^{*}$) of a medium as a function of the angular frequency ($\omega = 2\pi f$):
$$\varepsilon^{*}(\omega) = \varepsilon'(\omega) - j\varepsilon''(\omega) = \varepsilon_{\infty} + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + j\omega\tau}.$$
This is the well-known Debye equation, where $\varepsilon'(\omega)$ and $\varepsilon''(\omega)$ are the real and imaginary parts of the complex permittivity, $\varepsilon_{\infty}$ is the permittivity at frequencies where $\omega\tau \gg 1$, $\varepsilon_{s}$ is the permittivity at frequencies where $\omega\tau \ll 1$, and $j$ is the imaginary unit ($j^{2} = -1$). However, the complexity of the structure and composition of biological materials is such that Debye's expression fails to emulate their dielectric properties over a wide range of frequencies, and, therefore, distribution parameters $\alpha_{n}$ ($0 < \alpha_{n} < 1$) need to be introduced in each of the dispersive regions, giving rise to the famous Cole-Cole equation [124]:
$$\varepsilon^{*}(\omega) = \varepsilon_{\infty} + \sum_{n} \frac{\Delta\varepsilon_{n}}{1 + (j\omega\tau_{n})^{1-\alpha_{n}}} + \frac{\sigma_{i}}{j\omega\varepsilon_{0}},$$
where $\sigma_{i}$ is the static conductivity of the tissue and $\varepsilon_{0}$ is the permittivity of free space. In addition, the value of the complex conductivity can be found according to $\sigma^{*}(\omega) = j\omega\varepsilon_{0}\varepsilon^{*}(\omega)$.
One of the most relevant works in this field was presented by Gabriel et al. in 1996 [8, 31], where the conductivity and permittivity of some of the most important living tissues were reported from 10 Hz to 20 GHz. In addition, Gabriel et al. also provided the corresponding Cole-Cole model parameters for these tissues. Table 2 shows the parameters reported for some of the human tissues most commonly used in the IBC literature (dry skin, fat, muscle, and bone). Figures 4 and 5 show the permittivity and conductivity values for these tissues as a function of frequency.
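As an illustration of how such tabulated parameters are turned into frequency-dependent tissue properties, the sketch below evaluates the Cole-Cole expression given above. It is a minimal example: the single dispersion term and all numerical values are placeholders chosen for demonstration only, not the tissue-specific four-dispersion parameters reported by Gabriel et al. or listed in Table 2.

```python
import numpy as np

EPS0 = 8.854e-12  # permittivity of free space (F/m)

def cole_cole(f, eps_inf, deltas, taus, alphas, sigma_i):
    """Multi-dispersion Cole-Cole model of the complex relative permittivity.
    deltas/taus/alphas are per-dispersion magnitudes, relaxation times and
    broadening parameters; sigma_i is the static ionic conductivity (S/m)."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    eps = np.full_like(w, eps_inf, dtype=complex)
    for d, t, a in zip(deltas, taus, alphas):
        eps += d / (1 + (1j * w * t) ** (1 - a))
    eps += sigma_i / (1j * w * EPS0)
    return eps

# Placeholder single-dispersion parameters (illustrative only).
f = np.logspace(3, 8, 6)  # 1 kHz .. 100 MHz
eps_star = cole_cole(f, eps_inf=4.0, deltas=[1e4], taus=[1e-6],
                     alphas=[0.1], sigma_i=0.2)

eps_r = eps_star.real                                   # relative permittivity
sigma = (1j * 2 * np.pi * f * EPS0 * eps_star).real     # sigma*(w) = j*w*eps0*eps*(w)
for fi, p, c in zip(f, eps_r, sigma):
    print(f"{fi:12.0f} Hz  eps_r = {p:10.1f}  sigma = {c:6.3f} S/m")
```

Consistent with Figures 4 and 5, the real permittivity obtained in this way falls with frequency while the conductivity rises; substituting the published four-dispersion tissue parameters for the placeholder values should reproduce the tabulated curves.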
It must be noticed that the β dispersion is mainly due to the polarization of cell membranes, whose behavior can be modeled as a capacitance, which appears due to the existence of regions of high conductivity (the extracellular and intracellular media) separated by regions of low conductivity (the cell membranes). This way, at low frequencies, the current flows almost exclusively through the extracellular fluid surrounding the cells, due to the high impedance of the cell membrane (see Figure 6(a)). As the frequency increases, the "membrane plates" are charged and discharged very rapidly, reducing the membrane impedance and allowing a small amount of current to penetrate into the cell. In this way, from tens of MHz upwards the current flows directly through both the intra- and extracellular media (see Figure 6(b)). This effect leads to the increase in conductivity and the decrease in permittivity seen in Figures 4 and 5 at higher frequencies. IBC, which is in part based on the coupling of currents into the human body, can be explained according to this phenomenon of electrical conduction through biological tissues. The selection of the appropriate carrier frequency in IBC arises from a trade-off between several factors: the application in use, the type of signal coupling, safety regulations, avoidance of interference with common biological signals, specifications of very low consumption and high tissue conductivity, external noise, and so forth.
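The membrane-charging mechanism just described can be made concrete with a minimal numeric sketch. A cell suspension is idealized here as an extracellular resistance in parallel with the series combination of a membrane capacitance and an intracellular resistance; all element values are illustrative assumptions and are not taken from the cited works.

```python
import numpy as np

# Minimal sketch of the membrane-charging mechanism behind the beta dispersion.
# Extracellular path Re in parallel with (membrane capacitance Cm in series
# with intracellular path Ri). All element values are illustrative assumptions.
Re = 1000.0   # extracellular fluid path (ohm)
Ri = 300.0    # intracellular path (ohm)
Cm = 1e-9     # lumped membrane capacitance (F)

for f in (1e3, 1e5, 1e7, 1e9):
    w = 2 * np.pi * f
    z_cell = Ri + 1 / (1j * w * Cm)           # membrane + cytoplasm branch
    z_total = 1 / (1 / Re + 1 / z_cell)       # in parallel with extracellular path
    frac_cell = abs(z_total / z_cell)         # current-divider fraction into the cell
    print(f"{f:12.0f} Hz : |Z| = {abs(z_total):7.1f} ohm, "
          f"current through the cell = {100 * frac_cell:5.1f} %")
```

With these assumed values, almost no current crosses the membrane at 1 kHz (the total impedance is essentially the extracellular resistance), whereas above tens of megahertz most of the current also flows through the intracellular path and the overall impedance drops, mirroring the conductivity increase described above.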
Safety Requirements and Regulations in IBC.
The essence of the IBC process is electromagnetic signal transmission in, on, or around the human body, relying in part on the near field. Excessive electromagnetic absorption may cause unexpected harmful effects on the functionality of human organs [125]. The most referenced regulations are the ICNIRP (International Commission on Non-Ionizing Radiation Protection) guidelines, which divide the basic limits into occupational exposure and general public exposure according to different group characteristics [22,23]. The occupational exposure limits are set for occupational groups working in a controlled radiation environment, who are professionally trained to take appropriate measures to protect themselves. The general public exposure limits refer to the general population of different genders, ages, and health statuses, such as the elderly, women, and children, who do not undergo professional training to avoid radiation. The ICNIRP guidelines define three different physical quantities as the basic limits at different frequencies.
When the frequency ranges from 1 Hz to 10 MHz, the main limiting physical quantity is the current density (J) or the internal electric field (E); from 100 kHz to 10 GHz, the main limiting physical quantity is the Specific Absorption Rate (SAR). The last quantity, the power density (S), is used for extremely high frequencies, from 10 GHz to 300 GHz [22]. Note that existing IBC research is mainly focused on the frequency range from 10 kHz to 100 MHz. Table 3 lists the basic ICNIRP limits for current density, electric field, and SAR at different frequencies under conditions of occupational and general public exposure [22,23]. It must be noticed that the ICNIRP guidelines also emphasize that the prescribed limits are not boundaries that distinguish safety from harm, but rather upper limits of acceptable levels of protection [126]. This way, in the frequency range of IBC systems the current density should not exceed f/500 mA/m² (f in Hz), the internal electric field should be lower than 1.35 × 10⁻⁴ f V/m, and the SAR should be lower than 4 W/kg; these values are taken as the evaluation criteria to analyze the safety effects of galvanic-coupled weak IBC signals with different amplitudes and frequencies acting on the human arm [22,23]. In addition, thermal effects may also appear, caused by the IBC transceiver's continuous operating mode. According to the JSMEBE (Japan Society of Medical Electronics and Biological Engineering), temperatures below 42.5 °C are safe for the tissues surrounding the electrical circuit [127]. As will be discussed in the next section, the existing IBC literature is mainly focused on human body channel modeling with analytical and numerical methods, distributed equivalent circuit modeling, transceiver design, and phantom or in vivo experiments. Therefore, the study of electromagnetic safety in IBC is still an outstanding issue. Despite this, the study of the current density distribution and potential amplitude is also involved in the study of human channel modeling. A five-layer concentric cylinder equivalent to the human arm was designed by Callejón et al. in [9], obtaining the current density distribution of each tissue layer for different frequencies and electrode distances. Lučev et al. [128] simplified the human arm to a four-layer concentric cylinder with a radius of 5 cm and a height of 45 cm, and they studied the proportion of the current density distribution in different tissues. The results showed that the current density in the muscle layer is the largest. Based on the special working conditions of galvanic-coupled IBC, the authors of [129] simplified the human arm model to a four-layer concentric cylinder and studied the current density distribution in each tissue layer when the electrical conductivity of the muscle was changed under different frequencies.
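For orientation only, the helper below encodes the three general-public figures quoted above (current density not exceeding f/500, internal electric field below 1.35 × 10⁻⁴ f, and a SAR limit of 4 W/kg) as a rough compliance check; it is a sketch under those assumptions and not a substitute for the full guideline tables, frequency ranges, and averaging rules.

```python
def icnirp_general_public_ok(freq_hz, current_density_ma_m2=None,
                             e_field_v_m=None, sar_w_kg=None):
    """Rough check against the general-public figures quoted in the text:
    J <= f/500 (mA/m^2), internal E <= 1.35e-4 * f (V/m), SAR <= 4 W/kg.
    Sketch for orientation only; consult the ICNIRP tables for actual use."""
    ok = True
    if current_density_ma_m2 is not None:
        ok = ok and current_density_ma_m2 <= freq_hz / 500.0
    if e_field_v_m is not None:
        ok = ok and e_field_v_m <= 1.35e-4 * freq_hz
    if sar_w_kg is not None:
        ok = ok and sar_w_kg <= 4.0
    return ok

# Hypothetical example: an in-tissue current density of 50 mA/m^2 at 100 kHz
# is below the quoted general-public figure of f/500 = 200 mA/m^2.
print(icnirp_general_public_ok(1e5, current_density_ma_m2=50.0))  # True
```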
Although some works have reported on the effects of the weak IBC electrical signal on the human body, few of them used the electromagnetic safety guidelines to evaluate the effects of IBC signals acting on the body. This is therefore an open issue which needs to be analyzed in greater detail.
Modeling of IBC Channels.
Unlike the empirical approximations usually used in the design of IBC prototypes, mainly based on the optimization of different parameters such as data rate, consumption, position, and size of the electrodes, IBC models contribute to the design process as well as to the understanding of transmission mechanisms through biological tissues. Since IBC emerged as a promising alternative for the development of WBANs, several approaches to the modeling of the human body as a communication channel have been proposed in the literature, which can be grouped in three fundamental categories: body channel electric circuit models, electromagnetic models, and physical models based on phantoms. In this subsection, the main contributions made in the literature regarding these three modeling approaches are reviewed and compared.
Body Channel Electric Circuit Models.
In this respect, the main proposals of channel models in IBC have been directed to the definition of both lumped [24,30,33,36,38,130,131] and distributed-parameter circuit diagrams [18,35,37,40,132]. These models easily and intuitively incorporate some of the electrical characteristics of the different tissues, such as tissue resistivity and capacitive properties, as well as their dependence on frequency, thus helping obtain simple analytical expressions for both attenuation and dispersion through the human body. In addition, the distributed-parameter circuit models also allow the analysis of a key parameter in IBC communication, the channel length.
Since the first model proposed by Zimmerman in [5], considerable advances have been made and circuit models have become increasingly more detailed. The model proposed by Zimmerman for capacitive coupling consisted of a simplified lumped circuit diagram in which the human body was represented as a single node coupled with the transmitter, receiver, and ground plane through eight capacitive impedances. The signal path in this diagram is therefore established between the signal electrode of the transmitter and the receiver signal electrode, passing through the node that models the human body. This model emulated the capacitive couplings arising in this modality but did not consider the resistive properties of the human body. The model later proposed by Partridge et al. in [32] was based on Zimmerman's one-node circuit model, to which some other resistance parameters were added to emulate the resistive losses in the signal path through the human body. A more detailed model was proposed by Hachisuka et al. in [33], consisting of a four-terminal circuit model formed by six impedances: two impedances in the longitudinal path between TX and RX electrodes, two transverse impedances between electrodes of the same pair, and two other crossed impedances. Applying this model to simulate the signal transmission through the arm in the range from 1 to 1000 kHz, it was found that the gain increased considerably if the ground electrodes of the transmitter and the receiver were not placed directly on the skin but slightly separated from it. Subsequently, Wegmueller et al. improved this model by including the effect of the electrode coupling impedance to the skin, thus obtaining a four-terminal circuit model with ten impedances [24] for a galvanic IBC system between 10 kHz and 1 MHz. This model was further developed by considering some of the bioelectric properties of cell membranes and Cole-Cole frequency-dependent dielectric properties for skin and muscle [30]. This type of circuit structure was later broadened by Song et al. [36] by adding the input impedance of the receiver device and the output resistance of the signal generator, with a total of twelve impedances. In order to emulate the attenuation through different signal paths, the torso and limbs were modeled as multilayer tissue cylinders, thus calculating the equivalent impedances according to the area and dielectric properties of the tissues considered.
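To illustrate how such a four-terminal network is evaluated, the sketch below solves, by nodal analysis, a six-impedance bridge of the kind described for [33] (two longitudinal, two transverse, and two crossed branches) plus a receiver load; the branch values and the capacitive electrode coupling are illustrative assumptions, not parameters taken from that work.

```python
import numpy as np

# Sketch of a four-terminal, six-impedance body-channel network (nodal analysis).
# Nodes: 1 = TX signal, 2 = TX ground (reference), 3 = RX signal, 4 = RX ground.
# All impedance values are illustrative assumptions.
def channel_gain(f, Z_long=500.0, Z_cross=5e3, Zt_rx=2e3, Z_load=1e6, C_skin=10e-9):
    w = 2 * np.pi * f
    Zc = 1 / (1j * w * C_skin)             # assumed capacitive electrode-skin coupling
    Zl1, Zl2 = Z_long + Zc, Z_long + Zc    # longitudinal branches (1-3 and 2-4)
    Zx1, Zx2 = Z_cross + Zc, Z_cross + Zc  # crossed branches (1-4 and 2-3)
    V1, V2 = 1.0, 0.0                      # 1 V drive between the TX electrodes

    y = lambda z: 1 / z
    # Nodal equations at the receiver nodes V3 and V4.
    A = np.array([
        [y(Zl1) + y(Zx2) + y(Zt_rx) + y(Z_load), -(y(Zt_rx) + y(Z_load))],
        [-(y(Zt_rx) + y(Z_load)), y(Zl2) + y(Zx1) + y(Zt_rx) + y(Z_load)],
    ], dtype=complex)
    b = np.array([V1 * y(Zl1) + V2 * y(Zx2),
                  V2 * y(Zl2) + V1 * y(Zx1)], dtype=complex)
    V3, V4 = np.linalg.solve(A, b)
    return abs(V3 - V4)                    # differential voltage seen by the receiver

for f in (1e4, 1e5, 1e6):
    print(f"{f:10.0f} Hz : gain = {20 * np.log10(channel_gain(f)):6.1f} dB")
```

The same nodal-analysis pattern scales directly to the ten-, twelve-, and fourteen-impedance networks mentioned below: each additional tissue layer or electrode-skin element simply adds branches (and possibly nodes) to the admittance matrix.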
More detailed circuit diagrams have later been proposed by Kibret et al. in [38], where a circuit model was reported for galvanic coupling which emulates current flow through a set of fourteen impedances modeling different layers of tissue such as skin, fat, and muscle, both in the longitudinal and in the transverse paths. A work by Swaminathan et al. in [130] proposes a lumped circuit model to simulate the galvanic IBC transmission in both on-body and in-body links from 100 kHz to 1 MHz. The proposed model contains six impedances that account for the longitudinal transmission across each individual tissue (skin, fat, muscle, and bone), four transverse impedances emulating the current flow from one tissue to another, and four impedances that simulate the electrode-skin contact. Some of the simulated results were validated with ex vivo experiments on porcine tissue, which showed good agreement, with maximum deviations of 7.5 dB over several hours. Among the conclusions derived from the work, the authors highlighted a possible difference of up to 30 dB in attenuation due to the variability of tissue properties, as well as a significant improvement in channel gain when both the TX and RX devices were placed in the muscle, rather than in the skin.
Alternatively, some authors have also proposed distributed-parameter circuit models for IBC transmission. A distributed model based on the cascade of basic RC cells allows the influence of the separation between TX and RX electrodes to be analyzed. In [35], Cho et al. introduced a circuit model formed by the cascade of distributed RC blocks simulating the trunk and the arms together with a coupling capacitance to the external ground. The frequency range of study was from 100 kHz to 150 MHz. The simulations showed a behavior of the human body as a high-pass filter below 4 MHz. However, as frequency increased, the deviations between experimental results and simulations became significant, especially at greater distances. In [37], Callejón et al. proposed a distributed circuit scheme based on a transmission line model using infinite periodic structures. This model was formed by the cascade of basic electric cells along a longitudinal axis between TX and RX electrodes. The objective of this type of structure is to study the influence of an admittance that is repeated in the transmission path of a signal. This admittance modeled electrophysiological properties of the skin; in particular, a conductance simulated the conductive pathways, sweat glands, and ionic channels of the cell membranes, and a susceptance emulated the insulating behavior of the keratinized cells of the stratum corneum. An improved model was later proposed in [132], obtaining a relationship between the propagation constant of the medium, in this case the skin, and different transverse impedances depending on the coupling type. This way, the same model was adapted to the study of both galvanic and capacitive techniques. Pereira et al. proposed in [18] a circuit diagram for capacitive coupling considering the effect of the cables and devices used in the measurements, which introduce nonlinearities and parasitic effects that can significantly affect body attenuation. The complete scheme consisted of a distributed-parameter circuit model emulating the intrinsic path through tissues, formed by the cascade of RC cells emulating the current pathways through the arm, chest, and torso, together with a lumped part accounting for external elements such as the cables and measurement devices.
One key parameter in IBC is the human body attenuation. In the computational studies addressed in the literature, factors such as the influence of the geometry and the dielectric properties of living tissues have been investigated, together with the optimal configuration of the electrodes to achieve a higher signal level at the RX site. One of the first computational studies was presented by Fujii et al. in [41], where a human arm was simulated as a parallelepiped with dielectric characteristics equal to those of muscle tissue at a frequency of 10 MHz using the FDTD technique. They found that if the TX ground electrode was in contact with the skin, the electric field at the RX site was strengthened. However, if the ground electrode of the RX was in contact with the skin, the received signal level decreased. The same authors later presented in [42] a high-resolution FDTD model of two Japanese adults, of both sexes, with which they studied the electric field distribution in different areas of the body and mainly through the arm, where they found that most of the electric field was confined. The model proposed by Hwang et al. in [43] presented a U-shaped geometry to emulate the two arms and shoulders, formed by the superposition of five tissue layers (skin, fat, muscle, and cortical and cancellous bone).
The signal losses were simulated at a total of five frequencies between 10 and 50 MHz, considering several configurations with and without a second ground electrode attached to the arm. The results showed that it was not necessary to place the transmitter ground electrode in contact with the skin for transmission distances along the arm smaller than 15 cm. In fact, the device's consumption decreased with this configuration. However, for longer distances, the ground electrode was placed in contact with the subject's skin to improve channel gain. Wegmueller et al. presented a 3D FEM cylindrical model of the arm composed of the five previously mentioned layers, with which the influence of the channel length, the size of the electrodes, and the effect of the joints were studied [34]. Their simulations showed the feasibility of IBC communication in galvanic mode at distances of 5, 10, and 15 cm. They found that an increase of 5 cm in channel length caused an increase of up to 9 dB in attenuation. They also obtained an increase in attenuation of up to 8 dB due to joints. A similar cylindrical FEM model was developed by Lučev et al. in [128], analyzing the current distribution through the human arm at three different frequencies (100 kHz, 1 MHz, and 10 MHz). The ratio between the capacitive and the resistive current components and the influence of parameters such as skin moisture and electrode size were also studied. At these frequencies, it was seen that most of the current density flows through the muscle tissue and that this increased with frequency, skin moisture, and electrode size. Other authors have also considered the influence of the trunk and limbs in their computational models. In [19], Xu et al. proposed a FEM model of the human body for capacitive coupling in which the trunk was modeled as a parallelepiped and the limbs as cylinders formed by several concentric tissue layers. The channel showed a high-pass response through the capacitive return path. A study presented in [46] by Song et al. proposed a FEM model of the whole human body for galvanic coupling, presenting the distribution of electric potential in different body areas for ten frequencies in the range from 10 kHz to 5 MHz. The human body was represented as a set of cylinders to emulate the head, neck, limbs, and torso. These cylinders were formed by four layers of tissues comprising skin, fat, muscle, and bone. Different signal paths were studied by placing the electrodes in different positions along the same arm, from the arm to the torso, from the arm to the head, and so on. One of the conclusions drawn by the authors is that the separation between electrodes does not have a great effect on channel attenuation from 10 to 100 kHz, starting to take on relevance from 500 kHz to 5 MHz. In addition, they concluded that the muscle has no great influence on the distribution of the electric potential, which is mainly confined to the upper layers, that is, skin and fat. Callejón et al. proposed in [9] a FEM model (see Figure 7) of the human arm formed by a multilayer cylindrical geometry composed of five concentric tissues: skin, fat, muscle, and cortical and cancellous bone. The study covered a wide range of frequencies between 1 kHz and 100 MHz. In order to emulate the permittivity and conductivity properties of human tissues in this band, four-dispersion Cole-Cole models were considered.
The simulations comprised results of the bioimpedance of the arm, the electric field distribution both inside the tissues and on the external surface near them, and the distribution and percentage of current flowing through each tissue as a function of variables such as frequency, channel length, and interelectrode distance. Figure 8 shows simulated results obtained with this model. In addition, simulations of specific body attenuation as a function of these two last variables were obtained, which were validated with experimental measurements. The results showed that the attenuation in the galvanic IBC mode is strongly dependent on channel length, increasing up to 20 dB for increments in distance of 5 cm. It was also found that the attenuation is dependent on the transverse distance between electrodes of the same pair, with better levels of received signal obtained for longer interelectrode distances. Similar computational models have later been proposed to analyze the effect of the parasitic return path [131], the influence of environmental noise [133] and real experimental conditions [134], the differences between surface-to-surface and implanted communications [130], and the use of noncontact electrodes at the transmitter [131]. A hybrid model considering both electrostatic circuit analysis and FEM simulations has been proposed in [50] by Park et al. to analyze different parameters of the HBC channel, such as the external coupling characteristics between the environment and the electrodes, as well as the transmission performance obtained with different experimental setups comprising VNAs and miniaturized battery-powered wearable transceivers. Recently, in [135], a cylindrical numerical model based on that reported in [9] has been used to perform a parametric computational study of different key IBC parameters such as the electrode-skin impedance, the transmitted signal based either on a voltage or an electric current excitation mode, the influence of anthropometrical characteristics such as arm diameter and tissue thicknesses, and the influence of bioelectric properties such as muscle conductivity and tissue anisotropy.
Figure 7: Image of the FEM model proposed in [9] consisting of a multilayer 3D cylinder composed of concentric tissues such as skin, fat, muscle, and cortical and cancellous bone.
The main proposals for the EM theoretical modeling of IBC communication are based on the definition of basic geometries such as the cylinder and the plane, in which Maxwell's equations can be solved analytically. For example, in [136,137], Gao et al. studied the electric current through the arm for a model based on a cylindrical geometry of 30 cm of length and 5 cm of diameter for galvanic coupling. A cross-sectional plane of this cylinder formed by two homocentric media with different conductivity properties, respectively representing the muscle and the skin, was considered. The operating frequency was below 200 kHz, which is why the permittivity was considered negligible. It was assumed that there was no source of charge inside the section and that conductivity was constant in the frequency range studied, in such a way that the formulation of the problem was reduced to Laplace's equation under the quasi-static approximation. The distribution of current density appeared to be concentrated around the electrodes. A later approximation was proposed by the same authors by adding a layer of bone tissue, which was shown to have a minor influence on signal transmission. In [138], Pun et al. improved this initial model by presenting a 3D cylindrical multilayer model emulating the channel attenuation in the human arm. In [47], Haga et al. proposed the resolution of a boundary-condition problem for the electrical potential in the human body, modeled as a lossy conductor under the quasi-static approach. The authors then proposed an equivalent electric circuit whose parameter values were derived from the method of moments (MoM). Finally, this approach was validated with an FDTD computational model based on a rectangular geometry emulating the human arm. Bae et al. presented in [17] an approximation to the theoretical analysis of IBC communication by means of surface waves considering three components, the near-field electrostatic, inductive, and far-field components, depending on the frequency and channel length. The study was based on the Norton method for propagation over flat earth, with channel dimensions adapted to those of the body limbs. The same authors in [48] used the same model to study different electrode configurations, both in a horizontal and in a vertical position with respect to the longitudinal axis of the human arm. In this way, they obtained a theoretical expression for the transmission gain of a total of nine possible configurations. In [49], Teshome et al. proposed an analytical EM model for IBC galvanic-coupled implanted systems. They analytically solve Maxwell's equations under quasi-static assumptions by applying Lamé's functions in an ellipsoidal geometry emulating different parts of the body such as the head, torso, and limbs. They use the proposed model to analyze the influence of diverse factors on the IBC implanted channel link, such as the effect of frequency, implant size, electrode separation, and tissue conductivity.
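A minimal numeric companion to these quasi-static analytical models is the classical four-electrode superposition over a homogeneous conducting half-space, where a surface point source produces a potential V = I/(2πσr). The sketch below uses this strong simplification (homogeneous tissue, point electrodes; all values are illustrative assumptions rather than parameters from the cited models) to estimate the differential voltage picked up by a galvanic RX pair.

```python
import numpy as np

# Quasi-static sketch: galvanic coupling over a homogeneous conducting half-space.
# Surface point-source potential: V = I / (2 * pi * sigma * r). Illustrative only.
sigma = 0.3      # assumed effective tissue conductivity (S/m)
I_inj = 1e-3     # injected current (A)

def surface_potential(point, src_pos, src_current):
    r = np.linalg.norm(np.asarray(point, float) - np.asarray(src_pos, float))
    return src_current / (2 * np.pi * sigma * r)

# Electrode positions on the skin (metres): TX pair and RX pair along one axis.
tx_pos, tx_neg = (0.00, 0.0), (0.05, 0.0)   # 5 cm TX interelectrode distance
rx_pos, rx_neg = (0.20, 0.0), (0.25, 0.0)   # RX pair centred about 20 cm away

def potential(p):
    # Superposition of the source (+I) and sink (-I) electrodes.
    return surface_potential(p, tx_pos, +I_inj) + surface_potential(p, tx_neg, -I_inj)

v_rx = abs(potential(rx_pos) - potential(rx_neg))   # differential RX voltage
print(f"Received differential voltage: {1e3 * v_rx:.3f} mV")
```

Layered geometries such as the cylindrical and ellipsoidal models cited above refine exactly this picture: the 1/r kernel is replaced by the Green's function of the layered or bounded medium, which accounts for much of the dependence on channel length and tissue conductivity reported in those works.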
Phantom Models.
For the study of the propagation and the effect of electromagnetic fields on the human body, authors have used experimental prototypes based on liquid and/or solid containers with dielectric properties similar to those of body tissues. These prototypes are called phantoms. The substances most commonly used are water and sodium chloride, to model the interstitial fluid, and other chemical compounds such as agar for membrane tissues. Fujii et al. proposed in [42] a solid phantom to emulate the human arm by means of a rectangular parallelepiped with a relative permittivity of 81 and a conductivity of 0.062 S/m, similar to those of the muscle at 10 MHz. Although the relative permittivity reported by Gabriel et al. for the muscle at that frequency is 170.73, the authors found that the signal received in both cases (using the phantom and the human arm) was approximately the same. The reason for using a phantom with a lower permittivity is the difficulty of finding body phantoms capable of emulating tissues with high relative permittivity. Wegmueller et al. presented in [51] an ellipsoidal phantom for the simulation of a cross-section of the torso, filled with a muscle-simulating liquid called MSL27. This liquid had the property of presenting the same conductivity as the muscle at 27 MHz. Gao et al. proposed an experimental setup in [136] based on a circular phantom formed by two homocentric sections filled with two different substances: a saline (sodium chloride) solution and agar. This phantom modeled a transverse slice of the arm, with 30 cm of diameter and 1 mm of thickness. Agar was used to emulate the skin and the saline solution accounted for the interstitial fluid and muscle. The results of the voltage and current distribution in the phantom fitted the results obtained with the theoretical model proposed in the same work at a frequency close to 200 kHz. Later, the same authors in [139] proposed a phantom consisting of a semicylindrical container formed by two homocentric layers composed of different chemical compounds emulating the skin and muscle. Recently, in [52], Callejón et al. proposed a circuit phantom for the study of the influence of the experimental setup on the IBC measurement. In this way, they considered a simplified circuit of four terminals to emulate the flow of the longitudinal and transverse current pathways through tissues in galvanic coupling. The circuit phantom was designed to simulate the bioimpedance and the attenuation of the arm, previously obtained by the authors using a FEM model in [9]. In this way, the phantom circuit was a simple and direct way to perform different tests with different experimental setups, avoiding the inherent uncertainty associated with the human body itself and the difficulty of reproducing identical conditions between different experimental sessions. In [49], Teshome et al. developed a homogeneous aqueous muscle tissue phantom at a frequency of 13.56 MHz to corroborate the simulated results derived from both an analytical and an FDTD model to study the effects of different key parameters in IBC galvanic implanted systems. Table 4 summarizes some of the IBC models described in the previous subsections, according to diverse characteristics such as the modeling approach, type of coupling, and frequency range. An efficient use of the human body as a transmission medium entails the understanding of the underlying signal transmission properties through living tissues. However, this is not a straightforward task.
Comparison and Discussion.
Each modeling approach has advantages and disadvantages, making a trade-off between accuracy and complexity necessary. The simplicity of the electric circuit body models in order to obtain useful analytic expressions for body attenuation justifies their use to guide the design of IBC transceivers and systems. Compared with numerical models, which usually present a high computational cost, the electric circuit body models offer a compromise between accuracy and simplicity. However, effects such as radiation and propagation, which become relevant as frequency increases, are not emulated by this type of electric circuit model; therefore their validity is usually constrained to low and medium frequency bands. Since the first model proposed by Zimmerman in 1995 [7], formed by eight capacitances to emulate the capacitive couplings between the body and the external ground, models have evolved towards complex RC networks emulating different tissue layers in both the longitudinal and transverse directions [38][39][40], thus taking into account the frequency-dependent dielectric properties of tissues [34,46], the cross-impedances between TX and RX electrodes [33], the output and input resistances of the TX and RX devices [36], and so forth. Electromagnetic models, both analytic and computational, have increasingly attracted the attention of IBC researchers, giving rise to a variety of models ranging from simple geometries such as planes, cylinders, and parallelepipeds, which emulate parts of the body such as the limbs and trunk [9,41,46], to models based on the anatomy of the arm and the whole human body [42,50]. However, there is still a lack of simple analytic expressions that can be used to systematically and accurately guide the design process. Theoretical models easily allow the understanding of basic principles of signal transmission through human tissues, but the simple geometries considered, such as the plane and the cylinder, only represent a first approximation to a phenomenon where the effects of a complex anatomy can be the cause of the discrepancies observed between experimental and simulated results. However, the necessity of considering more complex geometries implies the use of numerical methods such as FDTD and FEM. These allow complex anatomical and anthropometrical characteristics to be analyzed, at the expense of time, computational cost, and a high amount of data. Furthermore, it must be said that computational models are difficult to reproduce, since they usually depend on the simulation characteristics and the numerical approach adopted.
Physical models based on phantoms of the human body allow experimental measurements to be carried out without temporal or physical limitations, in scenarios where experimentation with living beings would be highly invasive. In addition, they can be implemented in a relatively simple way, choosing appropriate substances and liquids to model the body tissues of interest. On the other hand, their main limitation is associated with the difficulty of finding liquids or substances that emulate the electrical properties of human tissues over a wide range of frequencies. Therefore, these models are partial and not very versatile for multifrequency studies.
In this subsection we have discussed and compared the different modeling approaches presented in the literature, concluding that each of them has its own characteristics and range of applicability. Possibly, future advances in IBC modeling will require a combination of them: a first electromagnetic theoretical model to obtain closed analytic expressions that contribute to knowledge, along with a second, more complex computational model that helps quantify the deviations and the percentage of error with respect to the first, as well as the influence of the complex anatomy of the human body. Finally, and based on these two proposals, a circuit model is usually the most common approach to obtain basic rules for the design of IBC transceivers. In addition, it is worth highlighting the fact that the resolution method is highly dependent on the application that is going to be developed. Thus, IBC applications based on galvanic and capacitive coupling at low frequencies can be analyzed by means of circuit models that allow the voltage and electric current magnitudes to be determined at both the TX and RX sites. In this case, the signal is electrically coupled to the human body, and the radiation component is practically nonexistent. These are IBC applications at low frequencies in which the signal is transmitted and received by means of electrodes placed on or very close to the body and the quasi-static approximation is still valid. However, as frequency increases, propagation begins to occur through surface waves and the electrode is superseded by the "electrode-antenna" concept, making an analysis of the propagation with more detailed electromagnetic models necessary.
Measurement Issues
This section covers some of the most relevant measurement issues in the IBC field, namely, electrode and body coupling issues, the main experimental setups proposed in the literature to characterize the human body channel, the study of the influence of the anthropometric characteristics and movement of the users, and the main sources of artifacts and noise in IBC.
Electrodes and Body Coupling Issues.
The electrodes in IBC are analogous to antennas in airborne wireless communication systems [140]. This is why they play a very important role in IBC communication. Several authors have studied the influence of their properties on channel attenuation [38,77]. In fact, it has been found that the electrodes represent a source of uncertainty in the measurement, since they present highly variable properties, such as the dispersion shown in their impedance values, as well as noise and adhesion problems [135]. Different results have been obtained by researchers, depending on whether they apply galvanic or capacitive coupling.
In galvanic coupling, electrodes provide an electrochemical interface between the IBC transceiver and the human body, both at the TX and at the RX sites. Hachisuka et al. [62] studied the influence of the size of square electrodes, with side lengths from 10 to 50 mm. In the frequency region near 10 MHz, they found a maximum in gain for 25 mm × 25 mm square electrodes, showing that the gain decreased for larger or smaller sizes. This may be due to a decrease in the electrode capacitance as it becomes smaller and an increase in current leakage between electrodes of the same pair as they become larger. Other authors have studied the influence of the electrode-skin contact. According to [141], the contact impedance between the electrode and the body is independent of the type of metal (aluminium, copper, bronze, brass, stainless steel, nickel silver, and silver/silver chloride (Ag/AgCl)). The results of Callejón et al. [29] comparing different types of electrodes (pregelled Ag/AgCl, aluminium, and copper) confirmed that the electrode material does not bear too much influence on IBC performance whenever a conductor is used. However, Ag/AgCl electrodes proved to have better performance even though their active area is the smallest, which could be explained by the fact that they are pregelled, thereby enhancing both the conductive properties and the adherence to the skin. In fact, because of the gel, the electrode-skin impedance is lower than that of the rest of the electrodes, and, therefore, a greater current is injected for the same voltage, causing higher signal levels to be received. They also observed that pregelled electrodes were more robust against movement artifacts. According to Kibret et al. [142], the increase in gain at high frequency can be due to the decrease of the skin impedance, dominated at lower frequencies by a strongly resistive stratum corneum. The impedances of other components, like muscle and fat, are approximately constant in the frequency range from 100 kHz to 10 MHz. Okamoto et al. examined four types of electrodes: a stainless-steel electrode, a titanium electrode, and a dry and a pregelled Ag/AgCl electrode. Their results suggest that electrodes showing lower electrode-skin resistance improve IBC transmission. Regarding the interelectrode distance (between the signal and ground electrodes), several values have been analyzed in the literature, from 1.4 cm to 7 cm [10,46,52,82,95,143,144]. A distance of 9 cm was chosen in the galvanic coupling experiments carried out by Callejón et al. [132], since it was found that the longer this distance is, the lower the attenuation is.
In capacitive coupling, different configurations of electrodes have been analyzed. In [10], two configurations of capacitive electrodes were compared, with only the signal electrode in direct contact with the skin (see Figure 9), although other electrode configurations can also be considered [11] (see Figure 10). In this coupling mode, the received signal is affected by the orientation of the transmitter with respect to the receiver, the size and type of the electrodes, the distance of the ground electrodes to the external ground, and the surrounding environment [145]. Callejón et al. [29] evaluated different types of signal electrodes (pregelled Ag/AgCl, aluminium, and copper) and their influence on the signal attenuation. The results showed a similar trend with frequency. However, copper electrodes showed better performance up to 60 MHz, and, specifically, a difference of 10 dB in magnitude level was found. Three different electrode types were tested in [146], with minimal influence on the measurement results: bare copper electrodes without a conductive paste, copper electrodes with a conductive paste, and standard self-adhesive Ag/AgCl electrodes with the conductive paste. The size of the electrodes has also been studied in capacitive coupling. According to Zhao et al. [147], the attenuation decreases when the ground electrode is larger but is not affected by the size of the signal electrode when it contacts the body directly. The results obtained by Callejón et al. [29] point in the same direction. They evaluated different sizes of the ground electrode (3 cm × 3 cm, 4 cm × 4 cm, and 7 cm × 7 cm), finding that the greater the size of the ground electrode, the greater the signal received [29]. Using a large ground plane or a special electrode for the return path loop is advantageous to enhance the signal-to-noise ratio (SNR) of the received signal [35]. However, this goes against the miniaturization of a communication device, which means that a trade-off must be sought. Since a large ground plane is difficult to implement on the body channel transceiver, it is useful to find a lower limit on the plane size. Cho et al. [35] proposed an empirical formula for the minimum ground size. Notice that the ground electrode is usually connected to the ground level of the electrical circuit [42]. Besides, a greater distance between the signal electrode and the ground electrode reduces the capacitance between them and induces more energy into the body [147].
Additionally, some adjustments can be made to the electrodes to optimize the communication performance. In Ohishi et al. [148], capacitive electrodes are enhanced by inductance coils so that the resonance frequency is equal to the IBC frequency when the electrodes are close to the human body surface. Impedance-matching networks at the transmitter output and the receiver input can also increase the transmission performance in capacitive coupling [12,40] (see Figure 11).
Galvanic and capacitive coupling performance has also been compared from the point of view of the electrodes. Alshehab et al. [141] investigated the optimal configuration by considering the inclusion and exclusion of a ground electrode touching the skin, different sizes of electrodes, the ground circuit board, and the distance between the signal electrode and the circuit board. They concluded that the best configuration, in terms of the lowest path loss, was composed of only the signal electrode touching the skin (a round copper plate 2 cm in diameter) and a 10 cm × 5 cm ground circuit board at a distance of 1.5 cm from the signal electrode. Similar results were obtained by Ruiz et al. [149] and Grilec et al. [70]. However, according to [47], the signal quality is improved by attaching both the signal and the ground electrodes to the human body, so the electric field and the received voltage can be enhanced. Fujii et al. [42] concluded that the existence of the ground electrode can be quite effective in transmitting the signal because it enables the impedance matching between the signal generator and the human body [42]. In both capacitive coupling and galvanic coupling, it has been shown that the attenuation of the body channel can be much lower than that of the air channel at frequencies up to 100 MHz [68,150] (below 84 MHz according to Seyedi and Lai [151]). In galvanic coupling the received signal has less dependence on the environment [145]. On the other hand, the results of Hachisuka et al. [33] indicate that capacitive coupling is more suitable for high-speed IBC devices than galvanic coupling. The same results were found in [152].
In summary, there is no uniformity in the use of electrodes for IBC systems, and varied sizes and materials have been proposed. Copper has been used as the electrode material in many studies because of its good conductivity [141]: 25 mm × 25 mm [153], 3 cm × 3 cm [132], 2 cm × 2 cm [20,40,152,154,155], 6 cm × 8 cm [156], or circular with a radius of 10 mm [36]. Stainless-steel electrodes of 5 mm × 15 mm were employed in [157]. Asogwa et al. [143] used self-adhesive silver/silver chloride (Ag/AgCl) electrodes because they are designed for both research and clinical use, contain hypoallergenic gel, can be used for two hours of measurement, and reduce the effects of motion artifacts. The same electrode type was employed in [16,24,27,68,95,142,152,158].
Measurement Issues and Experimental Characterization.
The experimental characterization of the human body as a communication channel has been shown to be strongly dependent on the environmental conditions and measurement methods under which the experiments are performed, leading to important deviations between results reported by some authors and others, thus not allowing a formal comparison to be drawn. There is also a heterogeneous set of proposals to address different technical issues, for example, various strategies for the isolation of the internal ground from electronic equipment, differential signaling, electrode-skin impedance matching, and so forth, giving rise to a wide variety of experimental schemes and measurement setups. Likewise, the reported measurements have been made using a wide variety of electrodes of different shapes, materials, and sizes, as highlighted in Section 3.1. With respect to the body areas studied, most of the published results have been obtained in the human arm, with some works also reporting measurements in other parts of the body such as the legs, back, torso, and head [153,159]. Regarding operating frequencies, there is no consensus about the optimum frequency band for IBC communication in its two main categories of galvanic and capacitive coupling, in such a way that there is no common design methodology for IBC transceivers, which often present different operating frequencies. On the other hand, it is also worth noting that the IBC channel depends on the electrophysiological and anthropometric properties of the subjects under study. In fact, there is an implicit dependence on the body position and movements of the subject, which makes the measurement process even more difficult, leading to variations throughout different experimental sessions. The experimental setups proposed in the literature have been developed to perform measurements to analyze key issues such as optimal frequency bands, electrode type, channel length, and interelectrode distance, as well as different body parts and postures of the subject.
On the other hand, it must also be noticed that there is an important source of discrepancy in IBC measurements which has not been sufficiently studied in the literature and which is related to the effects produced by the electronic devices and experimental conditions. Thus, experimental schemes using different equipment such as oscilloscopes and signal generators, spectrum analyzers [48,160], and vector network analyzers (VNAs) [161], as well as customized transceivers, have been used interchangeably [144,162]. Recently, in [50], an exhaustive study of the optimal measurement configuration for HBC channel characterization using miniaturized battery-powered transceivers was reported by Park et al. In addition, another important issue in the characterization of IBC channels is the determination and quantification of environmental noise, as modeled in [133]. It is therefore necessary to harmonize and clarify the proposed setups and measurement procedures to establish an experimental methodology that allows the characteristics of the IBC channel to be identified in a systematic and accurate way. Accordingly, some technical aspects that need to be studied in greater detail are those that relate, on the one hand, to the isolation of the internal ground of the measurement circuits, which are often parasitically coupled to the experimental scheme, giving rise to configurations that do not correspond to real capacitive and galvanic schemes, and, on the other hand, to the mismatch between the internal impedances of the equipment and the skin-electrode impedance [163]. Regarding this issue, an important source of disparity might be due to the input resistance of the measuring devices and transceivers used. In the literature, devices with input resistances of 1 MΩ and 50 Ω [29,38,139] have been used interchangeably. In the specific case of galvanic coupling, the impedance seen before and after the measuring device is that of the electrode-skin interface, which is frequency dependent [9,38,140,164,165]. Therefore, the use of electronic equipment with input resistances of 50 Ω may not be an optimal option, since an impedance mismatch with respect to that shown by the skin could exist, leading to lower values of received signal and channel gain. Figure 12 shows a set of attenuation measurements performed on the same subject under the same experimental conditions by varying the measurement devices used at both the TX and RX sites. The discrepancy arising from the use of one setup or another is highlighted.
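The loading effect behind this last point can be illustrated with a simple voltage divider between an assumed frequency-dependent electrode-skin interface impedance and the instrument input resistance; the interface values below are illustrative assumptions, not measured data from the cited studies.

```python
import numpy as np

# Sketch of the loading effect of the measuring device input resistance.
# The electrode-skin interface is idealised as a resistance in parallel with a
# capacitance, in series with the receiver input. All values are assumptions.
def received_fraction(f, R_in, R_es=10e3, C_es=50e-9):
    w = 2 * np.pi * f
    Z_es = 1 / (1 / R_es + 1j * w * C_es)   # electrode-skin interface impedance
    return abs(R_in / (Z_es + R_in))        # divider between interface and input

for f in (1e4, 1e5, 1e6):
    frac_50 = received_fraction(f, 50.0)
    frac_1m = received_fraction(f, 1e6)
    print(f"{f:10.0f} Hz : 50 Ohm input keeps {100 * frac_50:5.1f} % of the signal, "
          f"1 MOhm input keeps {100 * frac_1m:5.1f} %")
```

Under these assumptions the high-impedance input preserves nearly the full signal at every frequency, while the 50 Ω input loses most of it at the lower end of the band, which is one plausible source of the instrument-dependent gains shown in Figure 12.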
Influence of Anthropometric Characteristics and Position Issues.
In IBC communication many overlapping physical mechanisms occur at the same time, making channel characterization and measurements a challenging task. Besides the different measurement setups and IBC coupling approaches described in previous sections, the anthropometric characteristics and body positions of the test subjects have been suggested as the main causes of the differences between the measurement results found in the literature [29,38,146,166].
Hachisuka et al. analyzed the optimal placement of proprietary capacitive IBC transmitters and receivers for different body positions and distances [33]. The two-electrode arrangement, in which only the signal electrode is attached to the body, was found to be 20 dB better in the kilohertz band than the four-electrode arrangement in which all four electrodes are in contact with the body. In [65], the authors experimentally characterized the body-coupled (i.e., intrabody) communication channel using a specifically designed measurement system in the frequency range from 100 kHz to 60 MHz. They showed that the maximum propagation loss for the whole-body channel was below 80 dB. The frequency dispersion and the influence of body movement on channel attenuation were shown to be much smaller than those for RF wireless body area network channels. They performed measurements on a test subject while sitting on a chair, standing, walking through the room, and standing while moving the right arm up and down. The measured body movements were shown to result in only small variations in channel attenuation, and the maximum observed standard deviation was 2.5 dB. In [167] the authors proposed a statistical model for the intrabody propagation channel based on experimental data recorded while the user was both still and walking, in terms of the best-fitting probability density function. In [24,34,93] the authors performed measurements on 20 test subjects and showed that the more body activity there is, the better the galvanic coupling is and the more stable the signal transmission becomes in the frequency range from 10 kHz to 1 MHz, which was also indicated by numerical simulations. Excellent transmission was noticed on the thorax, while the extremities and joints lowered the quality of transmission over larger distances. Nie et al. performed an in situ characterization of the Human Body Communication (HBC) dynamic propagation channel and investigated the statistical model of the HBC [79]. Measurements were performed using battery-powered equipment (45 MHz carrier signal) in an anechoic RF chamber, on five subjects and in 33 scenarios. They demonstrated that the HBC fading channel is motion-insensitive by means of intensive in vivo experiments and proposed a three-state Fritchman model to describe the burst feature of the HBC channels. Later, using the same equipment, measurements were performed in five real environments for random body motions: a conference hall, a laboratory, a grove, a playground, and an anechoic chamber [80]. The received power for nine HBC channels was acquired and analyzed statistically. HBC demonstrated stable propagation as well as a small standard deviation and a small range of the received power in all investigated environments. They confirmed that the HBC link is almost insensitive to body motions and that the environments tested exerted little influence on the HBC propagation channels.
A comprehensive study of the capacitive IBC channel for static and dynamic body positions was performed by Lučev et al. [10,28,146,152]. During the in vivo measurements of IBC transmission characteristics for static body positions [10,146,152], the test subjects were asked to maintain a predefined position during the measurement, so that the overall geometrical parameters and the return signal path did not change for a specific measurement session. During the measurements of the IBC transmission characteristics for dynamic body positions [28,146], the subjects under test were asked to move according to a predefined protocol, thus changing the overall system geometry, which affected the return signal path. Preliminary measurements [10] undertaken in an anechoic chamber on four persons in a single still position for five transmitter-receiver distances (4, 16, 28, 36, and 120 cm) were expanded in [146], where measurements on 11 test subjects were performed for four body positions and three types of movements. Despite variations in body proportions and anatomical structures, the results obtained on different subjects for the same test condition were comparable to each other and differed by less than 1.5 dB up to around 10 MHz [146]. In [28] the results were further quantitatively examined, and it was shown that up to 10 MHz the variations in the gain among different test subjects and the variations caused by different body positions and movements are comparable and change the gain by up to around 5 dB for a given transmitter-receiver distance. For frequencies around 40 MHz this change increases up to around 20 dB, mainly influenced by the body geometry. Finally, the same findings were confirmed in time-domain measurements on 10 test subjects [152].
Limb joint effects on IBC communication were extensively studied by Seyedi et al. [27,151,168]. In [27,168], the effects of the joint presence and joint angle (45°, 90°, 135°, and 180°) on the IBC were studied. It was concluded that the increase in attenuation was proportional to the angle between the forearm and upper arm and that the minimum attenuation occurred in the range 80-100 MHz for capacitive coupling. When the joint angle changed from 45° to 180°, the maximum attenuation difference was 4.2 dB and 4.7 dB in the capacitive coupling and galvanic coupling methods, respectively. As expected, capacitive coupling was more sensitive to limb joint position, but galvanic coupling was more dependent on body composition (intrasubject variability). In [151], the study was extended to include an elbow and a knee joint, at 20 cm transmitter-receiver distance, and similar conclusions were drawn. For the capacitive coupling, the minimum attenuation was observed at around 57.4 MHz, and the differences between joint and no-joint conditions at this frequency were 2.1 dB and 1.5 dB for the elbow and knee joints, respectively. For the galvanic coupling, the minimum attenuation was found in the 40-51 MHz frequency range, at which the presence of joints led to around 2.0 dB higher propagation loss compared to that without joints in the signal path. In summary, it was shown that capacitive coupling is more susceptible to body movement (particularly in the upper limbs) at frequencies less than 50 MHz, but less susceptible when deployed on the lower limbs. Above this frequency, signal attenuation is more influenced by the environment and is less dependent on human body composition, indicating that the signal is less coupled through the body.
The influence of body positions and movements on both galvanic and capacitive IBC was also investigated by Callejón et al. in [29]. Galvanic coupling analysis was performed using a signal generator, an oscilloscope, and a pair of balun transformers in the frequency range from 10 kHz to 2 MHz. The galvanic coupling results obtained for three different positions of the same subject (sitting, standing, and walking) were approximately identical, indicating that neither the position nor the movements of the subject have much influence on galvanic coupling IBC performance; the distance between the floating ground electrodes and the floor was carefully monitored and set between 15 cm and 135 cm. The measured attenuation was lower when the subject was sitting than when standing or walking, due to the difference in distance between the ground electrode and the external ground plane: the smaller this distance, the better the signal values. Measurements of the galvanic coupling IBC channel on seven subjects were performed by Gao et al. in [169,170]. It was shown that thicker forearms exhibited lower attenuation, which indicates that muscle is the main path of the IBC signal. The most stable results were obtained in the 20-50 kHz frequency band.
The dependence of signal propagation loss on several poses in capacitive IBC was analyzed by Yokota et al. in [171]. Proprietary hardware based on an electrical-to-optical probe [56] and a 6.75 MHz carrier signal was used for testing several body positions and poses (raising an arm, extending the arms forward, opening the arms, and standing on one leg). The signal losses on the torso were within ±2 dB, but the difference between the signal loss on the sides and on the fronts of the arms was approximately 9 dB. Similarly, gain fluctuations due to the different poses were 4 dB or less, except for the arms-extended-forward position (13 dB for the down-link and 16 dB for the up-link). Consequently, the authors concluded that the best positions for a wearable TX are around the navel or in pants pockets, and that wearing it on the arms should be avoided.
As an alternative to measuring the influence of body positions and movements on capacitive IBC, Kazim et al. developed full-wave electromagnetic (EM) models which realistically analyze capacitive body channel communication (BCC), that is, the interaction of the capacitive coupler, the human body, and the environment together [172]. The models were validated against the measurement results of the Philips group [65] and evaluated independently for numerical uncertainties (boundary conditions, mesh cells) and human body variation uncertainties (12% for the dielectric properties and 17% for the thicknesses of the skin, fat, muscle, and bone layers of the body model). The propagation loss was simulated for twenty different body positions in the mid-frequency range of 1 MHz to 60 MHz with communication distances of 53, 102, 116, 135, and 155 cm. The simulation results showed that the vertical coupler configuration (one electrode on the skin, the other in the air) is less susceptible to physiological variations of the underlying tissues than the horizontal coupler configuration (both electrodes on the skin). It was shown that the propagation loss in the arm-torso-arm region is lowest when the arms are not touching the torso, irrespective of the distance. The developed methodology was used in [173] to estimate the two-port complex path impedance matrix [Z] of a capacitive BCC channel in 10 body positions and to calculate the input and output impedances of the channel, which proved to be capacitive in nature. Therefore, resistive matching below 1000 Ω and inductive matching between 0.5 H and 5 H, either on the transmitter or on the receiver side, is sufficient for maximum power transfer in a capacitive IBC system.
Experiments to evaluate the effect of human limb gestures on the galvanic coupling IBC channel were carried out by Chen et al. in [174]. The IBC channels included upper arm, thigh, elbow joint, and knee joint channels. For the upper extremity channels, different elbow joint angles (180°, 135°, 90°, and 45°) and hand conditions (empty-handed, loaded with a dumbbell, and gripping a force transducer) were considered. For the lower extremity, different knee joint angles (180°, 135°, and 90°) were evaluated. The experimental results show that the channel gain is significantly influenced by the joint angle (gain variation 1.09-11.70 dB, p < 0.014). Extension of the channel, as well as the presence of a joint in the IBC channel, increases the channel attenuation. Hand loading, gripping force, and muscle fatigue have a negligible effect (gain variation < 0.77 dB, p > 0.793) on the human limb IBC channel. Moreover, a change of joint angle causes significant variation in the bit error rate (BER) performance of the human limb IBC channel. This is explained by the fact that, in a galvanic coupling IBC channel, the majority of the electric current is conveyed by muscle tissue (more than 90% at frequencies below 1 MHz). When the elbow joint flexes (the angle decreases), the muscle performs a concentric contraction and the length of the muscle, and hence the channel length, is shortened significantly, resulting in lower channel attenuation. For hand loading or gripping force, the muscle performs an isometric contraction, the reduction of muscle length is negligible, and so is the change in channel attenuation. The contribution of the ground loop through the floor in a capacitive IBC system was evaluated by Sasaki et al. in [175]. The received signal strength was measured for two cases: two subjects shaking hands, and a subject touching an off-body receiver placed on a stand. Each subject wore a battery-powered transmitter or receiver on the wrist, and a 10.7 MHz signal was generated. The measured variation of the signal attenuation was approximately 40 dB depending on which hand the subject used to shake hands or to touch the off-body receiver, while the variation caused by the different floor types (carpet-covered metal floor, concrete floor, hardwood floor, and standing on a wooden chair above the floor) was less than 5 dB.
In summary, since in galvanic coupling the signal is confined to the human body, for the same measuring setup the communication is mainly influenced by the anthropometric and bioelectric properties of the subject, that is, by intersubject variability [29,151,170]. For example, for IBC units placed on the arm these characteristics include the arm diameter, tissue thicknesses (such as skin, fat, and muscle), tissue anisotropy, the presence of joints in the communication channel, and so on [29,38,93,174]. In the capacitive coupling approach, body positions and movements of the subject, as well as changes in the environment, can lead to different measurement results for the same setup, while the influence of anthropometric characteristics on the transmission is almost negligible [28,29,38,146,151]. This is because in the capacitive intrabody approach the signal transmission path is closed through the human body, while the return signal path is closed capacitively through the environment [29,176]. The return signal path is always the path with the lowest impedance between the receiver and transmitter electrodes. It can be closed through the environment, especially if there are metal objects in the immediate vicinity of the IBC system [20,146], or it can be closed directly from the receiver to the transmitter if they are in line of sight, near each other. For a fixed position of the transmitter and receiver on the body, the signal transmission path has a constant length, regardless of the actual body position. However, if the body position or the environmental conditions change, the capacitive return path can change as well [79,89]: the body can partially block the capacitive return path, which decreases the channel gain [20]. One proposed technique for compensating a change of the return signal path due to a change of body posture was presented in [89]: by estimating the current distance between the transmitter and receiver ground electrodes, the capacitance between them is estimated, and a digitally controlled tunable inductor is employed to dynamically compensate the signal path change in order to reduce the capacitive coupling loss. The influence of a different environment was reported for larger transmitter-receiver distances: it was negligible at a 20 cm distance in an electronics laboratory, a classroom, and a rugby field [151], and for distances up to 36 cm in a laboratory and an anechoic chamber [146], but at 120 cm the gain was up to 10 dB higher in the laboratory than in the anechoic chamber [146], because the return signal path was coupled through the objects in the vicinity. The distance from the electrodes to the ground/floor was shown to be an influential factor in [29,151,175]: for smaller distances, the measured gain was up to 5 dB higher. This was shown in measurements for sitting versus standing positions [29], a signal path through the knee versus through an elbow [151], and standing on the floor versus standing on a wooden chair [175]. Also, in measurements of capacitive IBC, one of the main influential environmental factors is the stray capacitance between the human body and the environment ground [177], which acts as a source of a common voltage coupled into the system. Stray capacitance has also been pointed out by different authors as a possible source of error in tetrapolar bioimpedance measurements above 100 kHz [178,179].
By considering only the capacitive coupling from the body segment between potential electrodes to ground, and assuming common values for electrode and body impedances, the authors in [178] obtained a simplified circuit model that predicts gain and nonlinearity errors at any frequency band and resonance at frequencies above 10 MHz. They observed a frequency-dependent gain error that increased with increasing frequency [178,180].
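As a rough numeric illustration of the posture-compensation idea from [89] summarized above, the sketch below computes the inductance that resonates out an estimated return-path capacitance at the carrier frequency; the capacitance value, the carrier frequency, and the simple series-resonance relation are illustrative assumptions, not the actual circuit or values used in [89].

```python
import math

def compensating_inductance(c_farads: float, f_hz: float) -> float:
    """Inductance that cancels the reactance of the estimated return-path
    capacitance at the carrier frequency, using the series-resonance relation
    L = 1 / ((2*pi*f)^2 * C)."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * c_farads)

if __name__ == "__main__":
    # Illustrative values only: assume the posture estimator reports ~1 pF of
    # coupling between the TX and RX ground electrodes, and a 10.7 MHz carrier.
    c_est = 1.0e-12       # estimated ground-to-ground capacitance [F]
    f_carrier = 10.7e6    # carrier frequency [Hz]
    L = compensating_inductance(c_est, f_carrier)
    print(f"tunable inductor setting: {L * 1e6:.0f} uH")
```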
Sources of Artifacts and Noise in IBC.
In galvanic coupling, the signal path is formed within the human body and the influence of external interfering signals is negligible [96,181]. However, since in capacitive coupling the environment is an integral part of the signal path, the capacitive IBC signal is highly susceptible to external interference, such as from power lines (50 Hz) and other nearby devices [96,181]. The main cause of interference in an IBC channel is the body antenna effect [98,107,182]: a human body exposed to an electromagnetic (EM) field behaves as an antenna whose resonance frequency is determined by a wavelength equal to twice the human height. If the body is grounded, the resonance wavelength equals four times the height [98]. The body resonance may occur at different frequencies, depending on various environmental conditions, so the human body operates as a wideband antenna in the 40-400 MHz frequency range [98]. The IBC channel is especially affected by interference below 1 MHz, while for higher frequencies the observed interference level is below −75 dBm [107]. However, in the frequency modulation (FM) band this level may rise to −30 dBm [98,107]. Schenk et al. performed interference characterization measurements at different locations in a laboratory using a battery-powered spectrum analyzer [65]. The interference was almost negligible for a subject located in the middle of a room. However, as the subject approached a monitor, measuring equipment, or a table with a metal base, the interference increased considerably. This was explained by the fact that nearby devices and furniture act as antennas for sources of interference and in turn couple it to the electrodes/body. Xu et al. studied the influence of the environment and of interference from nearby capacitive IBC devices, as well as overall interference over the air channel [20]. They used proprietary battery-powered IBC devices with signal electrodes connected to a test subject's wrist (120 cm distance) in the 20-100 MHz frequency range. The influence of the environment was studied with a wall, another person, and an external copper plane close to the test subject. Interference from IBC devices worn by another person or placed on a nearby table was tested for several body positions, and interference coupled through the air was studied using only the receiver end, with the transmitter turned off. Generally, the environment does not affect the transmission unless objects are closely coupled to the transmitter or receiver ground planes. With a closely coupled conductive object, whether grounded or floating, the capacitive channel gain will increase. On the other hand, the strongest measured interference from capacitive IBC devices placed 80 cm away was only 8 dB smaller than the received signal in the IBC channel of interest. The air-channel interference, indoors and outdoors, is generally lower than −120 dBm/Hz in the frequency range from 35 to 88 MHz. Within this band, the capacitive IBC channel interference was mainly coupled from the receiver board. At higher frequencies, the interference came mainly from FM radio signals, and at lower frequencies from indoor electronics. The effect of electromagnetic interference (EMI) on IBC communication was studied by Hwang et al. in [182]. The human subject held the receiver module while touching the electrode during exposure to a general EMI environment at a total of 100 measurement sites in both indoor (office, commercial building, and train/subway station) and outdoor environments.
The analysis results showed that the interference signal received at the receiver module was caused primarily by man-made noise sources. This interference leads to severe BER degradation of the IBC device. Based on the measurements, an interference model was developed and used in the design of an interference rejection filter placed just after the receiver electrode.
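The body antenna effect described above lends itself to a quick back-of-the-envelope check; the sketch below applies the half-wavelength and quarter-wavelength rules of thumb quoted in the text, and the example heights are arbitrary.

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum [m/s]

def body_resonance_frequency(height_m: float, grounded: bool = False) -> float:
    """First-order 'body antenna' resonance: the resonance wavelength is taken
    as twice the body height for an ungrounded body and four times the height
    for a grounded body, as stated in the text; the real resonance spreads
    over a band depending on posture and environment."""
    wavelength_m = 4.0 * height_m if grounded else 2.0 * height_m
    return C_LIGHT / wavelength_m

if __name__ == "__main__":
    for h in (1.6, 1.8):
        f_free = body_resonance_frequency(h) / 1e6
        f_gnd = body_resonance_frequency(h, grounded=True) / 1e6
        print(f"height {h:.1f} m: ~{f_free:.0f} MHz ungrounded, ~{f_gnd:.0f} MHz grounded")
```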
A comprehensive noise analysis for capacitive IBC was reported by researchers from several Japanese institutions [13,14,56,133,180,183-185]. They showed that IBC systems are affected by large common-mode noise from various kinds of electronic equipment in living spaces, such as lighting devices, air conditioners, liquid crystal televisions, and refrigerators [13,14]. In their measurements, they used a battery-powered wearable transmitter generating a 6.75 MHz signal and an AC-driven embedded receiver, which was shown to be very prone to environmental noise (the noise power spectrum with AC power is more than 20 dB larger than that with battery power). A capacitance model of such a system was developed in [185]. Kado et al. showed that packet error rates (PERs) can be reduced by adding a common-mode choke coil to the embedded receiver [56]. Due to the noise, a parasitic impedance is added to the transmission line, which degrades the impedance balance between the signal and ground lines. As the transmission line becomes longer, the differential-mode noise is damped, so the common-mode noise between the earth and the floor ground was recognized as the main noise factor in an IBC system. An effective way of improving the impedance balance and reducing the common-mode noise in the system is to connect an additional electrode or a compact capacitor to a ground electrode, according to [13,14] (Figure 13). (Figure 13: setup for the additional electrode effect; connecting an additional electrode (365 mm × 500 mm × 0.3 mm) to a ground electrode reduces common-mode noise and improves the impedance balance [13,14].) Two types of noise, excluding transceiver noise, are coupled to the embedded receiver [183]: radiated noise via the human body [133,183,184] and conducted noise via the floor ground [180]. Noisy-channel models of a capacitive IBC system with an embedded receiver situated in the vicinity of a wall were investigated by Sasaki et al. in [180]. In a model based on a Wheatstone bridge, a virtual voltage noise source assumed to exist between the wall and the floor emulates the effects of the noise transmitted through the grounding systems (conducted noise). It follows from the model that the noise is suppressed when the Wheatstone bridge is balanced, which can be realized by tuning a variable capacitor in a noise-reduction mechanism (NRM) consisting of an auxiliary electrode and a variable capacitor. It was observed that, by tuning the variable capacitor of the NRM, the noise voltages were successfully reduced while the signal voltages remained almost unchanged; as a result, the signal-to-noise ratio (SNR) was increased. The other noise component, radiated noise, was explained in [133,184] by means of electrostatic analysis based on the method of moments. The proposed circuit model contains a total of 20 parameters, namely, 15 capacitances and 5 equivalent noise sources. It was found that, since it is difficult to avoid the radiated noise coupled via the human body, the reduction of noise sources in intrabody communication is mandatory. One of the proposed ways to reduce the common-mode noise measured at the IBC receiver is to use an electrical-to-optical (EO) sensor for picking up the received signal [15,73] (Figure 14). Thanks to the sensor's extremely high input impedance, the measured electrical noise can be reduced.
Moreover, the ground electrode of the EO sensor is electrically isolated from the electronic circuits, thus eliminating the influence of the floating ground potential. As a result, both the noise and the distortion of the received signal can be greatly reduced [73].
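Returning to the bridge-balancing idea behind the NRM described above, the short sketch below shows how a variable capacitor setting follows from the standard balance condition of a purely capacitive Wheatstone bridge; the assumed bridge topology and the arm values are illustrative, not the actual circuit of [180].

```python
def balancing_capacitance(c1: float, c2: float, c3: float) -> float:
    """Variable capacitance C4 that balances a purely capacitive Wheatstone
    bridge. With arm impedances Z = 1/(j*w*C), the balance condition
    Z1*Z4 = Z2*Z3 reduces to C1*C4 = C2*C3, i.e. C4 = C2*C3/C1, which is
    independent of frequency."""
    return c2 * c3 / c1

if __name__ == "__main__":
    # Illustrative arm capacitances only, in farads.
    c1, c2, c3 = 5e-12, 12e-12, 3e-12
    c_var = balancing_capacitance(c1, c2, c3)
    print(f"set the NRM variable capacitor to ~{c_var * 1e12:.1f} pF")
```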
IBC Transceivers and Devices
The three fundamental stages of any communication system are the transmitter, the communication channel, and the receiver [176,186]. A typical IBC transmitter consists of a sensor, an analog-to-digital converter (ADC) with the associated analog preprocessing, a modulator, and a coupler with electrodes adapted to the type of IBC technique used. The choice of sensor depends on the desired application: sensors of physiological functions for obtaining biomedical signals (such as EMG, ECG, and EEG) in medical applications, or cameras and microphones for media devices. The communication channel refers to the physical transmission path between the transmitter and receiver electrodes and comprises the human body and the surrounding environment. The transmitter and receiver electrodes can be connected to the body but can also be left floating, depending on the signal frequency, coupling technique, and application [146,187,188] (see Section 3.1). A typical IBC receiver consists of a detector with electrodes, a demodulator, a signal decoder, and a signal processing unit which provides the desired information. The main components of an intrabody communication system are shown in Figure 15. In some cases, the transmitter and receiver units can be embedded in a single IBC transceiver with a common control unit. There is no definite principle for the electronic design of IBC transceiver units. They should be small and light, fully integrable, energy efficient, and able to transmit at low power. Therefore, the main parameters that need to be considered during IBC transceiver design are the transmitter output power, the receiver sensitivity (the minimum signal power required to receive data correctly), the carrier signal frequency, the data rate, the modulation method, and the communication interface (the transmitter output and receiver input comprising both couplers and electrodes) [176,186]. The transmitter output power should be kept as low as possible for health and safety reasons (see Section 2.3) and for battery longevity, yet high enough for the signal to be detected by a receiver placed at the desired distance from the transmitter. The carrier signal frequency, data rate, modulation method, and communication interface all depend on the choice of coupling technique and application.
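The trade-off between transmitter output power, channel loss, and receiver sensitivity described above can be captured in a one-line link-budget check; the numbers in the sketch below are illustrative placeholders, not values from any particular transceiver in the literature.

```python
def link_margin_db(tx_power_dbm: float, channel_loss_db: float,
                   rx_sensitivity_dbm: float) -> float:
    """Link-budget margin: received power minus receiver sensitivity.
    A positive margin means the receiver should be able to detect the signal."""
    rx_power_dbm = tx_power_dbm - channel_loss_db
    return rx_power_dbm - rx_sensitivity_dbm

if __name__ == "__main__":
    # Illustrative example: -10 dBm transmit power, 60 dB body-channel loss,
    # -80 dBm receiver sensitivity.
    margin = link_margin_db(tx_power_dbm=-10.0, channel_loss_db=60.0,
                            rx_sensitivity_dbm=-80.0)
    print(f"link margin: {margin:.1f} dB ({'OK' if margin > 0 else 'insufficient'})")
```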
A general overview and comparison of the design and implementation of IBC transceivers is given in Section 4.1, separately for the IBC systems developed using discrete components (Section 4.1.1) and in CMOS technology (Section 4.1.2). A discussion on modulation techniques and power consumption is presented in Section 4.2.
Design and Implementation of IBC Transceivers.
Several groups from all parts of the world are involved in the development of IBC devices. Various developed IBC systems can be found in the literature, some of which are more oriented to medical applications, while the others are oriented to various nonmedical purposes. The developed systems differ by the coupling method, the coupling amplitude, the chosen frequency range, the signal modulation method, and the achieved data rates [176,186].
IBC Systems Developed Using Discrete Components.
In the early days of IBC development, researchers concentrated mostly on developing proprietary IBC devices and testing their characteristics and functionality [5,7,32,44,53,68,69,90,91,93,94,189]. The concept of Personal Area Networks (PANs) was presented in 1995 by Zimmerman to demonstrate how electronic devices on and near the human body can exchange digital information through near-field electrostatic coupling [7]. Independently of Zimmerman, two galvanic IBC systems were developed: a very low-power wireless system designed for monitoring the ECG signal by Handa et al. [90] in 1997, and a system for communication with implantable transducers by Lindsey et al. in 1998 [91]. A group from NTT Human Interface Laboratories focused on connecting electronic devices in everyday life by a simple touch [15,55,57] and developed an indoor wireless-like networking and positioning system for connecting portable and wearable devices (home and office appliances) to the network while the user stands or walks on the floor [54,56,58-60]. In the biomedical field, IBC was employed in various general biomedical systems [66,67,70,71,85,86,95,96,190] or specifically for ECG [87,93,94], EMG [68,69], and human posture [83,84] monitoring, as well as for monitoring and controlling artificial hearts and other artificial organs in the body [77,78]. IBC systems developed for the characterization of the IBC channel and of the human body as a signal transmission medium [19, 20, 33, 41, 61-65, 72-76, 79-82, 88, 89, 93, 94] are mostly built around DDS or FPGA circuits for signal generation at a single frequency or over a range of frequencies, and for detection of the received signal power at the receiver end.
A comparison between several proprietary capacitive and galvanic IBC systems found in the literature and developed using discrete components is given in Tables 5 and 6, respectively. The institution and first authors of each system, the amplitude and frequency of carrier signals, the modulation method, the maximum achieved data rate, power consumption, and the proposed application are all provided.
IBC Systems Developed in CMOS Technology.
In parallel to the IBC systems developed using discrete components, application-specific custom-designed integrated circuit (ASIC) transceivers implemented in CMOS technology have also been developed by several research groups. Pioneering work on several generations of CMOS-based IBC systems has been performed by a large research group from KAIST (Korea Advanced Institute of Science and Technology). An overview of the evolution of the IBC transceivers implemented in CMOS technology by the KAIST group can be found in [102,191]. A first HBC transceiver [97], based on a wideband signaling (WBS) technique with a direct-coupled interface (DCI), was developed in 2006 and used in a prototype wearable audio MP3 player system to play back the digital audio signal transmitted through the wearer's body without any wire [192]. An improved empirical channel model of the human body [35] was used in the design of a transceiver which supports communication between one transmitter and a number of receivers in a network [193], and the body antenna effect (signal distortion due to cellular or radio signals absorbed by the body) has been taken into account in IBC design since 2008 [98]. Later, a dual-band IBC transceiver integrated with the Medical Implant Communication Service (MICS) circuits, sharing the same front-end, was designed to cover implantable network communication as well [99]. The transceiver operates in the 30-70 MHz band for body channel communication (BCC) and in the 402-405 MHz band for MICS. Next, a more accurate body channel model based on Maxwell's equations [17] was actively used in the design of a high-performance, energy-efficient transceiver [26,194]. Based on the channel analysis, resonance matching and contact impedance sensing (CIS) techniques were proposed to enhance the quality of the body channel. To overcome the power and cost issues of using a crystal oscillator, a crystal-less transceiver was implemented [100] using an injection-locking digitally controlled oscillator (IL-DCO) with an autofrequency calibration technique and a scalable double-FSK modulation scheme with a divider-based switching transmitter. In 2013, the world's first WBAN transceiver satisfying all of the specifications of the IEEE 802.15.6 HBC standard [6] was presented in [101]. The stringent transmitter spectral mask requirement was met by implementing the driver active-digital bandpass filter (ADF) without using external components. Addition of a duty cycle control (DCC) to the chip [101] resulted in a new low-power transceiver [102] optimized for network system operation rather than just TX circuit operation, as in the previous generations of HBC transceivers. It can provide additional functions including channel access control with the MAC (Medium Access Control) scheduler, encryption with the zero delay cipher (ZDC), and energy detection with the received signal strength indicator (RSSI). The next-generation, fully IEEE 802.15.6 HBC-compatible transceiver [103,195] was optimized for low power consumption. Most of the described BCC research at KAIST used only the frequency band below 100 MHz, and the group mainly focused on low data rate (<10 Mb/s) healthcare applications [26,97-103,193-195].
In 2015, the possibility of achieving high data rates for multimedia transfer was investigated [104-106], and two kinds of BCC transceivers were implemented on a new single chip: a low-energy 40/160 MHz dual-wideband full-duplex BCC transceiver for entertainment applications (ET-mode) and a 13.56 MHz RC oscillator-based superregenerative transceiver for healthcare applications (HC-mode). In addition, new body channel measurements corroborated that the body channel bandwidth can be extended up to 200 MHz.
CMOS transceivers were also designed by other research groups, such as those in [107-114, 119-122, 196, 197]. Saadeh et al. presented several chips for the binaural hearing aid application that mitigate some or all body channel impairments: interference and signal multipath in [115,116], together with the variable ground effect and variable skin-electrode impedance in [117,118]. One of the rare galvanically coupled (GC) IBC receivers was designed by Chen et al. [198]; it is resilient to frequency misalignment for short-distance communication thanks to its carrier tracking technique. The concept of transmitting IBC signals through a mouse or rat brain was reported and tested in vivo in [199-201], with no effect on normal neural activity observed.
Detailed performance specifications of the CMOS-based IBC transceivers are listed in Tables 7 and 8.
IBC Modulation and Power Consumption.
Due to the distinct characteristics of galvanic and capacitive IBC, different types of modulation and demodulation methods have been utilized in IBC systems. Zimmerman proposed the first prototype of capacitive coupling IBC in 1995 [7]. Both the On-Off Keying (OOK) modulation scheme and direct sequence spread spectrum (DSSS) were investigated in this preliminary design. Despite the accuracy of the results achieved with DSSS, the OOK scheme was selected due to its simple hardware and implementation. The achieved data rate and power consumption were 2.4 kb/s and 333 mW, respectively, in the first OOK-IBC method proposed in [7]. In 2006, Ruiz and Shimamoto [202] evaluated various digital modulations in an IBC system. The results of their experiments demonstrated that the most suitable modulation schemes for IBC were binary phase-shift keying (BPSK) and minimum shift keying (MSK) [202]. However, all experiments were carried out in the frequency range of 200-600 MHz. The wideband signaling (WBS) communication technique was used to implement a capacitively coupled IBC system by Song et al. in 2007 [97]. This method distributes the non-return-to-zero (NRZ) digital data through the body channel without utilizing any signal modulation block. The implementation and fabrication of the transceiver in such a system is less complex than in methods that employ modulation schemes. Indeed, according to a recent study on capacitive coupling, WBS was identified as the most reliable technique for IBC systems [122]. In this study, the proposed WBS-IBC system achieved data rates of up to 5 Mb/s, which are the most satisfactory results among the nonmodulated capacitive coupling IBC systems [122]. However, direct voltage or power transmission through the body might pose a hazard of tissue damage and is therefore not suitable for medical IBC applications. Recently, the Impulse Radio (IR) technique was employed to develop an electrocardiogram (ECG) system with capacitive coupling IBC [87]. In the IR technique, short pulses using either OOK or a pulse position modulation (PPM) scheme are propagated through the communication channel. The possibility of designing a low power consumption, high data rate, and short-
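As a small illustration of the OOK scheme mentioned above, the sketch below generates an OOK waveform for a short bit sequence; the 2.4 kb/s rate matches the figure quoted for Zimmerman's prototype, while the carrier frequency and sampling rate are arbitrary choices for the example.

```python
import numpy as np

def ook_modulate(bits, carrier_hz: float, bit_rate: float, fs: float) -> np.ndarray:
    """On-Off Keying: transmit the carrier during a '1' bit and nothing during
    a '0' bit. This is a baseband illustration of the scheme, not a model of
    any specific IBC transmitter."""
    samples_per_bit = int(fs / bit_rate)
    t_bit = np.arange(samples_per_bit) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t_bit)
    return np.concatenate([carrier * b for b in bits])

if __name__ == "__main__":
    waveform = ook_modulate(bits=[1, 0, 1, 1, 0], carrier_hz=330e3,
                            bit_rate=2.4e3, fs=10e6)
    print(f"{waveform.size} samples generated for 5 bits")
```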
IBC Applications and Systems
In this section an overview of IBC applications and systems is presented. The first part of the section shows some practical
IBC in Medicine, Systems, and Applications.
Generally, IBC system applications can be divided into two main categories: biomedical and nonbiomedical applications (see Figure 16). Biomedical IBC applications are used to communicate healthcare information between subjects and clinical experts, whereas nonbiomedical applications of IBC are employed for information communication without clinical purposes. The initial IBC proposed by Zimmerman was used for a nonbiomedical application when he designed a personal area network (PAN) prototype [7]. The prototype was used to exchange electronic business card information during a handshake by means of the proposed IBC system. In the PAN, wireline communication was replaced by the human tissues, and the signals propagated through the body itself. Based on the positions of the IBC devices on the body, intrabody communication systems can be divided into two large groups: (1) at least two IBC devices are constantly on, in, or near the same user, and (2) one IBC device is worn by the user, and the other is on a different user or embedded in the environment. The first group of possible applications includes, but is not limited to, continuous physiological signal monitoring (with IBC devices on or implanted in the human body) and continuous data transmission (such as data, audio, and video streaming). In the second group, communication is established when the user touches or comes very close to the other IBC device, as in data exchange between two users during a handshake or user identification at an IBC gateway.
Features of IBC such as interference-free operation, energy efficiency, antenna-less transmission, and high security have established IBC as a supporting communication technology in emerging biomedical applications. Several studies have investigated the use of the IBC technique in medicine [186]. Hachisuka et al. [63] successfully measured the heart rate as well as a person's oxygen saturation (SpO2) using the IBC technique. However, their system suffered from a lower data rate (9.6 kb/s) compared to newer IBC systems. In 2009, encouraged by these promising results, galvanic coupling IBC technology was employed for wireless implant communication [51]. In that study, Wegmueller et al. carried out phantom measurements using miniaturized IBC transceiver pills [51]. Although the achieved results verified the performance of the proposed IBC system, they were obtained in a homogeneous phantom medium rather than in a multilayer body model. Recently, in vivo experiments were carried out on a pig body to examine the use of the IBC technique in biomedical implant applications [82]. In this study, five electrodes were utilized, including one implanted transmitter and four surface receiver electrodes. The authors investigated the influence of various electrodes on the channel gain. The channel length varied from 280 to 850 mm. The results indicated that the effect of electrode type on the channel gain is negligible when using the IBC technique.
Another application of IBC technology is the human area networking system called RedTacton, developed by Nippon Telegraph and Telephone Corporation (NTT) [205]. This system uses the skin surface to communicate data at rates of up to 10 Mb/s. RedTacton provides highly secure data propagation through the body channel simply by shaking a nearby person's hand or touching electronic devices around the body, such as printers. The features of RedTacton could also lead to new trends in employing IBC in daily life, such as security and e-payment applications. Song et al. [192] proposed a novel digital wearable audio player system based on the capacitive coupling IBC approach. In their system, the audio signal is transmitted between the audio player and the earset receiver through the user's skin when the user touches the single electrode on the back of the audio player. The reported data rate of the transceiver was 2 Mb/s, while the system power consumption was around 5 mW. According to Song et al. [192], the IBC-based audio player system not only exhibits higher energy efficiency but also achieves better audio quality than the Bluetooth technique.
In image-guided brain surgery, real-time display of a tracked instrument is made possible by navigation technology. Nowadays, electromagnetic tracking (EMT) devices are the most common technology used for position tracking [206]. Electromagnetic sensors are utilized in these systems to acquire the position of the tracked surgical instrument, with the data transferred over a cable. Replacing these wired sensors with wireless ones could enable data transmission out of the body using the IBC technique.
A new concept of wireless transmission of neural signals within the brain, called intrabrain communication (IBCOM), was presented by Al-Ashmouny et al. [199]. In this work, two miniaturized IBCOM chips were designed and tested in vivo in a rat's brain, with no effect on normal neural activity. The transmission distance, limited by the size of the rat's brain, was 15 mm. Similar research on wireless image data transmission through a mouse's brain was reported in [200] and called the wireless intrabrain communication (WIBCOM) method. Another demonstration of wireless data transmission through a mouse's brain by intrabody communication employed an implantable micro-sized image sensor [201].
Intrabody nanonetworks bring together the concepts of intrabody communications and nanotechnologies for application in new treatment and diagnostic techniques [207]. Research in this field mainly focuses on the study of the terahertz band as well as the investigation of new nanoantennas based on nanomaterials [208]. The miniaturization of a conventional antenna to meet the size requirements of a nanosensor results in very high resonance frequencies, on the order of several hundred terahertz [209]. One application of intrabody nanonetworks has been the monitoring of the human nervous system. Communication with nanomachines implanted in the living human brain has already been used in the treatment of drug-resistant epilepsy [210]. In [211], a nanoscale stimulator device called the synaptic nanomachine (SnM) was presented, which affects the synchronization of neurons and their oscillatory behavior. These effects are the basis of cognitive and behavioral functions, such as learning and plasticity. In addition, the results presented in [212] focus on the design of appropriate stimuli to evoke the desired synaptic modifications in terms of strengthening and weakening. On the other hand, the authors of [213] proposed the use of natural neurons as access elements to biosensors and bioactuators implanted at the nanoscale.
IBC for Implanted Systems.
Implanted sensors can be used to continuously measure relevant physiological parameters, paving the way to novel approaches for health monitoring and care delivery [214]. Implanted miniaturized devices could enable efficient healthcare and clinical applications, such as monitoring, diagnosis, and treatment [76,155].
Currently, medical implants use narrowband technology with a carrier frequency in the MICS band between 402 and 405 MHz [215]. Inductive coupling and radio frequency (RF) telemetry are frequently used for implant communications [216]. However, one of the main obstacles to achieving an efficient communication link between implanted devices is the fact that the human body is primarily composed of water, a medium through which RF electromagnetic waves do not propagate well, even at relatively low frequencies [217]. The design of antennas for medical implants is very challenging due to, among others, the small size, low power, biocompatibility [215], and safety requirements [49].
Thus, IBC has often been proposed in the literature to overcome some of these issues. Compared with other communication methods, implanted IBC has the main advantage of low transmission power through the use of miniaturized electrodes [75]. IBC is also less susceptible to eavesdropping and provides higher security for implanted devices [218]. Because of the nonconductive nature of the free space outside the surface of the skin, the signal is confined within the human body, making it an inherently secure communication channel [49]. It must be noted that patient safety must be ensured: induced currents should not cause nerve stimulation or interfere with body signals in the operating frequency range [24]. Therefore, signal frequencies below 10 kHz have to be avoided.
Due to the inherent difficulty of implant experimentation, physical models and simulation approaches are common methods in research studies. Wegmueller et al. [34] used MSL27 liquid in their experiments to emulate muscle tissue and safely characterize implanted IBC transmission in the thorax. A galvanic transmission system for a pacemaker and an implanted pill were evaluated in a series of in vitro experiments according to this method [51]. Simulation results for implanted sensors at a depth of 60 mm from the body surface predicted 35 dB and 50 dB of signal attenuation for the galvanic coupling and RF techniques, respectively [186]. The floating electrode in capacitive coupling makes its implementation in implantable devices challenging [174]. Li et al. [76] proposed an implantable intrabody communication system based on capacitive coupling (see Figure 17). They found that the signal attenuation decreased as the interelectrode distance increased from 5 to 9 cm [82]. It has also been reported that the signal attenuation decreases as the height between the body and the ground increases, up to a certain value beyond which it remains approximately constant [76]. Another approach is the one proposed by Anderson and Sodini in [16] (see Figure 18), which integrates the internal conductive tissues of the body as active elements in the capacitive coupling. This technique was experimentally validated in the frequency range between 10 MHz and 150 MHz.
A major problem when dealing with implants is the possibility of rejection. This issue was analyzed by Okamoto et al., who considered two different implanted electrode types: platinum-iridium electrodes (0.3 mm diameter, 15 mm length) and titanium mesh electrodes [77]. The titanium mesh electrodes showed improved transmission performance together with excellent histocompatibility properties [219]. Energy efficiency has also been analyzed in the context of IBC implants. Sun et al. [220] implemented a surface implant transmitter with x-shaped electrodes to improve current flow, thus reducing energy consumption. The transmission performance in a multihop scenario can be enhanced if the implants are placed at the boundary between different tissue layers exhibiting different reflection coefficients [221]. The power consumption of a medical implant transceiver has been reduced to only 10.8 mW, as reported in [20].
IBC implanted systems have been applied to ECG monitoring [90], wireless intrabrain communication [186], and the monitoring of artificial organs inside the body [78], and their expected use is increasing due to the numerous advantages that this communication method offers. Moreover, recent advances in micro- and nanotechnology are enabling a new generation of nanoscale implantable devices inside the human body [222,223] to detect specific molecules in vivo, and the IBC technique can play an important role in this regard.
Nanoscale Intrabody Communication.
Most of the successful medical devices for detecting physiological signs (e.g., blood pressure, photoplethysmogram, electrocardiography, and electromyography) are placed on or near the body surface [223]. With the rapid development of nanotechnology, such as graphene and metamaterials, the implementation of nanoscale implantable sensors for specific molecules in vivo has become possible [224]. Nanosensors may be implanted into organs, detecting specific symptoms or viruses and forwarding the sensing data to a router node in the body [225]. However, so far, researchers have paid more attention to the biophysical or biochemical capabilities of these implantable nanosensors. Communication techniques inside the body are urgently needed for these nanosensors to enable coordinated sensing and actuation of biochemical implants, to extract information about deep tissues and cells, and to export it through an external gateway out of the body for further processing.
Intrabody Molecular Communication.
Several communication paradigms are considered for intrabody communications at the nanoscale. The most promising one is intrabody molecular communication (MC), which employs molecules to encode, transmit, and receive chemical or physical information within the biological domain and exchange it with the electrical or information domain of the nanonetworks [226]. MC, inspired by communication among natural living entities, has the inherent advantages of efficiency and safety. Although it has the ability to transfer information over a wide range of distances (nm-m), the speed of MC is considerably slow when the range between transceivers is longer than a few m [224]. However, from an engineering perspective, how to control, modify, or reengineer the transmission of information in these natural solutions is one of the major challenges. Fortunately, engineers have recently started to analyze several different possibilities for realizing MC systems. The author of [227] derived systems-theoretic models for a diffusion-based molecular communication system design based on biological circuits. The authors of [228] proposed a new artificial molecular network architecture and employed flagellated bacteria and catalytic nanomotors as carriers to transport encoded DNA sequences between transmitters and receivers. The authors of [226] described the single-input single-output (SISO) intrabody communication between two neurons and the multi-input single-output (MISO) synaptic communication channel. The authors of [229] investigated the transmission rate per spike at the neuron and how to enhance this rate compared with the SISO channel. A multiple-access channel model was presented in [224], which mainly consists of four stages that determine the communication performance.
Intrabody Communication within the THz Band
The terahertz (THz) band is a potential communication solution for future electromagnetic (EM) nanosensors. A 3D numerical model was set up in [230] to investigate the EM channel characteristics, including the path loss and noise level of THz-frequency signals propagating within different human tissues. The results showed that the THz channel attenuation versus distance was dramatically lower than in the molecular communication case. The channel capacity was then studied to emphasize the possibility of applying intrabody THz communication systems to nanonetworks. At distances of millimeters, the capacity can reach 100 terabits per second (Tbps), depending on the transmitted signal. In addition, due to the very limited capabilities of nanosensors, they must use simple communication and medium-sharing mechanisms. Based on an On-Off Keying (OOK) protocol and a Time Division Multiple Access (TDMA) framework in the THz band, [225] proposed a conceptual nanonetwork model, which assumed hexagonal-cell-based nanosensors deployed in a cylindrical 3D hexagonal pole. The data transmission efficiency for various combinations of transmission methods, exploiting hybrid, direct, and multihop approaches, was analyzed in this work.
Future Challenges of IBC
Since the concept originally proposed by Zimmerman in 1996, intrabody communication has been developing for more than two decades. To date, researchers have focused mainly on electric field models with analytical or numerical methods, distributed circuit models with Cole-Cole parameters, transceiver design and applications, in vivo and phantom experiments, and so forth. There have been important advances not only in the understanding of the bioelectric mechanisms underlying IBC communication, but also in the design of more sophisticated prototypes. However, some important scientific and technical challenges remain for the successful deployment of IBC technology, such as the effect of long-term use on the human body, communication optimization through more realistic and personalized anatomical models, the study of the influence of both anthropometric characteristics and the subject's movement on the transmission performance, standardization of communications, and the implementation of miniaturized and energy-efficient transceivers. IBC research has focused on the transmission of low-rate biomedical data (ECG, blood pressure, or pulse oximetry). However, future IBC implanted sensors will require higher data rates. For example, it has been estimated that an endoscopic capsule will generate data at a rate of 2 Mbps, while neural recording will need a sampling frequency in the MHz range [231]. This remains a technical challenge for IBC researchers and designers. In addition, new IBC applications may further develop along with modern medical and health technologies, such as nanomaterials, biochemical reactions, and flexible electronics.
Implantable medicine is still an immature area with the potential to bring all these technologies together. Implanted nanosensors can gather various specific signals less invasively at the organ, tissue, neuron, and even cell level. Once nanoscale IBC transceivers are designed to set up nanonetworks, these nanosensors will be able to exchange information with each other to intelligently control the release of molecular structures or certain biochemical compounds, or even to stimulate certain neurons in the body. Hypothetically, such information would then be transmitted to a wearable device outside the body, which can be connected to a public network so that healthcare professionals can assess, diagnose, and prevent diseases at a very early stage. First, the fundamental work is to build the physical model at the nanoscale. The model may not be limited to the electrical domain but may also combine physiological and biochemical principles in order to comprehensively clarify the mechanism of information communication. Second, nanosensors with a bioelectronic interface, which can translate information from the biochemical domain to the electronic domain for simple signal processing, encoding, and transmission using the IBC method, should be synthesized and realized through nanotechnology, flexible electronics, and mixed-signal integrated circuits. In addition, how to harvest energy for the nanosensors is another key issue. One alternative approach is to make use of the weak IBC electrical signal from a wearable device attached to the skin to charge the sensors at the interface between the biological environment and the external world. These novel approaches may open the door to an immense range of applications in medicine. Finally, advances in the field of nanoimplantable devices will require an extensive effort in the definition of new MAC layer protocols enabling communication between implantable sensors and nanonetworks.
Acronyms
AC: Alternating current
ADC: Analog-to-digital converter
ADF: Active-digital filter
ASIC: Application-specific custom designed integrated circuits
Liposome-trimethyl chitosan nanoparticles codeliver insulin and siVEGF to treat corneal alkali burns by inhibiting ferroptosis
Abstract
Alkali burns are potentially blinding corneal injuries. Due to the lack of available effective therapies, the prognosis is poor. Thus, effective treatment methods for corneal alkali burns are urgently needed. Codelivery nanoparticles (NPs), with characteristics such as high bioavailability and few side effects, have been considered effective therapeutic agents for ocular diseases. In this study, we designed a new combination therapy using liposomes and trimethyl chitosan (TMC) for the codelivery of insulin (INS) and vascular endothelial growth factor small interfering RNA (siVEGF) to treat alkali-burned corneas. We describe the preparation and characterization of siVEGF-TMC-INS-liposome (siVEGF-TIL), its drug release characteristics, intraocular tracing, pharmacodynamics, and biosafety. We found that siVEGF-TIL could inhibit oxidative stress, inflammation, and the expression of VEGF in vitro, and that it effectively maintained corneal transparency, accelerated epithelialization, and inhibited corneal neovascularization (CNV) in vivo. Moreover, metabolomic analysis suggested that the therapeutic mechanism of siVEGF-TIL is possibly related to inhibition of the ferroptosis signaling pathway. In general, siVEGF-TIL NPs could be a safe and effective therapy for corneal alkali burns.
| INTRODUCTION
Corneal alkali burns are one of the most common emergencies in ophthalmology, accounting for 11.5%-22.1% of all ocular traumas. 1 As a result of corneal alkali injury, the ocular surface and anterior eye segment are extensively damaged, causing permanent vision impairment or even complete blindness. 2 It has been reported that corneal oxidative stress occurs immediately after alkali damage and precedes the corneal inflammatory response. 3 During alkali burn-induced injury, excessive oxidative stress develops in the cornea, oxidative changes occur in cellular macromolecules, and lipid peroxidation occurs in the membranes, 4 leading to an antioxidant/pro-oxidant imbalance in corneal tissues. On the other hand, the activity of antioxidant enzymes is decreased, while the expression and activity of catalytic enzymes remain at physiological levels or even increase, leading to an increase in reactive oxygen species (ROS) production and a decrease in ROS decomposition. 5 These factors can cause a high level of oxidative stress, eventually resulting in excessive intracorneal inflammation, scarring, and corneal neovascularization (CNV). 6 Similar to oxidative stress, CNV plays a critical role in the pathophysiology of corneal alkali burns. CNV increases vascular permeability, which exacerbates inflammation, chronic edema, lipid exudation, and corneal scarring, potentially resulting in permanent vision loss. 7 Currently, topical corticosteroids and nonsteroidal anti-inflammatory drugs (NSAIDs) remain the mainstay of treatment. However, these treatments can delay wound healing, and long-term use of corticosteroids can lead to increased intraocular pressure (IOP), cataracts, and an increased risk of infection. 8 Although various other treatment options, such as amniotic membrane transplantation, have been available in the clinic, their effectiveness has not been optimal over the past two decades. 9 Consequently, it is urgent to explore a more efficient and safe treatment for severe corneal alkali burns.
In addition to inhibiting oxidative stress, the treatment of CNV is also essential for corneal alkali burns. CNV can be effectively treated by inhibiting vascular endothelial growth factor (VEGF) and its receptors, which modulate angiogenesis. 15 Anti-VEGF antibodies are therefore used to treat CNV, either through topical or subconjunctival applications. 16 Unfortunately, anti-VEGF antibodies are generally limited due to their poor efficacy, side effects, and drug resistance. 7 RNA interference is a powerful approach to knocking down target genes. 17 VEGF small interfering RNA (siVEGF) reduces VEGF expression and CNV. [18][19][20][21] Accordingly, it is reasonable to speculate that siVEGF could enhance the therapeutic effects of INS on corneal alkali injury, and codelivery of INS and siVEGF may provide a new combination therapy for corneal alkali injury.
It is well-established that corneal physiological and physical barriers impair drug and siRNA penetration. To improve bioavailability, nanopharmaceuticals have been extensively developed to deliver siRNA or ocular drugs to treat ocular diseases. [22][23][24] A wealth of studies have suggested that INS-loaded liposomes could increase the bioavailability of INS. 25,26 However, the poor stability of liposomes leads to the rapid release of the encapsulated drugs, which impairs the therapeutic effects of drugs. Chitosan (CS) is a deacetylated pyran polysaccharide isolated from chitin that is biocompatible, nontoxic, and biodegradable and has been widely used to prepare nanocarriers such as micelles and nanoparticles (NPs). CS can also be used as a coating for liposomes to improve their stability in vitro and in vivo. 27 Furthermore, CS can form NPs and be loaded with negatively charged nucleic acids and have been considered promising carriers for gene delivery. 28 Morever, this encapsulation protects nucleic acids from host nucleases. 29 As a quaternary CS derivative, trimethyl chitosan (TMC) also possesses these properties, and it is preferred due to its high water solubility, ionic stability, and cationic density. 30 In this study, we developed a novel eye drop formulation based on liposomes and TMC to encapsulate and deliver INS and siVEGF.
We expect these NPs to enhance the treatment efficacy of corneal alkali burns through the cooperative effects of INS and siVEGF, as well as their enduring effects and high bioavailability. We also explored the potential mechanism of the siVEGF-TMC-INS-liposome (siVEGF-TIL) NPs in treating corneal alkali burn.
The mixture was ultrasonicated in a water bath to form a w/o emulsion and then transferred into a 100 mL round-bottomed flask, which was subsequently evaporated under reduced pressure at a rotation speed of 50 rpm and 30°C for 3 h to remove the organic solvent. Afterward, 4 mL of citric acid-Na2HPO4 buffer (pH 5.6) was added to hydrate the films until a homogeneous dispersion formed, and this mixture was transferred to a 10 mL EP tube. Then, 0.5 mL of PFOB was added to the mixture, which was sonicated (55 W, four 3-min cycles) with a sonicator (Sonics & Materials Inc.) in an ice bath, followed by centrifugation at 6000 rpm for 5 min. The supernatants were removed, and the sediments were collected, resuspended in phosphate-buffered saline (PBS; pH 7.4), and stored at 4°C for further use. Subsequently, an aliquot of INS-lip was mixed with the same volume of TMC solution (0.5 mg/mL) in PBS, then shaken and incubated at 4°C for 1 h to prepare TMC-INS-lip (TIL). Finally, siVEGF was loaded by electrostatic adsorption at the optimal ratio to obtain siVEGF-TMC-INS-lip (siVEGF-TIL). siVEGF-TMC-lip (siVEGF-TL) was made using the same protocol but without INS. Similarly, empty TMC-lip (TL) was also prepared using the same protocol but with the omission of both INS and siVEGF.
| Characterization of NPs
The morphology of the NPs was observed by light microscopy and transmission electron microscopy (TEM; Hitachi H-7600). The particle size and zeta potential were measured using a laser particle size analyzer system (Nano ZS90; Malvern Instruments Ltd.). Relative gene expression levels were calculated by the 2^(−ΔΔCt) method using GAPDH as a control.
| Cellular uptake of NPs
Each gene was analyzed in triplicate to reduce randomization error.
| CCK-8 assays
HCECs were plated into a 96-well plate at a density of 5 × … cells per well. After being cultured for 24 h, the cells were washed three times with PBS, and then freshly prepared CCK-8 solution was added to each well.
The CCK-8 was used to detect cell viability in vitro according to the manufacturer's instructions for CCK-8. Absorbance at 450 nm was measured by a microplate reader (BioTek Instruments Inc.).
| In vitro inhibition of oxidative stress, inflammation, and neovascularization by NPs
To investigate the antioxidant, anti-neovascularization, and anti-inflammatory capacity in vitro, HCECs were exposed to … immediately. [31][32][33] The depth of corneal injury involved the corneal epithelium and the superficial stroma, as confirmed by H&E staining (Figure S1).
| Clinical evaluations
After alkali burn, the SD rats were randomized into six groups (PBS, siVEGF-TL, INS, INS-lip, TIL, and siVEGF-TIL). Five microliters of the respective reagent was dropped into the right eye twice a day; the left eyes received no treatment. To observe the degree of corneal opacity, corneal epithelial repair, and CNV, the alkali-burned corneas were examined by portable slit lamp before and after fluorescein sodium staining every day and photographed on Days 1, 3, 7, and 14. The IOP was measured using a handheld tonometer (iLab tonometer; iCare). Corneal opacity was scored using a scale of 0-4 (Grade 0 = completely clear; Grade 1 = slightly hazy, iris and pupils easily visible; Grade 2 = slightly opaque, iris and pupils still detectable; Grade 3 = opaque, pupils hardly detectable; and Grade 4 = completely opaque with no view of the pupils). The corneal epithelial healing rate k was calculated from S0, the staining area on Day 0, and St, the staining area observed at a given time point. For CNV, the total corneal area and the vessel area were manually selected with ImageJ, and the CNV area was presented as a percentage of the total corneal area (a plausible form of both formulas is sketched below).
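The formulas themselves are not reproduced in the text above; purely as an illustration, the following Python sketch shows the most natural reading of the two quantities given the variable definitions in the text (S0, St, vessel area, total corneal area). The exact expressions used in the study may differ.

def healing_rate(S0, St):
    # Assumed form: fraction of the Day-0 staining area that has re-epithelialized, as a percentage
    return (S0 - St) / S0 * 100.0

def cnv_percentage(vessel_area, total_cornea_area):
    # Assumed form: CNV (vessel) area expressed as a percentage of the total corneal area
    return vessel_area / total_cornea_area * 100.0

# Example with made-up areas in arbitrary units: prints 75.0 (% healed) and 25.0 (% CNV)
print(healing_rate(S0=10.0, St=2.5), cnv_percentage(vessel_area=3.0, total_cornea_area=12.0))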
| Antioxidant stress and anti-inflammatory activity in vivo
Corneas from each group were harvested at 14 days, and the levels of SOD, GSH, and MDA were quantified with commercial kits according to the manufacturers' instructions. In addition, the corneal tissues were collected and homogenized in RIPA lysis buffer, followed by centrifugation at 15,000 rpm for 15 min. Then, the levels of Glu, TNF-α, IL-6, and MMP-9 in the supernatant were detected with commercial ELISA kits according to the procedures provided by the manufacturers.
| Histological and immunohistochemical analysis
At 14 days after treatment with the different reagents, the normal and alkali-burned corneas were harvested for histological and immunohistochemical analysis, fixed in 10% buffered formalin, dehydrated in a graded ethanol series, and cleared in dimethylbenzene. Afterward, the treated tissues were embedded in paraffin, and tissue sections (8 µm thick) were stained with H&E. In addition, the level of CD31 in the corneal tissues was assessed by IHC.
| Statistical analysis
Statistical analysis was performed by the GraphPad Prism 7 program.
Quantitative data were reported as mean ± standard deviation.
Two-group comparisons were conducted using a two-tailed Student's t-test. One-way analysis of variance followed by Tukey's multiple comparisons test was used for multigroup comparisons. p < 0.05 was considered statistically significant.
| Preparation and characterization of NPs
The differences were not significant (p > 0.05), which indicated that neither TMC nor siVEGF affected the EE or DL of the INS liposomes (Table S1). To confirm the siRNA-binding capability of TIL, agarose gel electrophoresis was performed after mixing TIL with siVEGF at different TIL/siRNA ratios. As shown in Figure 1f, the migration of siVEGF in the gel gradually slowed as the TIL ratio increased. Almost no free siVEGF could be detected at mass ratios above 5, demonstrating the complete binding of siVEGF by the TIL conjugates. The capability of TIL to protect siRNA from nuclease degradation was verified by incubating siVEGF-TIL with RNase A for 30 min. As shown by the agarose gel electrophoresis assays, the naked siVEGF RNase (−) group showed a free RNA band, while the naked siVEGF RNase (+) group showed no visible band, indicating that siVEGF had been degraded in the presence of RNase.
Regarding siVEGF-TIL, neither the RNase (+) nor the RNase (−) group showed apparent bands; however, when siVEGF-TIL NPs were shaken at 4°C for 2 h, bands could be observed both with and without RNase, indicating that siVEGF could be released from the NPs and that siVEGF-TIL protected siVEGF from RNase degradation (Figure 1g).
| Sustained release of INS and siRNA in vitro
The in vitro release profiles of INS and siVEGF from the NPs are shown in Figure 2a,b.
Overall, siVEGF-TIL and TIL showed ideal sustained release, which was conducive to maintaining concentrations of drugs and genes in the cornea and thus provided potent and prolonged therapeutic efficacy. 35 In addition, the sustained-release system can decrease the side effects of drugs on the cornea and significantly improve medication safety. 36 Furthermore, the sustained release of drugs can reduce dosing frequency, which is one way to enhance patient adherence. 37
| Efficient delivery of NPs in vitro and in vivo
Efficient intracellular uptake of NPs is required to improve the therapeutic efficacy of drugs. 38 Therefore, CLSM was performed to examine the intracellular uptake of the NPs in this study. The transfection efficiency was 63.92 ± 5.36% for siVEGF FAM -TIL and 68.20 ± 5.90% for siVEGF FAM -Lipo2000 (Figures 4c and S2). Compared with siVEGF FAM -Lipo2000, siVEGF-TIL exhibited almost the same transfection efficiency. These results showed that siVEGF-TIL had adequate transfection efficiency in vitro.
To identify the efficiency of siVEGF-TIL in downregulating VEGF expression, qRT-PCR was performed. As shown in Figure 4d, qRT-PCR demonstrated that, compared with that in the normal group, …
| NPs improve the viability of H 2 O 2 -stimulated HCECs
As shown in Figure S3A, … Alkalis saponify the fatty acids in cell membranes, which results in membrane disruption and dissolution; alkali quickly penetrates through the cornea into the deeper parts of the eye, hyphema appears in the anterior chamber, and the IOP subsequently increases. 40 On the other hand, early direct chemical injury can cause tissue shrinkage and disruption of the trabecular meshwork and outflow channels. Subsequent chronic inflammation may lead to synechiae and angle closure, which contribute to a secondary increase in IOP. 41 Figure 6g shows that the baseline IOP of the rats did not significantly differ among the groups. Statistically significant differences in IOP were first noted on the third day; from then on, the median IOP was significantly increased in …
| UHPLC-MS metabolomics analysis
The mechanism by which siVEGF inhibits VEGF expression and CNV is well understood: siVEGF is loaded into RISC, which guides the degradation of the target mRNA and prevents it from being translated into a functional protein. However, the mechanism by which INS affects corneal alkali burn is unclear and was investigated in this study. 42 INS is an anabolic agent; therefore, we hypothesized that INS could treat alkali-burned corneas through metabolic regulation.
Metabonomics is the accurate metabolomic analysis of dynamic metabolic changes in cells, tissues, and whole organisms. 43 Principal component analysis showed that the metabolite profiles were partially separated between the PBS group and the INS group, indicating differences between them (Figure S4A,B). To further determine the differences in metabolic profiles between the two groups, orthogonal projection to latent structure-discriminant analysis (OPLS-DA) score plots were constructed. As shown in Figure 7a, the two groups were separated in the OPLS-DA score plot, and the differential metabolites identified are listed in Table S2. Based on KEGG analyses, 27 essential signaling pathways associated with these altered metabolites were identified, 17 of which were associated with glutamate, the level of which was significantly higher in the PBS group than in the INS group (Table S3).
Glutamate is a nonessential amino acid that naturally occurs in the L-form and plays an important role in protein and carbohydrate metabolism, boosting resistance to hypoxemia, stimulating oxidation processes, preventing potential redox decreases, affecting glycolysis in tissues, and exerting hepatoprotective effects. 44 In addition, glutamate is a pivotal regulator of ferroptosis. 45 In this metabolomic analysis, the ferroptosis pathway was significantly enhanced in the PBS group compared with the INS group, and the differential abundance score was 1 (Figure 7f,g). Ferroptosis is closely associated with oxidative stress. Therefore, we hypothesized that INS and all the INS-loaded NPs in this study could treat alkali-burned corneas by decreasing glutamate levels and inhibiting the ferroptosis pathway.
| NPs may treat corneal alkali burn by inhibiting the ferroptosis pathway
Ferroptosis is a form of regulated cell death that is driven by peroxidative damage to polyunsaturated fatty acid-containing phospholipids in cellular membranes. Specifically, ferroptosis is induced by suppressing xCT and GPX4 activity and promoting the accumulation of ROS and a reduction in GSH. 46 Excessive levels of extracellular glutamate can impair or inhibit cysteine uptake via xCT, resulting in GSH depletion. 45 GSH depletion decreases GPX4 activity, and lipid peroxides cannot be suppressed and metabolized, ultimately accelerating ferroptosis. 47
Figure 8. NPs may treat corneal alkali burn by inhibiting the ferroptosis pathway. Glu content (a) and GSH concentration (b), representative Western blots and quantification of xCT and GPX4 (c-e), and SOD activity (f) and MDA content (g) in normal corneas and alkali-burned corneas that received different treatments (n = 3 per group). Results are presented as the mean ± SD; *p < 0.05, **p < 0.01, ***p < 0.001; comparisons between each group and the normal group are marked #p < 0.05, ##p < 0.01, ###p < 0.001.
| In vivo inhibition of inflammation and neovascularization by NPs
Corneal alkali burn can lead to oxidative stress and severe inflammatory reactions, which can promote each other. Notably, inhibiting VEGF alone did not inhibit neovascularization in alkali-burned corneas. This may be because oxidative stress and inflammatory reactions occur first in alkali-burned corneas and then stimulate angiogenic factors and promote neovascularization. 6 Therefore, inhibiting VEGF without controlling oxidative stress and inflammation does not inhibit CNV. However, siVEGF-TIL treatment combines the ability of INS to inhibit oxidative stress and inflammation with the ability of siVEGF to inhibit neovascularization; moreover, this treatment exhibits superior penetration and adsorption, which enhance the bioavailability of the drug and gene, contributing to good therapeutic effects on corneal alkali burns.
| Biocompatibility of siVEGF-TIL in vivo
In vivo biocompatibility was assessed by corneal stimulation assessment in normal SD rat eyes treated with the different formulations, followed by corneal examination using a slit-lamp microscope (Figure 10a). After 30 days of the various treatments, no evidence of corneal opacity, CNV, inflammation, or congestion was found in any cornea. The integrity of the corneal epithelium was evaluated by fluorescein staining, and the results showed that the corneal epithelium was intact. In addition, corneal anatomy was examined by H&E staining, and the results showed that the corneas in each group had a regular appearance, were closely and orderly arranged, and lacked inflammatory cells or CNV (Figure 10b). Moreover, H&E staining of the major visceral organs (heart, liver, spleen, lung, and kidney) revealed that the various reagents used in this study did not cause significant histological changes (Figure 10c). As a result, siVEGF-TIL NPs have no obvious toxic effects and excellent biocompatibility, paving the way for clinical applications.
| CONCLUSION
To the best of our knowledge, this is the first study using a liposome-TMC nanosystem for the delivery of siVEGF/INS as a combination therapy to treat corneal alkali burns. siVEGF-TIL treatment showed significant effects in alleviating oxidative stress-induced HCEC damage (in vitro) and alkali injury in corneas (in vivo). Moreover, siVEGF-TIL treatment had the ideal properties of NPs, including a good biosafety profile, lack of toxicity, facile preparation, adherence, and sustained release, suggesting that this strategy holds potential as a novel delivery platform for the cornea. Furthermore, the molecular mechanism of siVEGF-TIL treatment was revealed in this study. We found that corneal alkali burn was linked to the regulation of ferroptosis, which could be suppressed by INS. This is also the first report showing the effects of INS on ferroptosis. Notably, siVEGF-TIL could substantially inhibit both ferroptosis and CNV, eventually preventing alkali damage in corneas. siVEGF-TIL treatment is an up-and-coming therapeutic agent for future clinical applications in corneal damage. There were still several shortcomings in this study. Only male rats were used, because males are more susceptible than females to corneal alkali burn. This research did not compare siVEGF-TIL NPs with existing treatments for corneal alkali burns (such as topical corticosteroids and NSAIDs). The absolute concentration of INS or siVEGF in the NPs was not determined, which is a limitation of this study regarding the further clinical translation of siVEGF-TIL treatment.
ACKNOWLEDGMENTS
… (cstc2021ycjh-bgzxm0064). The authors also would like to thank the SHANGHAI BIOTREE BIOTECH CO., LTD. for technical support.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
Data available on request from the authors. | 2023-02-11T16:08:16.120Z | 2023-02-09T00:00:00.000 | {
"year": 2023,
"sha1": "a6835f02f19b0e8b3db0199d21b00a10bfd3064e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/btm2.10499",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ee82799823bda98561fb1fe793900c81b47944f",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54957391 | pes2o/s2orc | v3-fos-license | CPT Tests: Kaon vs Neutrinos[*]
CPT violation has an impressive limit in the neutral kaon system, |m(K⁰) − m(K̄⁰)| < 10⁻¹⁸ m_K = 0.50 × 10⁻¹⁸ GeV. However, if viewed as a constraint on the mass-squared, the bound appears weak, |m²(K⁰) − m²(K̄⁰)| < 0.25 eV². We point out that neutrino oscillation offers better limits on CPT violation in this case. The comparison of solar and reactor neutrino results puts the best limit on CPT violation by far, |Δm²_ν − Δm²_ν̄| < 1.3 × 10⁻³ eV² (90% CL).
The CPT theorem is one of the few solid predictions of the relativistic local quantum field theory [1]. In particular, it states that a particle and its anti-particle must have the same mass and lifetime. It is based on three reasonable assumptions: • Lorentz invariance, • Hermiticity of the Hamiltonian, • Locality.
If CPT is found violated, the implication to the fundamental physics is enormous, as at least one of the three assumptions above must be violated. One way to prove the CPT theorem is by defining S-matrix elements by analytic continuation of the Euclidean correlation functions. The CPT transformation is then achieved by the Euclidean rotation that changes the sign of the (imaginary) time and all spatial coordinates, and hence is a symmetry of the S-matrix elements. String theory is normally argued to be CPT-conserving, as its S-matrix elements are defined precisely in this fashion. However, it does not exclude the possibility of a spontaneous violation of the CPT symmetry depending on the details of the low-energy limit. Kostelecky and collaborators have a series of papers on possible CPT violation based on this point of view [2]. It was also argued that it may be possible to break CPT in a field theory by giving up locality but not the other two [3] (see, however, Ref. [4] for criticisms).
Recently, a possible CPT violation which allows different masses for particles and anti-particles has attracted attention in neutrino oscillation phenomenology. If the three indications for neutrino oscillation, solar [5,6,7,8,9,10], atmospheric [11], and LSND [12], are all correct, we have to accommodate three mass-squared differences of quite different orders of magnitude, which is not possible within three generations of neutrinos. A fourth kind of neutrino is usually invoked to explain the data. It has to be "sterile" so that it does not violate the data from Z⁰ decay at the Large Electron Positron collider (LEP): N_ν = 2.994 ± 0.012 [13]. However, recent data from SNO requires ν_e oscillation into an active (non-sterile) neutrino [9], while Super-Kamiokande prefers ν_μ oscillation into ν_τ [14], leaving little room for a sterile neutrino. Further combined with older data from CDHSW [15] and Bugey [16], even the extension with a sterile neutrino does not help explain the data very much [17]. Yanagida and the author [18] have pointed out that we can explain all data consistently by allowing different mass spectra for neutrinos and anti-neutrinos, because the solar neutrino oscillation is purely in neutrinos while the statistically significant evidence for oscillation at LSND is in anti-neutrinos. This observation was partially motivated by the consistency between the LSND and SN1987A data. This possibility of CPT-violating neutrino mass spectra was elaborated further by a series of works by Barenboim et al [19]. (Indirect constraints are important only for the Majorana case [20].) Phenomenologically, a stringent limit exists on CPT violation in the neutral kaon system. Thanks to the mixing between K⁰ and K̄⁰, the limit on the possible mass difference between them is exceptionally strong [13]:
|m(K⁰) − m(K̄⁰)| < 10⁻¹⁸ m_K = 0.50 × 10⁻¹⁸ GeV. (1)
Given such a stringent limit, there does not appear to be much window for CPT violation or improved tests [21]. We point out that the strength of the CPT limit from the neutral kaon system may be misleading. In the absence of a concrete theory of CPT violation, the limit Eq. (1) may be looked at as a limit on the difference in mass squared rather than the masses. In fact, a local Lagrangian field theory always has mass squared as a natural parameter for bosons. Also, in relativistic kinematics, mass squared is the natural parameter in Einstein's relation E² = p²c² + m²c⁴ rather than the mass itself. If reinterpreted as a limit on the possible difference in mass squared, it reads
|m²(K⁰) − m²(K̄⁰)| < 0.25 eV². (2)
It is intriguing that the possible violation of CPT in quantum gravity suppressed by the Planck scale may lead to an order of magnitude v²/M_Pl ∼ 10⁻⁵ eV, which is well within the above bound.
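As a quick numerical illustration of the unit conversion behind Eq. (2), the following Python sketch can be used; the kaon mass value and the use of m_K·Δm as the scale of the mass-squared difference are assumptions of this sketch, not taken from the paper.

# Order-of-magnitude check: the kaon CPT bound re-expressed in mass-squared units
m_K = 0.4977                        # kaon mass in GeV (assumed value for illustration)
dm_bound = 1e-18 * m_K              # quoted bound on |m(K0) - m(K0bar)| in GeV
dm2_bound = m_K * dm_bound          # mass-squared difference of order m_K * dm, in GeV^2
dm2_bound_eV2 = dm2_bound * 1e18    # 1 GeV^2 = (1e9 eV)^2 = 1e18 eV^2
print(f"{dm2_bound_eV2:.2f} eV^2")  # ~0.25 eV^2, weak compared with neutrino dm^2 scales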
On the other hand, the neutrino oscillation experiments always measure Δm², and cannot measure the masses themselves. Yet, limits on the difference δ ≡ Δm²_ν − Δm²_ν̄ can be obtained. The SuperKamiokande collaboration has studied the possible difference in neutrino and anti-neutrino Δm² in atmospheric neutrino oscillations. Their current limit is [23]
−7.5 × 10⁻³ eV² < δ < 5.5 × 10⁻³ eV². (3)
This limit is much better than that from the kaon system. We have to note, however, that this limit assumes the same maximal mixing for both neutrinos and antineutrinos. The limit may be considerably worse if this assumption is relaxed [22]. We find that the best limit comes from the comparison of the solar neutrino data and the recent KamLAND result [24]. We analyze the data within the two-flavor oscillation framework. However, we emphasize that we cannot naively use the result of the global fit to compare the preferred values of Δm² between solar and reactor data. It is because global fits are based on the Δχ² relative to the minimum and hence define only the relative probability, while throwing away information on which region of the parameter space is excluded on the basis of the absolute probability. We have to find a way to obtain an absolute limit on the parameter.
KamLAND has recently reported its initial result: a significant deficit in the reactor anti-neutrino flux [24], which we interpret as neutrino oscillation. Then we can speak of Δm²_ν̄. Combined with the previous reactor experiments CHOOZ [25] and Palo Verde [26], we have a range of Δm²_ν̄ not excluded by the data,
… < Δm²_ν̄ < … , (4)
at 90% CL, independent of the mixing angle [27]. We emphasize that both ends of the inequality are exclusion limits, rather than the "preferred range" from the Δχ² analysis. Therefore, this statement has an absolute meaning: the probability that a value of Δm²_ν̄ outside this range would fluctuate and produce the observed data is less than 10%.
As for the solar neutrino data, currently the Large Mixing Angle (LMA) solution is the most preferred, while the LOW solution or the Vacuum oscillation (VAC) solution may exist at a higher confidence level. From the analysis in [28], the goodness-of-fit is not necessarily bad even for these solutions or the Small Mixing Angle (SMA) solution. It is not clear that we can set a lower limit on Δm²_ν. Fortunately for our purpose, it will suffice to have only an upper bound on Δm²_ν. SNO [9] convincingly established that the survival probability of ⁸B neutrinos is about a third. By naively combining the reported numbers on solar neutrino fluxes with the charged-current flux φ_CC = 1.76 +0.06/−0.43, we find P_surv = φ_CC/φ_NC = 0.346 ± 0.048. The upper bound at 90% CL is P_surv < 0.425. It is important that it is less than a half. If the neutrinos oscillated purely in the vacuum, the deficit would be at most a half in the case of maximal mixing [29]. The deficit of two thirds is explained only by the presence of the matter effect. In order for the matter effect to be important relative to the mass difference, Δm²_ν is bounded from above. We would like to obtain a quantitative upper limit on Δm²_ν using this piece of information.
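A back-of-envelope reproduction of the quoted ratio, in Python. The neutral-current flux value is not given in the text above; the number used below (about 5.09 in the same units) is assumed from the published SNO result and serves only to illustrate the arithmetic.

phi_CC = 1.76    # 8B solar-neutrino flux measured via the charged-current reaction
phi_NC = 5.09    # 8B flux via the neutral-current reaction (assumed SNO value, same units)
P_surv = phi_CC / phi_NC
print(f"P_surv ~ {P_surv:.3f}")   # ~0.346, i.e. about one third of the 8B neutrinos survive as nu_e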
The Hamiltonian of the two-flavor neutrinos is
H = (Δm²_ν/4p)·((−cos 2θ, sin 2θ), (sin 2θ, cos 2θ)) + diag(√2 G_F n_e, 0).
In this expression we dropped terms that are proportional to the identity matrix, as they are not important for the consideration of the survival probabilities. This Hamiltonian is time-dependent, as the electron number density n_e changes in the course of neutrino propagation. The time evolution of the neutrino states is adiabatic for high Δm²_ν, and hence we only need to study the eigenstates of the Hamiltonian at the point of production (n_e ≈ 100 N_A/cm³) and of detection (n_e = 0) [35]. In the vacuum, the eigenstates are given simply by the rotation of the flavor states by the vacuum mixing angle θ. We choose the convention that Δm²_ν > 0 without loss of generality, while the mixing angle is varied over 0 < θ < π/2 [30,31]. On the other hand, in the core of the Sun the eigenstates are given by the same rotation with θ replaced by the mixing angle θ_m in the presence of matter. Because the two different mass eigenstates decohere upon averaging over the energy and the production region for this range of Δm²_ν, we can obtain a very simple expression for the survival probability,
P_surv = cos²θ cos²θ_m + sin²θ sin²θ_m.
The point here is that one cannot explain the reduction of the electron neutrino flux to less than a half if Δ, which measures the size of the vacuum term relative to the matter term, is too large. One can show that the 90% CL upper limit P_surv < 0.425 translates to Δ < 1.31, and therefore into an upper bound on Δm²_ν. To be conservative, we use n_e = 100 N_A cm⁻³ at the core, even though the production region of ⁸B neutrinos is spread over about a tenth of the solar radius. We also conservatively take p ≃ 10 MeV, the higher end of the ⁸B spectrum. We then find numerically
Δm²_ν < … eV². (14)
Now we combine Eqs. (4) and (14) to obtain a limit on possible CPT violation. We, however, allow for the possibility that the definition of Δm² may be different between neutrinos and anti-neutrinos, and hence they may have a different sign. Given this, the limit is
|δ| ≡ |Δm²_ν − Δm²_ν̄| < 1.3 × 10⁻³ eV² (90% CL). (15)
Indeed, this constraint is the world's best bound on CPT violation in mass-squared parameters so far.
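The following small Python sketch illustrates the logic of the argument (it is not the paper's own numerical analysis): for energy-averaged vacuum oscillations the survival probability can never drop below one half, whereas in the matter-dominated limit θ_m → π/2 the adiabatic formula above gives P_surv → sin²θ, which can be about one third. The vacuum mixing angle used below is an assumed, LMA-like value.

import numpy as np

theta = np.deg2rad(33.0)                                   # assumed LMA-like vacuum mixing angle
P_vacuum_avg = 1.0 - 0.5 * np.sin(2.0 * theta) ** 2        # energy-averaged vacuum survival, >= 0.5
theta_m = np.pi / 2.0                                      # strong matter-dominance limit at production
P_matter = (np.cos(theta) ** 2 * np.cos(theta_m) ** 2
            + np.sin(theta) ** 2 * np.sin(theta_m) ** 2)   # adiabatic formula, tends to sin^2(theta)
print(f"vacuum-averaged: {P_vacuum_avg:.3f}, matter-dominated: {P_matter:.3f}")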
The situation on the LSND evidence for neutrino oscillation remains unresolved. Naively, the consistency between the solar neutrino data and KamLAND seems to exclude the possibility of explaining LSND together with the other data using CPT violation within three generations. However, the authors of Ref. [22] argued that the anti-neutrinos are subdominant in the atmospheric neutrino data and hence a Δm² as large as that of LSND is allowed for anti-neutrinos. This point had been criticized in [32]. If the LSND data stands, we may either need more than one sterile neutrino [33] or lepton-number-violating muon decay [34]. In the latter case, Mini-BooNE data will neither confirm nor refute the LSND data, and the situation may remain ambiguous.
In summary, we argued that the limit on CPT violation from the neutral kaon system is not as strong as it appears once viewed as a constraint on the mass-squared difference between the kaon and the anti-kaon. Compared to the kaon constraint, neutrino oscillation data provide much stronger limits. We derived a limit on δ = Δm²_ν − Δm²_ν̄ quantitatively from the SNO and KamLAND data, with an emphasis on using the absolute probability rather than relying on the Δχ² analysis. The obtained bound |δ| ≡ |Δm²_ν − Δm²_ν̄| < 1.3 × 10⁻³ eV² (90% CL) is currently the best limit on the possible CPT violation in mass-squared of particles and anti-particles. | 2018-12-14T03:16:55.556Z | 2003-07-09T00:00:00.000 | {
"year": 2003,
"sha1": "84847d8892a0213fd957cd766e8d6d4426dc30f8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2004.06.106",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "06b79187bfd7270e4442c6dcb14262681f2eae64",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
643697 | pes2o/s2orc | v3-fos-license | An exploratory study into the role of miR-204-5p in pregnancy-induced hypertension
The molecular mechanism that leads to pregnancy-induced hypertension (PIH), a pregnancy-specific syndrome, remains poorly understood. It has been suggested that microRNAs (miRNAs) may be potentially useful biomarkers for severe preeclampsia (PE), which is an important condition associated with PIH. The aim of the present study was to identify miR-204 by verifying differentially expressed serum miRNAs in patients with PIH during pregnancy compared with normal controls. Subsequently, the effects of miR-204 on proliferation and apoptosis of human choriocarcinoma (JAR) cells in hypoxic microenvironment were investigated. Previous studies indicated a number of miRNA candidates and the present study validated the expression of eight miRNAs in serum samples using reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A higher expression of miR-204 was identified in patients with PIH. To assess the impact of miR-204 inhibition on hypoxic JAR cells function in vitro, cell proliferation was detected using a Cell Counting Kit-8 assay. The rate of apoptosis and cell cycle progression was then examined by flow cytometry. RT-qPCR confirmed that serum miR-204-5p is more highly expressed in patients with PIH. Further statistical analysis indicated that the survival ratio of JAR cells in hypoxic microenvironments was increased in the miR-204-5p inhibitor group. However, the miR-204-5p inhibitor protected hypoxic JAR cells from apoptosis. The analysis of cell-cycle status demonstrated that the percentage of cells in the G2/G1 phase was larger compared with the control group. The results of the present study suggest that low levels of miR-204-5p may increase cell proliferation and reduce cell apoptosis with cell cycle changes in vitro. Therefore, serum miR-204-5p may be used as a notable biomarker for the diagnosis, prevention and treatment of PIH.
Introduction
Pregnancy-induced hypertension (PIH) is a syndrome that occurs during pregnancy. PIH is the primary cause of maternal, neonatal and fetal mortality and morbidity (1). PIH is defined by a diastolic blood pressure (DBP) >90 mmHg and systolic blood pressure (SBP) >140 mmHg, and causes complications in 6-10% of pregnancies. Therefore, PIH is classified as either mild (SBP, 140-149 mmHg; and DBP, 90-99 mmHg), moderate (SBP, 150-159 mmHg; and DBP, 100-109 mmHg) or severe (SBP, ≥160mmHg; and DBP, ≥110 mmHg) (2,3). According to the Canadian Hypertension Society (4), an important condition associated with PIH is gestational hypertension and preeclampsia (PE), a common health problem with adverse effects for fetus and mother (5). It has been indicated that patients with PIH may be at a higher long-term risk of other medical conditions in later life, including diabetes mellitus, hypertension, kidney disease and cardiovascular disease (6). Indeed, a positive association between PIH and hypertensive diseases was observed in a study of 3,593 women with PE during their first singleton pregnancy (7).
It has been suggested that PIH involves maternal endothelial cell injury and disordered blood vessel formation in the placenta, which may result from different cell factors generated during placental hypoxia (8). However, the pathogenic molecular mechanism of PIH is not yet fully understood, and therefore it is difficult to monitor disease progress. PIH is a genetic disease that develops through a multi-step process (9,10). Significant progress in the development of novel therapeutic agents may improve the understanding of the molecular basis of gene-disease interactions. Although a number of challenges remain regarding the prediction, prevention and management of PIH, previous studies have suggested that serum-based microRNAs (miRNAs) may be potential biomarkers or effective therapies for the early detection, diagnosis and follow-up of severe PE (11,12). This may provide novel suggestions for the management of PIH.
The discovery of miRNAs was a landmark milestone in molecular biology and pharmacology. miRNAs are a series of small (18-24 nucleotides) endogenous noncoding single-stranded RNAs, which can post-transcriptionally regulate target mRNAs by a non-perfect pairing of 6-8 nucleotides (13). Target mRNAs are subsequently degraded through the formation of the RNA-induced silencing complex, which suggests that miRNAs may control a range of different biological functions including cellular differentiation, proliferation and apoptosis (14). A higher expression of several miRNAs has been identified in the placenta of patients with PIH, suggesting that they may have a function in the placenta of patients with severe PE. These miRNAs include miR-92b, miR-342-3p, miR-197, miR-25, miR-296-5p, miR-26a, miR-202, miR-198, miR-95 and miR-204-5p (15)(16)(17), and alterations in miRNA expression may serve a critical role in disease progression due to the negative transcriptional or post-transcriptional regulation of important target genes (18). The present study aimed to examine which of the aforementioned miRNAs undergo significant alterations in expression during the course of PIH.
In the present study, differentially expressed miRNAs were investigated in the serum of patients with PIH using reverse transcription-quantitative polymerase chain reaction (RT-qPCR). The results indicate that miR-204-5p may affect cell proliferation, cell apoptosis and cell cycle progression in the hypoxic model in vitro. The results of the present study may be therefore be the basis for further studies to examine the pathophysiological mechanism and identify PIH biomarkers in order to improve the diagnosis, prevention and treatment of PIH.
Patient characteristics, clinical features and serum harvest.
For the identification of PIH-specific serum miRNAs, a total of 40 subjects were selected: 20 healthy female volunteers as controls and 20 female patients with PIH were recruited from the Third Affiliated Hospital, Sun Yat-sen University (Guangzhou, China; Table I) between December 2014 and March 2015, having given informed consent to be included in the present study. Differentially expressed miRNAs were directly validated using RT-qPCR according to previous research (19). As presented in Table I, the exclusion criteria for both groups included: patients with kidney disease or essential hypertension, a history of alcohol or drug abuse, and illegal drug addiction within the 6 months prior to signing the informed consent. Furthermore, the PIH patients were all pathologically diagnosed by doctors, and all blood samples were collected prior to any surgery, chemotherapy and/or radiation treatment.
From each patient, 5 ml venous blood was collected on first admission to the hospital. Blood was drawn into a sterile tube without anticoagulant to harvest cell-free serum. The tube was left in a standing position for 20 min prior to centrifugation at 20˚C and 1,500 x g for 10 min. The supernatant serum was quickly removed by pipette and stored immediately at -80˚C until analysis. The present study was approved by the Ethics Committee from the Third Affiliated Hospital, Sun Yat-sen University and a signed informed consent form was obtained from each participant prior to the study.
Cell culture and treatment. The human choriocarcinoma (JAR) cell line was obtained from the American Type Culture Collection (ATCC; Manassas, VA, USA). JAR cells were cultured for 24 h in growth media containing high glucose-Dulbecco's modified Eagle's medium and supplemented with 10% fetal bovine serum (Hyclone; GE Healthcare Life Sciences, Logan, UT, USA) and 1% penicillin/streptomycin (Mediatech, Inc., Manassas, VA, USA) in a humidified atmosphere of 5% CO 2 and a temperature of 37˚C.
Following dilution into single cell suspensions and seeding into 96-well plates (1x10 4 cells/well), a JAR cell hypoxic model was induced using an AnaeroPack ® system (Mitsubushi Gas Chemical America, Inc., New York, NY, USA) for 48 and 72 h, respectively, prior to harvesting for total RNA isolation (20).
Total RNA isolation and reverse transcription. TRIzol ® reagent (Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA) was used to prepare total RNA and subsequently, 75% ethanol replaced isopropanol for RNA precipitation, according to the manufacturer's protocol. RNA quality was determined using a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Inc., Wilmington, DE, USA). A total of 1 µg RNA was reverse-transcribed into cDNA using a DBI Bestar ® qPCR RT kit (DBI Bioscience, Ludwigshafen, Germany) according to the manufacturer's protocol.
Quantitative polymerase chain reaction (qPCR) of mature miRNAs. RT-qPCR was performed using a 7500 Fast Real-Time PCR System Light Cycler (Applied Biosystems; Thermo Fisher Scientific, Inc.). The 20 µl PCR reaction included 1 µl reverse transcription product (1:5), 0.5 µl sense primer, 0.5 µl universal reverse primer and 10 µl DBI-2043 Bestar ® Real time PCR Master Mix (DBI Bioscience). The reactions were incubated at 94˚C for 2 min in a 96-well optical plate, followed by 40 cycles of 94˚C for 20 sec, 8˚C for 20 sec and 72˚C for 20 sec. All reactions were completed in triplicate and primer sequences are listed in Table II. mRNAs were quantified using the 2 -ΔΔCq formula (21).
Cell proliferation detection. miR-204-5p inhibitor (Shanghai GenePharma, Ltd., Shanghai, China) was transfected into the hypoxic JAR cells using Lipofectamine ® 2000 (Invitrogen; Thermo Fisher Scientific, Inc.) prior to incubation in DMEM without FBS at 37˚C for 48 h. JAR cells were then transfected with 100 nM miR-204-5p inhibitor and subjected to the hypoxia precondition for 48, 72 and 96 h respectively. Then, 100 µl Cell Counting Kit-8 solution (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) was added to each well and incubation was completed for 1 h at 37˚C. Absorbance was measured at 450 nm using a microplate reader.
Apoptosis assay. Following transfection of the miRNA inhibitor in the hypoxic environment for 48 h, quantification of the apoptotic cells was completed using the Annexin V-FITC/PI apoptosis detection kit (Merck Millipore, Darmstadt, Germany). JAR cells were collected by trypsin digestion method (22), washed with phosphate buffered saline (PBS) and re-suspended in 200 µl binding buffer containing 5 µl Annexin V (10 µg/ml) in DMEM with FBS at 37˚C for 10 min in the dark. The cells then underwent incubation with 10 µl PI (20 µg/ml) for 15 min and samples were analyzed using an EPICS ® XL™ flow cytometer (Beckman Coulter, Inc., Brea, CA, USA). Data acquisition and analysis were performed using CellQuest™ software version 5.1 (BD Biosciences, Franklin Lakes, NJ, USA) (23,24).
Cell cycle assay. Following transfection of the miRNA inhibitor into the hypoxic environment and incubation at 37˚C for 48 h, JAR cells were collected by trypsin digestion method and washed with PBS prior to re-suspension in 250 µl DMEM. Cold (4˚C) dehydrated ethanol (99%) was added to this buffer and incubated overnight at 4˚C. Following treatment, cells were collected and incubated with 200 µl PI (20 µg/ml) using a cell cycle assay kit (Vazyme Biotech, Co., Ltd., Nanjing, China) at 37˚C for 15 min. Samples were immediately analyzed using flow cytometry (EPICS ® XL™; Beckman Coulter, Inc.). Data acquisition and analysis were performed using CellQuest software version 5.1 (BD Biosciences) (23,24).
Data analysis. For RT-qPCR data analysis, the relative quantification method was used to determine the changes in the expression of the target miRNAs. U6 RNA was used to normalize the expression and change in amplification. The fold change in expression was calculated for each sample using 2^(−ΔΔCq), where ΔΔCq = (Cq_target gene − Cq_U6)_PIH − (Cq_target gene − Cq_U6)_control (25). A value of 2^(−ΔΔCq) >1.5 or <0.67 was considered to represent differentially expressed miRNA. The Welch t-test was used to assess the differential expression of miRNA measured by RT-qPCR.
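A minimal Python sketch of the fold-change calculation described above; the Cq values are hypothetical placeholders rather than data from the study.

cq_target_pih, cq_u6_pih = 26.1, 21.0      # hypothetical Cq values for a PIH serum sample
cq_target_ctrl, cq_u6_ctrl = 27.4, 21.2    # hypothetical Cq values for a control sample

ddcq = (cq_target_pih - cq_u6_pih) - (cq_target_ctrl - cq_u6_ctrl)   # ddCq as defined above
fold_change = 2.0 ** (-ddcq)
up = fold_change > 1.5          # scored as differentially expressed (up-regulated) per the criterion above
down = fold_change < 0.67       # scored as differentially expressed (down-regulated)
print(f"fold change = {fold_change:.2f}, up = {up}, down = {down}")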
For other data analysis, Statistical analysis was performed using SPSS, version 17.0 (SPSS, Inc., Chicago, IL, USA). One-way analysis of variance was used to compare log 10 -transformed relative quantities of target miRNAs between all groups. Bartlett's test was used to assess the differences in variance between genes. P<0.05 was considered to represent a statistically significant difference for all experiments.
Expression of miRNAs in the hypoxic environment in vitro.
miR-197-3p, miR-92b-3p, miR-26a-5p, miR-198 and miR-204-5p were selected for assessment in vitro on the basis of the results of the aforementioned clinical sample examination. Only miR-204-5p expression increased significantly in a time-dependent manner in hypoxic JAR cells (0, 48 and 72 h; Fig. 2).
Effect of miR-204-5p inhibitor on cell proliferation. The effect of the miR-204-5p inhibitor on JAR cell proliferation was subsequently assessed. In hypoxia-pretreated JAR cells, cellular proliferation was enhanced significantly in a time-independent manner in the miR-204-5p inhibitor group, as indicated by the number of cells detected at each time point (P<0.001; Fig. 3).
(Table I. Characteristics of study subjects in the control and PIH groups. Data are presented as the mean ± standard deviation. PIH, pregnancy-induced hypertension; GA, gestational age; BP, blood pressure.)
Effects of miR-204-5p inhibitor on cell apoptosis and cell cycle progression. Apoptosis and cell cycle distribution were analyzed using flow cytometry following the transfection of the miR-204-5p inhibitor and hypoxia pre-treatment for 48 h. Compared with the control group, JAR cells exposed to the miR-204-5p inhibitor exhibited typical protection from apoptotic morphology (nuclear fragmentation, cell shrinkage and cellular rupture into debris). Apoptosis occurred at a significantly higher rate in cells treated with the negative control compared with the group treated with the miR-204-5p inhibitor (P<0.01; Fig. 4). Assessment of the cell cycle indicated that the ratio of cells in the G2/G1 phase increased, although this increase was not significant. However, G1 cell cycle arrest was significantly reduced following transfection with the miR-204-5p inhibitor (P<0.05; Fig. 5).
Discussion
To the best of our knowledge, the molecular mechanism during PIH pathogenesis, which is responsible for the occurrence and progression of the syndrome, remains unknown. miRNAs may target a large number of genes and participate in numerous cellular events that may be important in maintaining homoeostasis in a number of organs, including the placenta (26). In the present study, due to the developing understanding of miRNA expression in PIH, the expression of eight miRNAs was identified in the serum samples of healthy controls and patients with PIH using RT-qPCR. The results of the present study identified that five of the eight miRNAs (miR-197-3p, miR-92b-3p, miR-26a-5p, miR-198 and miR-204-5p) are more highly expressed in patients with PIH (Fig. 1). This indicates that a number of miRNAs may impact the course of PIH. Further detection of these miRNAs using a hypoxia pre-treatment cell model demonstrated that only miR-204-5p expression increased in a time-dependent manner in hypoxic JAR cells (Fig. 2), which suggests that miR-204-5p reduced JAR cell proliferation as a suppressor in the hypoxic environment. Effects of the miR-204-5p inhibitor on cell function were also examined, the results of which suggested that the miR-204-5p inhibitor may enhance the cellular population of JAR cells in the hypoxic environment (Fig. 3) and protect JAR cells from apoptosis ( Fig. 4) by reducing G1 cell cycle arrest (Fig. 5). The data collected in the present study indicates that miR-204-5p may be an important therapeutic target for the improved prediction, prevention and treatment of PIH. The present study produced evidence that expression of miR-204-5p, which acts as a potent cell survival suppressor in vitro, is somatically increased in patients with PIH. This is consistent with the results from a previous study, which indicated that miR-204 is also upregulated in the serum of endometrial carcinoma patients (27). Other previous studies have demonstrated that miR-204-5p expression is decreased in several types of solid tumors (28)(29)(30)(31)(32). It has been previously determined, using a human miRNA microarray, that miR-204-5p is markedly downregulated in endometrioid adenocarcinoma tissues (32), inhibiting tumor growth in renal clear cell carcinoma (33,34), and that it suppresses invasion I endometrial cancer (35), gastric cancer (36), and head and neck tumor (37). This implies that a change of miR-204-5p may be common in tumorigenesis and reinforces the complexity of miR-204 regulation. There have been limited studies focusing on the association between PIH and miR-204-5p, as a potential marker of this syndrome. The results of the present study support the possibility that miR-204-5p may be a potential biomarker for good prognosis of PIH. Thus, therapeutic approaches targeting elevated levels of miR-204-5p should be investigated as a novel approach to improve clinical outcomes for patients with PIH. miRNA typically targets the 3' untranslated region of genes and leads to a significant reduction of full-length proteins (38). It has previously been suggested that the ratio of fms-like tyrosine kinase-1 (sFlt-1) and placental growth factor (PIGF) may be an additional diagnostic or predictive tool for PE (39,40). A meta-analysis showed increased concentrations of placental sFlt1 and decreased concentrations of PIGF in pregnancies in which the mother developed PE (41). Furthermore, the level of vascular endothelial growth factor was lower in women who developed PE, compared with those that did not (42). 
One previous study produced a series of gene profiles in human peripheral blood mononuclear cells and investigated their association with PE (43). However, the molecular mechanism by which miR-204-5p targets these important genes during PIH or PE requires further study. In conclusion, the present study demonstrated that the expression of miR-204-5p is significantly upregulated in the clinical serum samples of patients with PIH. Suppression of miR-204-5p by its specific inhibitor induces JAR cell growth, decreases cell apoptosis and changes the cell cycle in vitro. Therefore, miR-204-5p may be considered a potential unfavorable progression indicator for patients with PIH and may serve as a therapeutic target in the future. | 2018-04-03T03:10:12.108Z | 2017-03-09T00:00:00.000 | {
"year": 2017,
"sha1": "41af24ad0e143b8a89f424dd6d447bae3d13b9e9",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2017.4212/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "41af24ad0e143b8a89f424dd6d447bae3d13b9e9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
24375304 | pes2o/s2orc | v3-fos-license | The genetic code, algebra of projection operators and problems of inherited biological ensembles
This article is devoted to applications of projection operators to simulate phenomenological properties of the molecular-genetic code system. Oblique projection operators are under consideration, which are connected with matrix representations of the genetic coding system in the forms of the Rademacher and Hadamard matrices. Evidence is shown that sums of such projectors provide the ability to adequately simulate ensembles of inherited biological phenomena, including ensembles of biological cycles, morphogenetic ensembles of phyllotaxis patterns, etc. For such modeling, the author proposes multidimensional vector spaces whose subspaces are under a selective control (or coding) by means of a set of matrix operators on the basis of genetic projectors. The development of genetic biomechanics is discussed. The author also proposes and describes special systems of multidimensional numbers under such names as tensor-complex numbers, tensor-double numbers, etc. The described results can be used for developing algebraic biology, biotechnical applications and some other fields of science and technology.
ABOUT THE PARTNERSHIP OF THE GENETIC CODE AND MATHEMATICS
Science has led to a new understanding of life itself: "Life is a partnership between genes and mathematics" [Stewart, 1999]. But what kind of mathematics can be a partner for the genetic coding system? This article shows some evidence that the algebra of projectors can be one of the main parts of such mathematics. Until now, the notion of projection operators (or, briefly, projectors) has been one of the important notions in many fields of non-biological science: physics, including quantum mechanics; mathematics; computer science and informatics, including the theory of digital codes; chemistry; mathematical logic, etc. On the basis of the materials of this article, the author thinks that projectors can become one of the main notions and effective mathematical tools in mathematical biology. Moreover, they will help not only in the development of algebraic biology and a new understanding of living matter but also in a mutual enrichment of different branches of science.
Projectors are expressed by means of square matrices (http://mathworld.wolfram.com/ProjectionMatrix.html, https://en.wikipedia.org/wiki/Projection_(linear_algebra)). A necessary and sufficient condition for a matrix P to be a projection operator is the fulfillment of the condition P² = P. The set of projectors is separated into two sub-sets: • orthogonal projectors, which are expressed by symmetric matrices and whose theory is well developed and has a lot of applications; • oblique projectors, which are expressed by non-symmetric matrices; their theory and its applications are much less developed, as far as the author can judge. Namely oblique projectors will be the main objects of attention in this article.
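As a minimal illustration of the definitions above (not taken from the article's own matrices), the following Python/NumPy check verifies the condition P² = P and distinguishes an orthogonal projector (symmetric) from an oblique one (non-symmetric).

import numpy as np

def is_projector(P, tol=1e-12):
    # Necessary and sufficient condition for a projection operator: P @ P == P
    return np.allclose(P @ P, P, atol=tol)

P_orthogonal = np.array([[1.0, 0.0],
                         [0.0, 0.0]])   # symmetric: orthogonal projector onto the x-axis
P_oblique = np.array([[1.0, 0.0],
                      [1.0, 0.0]])      # non-symmetric: an oblique projector

for P in (P_orthogonal, P_oblique):
    print(is_projector(P), np.allclose(P, P.T))   # (True, True) and (True, False)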
This article is a continuation and an essential development of the author's article about relations between the genetic system and projection operators [Petoukhov, 2010].
In accordance with Mendel's laws of independent inheritance of traits, information from the micro-world of genetic molecules dictates constructions in the macro-world of living organisms under strong noise and interference. This dictation is realized by means of unknown algorithms of multi-channel noise-immunity coding. For example, in human organism, his skin color, eye color and hair color are inherited genetically independently of each other. It is possible if appropriate kinds of information are conducted via independent informational channels and if a general "phase space" of living organism contains sub-spaces with a possibility of a selective control or a selective coding of processes in them. So, any living organism is an algorithmic machine of multi-channel noise-immunity coding with ability to a selective control and coding of different sub-spaces of its phase space (a model approach to phase spaces with a selective control of their sub-spaces is presented in this article). This machine works in conditions of ontogenetic development of the organism when a multi-dimensionality of its phase space is increased step by step.
To understand such genetic machine, it is appropriate to use the theory of noiseimmunity coding and transmission of digital information, taking into account the discrete nature of the genetic code. In this theory, mathematical matrices have the basic importance. The use of matrix representations and analysis in the study of phenomenological features of molecular-genetic ensembles has led to the development of a special scientific direction under a name "Matrix Genetics" [Petoukhov, 2008;Petoukhov, He, 2009]. Namely researches of the "matrix genetics" gave results that are represented in this article.
Concerning the theme of projectors in inherited biological phenomena, one can note that our genetically inherited visual system works on the principle of projection of external objects at the retina. This projection is modeled using projection operators. The author believes that the value of projectors for bioinformatics is not limited to this single fact of biological significance of projection operators, but that the whole system of genetic and sensory informatics is based on their active use. This ubiquitous use of projection operators reflects and ensures (in some degree) the unity of any organism and interrelations of its parts.
The set of projection operators, which are associated with the matrix representation of the genetic code, provides new opportunities for modeling ensembles of inherited cycles; ensembles of phyllotaxis structures; a numeric specificity of reproduction of genetic information in acts of mitosis and meiosis of biological cells, etc. In the frame of the "projector conception" arised here in genetic informatics, some features of evolutionary transformations of variants (or dialects) of the genetic code are clarified.
The last examined sums of the column projectors (c0+c3) and (c1+c2) possess the following feature: the matrices of their second powers are quadrupled as a result of exponentiation to integer powers, ((c0+c3)²)ⁿ = 4ⁿ⁻¹·(c0+c3)² and ((c1+c2)²)ⁿ = 4ⁿ⁻¹·(c1+c2)², where n = 1, 2, 3, … (this feature can be used to simulate the genetic phenomenon of tetra-reproduction of gametes and genetic information in the course of meiosis). The cells with these sums (c0+c3) and (c1+c2) are marked by yellow colour in the left table on Figure 4.
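The Rademacher matrix R4 of Figure 1 is not reproduced in this text, so the following NumPy sketch uses an assumed Rademacher-type (±1, unit-diagonal) matrix purely to illustrate the stated quadrupling property of a sum of two single-column projectors; it is not the article's own genetic matrix.

import numpy as np

R = np.array([[ 1,  1,  1,  1],
              [ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [ 1, -1, -1,  1]], dtype=float)   # assumed Rademacher-type matrix with unit diagonal

def column_projector(M, k):
    # Sparse matrix keeping only column k of M; it satisfies C @ C == C because M[k, k] == 1
    C = np.zeros_like(M)
    C[:, k] = M[:, k]
    return C

c0, c3 = column_projector(R, 0), column_projector(R, 3)
assert np.allclose(c0 @ c0, c0) and np.allclose(c3 @ c3, c3)          # both are (oblique) projectors

S2 = (c0 + c3) @ (c0 + c3)
for n in (1, 2, 3, 4):
    assert np.allclose(np.linalg.matrix_power(S2, n), 4 ** (n - 1) * S2)
print("((c0 + c3)^2)^n = 4^(n-1) * (c0 + c3)^2 holds for this example")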
One can mention that the structure of this symmetric table unexpectedly coincides with the structure of the typical (4*4)-matrix of dyadic shifts, which is known in the theory of signal processing and which is related to some phenomenological properties of molecular-genetic systems [Ahmed, Rao, 1975; Petoukhov, 2008, 2012a; Petoukhov, He, 2009].
Each of the cells in these tables on Figure 9 represents a sum of those projectors which denote its column and row, by analogy with Figure 4. Again we have three types of such sums, which are marked by green, red and yellow and which possess similar properties to the cases on Figure 4.
GENETIC HADAMARD MATRICES AS SUMS OF PROJECTORS
The genetic Hadamard matrix H4 from Figure 1 can also be decomposed into a sum of 4 sparse matrices, H4 = h0+h1+h2+h3, where each of the sparse matrices contains only one non-zero column (in the case of the «column decomposition») or only one non-zero row (in the case of the «row decomposition») (Figure 10). Each of these sparse matrices h0, h1, h2, h3 and g0, g1, g2, g3 on Figure 10 is a projector. We will conditionally name the projectors h0, h1, h2, h3 again as «column projectors» and the projectors g0, g1, g2, g3 as «row projectors».
By analogy with the previous section about the Rademacher matrix R4, one can analyse features of sums of pairs of these column projectors and row projectors in relation to their exponentiation. In other words, one can analyze features of matrices (h0+h1)ⁿ, (h0+h2)ⁿ, … and (g0+g1)ⁿ, (g0+g2)ⁿ, …, where n = 1, 2, 3, … . Such analysis leads to resulting tables on Figure 11.
Figure 11. Tables of some features of sums of pairs of the different «column projectors» h0, h1, h2, h3 (in the left table) and of the «row projectors» g0, g1, g2, g3 (from the Hadamard matrix H4 on Figure 10) in relation to their exponentiation. Explanations in text.
By analogy with the previous sections, one can analyse features of sums of pairs of these column projectors and row projectors in relation to their exponentiation. In other words, one can analyze features of matrices (u0+u1)ⁿ, (u0+u3)ⁿ, … and (d0+d1)ⁿ, (d0+d2)ⁿ, …, where n = 1, 2, 3, … . Such analysis leads to resulting tables on Figure 16.
Figure 16. Tables of some features of sums of pairs of the different column projectors u0, u1, …, u7 (from Figure 14) and of the row projectors d0, d1, …, d7 (from Figure 15) in relation to their exponentiation. This is the case of the Hadamard matrix H8 from Figure 1.
Cyclic properties of these (8*8)-matrix operators exist due to a connection of these operators with complex numbers. Figure 17 shows some examples of decompositions of the (8*8)-matrices from the green cells on Figure 16 into corresponding sets of two sparse matrices, each of which is closed in relation to multiplication and each of which defines the multiplication table of complex numbers (see some additional details about representations of complex numbers by means of (2ⁿ*2ⁿ)-matrices in [Petoukhov, 2012b]). It should be noted here that our study in the field of matrix genetics has revealed methods of extension of the genetic matrices R4, R8, H4, H8 (Figure 1) into (2ⁿ*2ⁿ)-matrices which are also sums of "column projectors" and "row projectors" and which give, by analogy, as many cyclic groups as needed to model big ensembles of cyclic processes.
Figure 17. The decomposition of the (8*8)-matrices u0+u4, u1+u5, u2+u6, u3+u7, which are examples of (8*8)-matrices from the green cells on Figure 16, into corresponding sets of two sparse matrices e0 and e4, e1 and e5, e2 and e6, e3 and e7, each of which is closed in relation to multiplication and each of which defines the multiplication table of complex numbers (on the right).
Figure 17 testifies that the Hadamard (8*8)-matrix H8 = (u0+u4)+(u1+u5)+(u2+u6)+(u3+u7) (Fig. 1) is a sum of 4 complex numbers in the 8-dimensional space.
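The sparse (8*8) blocks of Figure 17 are not reproduced in this text; the familiar dense 2×2 realization of the same idea is sketched below in NumPy: a complex number a + b·i is represented by the matrix ((a, −b), (b, a)), and matrix multiplication of two such blocks reproduces complex multiplication.

import numpy as np

def as_matrix(z):
    # 2x2 matrix representation of the complex number z = a + b*i
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z1, z2 = 2.0 + 1.0j, -1.0 + 3.0j
print(np.allclose(as_matrix(z1) @ as_matrix(z2), as_matrix(z1 * z2)))   # True: multiplication is preserved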
INHERITED BIOCYCLES AND A SELECTIVE CONTROL OF CYCLIC CHANGES OF VECTORS IN A MULTIDIMENSIONAL SPACE. PROBLEMS OF GENETIC BIOMECHANICS
Any living organism is an object with a huge ensemble of inherited cyclic processes, which form a hierarchy at different levels. Even every protein is involved in a cycle of the "birth and death," because after a certain time it breaks down into its constituent amino acids and they are then collected into a new protein. According to chronomedicine and biorhythmology, various diseases of the body are associated with disturbances (dyssynchronization) in these cooperative ensembles of biocycles. All inherited physiological subsystems of the body should be agreed with the structural organization of genetic coding for their coding and transmission to descendants; in other words, they bear the stamp of its features. We develop a "genetic biomechanics", which studies deep coherence between inherited physiological systems and molecular-genetic structures.
Our discovery of the described cyclic groups (based on genetic projectors), which are connected with phenomenological properties of molecular-genetic systems in their matrix forms of representation, provides a mathematical approach to simulating ensembles of cyclic processes. In this approach, the idea of a multidimensional vector space is used to model inherited biological phenomena, including cooperative ensembles of cyclic processes.
Multidimensional vectors of this bioinformation space can be changed under the influence of the matrix operators based on genetic projectors that were described in the previous sections. Due to the special properties of these operators, a useful possibility exists to provide selective control (or selective coding) of cyclic changes (and some other changes) of separate coordinates of multidimensional vectors in this space.
One can see from Figure 18 that only the coordinates z0 and z2 undergo cyclic changes in this set of new vectors Zn; all other coordinates are equal to zero. In other words, all cycles are realized on a 2-dimensional plane (z0, z2) inside the 8-dimensional space. If one uses another cyclic group of operators, for example (2^-0.5*(s1+s3))^n (s1 and s3 are from Figure 7), then the same initial vector X = [x0, x1, x2, x3, x4, x5, x6, x7] will be transformed into a cyclic set of vectors in another 2-dimensional plane (z1, z3) of the same 8-dimensional space. One should conclude that, in this model approach, the same initial information in the form of a multidimensional vector X can generate several cyclic processes in different planes of an appropriate multidimensional space by means of cyclic operators of the described type. In other words, such operators allow a multi-purpose use of the vector information (for instance, this informational vector can represent a fragment of a nucleotide sequence that is used to organize many cyclic processes in different planes or subspaces of a phase space of genetic phenomena).
One more benefit of the proposed model approach is that the different cyclic processes of such a cooperative ensemble can be easily coordinated and synchronized, including the assignment of their relative phase shifts, starting times, and the different tempos of their cycles.
One technical remark is needed here. If we use a cyclic operator based on the "column projectors", then the vector X should be multiplied by the matrix on the right, according to the pattern [x0, x1, x2, x3, x4, x5, x6, x7]*(2^-0.5*(s0+s2))^n. But if we use a cyclic operator based on the "row projectors" (for instance, v0 and v2 from Figure 8), then the vector X should be multiplied by the matrix on the left, according to the pattern (2^-0.5*(v0+v2))^n*[x0; x1; x2; x3; x4; x5; x6; x7].
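To make the selective-control property concrete, the short sketch below applies a cyclic operator of the described type to a row vector X. Since the projectors s0 and s2 of Figure 7 are not reproduced in this text, the matrix M is a hypothetical stand-in that only imitates the action attributed to 2^-0.5*(s0+s2): a rotation by 45 degrees confined to the plane (z0, z2) of the 8-dimensional space, with zero action on the remaining coordinates.

```python
import numpy as np

# Hypothetical stand-in for the operator 2**-0.5 * (s0 + s2): the exact projectors s0, s2
# of Figure 7 are not reproduced here, so we model only their combined action, namely
# a rotation by 45 degrees confined to the (z0, z2) plane of an 8-dimensional space.
M = np.zeros((8, 8))
angle = np.pi / 4                      # 45 degrees, so the cycle has period 8
M[0, 0] = M[2, 2] = np.cos(angle)
M[0, 2] = np.sin(angle)
M[2, 0] = -np.sin(angle)

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])    # initial row vector X

for n in range(1, 9):
    Z = X @ np.linalg.matrix_power(M, n)   # row vector multiplied by the matrix on the right
    print(n, np.round(Z, 3))
# Only the coordinates z0 and z2 change cyclically; all other coordinates are zero,
# i.e. the cycle is realized on a 2-dimensional plane inside the 8-dimensional space.
```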
Below we will describe extensions of the genetic (4*4)-matrices R4 and H4 (Figure 1) into (2^n*2^n)-matrices, each of which consists of 2^n "column projectors" (or 2^n "row projectors"); summation of projectors from this expanded set leads to new cyclic groups, etc., by analogy with the described cases (see Figures 4, 9, 11, 16). This gives a great number of cyclic groups of operators with similar properties of selective control (or selective coding) of cyclic changes of coordinates of 2^n-dimensional vectors. These numerous cyclic groups are useful for simulating large cooperative ensembles of cyclic processes, for instance, an ensemble of cyclic motions of legs, hands and separate muscles during different gaits (walking, running, etc.) simultaneously with heartbeats, breathing cycles, metabolic cycles, etc. Such models and their practical applications are being developed in the author's laboratory. The problem of inherited ensembles of biological cycles is closely linked to the fundamental problems of the biological clock and time, aging, etc. Taking into account the results obtained in "matrix genetics", the author puts forward "a biological concept of projectors", which interprets the living body as a colony of projection operators.
It should be noted that in the case of a cyclic group of vector transformations with period 8 (for example, the cyclic group (2^-0.5*(s0+s2))^n), which has only 8 discrete stages inside one cycle, one can enlarge the number of stages by a factor of k by using the power in the form n/k: the cyclic group (2^-0.5*(s0+s2))^(n/k) has 8*k stages inside one cycle (here n and k are positive integers). The greater the value of k, the finer the discretization of the cycle and the smoother (more continuous) the cyclic process.
It can be added that many gaits (which are based on cyclic movements of limbs and the corresponding muscle actuators) have a genetically inherited character. Newborn turtles and crocodiles, when they hatch from eggs, crawl to the water with quite coordinated movements without any training from anybody; a newborn foal begins to walk and run after a short time; centipedes crawl by means of coordinated movements of a great number of legs (this number sometimes reaches 750) on the basis of inherited algorithms of leg control. One should emphasize that, historically, gaits and locomotion algorithms were studied in the biomechanics of movement without any connection to the structures of genetic coding or to the inheritance of unified control algorithms. Projection operations are associated with many kinds of movements and planned actions of our body that achieve a goal by the shortest path: for example, when sending a billiard ball into a pocket, we use a projection operation; when directing a finger to a key of a computer or a piano, we make a projection action, etc. In other words, the concept of projection operators can additionally be used to simulate a broad class of such biomechanical actions.
The subject of the genetically inherited ability to coordinate movements of body parts is connected with the fundamental problems of congenital knowledge about surrounding space and of the physiological foundations of geometry. Various researchers have long put forward ideas about the importance of the kinematic organization of the body and its movements in the genesis of an individual's spatial representations. For example, H. Poincare made these ideas the foundation of his teaching about the physiological foundations of geometry and about the origin of spatial representations in the individual.
According to Poincare, the concepts of space and geometry arise in an individual on the basis of the kinematic organization of his body, using characterizations of the positions and movements of body parts relative to each other; that is, the kinematic organization of the body contains something that precedes the concept of space [Poincare, 1913]. The evolutionary development of the whole apparatus of kinematic activity of our body has provided the coherence of this apparatus with the realities of the physical world. Because of this, each newborn organism receives adequate spatial representations not only through personal contact during ontogeny with the objects of the surrounding world, but also at the expense of the achievements of previous generations fixed in the apparatus of body movements in phylogenesis. According to Poincare, spatial and geometric concepts are excluded for an organism that is absolutely immobile. «To localize an object simply means to represent to oneself the movements that would be necessary to reach it. I will explain myself. It is not a question of representing the movements themselves in space, but solely of representing to oneself the muscular sensations which accompany these movements and which do not presuppose the preexistence of the notion of space» [Poincare, 1913, p. 247]. «I have just said that it is to our own body that we naturally refer exterior objects; that we carry about everywhere with us a system of axes to which we refer all the points of space and that this system of axes seems to be invariably bound to our body. It should be noticed that rigorously we could not speak of axes invariably bound to the body unless the different parts of this body were themselves invariably bound to one another. As this is not the case, we ought, before referring exterior objects to these fictitious axes, to suppose our body brought back to the initial attitude» [Poincare, 1913, p. 247]. «We should therefore not have been able to construct space if we had not had an instrument to measure it; well, this instrument to which we relate everything, which we use instinctively, it is our own body. It is in relation to our body that we place exterior objects, and the only spatial relations of these objects that we can represent are their relations to our body. It is our body which serves us, so to speak, as system of axes of coordinates» [Poincare, 1913, p. 418]. In Poincare's time science did not know about the genetic code, but from the modern point of view these thoughts of Poincare testify in favor of the importance of the structural organization of the genetic system for the physiological foundations of geometry and for innate notions of space, which are connected with the inherited apparatus and algorithms of body movements. And they are in tune with the results of matrix genetics, which are presented in our paper.
Modern physiology makes a significant addition to Poincare's teaching about the innate relationship between the body and spatial representations, claiming the existence of a priori notions about our own body scheme. This statement is due to the study of so-called phantom sensations in disabled people: a special sense of the presence of natural body parts that are absent in reality. It was found [Vetter, Weinstein, 1967; Weinstein, Sersen, 1961] that phantom sensations occur not only in people with amputated limbs, but also in people with a congenital absence of limbs. Hence, the notion of the individual scheme of our body is not conditioned by our experience, but has an innate character.
Additional materials relating to innate spatial representations, including B. Russell's concept [Russel, 1956] of the innate character of the ideas of projective geometry for each person, as well as an overview of the works of E. Schroedinger and other researchers on the geometry of spaces of visual perception, can be found in the book [Petoukhov, 1981].
We note here that although the concept of space is the primary concept for most physical theories, one can develop a meaningful theory in theoretical physics in which it serves only as a secondary notion, deduced from the primary basis of a numeric system of a discrete character. We mean the "binary geometrophysics" [Vladimirov, 2008], whose ideas generate some associations with the ability of animal organisms (initially endowed with discrete molecular genetic information) to acquire spatial representations and to create spatial movements on the basis of this primary information of a discrete character.
ON THE DIRECTION OF ROTATION OF VECTORS UNDER THE INFLUENCE OF THE CYCLIC GROUPS OF OPERATORS
In the configurations and functions of biological objects, one direction of rotation is frequently preferable (this concerns the famous problem of biological dissymmetry). Taking this into account, it is interesting what one can say about the directions of rotation of 4-dimensional and 8-dimensional vectors under the influence of the cyclic groups described in the previous sections. Figure 19 gives the answer and shows the directions of cyclic rotation of the vectors [x0, x1, x2, x3]*(2^-0.5*(ci+cj))^n, (2^-0.5*(ri+rj))^n*[x0, x1, x2, x3], [x0, x1, x2, x3, x4, x5, x6, x7]*(2^-0.5*(si+sj))^n, (2^-0.5*(vi+vj))^n*[x0, x1, x2, x3, x4, x5, x6, x7] as n increases (i≠j; all these cyclic operators correspond to green cells in the tables of Figure 19 and are based on summation of pairs of the projectors of the Rademacher matrices R4 and R8 from Figure 1).
Figure 19. In addition to Figures 4 and 9, the tables show the directions of rotation of 4-dimensional and 8-dimensional vectors under the influence of the cyclic groups of operators, which correspond to green cells and which are based on summation of pairs of the "column projectors" (on the left, see Figures 2, 4, 7, 9) and of the "row projectors" (on the right, see Figures 2, 4, 8, 9) of the Rademacher matrices R4 and R8 (Figure 1). The symbol Q means counter-clockwise rotation; the symbol P means clockwise rotation.
Figure 20 shows the directions of cyclic rotation of the vectors [x0, x1, x2, x3]*(2^-0.5*(hi+hj))^n, (2^-0.5*(gi+gj))^n*[x0, x1, x2, x3], [x0, x1, x2, x3, x4, x5, x6, x7]*(2^-0.5*(ui+uj))^n, (2^-0.5*(di+dj))^n*[x0, x1, x2, x3, x4, x5, x6, x7] as n increases (i≠j; all these operators correspond to green cells in the tables of Figure 20 and are based on summation of pairs of the projectors of the Hadamard matrices H4 and H8 from Figure 1). Each of the tables in Figures 19 and 20 contains markedly different (asymmetric) numbers of clockwise and counterclockwise rotations. These facts give evidence in favor of the idea that living matter, at its basic level of genetic information, has certain informational reasons to provide the dissymmetry of inherited biological structures and processes. Taking this into account, the author considers the possibility of informational reasons for biological dissymmetry. Here one can recall, for comparison, that scientists usually look for the reasons of biological dissymmetry in the physical or chemical sciences, but not in information science.
HAMILTON'S QUATERNIONS, COCKLE'S SPLIT-QUATERNIONS, THEIR EXTENSIONS AND PROJECTION OPERATORS
In previous sections we described cases of summation of pairs of the oblique projectors. Now let us consider cases of summation of 4 of these projectors and cases of summation of 8 of these projectors.
The matrix H4 (Figure 1) is the sum of the four "column projectors" or of the four "row projectors" (Figure 10). But H4 also has another decomposition, in the form of four sparse matrices H40, H41, H42 and H43 (Figure 21). This set is closed with respect to multiplication, and it defines a multiplication table (Figure 21, bottom level) that is identical to the known multiplication table of Hamilton's quaternions. From this point of view, the matrix H4 is the Hamilton quaternion with unit coordinates. (This type of decomposition is termed a dyadic-shift decomposition because it corresponds to the structure of matrices of dyadic shifts, well known in signal processing technology [Ahmed, Rao, 1975].) Here one can mention that Hamilton quaternions are closely related to the Pauli matrices, the theory of the electromagnetic field (Maxwell wrote his equations in the language of Hamilton's quaternions), the special theory of relativity, the theory of spins, the quantum theory of chemical valency, etc. In the twentieth century thousands of works were devoted to quaternions in physics [http://arxiv.org/abs/math-ph/0511092]. Now Hamilton quaternions are manifested in the genetic code system. Our scientific direction, "matrix genetics", has led to the discovery of an important bridge among physics, biology and computer science for their mutual enrichment. In our studies, we have obtained a new example of the effectiveness of mathematics: abstract mathematical structures, derived by mathematicians at the tip of a pen 160 years ago, have long been embodied in the informational basis of living matter, the system of genetic coding. Mathematical structures that were discovered by mathematicians as a result of painful reflection (like Hamilton, who spent 10 years of continuous thought to reveal his quaternions) turn out to be already represented in the genetic coding system.
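As a small numeric illustration, the sketch below verifies Hamilton's multiplication table with one standard (4*4) real representation of the quaternion units; this is not the dyadic-shift basis H40, H41, H42, H43 of Figure 21 (which is not reproduced here), but it obeys the same table.

```python
import numpy as np

# One standard 4x4 real representation of the quaternion units 1, i, j, k
# (left-multiplication matrices); Figure 21 uses a different, dyadic-shift basis,
# but the multiplication table is the same.
E = np.eye(4)
I = np.array([[0,-1, 0, 0],[1, 0, 0, 0],[0, 0, 0,-1],[0, 0, 1, 0]])
J = np.array([[0, 0,-1, 0],[0, 0, 0, 1],[1, 0, 0, 0],[0,-1, 0, 0]])
K = np.array([[0, 0, 0,-1],[0, 0,-1, 0],[0, 1, 0, 0],[1, 0, 0, 0]])

assert np.array_equal(I @ I, -E) and np.array_equal(J @ J, -E) and np.array_equal(K @ K, -E)
assert np.array_equal(I @ J, K) and np.array_equal(J @ I, -K)      # non-commutativity
assert np.array_equal(J @ K, I) and np.array_equal(K @ I, J)

# A general quaternion q = a0*1 + a1*i + a2*j + a3*k as a 4x4 matrix:
a = [1.0, 1.0, 1.0, 1.0]            # "unit coordinates", as for the genomatrix H4
Q = a[0]*E + a[1]*I + a[2]*J + a[3]*K
print(Q)
```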
Let us turn now to the (8*8)-matrix H8 (Figure 1), which can be represented as the sum of two matrices HL8 = u0+u2+u4+u6 and HR8 = u1+u3+u5+u7 (Figure 22). Here u0, u1, …, u7 are the «column projectors» from Figure 14. Figure 23 shows a decomposition of the matrix HL8 (from Figure 22) as a sum of 4 matrices: HL8 = HL80 + HL81 + HL82 + HL83. The set of matrices HL80, HL81, HL82 and HL83 is closed with respect to multiplication, and it defines a multiplication table identical to the multiplication table of Hamilton's quaternions:

  *    |  HL80  |  HL81  |  HL82  |  HL83
  HL80 |  HL80  |  HL81  |  HL82  |  HL83
  HL81 |  HL81  | -HL80  |  HL83  | -HL82
  HL82 |  HL82  | -HL83  | -HL80  |  HL81
  HL83 |  HL83  |  HL82  | -HL81  | -HL80

A general expression for quaternions in this case can be written as QL = a0*HL80 + a1*HL81 + a2*HL82 + a3*HL83, where a0, a1, a2, a3 are real numbers. From this point of view, the (8*8)-genomatrix HL8 is the 4-parametric Hamilton quaternion with unit coordinates.
A similar situation holds true for the matrix HR8 (from Figure 22). Figure 24 shows a decomposition of the matrix HR8 as a sum of 4 matrices: HR8 = HR80 + HR81 + HR82 + HR83. The set of matrices HR80, HR81, HR82 and HR83 is also closed with respect to multiplication, and it defines the same multiplication table of Hamilton's quaternions. A general expression for quaternions in this case can be written as QR = a0*HR80 + a1*HR81 + a2*HR82 + a3*HR83, where a0, a1, a2, a3 are real numbers. From this point of view, the (8*8)-genomatrix HR8 is the Hamilton quaternion with unit coordinates.
Figure 24. The decomposition of the matrix HR8 (from Figure 22) as a sum of 4 matrices: HR8 = HR80 + HR81 + HR82 + HR83. Bottom row: the multiplication table of these 4 matrices HR80, HR81, HR82 and HR83, which is identical to the multiplication table of Hamilton's quaternions. HR80 represents the real unit for this matrix set.
The initial (8*8)-matrix H8 (Figure 1) can also be decomposed in another way on the basis of the dyadic-shift decomposition (Figure 25). One can analyze the Rademacher genomatrices R4 and R8 (from Figure 1) in a similar way [Petoukhov, 2012b]. In particular, in this case the following results arise:
• the Rademacher (4*4)-matrix R4 represents the split-quaternion by J. Cockle with unit coordinates (http://en.wikipedia.org/wiki/Split-quaternion) in the case of its dyadic-shift decomposition;
• the Rademacher (8*8)-matrix R8 represents the bisplit-quaternion by J. Cockle with unit coordinates in the case of its dyadic-shift decomposition;
• if the Rademacher (4*4)-matrix R4 is represented as the sum of two sparse matrices (c0+c2) + (c1+c3) (here c0, c1, c2, c3 are the column projectors from Figure 2), then the matrix R4 is the sum of two hyperbolic numbers with unit coordinates, because each of the summands (c0+c2) and (c1+c3) is a hyperbolic number with unit coordinates. The same is true for the case of the "row projectors" r0, r1, r2, r3 from Figure 2.
Now let us pay special attention to the Rademacher (8*8)-matrix R8 as a sum of the following two sparse matrices RL8 and RR8, the first of which is a sum of the 4 projectors with even indexes s0, s2, s4, s6 and the second of which is a sum of the 4 projectors with odd indexes s1, s3, s5, s7: R8 = (s0+s2+s4+s6) + (s1+s3+s5+s7) = RL8 + RR8 (here s0, s1, …, s7 are the column projectors from Figure 7).
Below, this decomposition will be useful for the analysis of the correspondence between the 64 triplets and the 20 amino acids with stop-codons. Each of these sparse matrices RL8 and RR8 can be decomposed into a set of 4 sparse matrices: RL8 = RL80 + RL81 + RL82 + RL83 and RR8 = RR80 + RR81 + RR82 + RR83 (Figures 27 and 28). The first set of matrices RL80, RL81, RL82, RL83 is closed with respect to multiplication, and it defines the known multiplication table of split-quaternions by J. Cockle (http://en.wikipedia.org/wiki/Split-quaternion) in Figure 27. The second set of matrices RR80, RR81, RR82, RR83 is also closed with respect to multiplication, and it defines the same multiplication table of split-quaternions by J. Cockle (Figure 28). Consequently, each of the matrices RL8 and RR8 is a split-quaternion by Cockle with unit coordinates.
Figure 28. The decomposition of the matrix RR8 from Figure 26 into the set of 4 matrices RR80, RR81, RR82, RR83, which defines the same multiplication table of split-quaternions by J. Cockle (http://en.wikipedia.org/wiki/Split-quaternion).
But each of the same (8*8)-matrices RL8 and RR8 (Figure 26) can be decomposed in another way, which leads to its representation in the form of a sum of two hyperbolic numbers: RL8 = (s0+s4)+(s2+s6) and RR8 = (s1+s5)+(s3+s7), where s0, s1, …, s7 are the column projectors from the decomposition of R8 in Figure 7. Figure 29 shows that each of the sums (s0+s4), (s2+s6), (s1+s5), (s3+s7) in these decompositions of RL8 and RR8 is an (8*8)-matrix representation of a hyperbolic number whose coordinates are equal to 1. It also means that the whole (8*8)-matrix R8 is a sum of 4 hyperbolic numbers, whose coordinates are equal to 1, in an 8-dimensional space. These decompositions are useful for analyzing the degeneracy of the genetic code in the next Sections.
Figure 29. Decompositions of the sums of projectors (s0+s4), (s2+s6), (s1+s5), (s3+s7), which show that each of these sums is an (8*8)-matrix representation of a hyperbolic number, whose coordinates are equal to 1, in an 8-dimensional space.
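For readers who wish to check Cockle's table numerically, the sketch below uses the classical (2*2) real representation of the split-quaternion units; the (8*8) dyadic-shift matrices RL80, …, RL83 of Figure 27 are not reproduced here, but they satisfy the same relations.

```python
import numpy as np

# The classical 2x2 real representation of Cockle's split-quaternion units 1, i, j, k
# (the (8*8) dyadic-shift matrices RL80..RL83 of Figure 27 realize the same table).
E = np.eye(2)
I = np.array([[0, 1], [-1, 0]])   # i*i = -1
J = np.array([[1, 0], [0, -1]])   # j*j = +1
K = I @ J                         # k = i*j

assert np.array_equal(I @ I, -E)
assert np.array_equal(J @ J, E) and np.array_equal(K @ K, E)
assert np.array_equal(J @ I, -K) and np.array_equal(K @ J, I) and np.array_equal(I @ K, -J)
print("Cockle split-quaternion relations hold; k =\n", K)
```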
GENETIC MATRICES AS SUMS OF TENSOR PRODUCTS OF OBLIQUE (2*2)-PROJECTORS. EXTENSIONS OF GENETIC MATRICES INTO (2 N *2 N )-MATRICES
The Rademacher matrices R4 and R8 and also the Hadamard matrices H4 and H8 (Figure 1) are interconnected by means of the expressions (1), where ⊗ means tensor (Kronecker) multiplication; the matrix [1 1; 1 1] is a traditional (2*2)-matrix representation of the hyperbolic number with unit coordinates; the matrix [1 -1; 1 1] is a traditional (2*2)-matrix representation of the complex number with unit coordinates.
Extensions of the expressions (1) lead to the (2^n*2^n)-matrices R_K and H_K of the expressions (2) (where K = 2^n, n = 4, 5, 6, …; (n-2) means a tensor power). In this algorithmic way we obtain a great set of (2^n*2^n)-matrices R_K and H_K, each of which can be represented as a sum of 2^n «column projectors» (or 2^n «row projectors»), by analogy with the cases described above. Summations of these new «column projectors» (and also «row projectors») in different combinations (in pairs, in fours, in eights, etc.) give many new operators, whose exponentiation generates a great number of cyclic groups and other kinds of operators. They also give many new representations of complex numbers, hyperbolic numbers, Hamilton's quaternions, split-quaternions and their extensions in the form of (2^n*2^n)-matrices that correspond to 2^n-dimensional spaces. These new operators possess many similar properties, including the selective control (or coding) of different subspaces of the 2^n-dimensional space, by analogy with the operators described in the previous sections.
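Since the explicit expressions (1)-(2) are not reproduced in this text, the following sketch only illustrates the general mechanism of such tensor extensions: repeated Kronecker multiplication with the (2*2) block [1 -1; 1 1] produces matrices of growing size whose main diagonal consists of +1 only (the block [1 1; 1 1] plays the analogous role on the Rademacher side); the resulting matrix need not coincide entry-for-entry with the genomatrices of Figure 1.

```python
import numpy as np

# The exact expressions (1)-(2) are not reproduced in this extraction; this sketch only
# illustrates the general mechanism on the Hadamard side: repeated Kronecker (tensor)
# multiplication with the 2x2 block [1 -1; 1 1] (a complex number with unit coordinates)
# yields (2^n x 2^n)-matrices whose main diagonal consists of +1 only.
B = np.array([[1, -1], [1, 1]])
M = np.kron(B, np.kron(B, B))                     # an (8x8) Hadamard-type matrix

print(M.shape)                                    # (8, 8)
print(np.all(np.diag(M) == 1))                    # True: every diagonal entry is +1
print(np.array_equal(M @ M.T, 8 * np.eye(8)))     # True: rows are mutually orthogonal
# Because the diagonal is all +1, M is a sum of 8 column projectors (see the theorem below).
```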
Why can one declare that each matrix in the «column decomposition» (or in the «row decomposition») of any of the matrices R_K and H_K in the expressions (2) is a projection operator? It follows from the following simple theorem, taking into account that the main diagonals of all matrices R_K and H_K contain only entries +1.
Theorem: any sparse square matrix P that contains only a single non-zero column (or a single non-zero row) and whose entry on the main diagonal is +1 is a projection operator (it satisfies the criterion P^2 = P).
Proof. When two matrices |A_ik| and |B_kj| are multiplied, the elements of the rows of the first matrix are multiplied with the corresponding columns of the second matrix, giving the resulting matrix (AB)_ij = Σ_k A_ik*B_kj (http://en.wikipedia.org/wiki/Matrix_multiplication). Let us consider the case of a square matrix |P_ij| (here i, j = 1, 2, …, m) with only a single non-zero column P_is ≠ 0, numbered by the index "s", which contains +1 in its cell on the main diagonal of this matrix: P_ss = 1. It means that all entries P_ik = 0 if k ≠ s. The second power of this matrix is the square matrix (P^2)_ij = Σ_k P_ik*P_kj (3). But among all P_ik only the column with k = s differs from zero, so the sum (3) reduces to P_is*P_sj; and P_sj is non-zero only for j = s, where P_ss = 1. Therefore (P^2)_is = P_is*P_ss = P_is, and all other columns of P^2 are zero. So the sparse square matrix |P_ij|^2 contains only the same single non-zero column P_is as the matrix |P_ij|. Consequently |P_ij|^2 = |P_ij|; in other words, this matrix |P_ij| is a projection operator, Q.E.D. The case of similar representations of such (2^n*2^n)-matrices on the basis of a sum of "row projectors" is proved by analogy.
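A minimal numeric check of this theorem, for an arbitrary single non-zero column and for the transposed, single-row case:

```python
import numpy as np

def column_projector(m, s, values):
    """Sparse (m x m)-matrix with the single non-zero column s and entry +1 on the diagonal."""
    P = np.zeros((m, m))
    P[:, s] = values          # arbitrary entries in column s ...
    P[s, s] = 1.0             # ... except the diagonal cell, which must be +1
    return P

P = column_projector(8, 3, np.random.randn(8))
print(np.allclose(P @ P, P))          # True: P satisfies P^2 = P

# The same criterion holds for the transposed ("row projector") case:
R = P.T
print(np.allclose(R @ R, R))          # True
```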
This theorem allows the following conclusion about any variant of matrix presentation of complex numbers, hyperbolic numbers and their extensions into 2^n-dimensional numerical systems (including Hamilton's quaternions and biquaternions, split-quaternions and bisplit-quaternions by Cockle, etc.): if the real part of such a 2^n-dimensional number is equal to +1, then its matrix presentation is a sum of 2^n «column projectors» (and «row projectors»). This is provided by the fact that the real parts of such multidimensional numerical systems are represented by the matrix diagonal, which contains only entries +1. Figure 30 shows an example of one of the matrix presentations of Hamilton's quaternions in the case when their real parts are equal to +1. So, many kinds of hypercomplex numbers are based on sums of projectors. In this sense the notion of "projectors" can be considered as more fundamental than the notion of "hypercomplex numbers" of the mentioned types. Many of these hypercomplex numbers are applied widely in different fields of science: physics, chemistry, informatics, etc. Awareness of the fact that these systems of hypercomplex numbers are based on sums of projectors may help in rethinking existing theories and in developing new theories in the field of mathematical natural science. In particular, this concerns Hamilton's quaternions. For example, Maxwell used them in the creation of his equations of the electromagnetic field. Could one develop an alternative description and development of the theory of the electromagnetic field on the basis of sums of appropriate projectors? This is one of many open questions in the theoretical applications of projectors.
Now we show that each of the genetic Rademacher and Hadamard matrices (including R4, R8, H4, H8 from Figure 1 and their extensions into (2^n*2^n)-matrices R_K and H_K in the expressions (2)) can be expressed through sums and tensor multiplications of four (2*2)-matrices of «column projectors» (or of analogous «row projectors»). Figure 31 shows these 4 basic (2*2)-projectors, which are marked by 4 different colours for visibility, and some examples of expressions of a few Rademacher and Hadamard matrices by means of them.
Figure 31. Examples of using the 4 basic (2*2)-projectors (upper level) to express the (2*2)-matrix representations of the hyperbolic number and of the complex number with unit coordinates (the second level) and to express the Rademacher (4*4)-matrix R4 and the Hadamard (4*4)-matrix H4 from Figure 1 (two lower levels).
One can also note that each of the genetic "column (2^n*2^n)-projectors" and "row (2^n*2^n)-projectors" can be expressed by means of tensor multiplications of appropriate (2*2)-projectors from the basic set of the 4 projectors (Figure 31, upper level). It means that the case of 2-dimensional spaces can be considered as basic in this model approach. This is interesting because of the known fact that precisely 2-dimensional sub-spaces play a fundamental role in the morphological organization and development of living bodies (see, for example, the fundamental role of primary tissue layers or primary germ layers in http://en.wikipedia.org/wiki/Germ_layer; in accordance with germ layer theory, for example, all the different organs of human bodies develop from one of the 3 germ layers).
AN APPLICATION OF OBLIQUE PROJECTORS TO SIMULATE ENSEMBLES OF PHYLLOTAXIS PATTERNS IN LIVING BODIES
In the field of mathematical biology, phyllotaxis phenomena are among the best known [Adler, Barabe, Jean, 1997; Jean, 1995; http://www.goldenmuseum.com/0604Phillotaxis_engl.html]. Usually phyllotaxis laws are described as the inherited spiral-like dispositions of leaves and some other parts of plants, which are connected with Fibonacci numbers. But similar phyllotaxis laws also dictate inherited configurations of some biological molecules, parts of animal bodies, etc. (see, for example, a review in [Jean, 1995]). In other words, phyllotaxis laws appear in inherited morphological structures at very different levels and branches of biological evolution. Figure 32 shows a few examples of phyllotaxis spirals, including: 3) a pine cone (from http://www.maths.surrey.ac.uk/hostedsites/R.Knott/Fibonacci/fibnat.html); 4) a seashell (from http://www.eb.tuebingen.mpg.de/?id=476); 5) the fractal vegetable Romanesco Broccoli (Brassica oleracea) (from http://egregores.blogspot.ru/2010/12/extremely-cool-naturalfractals.html); 6) a spruce with cones (from http://foto.rambler.ru/users/nadezhdarodnaja/albums/53780408/photo/4e9c302c-1a1a-3e9d-c61e-db70c258d714/).
In Figure 32, images 5 and 6 illustrate that a whole organism can contain many parts with similar phyllotaxis patterns in each (like a spruce with many phyllotaxis cones). For a physicist or mathematician it is natural to think that such an organism can be modelled as a multidimensional phase space (or a configuration space) with an appropriate number of similar subspaces, each of which receives the same phyllotaxis pattern due to a selective control of subspaces or their selective genetic coding. Such selective control or coding in this phase space should be based on an appropriate system of operators (about phase spaces see http://en.wikipedia.org/wiki/Phase_space).
Our model approach allows such modelling due to the discovered system of operators based on the described sums of oblique projectors (including the (2^n*2^n)-matrices R_K and H_K in the expressions (2)), which have the properties of selective control (or coding) of subspaces of a 2^n-dimensional space. In other words, we propose an approach to model ensembles of phyllotaxis patterns (or other patterns and processes) inside a multidimensional phase space that represents a whole organism. The described system of operators for selective control or coding can be conditionally and briefly named a «genetic system of operators» (or, more briefly, «G-system of operators»). As far as we can judge, in the field of phyllotaxis studies other authors have not simulated such ensembles of phyllotaxis patterns, though many different models of separate patterns (without their ensembles in a joint phase space) exist. In addition, known models of inherited phyllotaxis patterns do not associate them with structural properties of the genetic coding system, in contrast to our genetic approach. Let us explain our model approach to phyllotaxis phenomena in more detail.
It is known that classical phyllotaxis patterns arise as the result of iterative rotations of an initial object through an angle of approximately 137° with a simultaneous increase of its distance from the center of the phyllotaxis pattern. On a complex plane such iterative operations can be simulated by means of iterative multiplication of an initial vector (or point) by an appropriate complex number z = x + i*y, which provides such an angle of rotation (due to its argument) and such an increase (due to its modulus) in accordance with the known properties of complex numbers. The described G-system of operators, which contains many variants of sparse (2^n*2^n)-matrix presentations of complex numbers on the basis of sums of some genetic projectors (from the "column decompositions" of the (2^n*2^n)-matrices R_K and H_K in the expressions (2)), allows generating many phyllotaxis patterns, each of which belongs to its own 2-dimensional plane inside a whole phase space. In these cases, each of the phyllotaxis patterns in a separate phase plane can have its own degree of maturation (or development) and its own type of phyllotaxis picture; this depends on the kind of complex number chosen for its iterative generation.
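A minimal sketch of this iterative generation for one plane is given below; the golden angle of about 137.5° and the growth factor are illustrative assumptions rather than values taken from the cited sources.

```python
import numpy as np

# A sketch of generating one planar phyllotaxis pattern by iterative multiplication
# with a fixed complex number z. The golden angle (~137.5 degrees) and the growth
# factor are illustrative choices, not values taken from the original figures.
golden_angle = np.deg2rad(137.5)
z = 1.005 * np.exp(1j * golden_angle)     # argument sets the rotation, modulus sets the growth

point = 1.0 + 0.0j                        # initial point on the complex plane
points = []
for _ in range(300):
    point *= z                            # iterative rotation + radial expansion
    points.append(point)

xs = [p.real for p in points]
ys = [p.imag for p in points]
print(xs[:3], ys[:3])                     # successive points of the spiral lattice
# To place several such patterns in different planes of a 2^n-dimensional phase space,
# z is replaced by its sparse matrix representation built from a pair of genetic projectors.
```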
This model approach does not claim to give a new explanation for the existence of Fibonacci numbers in phyllotaxis patterns. But it does provide the ability to simulate bunches of phyllotaxis patterns in individual organisms. Concerning Fibonacci or Lucas numbers in phyllotaxis laws, one should recall here that "the phyllotaxis rules … cannot be taken as applying to all circumstances, like a law of nature. Rather, in the words of the famous Canadian mathematician Coxeter, they are 'only a fascinatingly prevalent tendency'" (http://goldenratiomyth.weebly.com/phyllotaxis-the-fibonacci-sequence-in-nature.html). One can think that the role of iterative operations in living nature is much more important than particular realizations of Fibonacci or Lucas numbers.
To simulate an ensemble of phyllotaxis 3D-patterns (an ensemble of many cones of a spruce, etc.), each of which belongs to a separate subspace of a whole 2^n-dimensional phase space, an iterative application of Hamilton's quaternions can be used in their (2^n*2^n)-matrix forms of presentation in the described G-system of operators.
In addition, the author recalls here the cyclic groups based on Hamilton's quaternion and biquaternion with unit coordinates: these cyclic groups allow simulating some heritable biological phenomena including color perception, whose properties correspond to Newton's color circle (see [Petoukhov, 2011b] and Section 17 in [Petoukhov, 2012a]). Using Hamilton's quaternions and biquaternions as (2^n*2^n)-operators from the described G-system allows simulating some inherited ensembles of biological patterns, including some inherited ensembles of color patterns and color changes of biological bodies.
Figure 33. Upper level: the matrix representation of hyperbolic numbers, G_xy = x*1 + y*i = [x, y; y, x] = x*[1 0; 0 1] + y*[0 1; 1 0], where two sparse (2*2)-matrices represent the real and imaginary units (1 and i) of these numbers; the multiplication table of these basic elements 1 and i is shown on the right side. Bottom level: the special case of the set of hyperbolic numbers J (with the fixed value "a") describes a hyperbola x^2 - y^2 = a, where "x" is a variable.
Hyperbolic numbers G_xy = x*1 + y*i (where «i» is the imaginary unit of hyperbolic numbers, with its property i^2 = +1) have a known (2*2)-matrix form of representation, shown in Figure 33. The special case of the set of hyperbolic numbers J = [x, (x^2-a)^0.5; (x^2-a)^0.5, x] (4) (where "x" is a variable and the parameter "a" is fixed) describes a hyperbola, which corresponds to the equation x^2 - y^2 = a. This equation describes the hyperbola in the coordinate system (x, y) whose axes coincide with the axes of symmetry of the hyperbola. In another coordinate system, whose axes coincide with the asymptotes of the hyperbola, the same hyperbola is described by the equation y = a/x (Figure 34). Any point of any hyperbola can be transformed into a new point of the same hyperbola by means of a so-called hyperbolic rotation, which is described by the same matrix representation of hyperbolic numbers [x, y; y, x] if its determinant is equal to 1.
It is known that hyperbolic numbers and hyperbolic operators are closely connected with natural logarithms, which can be defined on the basis of hyperbolic rotations because of their relation to the areas under hyperbolas (http://mathworld.wolfram.com/NaturalLogarithm.html). For this reason the natural logarithm «was formerly called hyperbolic logarithm as it corresponds to the area under a hyperbola» (http://en.wikipedia.org/wiki/Natural_logarithm). The area s of a curvilinear trapezoid inside the boundaries created by the hyperbola x*y = a, the x-axis and the lines x = x0 and x = x1 is equal to
s = a*ln(x1/x0) = a*{ln(x1) - ln(x0)},   (5)
where ln is the natural (or hyperbolic) logarithm (Figure 34). The history of hyperbolic logarithms is described, for example, in the book [Klein, 2009].
Figure 34. The function a*ln(x1/x0) is illustrated as the area under the hyperbola y = a/x from x0 to x1.
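The sketch below checks these statements numerically: the property i^2 = +1, the fact that a hyperbolic rotation keeps a point on its hyperbola x^2 - y^2 = a, and the area formula (5).

```python
import numpy as np

# Hyperbolic (split-complex) numbers: G = x*1 + y*i with i*i = +1.
one = np.array([[1, 0], [0, 1]])
i_h = np.array([[0, 1], [1, 0]])
assert np.array_equal(i_h @ i_h, one)              # the defining property i^2 = +1

# A hyperbolic rotation is the matrix [x y; y x] with determinant x^2 - y^2 = 1.
phi = 0.7
rot = np.cosh(phi) * one + np.sinh(phi) * i_h      # cosh^2 - sinh^2 = 1
p = np.array([3.0, np.sqrt(3.0**2 - 2.0)])         # a point of the hyperbola x^2 - y^2 = 2
q = p @ rot
print(np.isclose(q[0]**2 - q[1]**2, 2.0))          # True: q lies on the same hyperbola

# Expression (5): the area under y = a/x between x0 and x1 equals a*ln(x1/x0).
a, x0, x1 = 2.0, 1.5, 6.0
xs = np.linspace(x0, x1, 200001)
ys = a / xs
area = np.sum((ys[:-1] + ys[1:]) / 2 * np.diff(xs))     # trapezoidal rule by hand
print(np.isclose(area, a * np.log(x1 / x0), rtol=1e-6)) # True
```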
But a wide class of genetically inherited physiological phenomena is organized by nature by means of the same logarithmic law (5), and it can be described mathematically on the basis of hyperbolic numbers. We mean here the main psychophysiological law of Weber-Fechner (http://en.wikipedia.org/wiki/Weber-Fechner_law): the intensity of perception is proportional to the logarithm of the stimulus intensity; it is expressed by the equation
p = k*ln(V/V0),   (6)
where p is the intensity of perception, V is the stimulus, V0 is the threshold stimulus, and ln is the natural logarithm. The proportionality factor k in the expression (6) is different for different channels of sensory perception (vision, hearing, etc.); this difference of the values «k» is associated with different values of "a" in the equation of the hyperbola x*y = a. The threshold stimulus V0 in (6) is also different for different channels of sensory perception.
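A minimal sketch of this multi-channel reading of the law (6) is given below; the numeric values of k and V0 for the channels are purely illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the multi-channel Weber-Fechner law (6): each channel has its own
# coefficient k (the hyperbola parameter a) and its own threshold V0 (the value x0);
# the perceived intensity equals the hyperbolic area k*ln(V/V0).
# The numeric values of k and V0 below are illustrative, not measured constants.
channels = {
    "vision":  {"k": 1.0, "V0": 0.02},
    "hearing": {"k": 0.7, "V0": 0.05},
    "touch":   {"k": 0.4, "V0": 0.10},
}

def perception(channel, V):
    k, V0 = channels[channel]["k"], channels[channel]["V0"]
    return k * np.log(V / V0) if V > V0 else 0.0     # below the threshold nothing is perceived

for name in channels:
    print(name, round(perception(name, 10.0), 3))
# In the matrix model, the joint state of all channels is one vector of a multidimensional
# space and each channel occupies its own hyperbolic plane of that space.
```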
The identity of the expressions (5) and (6) allows us to propose a geometric model of the Weber-Fechner law on the basis of the described matrix representation of hyperbolic numbers in connection with the phenomenology of the molecular-genetic system. In this model the threshold stimulus V0 in the expression (6) is interpreted as the value x0 from the expression (5); the proportionality factor "k" in (6) is interpreted as the value "a" from (5); the stimulus V(t), which varies in time, is interpreted as the variable x(t) from (5); and the perception "p" in (6) is interpreted as the area "s" from (5). Taking into account the type of hyperbolic numbers J from the expression (4), it is obvious that the hyperbolic number J0 = [x0, (x0^2-a)^0.5; (x0^2-a)^0.5, x0] corresponds to the fixed area s0 = a*ln(x0) of the curvilinear trapezoid whose points have their x-coordinates inside the interval from 1 to x0; another hyperbolic number J = [x, (x^2-a)^0.5; (x^2-a)^0.5, x] corresponds to the area s_x = a*ln(x) of the curvilinear trapezoid with its x-coordinates inside the bigger interval from 1 to x. The total area s of the third curvilinear trapezoid, whose points have their x-coordinates inside the interval from x0 to x, is equal to the difference s_x - s0, that is, s = s_x - s0 = a*ln(x) - a*ln(x0) = a*ln(x/x0). This curvilinear trapezoid with its area s corresponds to its own hyperbolic number of the same type.
We have described the geometric model for the case of one sensory channel. Let us now generalize this geometric model to the case of a multi-dimensional space with different 2-dimensional planes inside it, each of which contains the described hyperbolic model of the Weber-Fechner law for one of the many sensory channels of an organism. It is known that different types of inherited sensory perception are subordinated to this law: sight, hearing, smell, touch, taste, etc. One could suppose that the Weber-Fechner law is a law specifically of the nervous system. But this is not true, since its meaning is much wider, because it is realized in many kinds of lower organisms without a nervous system: "this law is applicable to chemo-tropical, helio-tropical and geo-tropical movements of bacteria, fungi and antherozoids of ferns, mosses and phanerogams .... The Weber-Fechner law, therefore, is not the law of the nervous system and its centers, but the law of protoplasm in general and its ability to respond to stimuli" [Shultz, 1916, p. 126]. So the whole perception system of an organism manifests itself as a multi-parametric system with many sub-systems of perception, which are subordinated to the logarithmic Weber-Fechner law. A generalized model of a multi-dimensional space with appropriate sub-spaces is needed to describe this inherited multi-channel organization of logarithmic perception in the case of the whole living organism. Let us show that our approach on the basis of sums of genetic projectors allows us to create such a generalized model.
It means that these sums of genetic (4*4)- and (8*8)-projectors are matrix representations of hyperbolic numbers with unit coordinates in the corresponding 2-dimensional planes of 4-dimensional and 8-dimensional spaces. On the basis of each of these sums, one can construct a general representation of hyperbolic numbers (with arbitrary values of their coordinates) in an appropriate hyperbolic plane of such a multi-dimensional space. For example, if e0 and e4 are the (8*8)-matrices taken from the decomposition of the (8*8)-matrix s0+s4 in Figure 29, then the expression G04 = a0*e0 + b4*e4 represents the hyperbolic number with real coordinates «a0» and «b4» in the plane (x0, x4) inside an 8-dimensional space with its coordinate system (x0, x1, …, x7). A similar situation holds true for the expressions G15 = a1*e1 + b5*e5, G26 = a2*e2 + b6*e6, G37 = a3*e3 + b7*e7, each of which represents a hyperbolic number in an appropriate plane inside the 8-dimensional space, if e1, e2, e3, e5, e6, e7 are the (8*8)-matrices taken from the decompositions in Figure 29. The (8*8)-matrix G04 + G15 + G26 + G37 represents an operator of an 8-dimensional space with a set of 4 hyperbolic planes (x0, x4), (x1, x5), (x2, x6), (x3, x7) inside it.
One can see that the (8*8)-matrix W in Figure 35 has a special structure: its (4*4)-quadrants along each of the diagonals are identical to each other. This fact allows expressing the matrix W in the following form:
W = [1 0; 0 1] ⊗ M0 + [0 1; 1 0] ⊗ M1,   (9)
where [1 0; 0 1] and [0 1; 1 0] are the matrix representations of the real and imaginary units of hyperbolic numbers; M0 is the (4*4)-matrix that reproduces each of the (4*4)-quadrants along the main diagonal; M1 is the (4*4)-matrix that reproduces each of the (4*4)-quadrants along the second diagonal; ⊗ denotes tensor multiplication. The expression (9) means that the matrix W belongs to the so-called «tensornumbers» (more precisely, to the category of «tensorhyperbolic numbers»), which will be introduced below in a special Section.
Our approach, described above on the basis of the genetic matrices, allows natural modeling of such 2^N-parametric systems with their 2-parametric hyperbolic subsystems by means of the described type of (2^N*2^N)-matrix operator of a 2^N-dimensional space with an appropriate quantity of hyperbolic planes inside it (in this case each of the hyperbolic planes corresponds to an individual channel of perception with its own coefficient k and threshold value V0 in the expression (6)). This (2^N*2^N)-matrix operator also belongs to the category of tensorhyperbolic numbers, because it can be expressed by means of the expression (9), where M0 and M1 are (2^(N-1)*2^(N-1))-matrices.
So we have two important facts:
• the logarithmic Weber-Fechner law has a universal meaning for the different sub-systems of perception inside the whole perception system of an organism;
• this unity of all the subsystems of perception inside the whole organism, which are subordinated to the Weber-Fechner law, can be expressed by means of 2^N-dimensional tensorhyperbolic numbers by analogy with the expression (9).
These facts allow the author to put forward the following statement (or hypothesis): a living organism perceives the external world as a multi-parametric system which belongs to the tensorhyperbolic category. In other words, for the whole perception system of a living organism, the external world is a life of tensorhyperbolic numbers in time. Correspondingly, the interrelations of a living organism with the external world are realized on the basis of processing the perceived tensorhyperbolic numbers, which change systematically over time in accordance with changes of the external stimuli (in addition, the author believes that a living organism can be regarded as a life of tensornumbers over time; in this approach tensorcomplex numbers and their extensions deserve special attention). This mathematical approach to the phenomenology of perception of the world is closely connected with the multi-parametric system of genetic coding in its matrix form of representation; such a connection allows the genetic transfer of this general biological property along the chain of generations.
REFLECTION OPERATORS AND GENETIC PROJECTORS.
By definition, a linear operator L is a reflection operator (or briefly, a "reflection") if and only if it satisfies the following criterion: L^2 = E, where E is the identity operator (also denoted as «1»), that is, the real unit (see for example [Vinberg, 2003, Chapter 6]). The imaginary unit «i» of hyperbolic numbers satisfies this criterion and consequently is a reflection operator: i^2 = +1 (see Figure 33). The hyperbolic number with unit coordinates (1+i) is the sum of the identity operator «1» and the reflection operator «i». The well-known (2*2)-matrix representation of the imaginary unit is the following: [0 1; 1 0] (Figure 33). The action of this reflection operator on an arbitrary 2-dimensional vector generates a new vector, which is a mirror-symmetrical analogue of the initial vector relative to the bisector of the angle between the x-axis and the y-axis of the coordinate system (x, y). For example, [3, 5]*[0 1; 1 0] = [5, 3]. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state.
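The worked (2*2) case from this paragraph can be checked directly:

```python
import numpy as np

# The 2x2 reflection from the text: the hyperbolic imaginary unit [0 1; 1 0]
# mirrors a vector across the bisector of the angle between the x- and y-axes.
L = np.array([[0, 1], [1, 0]])

print(np.array_equal(L @ L, np.eye(2)))        # True: L is an involution, L^2 = E
print(np.array([3, 5]) @ L)                    # [5 3], as in the example above
```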
But we have shown above that many of the sums of genetic projectors are (2^N*2^N)-matrix representations of hyperbolic numbers, which have their own real and imaginary units in the respective planes inside a 2^N-dimensional space (see, for example, Figures 5, 6 and 29). Correspondingly, the imaginary units of these hyperbolic numbers are (2^N*2^N)-operators of reflections in these planes inside the 2^N-dimensional space. This gives evidence in favor of the view that the system of genetic coding also actively uses reflection operators. It is interesting because mirror reflections exist in many genetically inherited biological structures, including the left and right halves of human and animal bodies. In the author's laboratory, the genetic (2^N*2^N)-matrices of reflection operators are used to analyze mirror symmetries in molecular-genetic systems, including long genetic sequences, genetic palindromes, chromosomal inversions, etc.
THE SYMBOLIC MATRICES OF GENETIC DUPLETS AND TRIPLETS
In Section 1 the author promised to explain the relation of the numeric matrices R4, R8, H4, H8 (Figure 1), which were the initial matrices of this article, to the phenomenology of the genetic coding system in matrix forms of its representation. This Section is devoted to that explanation. The theory of noise-immune coding is based on matrix methods. For example, matrix methods allow transferring high-quality photos of the surface of Mars across millions of kilometers of strong interference. In particular, tensor families of Hadamard matrices are used for this aim. Tensor multiplication of matrices is a well-known operation in the fields of signal processing technology, theoretical physics, etc. It is used for the transition from spaces of smaller dimension to associated spaces of higher dimension.
By analogy with the theory of noise-immune coding, the 4-letter alphabet of RNA (adenine A, cytosine C, guanine G and uracil U) can be represented in the form of the symbolic matrix [C U; A G], whose tensor powers [C U; A G]^(2) and [C U; A G]^(3) (Figure 36) reflect important features of the genetic code. These features are connected with the specificity of the reading of mRNA sequences in ribosomes to define the protein sequences of amino acids (this is the reason why we use the alphabet A, C, G, U of RNA in the matrices of Figure 36; the case of DNA sequences is considered separately below). The combination of letters in the two first positions of each triplet is usually termed the "root" of this triplet [Konopelchenko, Rumer, 1975a,b; Rumer, 1968]. Modern science recognizes many variants (or dialects) of the genetic code, data about which are shown on the NCBI's website http://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi. 19 variants (or dialects) of the genetic code exist that differ from one another in some details of the correspondences between triplets and the objects encoded by them (these are the dialects known as of July 10, 2013; perhaps their list will be extended later). Most of these dialects (including the so-called Standard Code and the Vertebrate Mitochondrial Code) have the symmetric general scheme of these correspondences, in which 32 "black" triplets with "strong roots" and 32 "white" triplets with "weak" roots exist (the next Section shows all of these 19 dialects in detail). In this basic scheme, the set of 64 triplets contains 16 subfamilies of triplets, each of which contains 4 triplets with the same two letters in the first positions (an example of such a subset is the case of the four triplets CAC, CAA, CAT, CAG with the same two letters CA in their first positions). In the described basic scheme, the set of these 16 subfamilies of NN-triplets is divided into two equal subsets. The first subset contains 8 subfamilies of so-called "two-position" NN-triplets, whose coding value is independent of the letter in their third position: (CCC, CCT, CCA, CCG), (CTC, CTT, CTA, CTG), (CGC, CGT, CGA, CGG), (TCC, TCT, TCA, TCG), (ACC, ACT, ACA, ACG), (GCC, GCT, GCA, GCG), (GTC, GTT, GTA, GTG), (GGC, GGT, GGA, GGG). An example of such subfamilies is the four triplets CGC, CGA, CGT, CGG, all of which encode the same amino acid Arg, though they have different letters in their third position. The 32 triplets of the first subset are termed "triplets with strong roots" [Konopelchenko, Rumer, 1975a,b; Rumer, 1968]. The second subset contains 8 subfamilies of NN-triplets whose coding value depends on the letter in their third position (for example, CAA and CAG encode Gln, while CAC and CAU encode His). The 32 triplets of the second subset are termed "triplets with weak roots" [Konopelchenko, Rumer, 1975a,b; Rumer, 1968]. The following duplets are the corresponding 8 weak roots for them: CA, AA, AT, AG, TA, TT, TG, GA (weak duplets). All members of these 32 NN-triplets and of the 8 weak duplets are marked by white color in the matrices [C U; A G]^(3) and [C U; A G]^(2) of Figure 36.
From the point of view of its black-and-white mosaic, each of the columns of the genetic matrices [C U; A G]^(2) and [C U; A G]^(3) has a meander-like character and coincides with one of the Rademacher functions, which form orthogonal systems and are well known in discrete signal processing. These functions contain only the elements "+1" and "-1". Due to this fact, one can construct Rademacher representations of the symbolic genomatrices [C U; A G]^(2) and [C U; A G]^(3) (Figure 36) by means of the following operation: each of the black duplets and black triplets is replaced by the number "+1", and each of the white duplets and white triplets is replaced by the number "-1". This operation leads immediately to the matrices R4 and R8 from Figure 1, which are the Rademacher representations of the phenomenological genomatrices [C U; A G]^(2) and [C U; A G]^(3).
If the columns of the matrix [C U; A G]^(3) in Figure 36 are numerated from left to right by the indexes 0, 1, 2, …, 7, one can see that the 4 columns with even indexes 0, 2, 4, 6 contain 32 triplets, each of which has the nitrogenous base C or A in its third position, that is, in its suffix (these C and A are usually termed "amino bases"). The other 4 columns, with odd indexes 1, 3, 5, 7, contain the other 32 triplets, each of which has the nitrogenous base U or G in its third position (these U and G, or T and G in DNA, are usually termed "keto bases"). The following important phenomenon is connected with this separation of the matrix [C U; A G]^(3) into columns with even and odd indexes: adjacent columns with indexes "0 and 1", "2 and 3", "4 and 5" and "6 and 7" contain identical lists of amino acids and stop-codons (these adjacent columns are twins from this point of view). Consequently the symbolic matrix [C U; A G]^(3) can be represented as a sum of two sparse matrices with identical lists of amino acids and stop-codons: the first of these two matrices coincides with the matrix [C U; A G]^(3) in the columns with even indexes and has zero columns with odd indexes; the second one coincides with the matrix [C U; A G]^(3) in the columns with odd indexes and has zero columns with even indexes.
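The construction just described can be sketched in a few lines of code: the triplet genomatrix is built as a string analogue of the tensor power [C U; A G]^(3), and its ±1 mosaic is obtained from the strong and weak roots listed above (the row/column ordering is an assumption following the tensor-power construction; the graphical layout of Figure 36 itself is not reproduced here).

```python
import numpy as np

# A sketch of the symbolic genomatrix [C U; A G]^(3) built as a Kronecker-like string
# product, and of its Rademacher (+1/-1) mosaic defined by strong/weak roots.
base = [["C", "U"], ["A", "G"]]

def kron_strings(A, B):
    """String analogue of the Kronecker product: concatenate symbols."""
    return [[a + b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

triplets = kron_strings(kron_strings(base, base), base)      # 8 x 8 matrix of triplets

strong_roots = {"CC", "CU", "CG", "UC", "AC", "GC", "GU", "GG"}   # from the list above (RNA form)
R8 = np.array([[+1 if t[:2] in strong_roots else -1 for t in row] for row in triplets])

print(triplets[0][:4])                 # ['CCC', 'CCU', 'CUC', 'CUU'] in this ordering
print(np.all(np.diag(R8) == 1))        # True: triplets on the main diagonal have strong roots
print(R8)                              # the +1/-1 mosaic (Rademacher representation)
```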
By analogy, the Rademacher representation R8 of this symbolic matrix [C U; A G]^(3) can also be decomposed into two sparse matrices RL8 and RR8 (Figure 26), the first of which has all its non-zero columns at even indexes and the second of which has all its non-zero columns at odd indexes. As was shown above, each of these numeric (8*8)-matrices RL8 and RR8 represents a split-quaternion by Cockle, whose coordinates are equal to 1, in an 8-dimensional space. It means that the system of correspondences between the set of 64 triplets (with their internal separation into subsets of triplets with strong and weak roots) and the set of 20 amino acids and stop-codons is created by nature in accordance with the layout of these two split-quaternions RL8 and RR8 in an 8-dimensional space. To some extent this double numeric construction resembles the double helix of DNA.
But each of these (8*8)-matrices RL8 and RR8 consists of two hyperbolic numbers (Figure 29): RL8 = (e0+e4)+(e2+e6) and RR8 = (e1+e5)+(e3+e7). In other words, the system of correspondences between the set of triplets and the set of amino acids and stop-codons is based on the mentioned 4 hyperbolic numbers in an 8-dimensional space. One can mention that here we again meet a set of 4 elements, in some analogy with the sets of 4 elements in the genetic alphabets of nitrogenous bases in DNA and RNA, A, C, G, T/U (and also with the ancient set of the Pythagorean Tetraktys, http://en.wikipedia.org/wiki/Tetractys). How are these 4 hyperbolic numbers related to the disposition of amino acids and stop-codons in the genomatrix [C U; A G]^(3) of Figure 36? To receive an answer to this question, let us compare the content of the corresponding cells of the symbolic genomatrix [C U; A G]^(3) (Figure 36) with the non-zero cells of the matrices (e0+e4), (e2+e6), (e1+e5) and (e3+e7), which represent these 4 hyperbolic numbers (Figure 29). Figure 37 shows the results of such a comparison for the hyperbolic numbers (e0+e4) and (e2+e6) with even indexes of their column projectors; the results of such a comparison for the hyperbolic numbers (e1+e5) and (e3+e7) with odd indexes are identical, because in the matrix [C U; A G]^(3) (Figure 36) adjacent columns with indexes "0 and 1", "2 and 3", "4 and 5" and "6 and 7" contain identical lists of amino acids and stop-codons. Those amino acids which belong to matrix cells with triplets of strong roots are marked by bold letters in Figure 37. One can see here the following symmetrical feature: each of the real and imaginary parts of these hyperbolic numbers (e0+e4) and (e2+e6) contains an equal quantity of amino acids marked by bold letters and also an equal quantity of amino acids of the other type. In this way we receive a special separation of the set of 20 amino acids into a few groups, which belong to the real parts or to the imaginary parts of these hyperbolic numbers and which should be analyzed more attentively in the future.
Let us now turn to Figure 38, where the beginnings of the appropriate tensor family of matrices [C T; A G]^(n) for the case of the DNA alphabet (adenine A, cytosine C, guanine G and thymine T) are shown. What kind of black-and-white mosaics (or dispositions of the elements "+1" and "-1" in numeric representations of these symbolic matrices) are appropriate in the case of the DNA alphabet for the matrices [C T; A G] and [C T; A G]^(2)? The important phenomenological fact is that thymine T is the only nitrogenous base in DNA which is replaced in RNA by another nitrogenous base, U (uracil), for an unknown reason (this is one of the mysteries of the genetic system). In other words, in this system the letter T is the opposition to the letter U, and so the letter T can be symbolized by the number "-1" (instead of the number "+1" for U). Taking this into account, a simple algorithm leads to the mosaics shown in Figure 38. Concerning their mosaics, the matrices [C T; A G]^(2) and [C T; A G]^(3) coincide with the mosaics of the Hadamard matrices H4 and H8 (Figure 1), which are their Hadamard representations (here one should recall that Hadamard matrices contain only the entries +1 and -1). The mentioned algorithm was described in several of the author's works (see, for example, [Petoukhov, 2012a,b]). Appendix 3 describes another way to construct Hadamard (4*4)- and (8*8)-matrices on the basis of the unique status of the letter T in the genetic alphabet A, C, G, T of DNA.
GENETIC PROJECTORS AND THE EXCLUSION PRINCIPLE FOR EVOLUTIONARY CHANGES OF DIALECTS OF THE GENETIC CODE
This Section describes an exclusion principle for the evolution of the dialects of the genetic code. This principle, which was discovered by the author, shows that evolutionary changes of the dialects of the genetic code are related to the genetic projectors. One should note that discovering exclusion principles of nature is a significant task of mathematical natural science (the exclusion principle by Pauli in quantum mechanics is one example). By analogy with the Rademacher presentation R8 (see Figures 1 and 36 above, and Section 10), one can again replace black triplets by the elements «+1» and white triplets by the elements «-1» to obtain numeric representations of the genetic matrices [C U; A G]^(3) of all the dialects. Such numeric representations of the genetic matrices can be conditionally called «±1-representations». The result is the following: the numeric ±1-representation of the matrix [C U; A G]^(3) of every one of the 19 dialects is decomposed into a sum of 8 sparse (8*8)-matrices of «column projectors» (or «row projectors») (see Figure 39). This is connected with the fact that all the cells on the main diagonals of these numeric matrices contain only «+1» (see the theorem in Section 7). This general feature of all dialects is a consequence of the following phenomenological fact: biological evolution never changes the code meaning of the 16 black triplets which occupy the (2*2)-sub-quadrants along the main diagonal of these matrices (CCC, CCU, CCA, CCG, CGC, CGU, CGA, CGG, GCC, GCU, GCA, GCG, GGC, GGU, GGA, GGG).
From the point of view of the algebra of projection operators, the described facts mean that the biological evolution of dialects of the genetic code is connected with a condition of conservation of the numeric ±1-representation of the genetic matrix [C U; A G](3) as a sum of 8 column projectors (or 8 row projectors). In other words, the algebra of projectors reveals the existence of an algebraic invariant of biological evolution.
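To make this column-projector decomposition concrete, here is a minimal numeric sketch in Python (it assumes only what is stated above: a ±1-matrix whose main diagonal consists of +1; the specific mosaic of a particular dialect is not reproduced). Each sparse column matrix turns out to be idempotent, i.e. a projector, and their sum restores the whole matrix.

```python
import numpy as np

rng = np.random.default_rng(7)

# An arbitrary (8*8) "±1-representation": entries are +1 or -1 and, as in the
# dialect matrices [C U; A G](3), every cell of the main diagonal equals +1.
M = rng.choice([-1, 1], size=(8, 8))
np.fill_diagonal(M, 1)

# Split M into 8 sparse "column" matrices: P_k keeps column k and zeroes the rest.
projectors = []
for k in range(8):
    P = np.zeros((8, 8), dtype=int)
    P[:, k] = M[:, k]
    projectors.append(P)

# Each P_k is a projector (idempotent) precisely because its diagonal cell is +1 ...
for P in projectors:
    assert np.array_equal(P @ P, P)

# ... and their sum restores the whole ±1-matrix.
assert np.array_equal(sum(projectors), M)
print("M is a sum of 8 column projectors")
```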
One can formulate here the phenomenological exclusion principle for evolutionary changes of dialects of the genetic code: it is forbidden for biological evolution to violate the separation of the set of 64 triplets into two subsets of triplets with strong and weak roots (black and white triplets) in such a way that the black-and-white mosaic of the genetic matrix [C U; A G](3) in its «±1-representation» ceases to be a sum of 8 column projectors (or 8 row projectors).

Figure 39. The matrices [C U; A G](3) show 19 known dialects of the genetic code. Black (white) cells contain triplets with strong (weak) roots. Red color shows triplets which have different code meanings in the considered dialect in comparison with their code meanings in the Vertebrate Mitochondrial Code, which is the most symmetrical among all dialects.
ABOUT «TENSORCOMPLEX» NUMBERS
This Section describes a system of multidimensional numbers, which seems to be new for mathematical natural sciences and which is constructed on the basis of sums of the genetic projectors described above. Here the author will take some data from his work [Petoukhov, 2012b].
Many people know that the sum of two complex numbers gives a new complex number and that the product of two complex numbers is commutative. This is true when these complex numbers belong to the same complex plane. But the sum of the (4x4)-representations of two complex numbers CL and CR, which belong to different planes of a 4-dimensional space, is not equal to a new complex number, and their product is not commutative: CL*CR ≠ CR*CL. Each of the products CL*CR and CR*CL gives a new complex number. Figure 40 (two middle levels) shows the expressions CL*CR and CR*CL with their decompositions into sets of two matrices, which correspond to the multiplication table of complex numbers. Figure 40 (bottom level) also shows the expression of the corresponding commutator. One should note here that the expression of the commutator CL*CR - CR*CL in Figure 40 belongs to the so-called "tensorcomplex numbers", which will be introduced below (Figures 41 and 42), with their quadrants marked by yellow and green colors to emphasize the special cross-like structure of this type of numbers.
Note that the order of the factors inside V is essential, since tensor multiplication is not commutative.
Let us consider the set of matrices which includes all matrices of this kind V together with their inverse matrices V^-1 and together with all products of matrices of this kind. This set of matrices has the properties of a multidimensional numeric system, as described below.
What can one say about the algebraic properties of matrices of this type V (Figure 41)? Matrices of this type can be added and subtracted. The matrix V has its inverse matrix V^-1 (Figure 42), which is defined on the basis of the condition V*V^-1 = V^-1*V = E4, where E4 is the identity matrix [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]. The product of two different matrices of this type (for example G and S in Figure 42) generates a new (4*4)-matrix W = G*S (Figure 42), where the two (2*2)-quadrants along the main diagonal (marked by yellow color) are identical to each other, and the two other (2*2)-quadrants (marked by green color) differ from each other only by a sign inversion of their entries. This matrix can be written in the form W = [1 0; 0 1]UQ + [0 1; -1 0]UK, where Q = [a*k-c*k-b*n-d*n, a*m+c*m+b*p-d*p; b*n-c*k-a*k-d*n, c*m-a*m-b*p-d*p] and K = [c*n-a*n-d*k-b*k, a*p-b*m+d*m+c*p; b*k+a*n-d*k+c*n, b*m-a*p+d*m+c*p]; U is the symbol of tensor multiplication; [1 0; 0 1] is a matrix representation of the real unit; [0 1; -1 0] is a matrix representation of the imaginary unit; a, b, c, d, k, m, n and p are real numbers. In the case of W, the multipliers of the basic elements are the square matrices Q and K instead of the real numbers «a» and «b» as in the case of complex numbers.
Note that the order of the factors inside W is essential, since tensor multiplication is not commutative.
Taking into account the significant role of tensor multiplication U in W, the author names algebraic constructions of the form W "tensorcomplex numbers", because such matrices W have the following algebraic properties in relation to the usual operations of addition, subtraction, multiplication and division:
• Addition and subtraction of two different matrices of this type W create a new matrix of the same type. Multiplication of different matrices of this type with each other is noncommutative, and it gives a new matrix of the same type (Figure 42).
• Each non-zero matrix W = G*S has an inverse matrix W^-1 = S^-1*G^-1 (expressions for G^-1 and S^-1 were shown in Figure 41). This allows the operation of division of two matrices of this type to be defined as multiplication by an inverse matrix.
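The closure and noncommutativity claims above can be checked numerically. The following minimal Python/NumPy sketch is written for this text (not taken from the cited works); it interprets the tensor multiplication U as the Kronecker product and builds tensorcomplex numbers from random integer (2*2)-components Q and K.

```python
import numpy as np

I2 = np.array([[1, 0], [0, 1]])          # matrix unit "1"
J  = np.array([[0, 1], [-1, 0]])         # matrix unit "i" (J @ J = -I2)

def tensorcomplex(Q, K):
    """Build W = 1 (x) Q + i (x) K, using the Kronecker product for (x)."""
    return np.kron(I2, Q) + np.kron(J, K)

def split(W):
    """Recover the (2x2) components Q and K from the block structure of W."""
    return W[:2, :2], W[:2, 2:]

rng = np.random.default_rng(0)
Q1, K1, Q2, K2 = (rng.integers(-3, 4, size=(2, 2)) for _ in range(4))

W1, W2 = tensorcomplex(Q1, K1), tensorcomplex(Q2, K2)
P = W1 @ W2

# Closure: the product is again of the form 1 (x) Q + i (x) K, with
# Q = Q1@Q2 - K1@K2 and K = Q1@K2 + K1@Q2, mirroring complex multiplication
# but with matrix components in place of real ones.
Qp, Kp = split(P)
assert np.array_equal(P, tensorcomplex(Qp, Kp))
assert np.array_equal(Qp, Q1 @ Q2 - K1 @ K2)
assert np.array_equal(Kp, Q1 @ K2 + K1 @ Q2)

# Noncommutativity: in general W1*W2 != W2*W1 because Q and K are matrices.
print(np.array_equal(W1 @ W2, W2 @ W1))   # typically False
```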
Such properties of tensorcomplex numbers resemble the algebraic properties of Hamilton's quaternions, which represent a noncommutative division algebra (http://en.wikipedia.org/wiki/Quaternion). Here one should emphasize that tensorcomplex numbers differ cardinally from hypercomplex numbers x0 + x1*i1 + … + xn*in, where x0, x1, …, xn are real numbers, because, in the case of tensorcomplex numbers, the multipliers of the basic elements are square matrices and not real numbers. For this reason, the famous Frobenius theorem (http://en.wikipedia.org/wiki/Frobenius_theorem_(real_division_algebras)) for hypercomplex numbers is not related to tensorcomplex numbers. This theorem says that any finite-dimensional associative division algebra over the real numbers is isomorphic to one of the following algebras: the real numbers, the complex numbers, or Hamilton's quaternions.

W = G*S = [a*k-c*k-b*n-d*n, a*m+c*m+b*p-d*p, c*n-a*n-d*k-b*k, a*p-b*m+d*m+c*p; b*n-c*k-a*k-d*n, c*m-a*m-b*p-d*p, b*k+a*n-d*k+c*n, b*m-a*p+d*m+c*p; b*k+a*n+d*k-c*n, b*m-a*p-d*m-c*p, a*k-c*k-b*n-d*n, a*m+c*m+b*p-d*p; d*k-a*n-b*k-c*n, a*p-b*m-d*m-c*p, b*n-c*k-a*k-d*n, c*m-a*m-b*p-d*p]

Figure 42. Multiplication of two matrices G and S gives a new matrix W = G*S, which belongs to the so-called «tensorcomplex numbers». Here a, b, c, d, k, m, n and p are real numbers. This set of tensorcomplex numbers is one example of numeric systems where real numbers exist only inside matrices in the form of whole ensembles, and not as individual multipliers inside such numeric systems. It is one of the differences between tensorcomplex numbers and hypercomplex systems. One can note that the commutator CL*CR - CR*CL (Figure 40) belongs to the tensorcomplex numbers.
So far in this Section we have considered the case of tensorcomplex numbers represented by means of the expression W = [1 0; 0 1]UQ + [0 1; -1 0]UK, where Q and K are (2*2)-matrices. But our work [Petoukhov, 2012b, …] describes that complex numbers can also be represented by means of sparse (2^N*2^N)-matrices on the basis of sums of projectors (here N = 2, 3, 4, …). One can take the sum of two complex numbers which belong to different planes of the same 2^N-dimensional space and which are represented by means of appropriate (2^N*2^N)-matrices. In this case new types of tensorcomplex numbers arise. The theory and expressions for such tensorcomplex numbers in spaces of higher dimensions are being developed now for publication in the near future.
The author hopes that tensorcomplex numbers, which seem to be a new type of multidimensional numbers for mathematical natural sciences, will be useful not only in bioinformatics, but also in physics, theory of communication, logic and other fields.
The set of matrices of the type D has zero divisors, examples of which are shown in Figure 44. Figure 44 also shows a general expression for the inverse matrix D^-1 of the matrix D.
Multiplication of two matrices D0 and D1 of this type gives a new matrix L (Figure 45), where both quadrants along each of the diagonals are identical to each other (they are marked by yellow and blue colors). This matrix can be represented in the following form: L = [1 0; 0 1]UQ + [0 1; 1 0]UK, where Q = [a*k-c*k+b*n+d*n, a*m+c*m-b*p+d*p; d*n-c*k-b*n-a*k, c*m-a*m+b*p+d*p] and K = [a*n+b*k-c*n+d*k, b*m-a*p-c*p-d*m; d*k-b*k-c*n-a*n, a*p-b*m-c*p-d*m].
Taking into account the significant role of tensor multiplication U in L, the author names algebraic constructions of the form L (Figure 45) "tensorhyperbolic numbers", because such matrices L have the following algebraic properties in relation to the usual operations of addition, subtraction, multiplication and division:
• Addition, subtraction and multiplication of two different matrices of this type create a new matrix of the same type. Multiplication of different matrices of this type with each other is noncommutative. The set of matrices L has zero divisors.
• Each non-zero matrix L = D0*D1, if it is not a zero divisor, has an inverse matrix L^-1 = D1^-1*D0^-1 (the expression for D^-1 was shown in Figure 44). This allows the operation of division of two matrices of this type to be defined as multiplication by an inverse matrix.

L = D0*D1 = [a*k-c*k+b*n+d*n, a*m+c*m-b*p+d*p, a*n+b*k-c*n+d*k, b*m-a*p-c*p-d*m; d*n-c*k-b*n-a*k, c*m-a*m+b*p+d*p, d*k-b*k-c*n-a*n, a*p-b*m-c*p-d*m; a*n+b*k-c*n+d*k, b*m-a*p-c*p-d*m, a*k-c*k+b*n+d*n, a*m+c*m-b*p+d*p; d*k-b*k-c*n-a*n, a*p-b*m-c*p-d*m, d*n-c*k-b*n-a*k, c*m-a*m+b*p+d*p]

Figure 45. Multiplication of two matrices D0 and D1 (of the type D from Figure 44) gives a new matrix L = D0*D1, which belongs to the so-called «tensorhyperbolic numbers». Here a, b, c, d, k, m, n and p are real numbers. This Section has described the case of tensorhyperbolic numbers in the form of (4*4)-matrices for 4-dimensional spaces. But tensorhyperbolic numbers and their generalization can be expressed in the form of (2^N*2^N)-matrices for 2^N-dimensional spaces. These materials will be published later, together with data about «tensordual» numbers, «tensorquaternions», etc.
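As an illustration of these properties, here is a minimal Python/NumPy sketch written for this text (not taken from the cited works); it again treats U as the Kronecker product, checks the closure of multiplication for tensorhyperbolic numbers, and exhibits a pair of nonzero zero divisors, in analogy with (1+j)*(1-j) = 0 for ordinary hyperbolic numbers.

```python
import numpy as np

I2 = np.array([[1, 0], [0, 1]])          # matrix unit "1"
Jh = np.array([[0, 1], [1, 0]])          # hyperbolic unit "j" (Jh @ Jh = +I2)

def tensorhyperbolic(Q, K):
    """Build L = 1 (x) Q + j (x) K, using the Kronecker product for (x)."""
    return np.kron(I2, Q) + np.kron(Jh, K)

rng = np.random.default_rng(1)
Q1, K1, Q2, K2 = (rng.integers(-3, 4, size=(2, 2)) for _ in range(4))

L1, L2 = tensorhyperbolic(Q1, K1), tensorhyperbolic(Q2, K2)

# Closure: the product keeps the same form, with components mirroring
# hyperbolic-number multiplication: Q = Q1@Q2 + K1@K2, K = Q1@K2 + K1@Q2.
assert np.array_equal(L1 @ L2,
                      tensorhyperbolic(Q1 @ Q2 + K1 @ K2, Q1 @ K2 + K1 @ Q2))

# Zero divisors: two nonzero elements can multiply to the zero matrix,
# just as (1+j)*(1-j) = 0 for ordinary hyperbolic numbers.
A = tensorhyperbolic(Q1, Q1)        # "1 (x) Q1 + j (x) Q1"
B = tensorhyperbolic(K1, -K1)       # "1 (x) K1 - j (x) K1"
assert np.array_equal(A @ B, np.zeros((4, 4), dtype=int))   # provided Q1, K1 != 0
```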
Different types of such multidimensional numbers can be combined under the brief name «tensornumbers». Tensornumbers [1]UM0 + [i1]UM1 + … + [in]UMn (here M0, …, Mn are square matrices) are a generalization of hypercomplex numbers in the case when the following changes are made in the usual denotation of hypercomplex numbers 1*x0 + i1*x1 + … + in*xn:
• the real multipliers x0, x1, …, xn are replaced by square matrices M0, M1, …, Mn;
• the usual multiplication is replaced by tensor multiplication.
It is obvious that hypercomplex numbers 1*x0 + i1*x1 + … + in*xn are a degenerate case of tensornumbers, when their matrices Mn have the first order: 1) the (1x1)-matrices Mn = [xn] are real numbers xn; 2) the tensor multiplication [in]U[xn] of a (1x1)-matrix [xn] with the matrix representation [in] of any of the basic elements is commutative and coincides with usual multiplication.
What can one say at this initial stage about the future of tensornumbers in mathematical natural sciences and technologies? Two extreme points of view are possible here: 1) tensorcomplex numbers will have no applications; 2) tensorcomplex numbers will have a great significance for mathematical natural sciences, including the creation of new theories of physical fields and their generalization, new laws of conservation, the generalization of many physical and other rules and knowledge, new approaches in engineering and biological informatics, mathematical logic, etc. The author believes that the second point of view will coincide with the real future to a higher extent.
SOME CONCLUDING REMARKS
As it was noted in the beginning of the article, a living organism is a machine for the coding and processing of information. For example, visual information about external objects is transmitted through the nerves from the eye retina to the brain already in a logarithmically encoded form. This article shows some evidence that oblique projection operators and their combinations, which are connected with matrix representations of the genetic coding system, can be a basis of adequate approaches to simulate ensembles of inherited biological phenomena. Only some of such phenomena were considered in this article. Some other phenomena will be described from the proposed point of view later. In addition, the author reminds the reader about fractal genetic nets (FGN), which can be represented as constructions on a base of orthogonal projectors and which lead to new genetic rules in the structures of long nucleotide sequences [Petoukhov, 2012; Petoukhov, Svirin, 2012]. The revealed genetic system of operators connected with oblique projectors allows modeling multi-dimensional phase spaces with many subspaces, processes in which can be selectively determined and controlled. Speaking about the importance of projectors in nature, the following points should be noted (it seems that nature «likes» projectors):
• Most people are familiar with the idea of projectors due to sun rays (and light rays in general), which propagate in straight lines and cast shadows of objects (since ancient times sundials have been constructed on this basis). Light rays have the projection property;
• Electromagnetic vectors are the sum of their projections in the form of their electric and magnetic vectors;
• The evolution of living organisms is associated with the consumption of solar energy that is projected by means of sun rays onto the surfaces of living bodies (photosynthesis, which is one of the basic mechanisms of living matter, is the conversion of sunlight energy into biochemical energy for the activity of organisms; circadian biorhythms are connected with the external light cycle "day-night");
• Projection phenomena of birefringence in biological tissues and crystals exist;
• A great variety of living organisms has polarization eyesight;
• Our vision is based on the projection of images onto the retina;
• Religious people may ask whether there is any indication in the Bible on this subject.
Especially for them, the following may be recalled. According to the Bible, God's creation of the world began with the creation of light: "Let there be light." Many thinkers have suggested previously, for various reasons, that, figuratively speaking, the body is woven from light. This has some associations with our hypothesis that the body is woven from projectors. Now projectors and their combinations become interesting instruments to study and simulate genetic phenomena and inherited structures and processes in living matter. A new conceptual notion with appropriate mathematical formalisms is proposed about a multidimensional control space (or coding space) with subspaces of selective control, on the basis of the participation of projection operators in such control. Here one can remember the statement: "Profound study of nature is the most fertile source of mathematical discoveries" (Fourier, 2006, Chapter 1, p. 7).
Using this ideology of projection operators, one can get many unexpected results and approaches. In the author's opinion, one of the many promising applications of projectors in mathematical biology and bioinformatics is the study of connections between genetic projectors and Boolean algebra. It is known that every family of commuting projectors generates a Boolean algebra of projectors. Boolean algebra plays a great role in modern science because of its connections with many scientific branches: mathematical logic, the problem of artificial intelligence, computer technologies, the foundations of probability theory, etc. G. Boole created his algebra of logic (of logical operators) to reflect inherited laws of human thought. One should note here that some of the genetic projectors (which are not described in this article) form commutative pairs; this fact provokes thoughts about Boolean algebras in genetics and bioinformatics and also about a genetic basis of the logic of human thought.
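As a minimal numeric illustration of the cited fact that commuting projectors generate a Boolean algebra (using two simple diagonal projectors as stand-ins, not the specific genetic projectors of this article):

```python
import numpy as np

# Two commuting (diagonal, orthogonal) projectors acting on a 4-dimensional space.
P = np.diag([1, 1, 0, 0])
Q = np.diag([1, 0, 1, 0])
I = np.eye(4, dtype=int)

# Boolean operations expressed through the projector algebra:
meet = P @ Q              # logical AND  -> projector onto the intersection
join = P + Q - P @ Q      # logical OR   -> projector onto the joint subspace
neg_P = I - P             # logical NOT  -> projector onto the complement

# Every element produced this way is again an idempotent (a projector) ...
for R in (meet, join, neg_P):
    assert np.array_equal(R @ R, R)

# ... and the familiar Boolean laws hold, e.g. one of de Morgan's laws:
assert np.array_equal(I - join, (I - P) @ (I - Q))
```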
Genetic molecules are subordinated to the laws of quantum mechanics, which began with the matrix mechanics of W. Heisenberg. Until this pioneering work by Heisenberg, matrices were not used in physics, where specialists operated only with numbers to study physical systems. It was very unexpected for the scientific community that applying whole ensembles of numbers in the form of matrices can be useful and appropriate to describe natural phenomena and systems. Contemporary science uses matrices widely in many fields, and our work uses matrices to study molecular-genetic systems.
Materials of this article reinforce the author's point of view that living matter in its informational fundamentals is an algebraic essence. The author believes that a development of algebraic biology, elements of which are contained in this and other of the author's articles, is possible. By analogy with the known fact that the molecular foundations of molecular genetics turned out to be unexpectedly simple, perhaps the algebraic foundations of living matter are also relatively simple. In the infinite set of matrices, we find a small subset which simulates the world of molecular genetic coding with many of its phenomenological features; this discovery was possible due to studying the family of alphabets in the molecular-genetic system.
Concerning the new theme of tensornumbers, which were revealed in our study of genetic projectors, one can recall that the idea of multi-dimensional numbers and multi-dimensional spaces has worked intensively for a long time in theoretical physics and other fields of science for modeling the phenomena of our physical world. Our results add mathematical formalisms, first of all, to the fields of molecular genetics and bioinformatics. After the discovery of non-Euclidean geometries and of Hamilton's quaternions, it is known that different natural systems can possess their own geometry and their own algebra (see about this [Kline, 1980]). The genetic code is connected with its own multi-dimensional numerical systems, or multi-dimensional algebras. These algebras allow revealing hidden peculiarities of the structure of the genetic code and its evolution. It seems that many difficulties of modern bioinformatics and mathematical biology are connected with utilizing, for their natural structures, inadequate algebras which were developed for completely different natural systems. Hamilton had similar difficulties in his attempts to describe 3D-space transformations by means of 3-dimensional numbers, while this description needs quaternions. The author hopes that the proposed tensornumbers will help to make progress not only in bioinformatics and mathematical biology, where an algebraization of biology seems to be possible, but also in many fields of sciences and technologies.

"Complexity of a civilization is reflected in complexity of numbers used by this civilization" [Davis, 1967]. Whether modern civilization will use tensornumbers or not is an open question. Pythagoras formulated the idea: "all things in the world are numbers" or "number rules the world". B. Russell noted that he did not know any other person who exerted such influence on the thinking of people as Pythagoras. From this viewpoint, there is no more fundamental scientific idea in the world than this idea about the basic meaning of numbers. Our research on oblique projectors in the field of matrix genetics has led to new systems of multidimensional numbers and has given new materials to the great idea of Pythagoras in its modernized formulation: "All things are multi-dimensional numbers".

This article proposes a new mathematical approach to study "a partnership between genes and mathematics" (see Section 1 above). In the author's opinion, this kind of mathematics is beautiful, and it can be used for the further development of algebraic biology and theoretical physics in accordance with the famous statement by P. Dirac, who taught that the creation of a physical theory must begin with a beautiful mathematical theory: "If this theory is really beautiful, then it necessarily will appear as a fine model of important physical phenomena. It is necessary to search for these phenomena to develop applications of the beautiful mathematical theory and to interpret them as predictions of new laws of physics" (this quotation is taken from [Arnold, 2007]). According to Dirac, all new physics, including relativistic and quantum physics, has developed in this way. One can suppose that this statement is also true for mathematical biology.
APPENDIX 1. COMPLEX NUMBERS, CYCLIC GROUPS AND SUMS OF GENETIC PROJECTORS
This Appendix shows a connection between complex numbers and cyclic groups on the base of the sums of (8*8)-projectors s0+s2, s0+s3, s1+s2, s1+s3, s4+s6, s4+s7, s5+s6, s5+s7 from Fig. 7, constructed on the base of the Rademacher (8*8)-matrix R8 (in Fig. 9 these sums were marked by green color, and they corresponded to cyclic groups if the weight coefficient 2^(-0.5) was used for them). Each of these 8 sums can be decomposed into two matrices e2k and e2k+1 (k = 0, 1, …, 7), whose set is closed under multiplication and has a multiplication table which coincides with the multiplication table of the basic elements of complex numbers (Fig. 46). It means that these matrices e2k and e2k+1 represent the basic elements of complex numbers in corresponding 2-dimensional planes of an 8-dimensional vector space. Sets of matrices a2k*e2k + a2k+1*e2k+1 (here a2k and a2k+1 are real numbers; each matrix e2k plays the role of the unit matrix inside the appropriate set a2k*e2k + a2k+1*e2k+1) represent complex numbers inside these 2-dimensional planes of the 8-dimensional space.

Fig. 46. The decomposition of each of the (8*8)-matrices s0+s2, s0+s3, s1+s2, s1+s3, s4+s6, s4+s7, s5+s6, s5+s7 from Fig. 7 into a set of two matrices e2k and e2k+1 (k = 0, 1, …, 7), whose set is closed under multiplication and gives the multiplication table of complex numbers (on the right).

In the general case, the described approach allows constructing selective operators of a 2^n-dimensional vector space with a set of different 2-dimensional planes, each of which can contain a function of complex numbers (the parameters of these functions in different planes can be independent or interrelated). Such a selective operator allows simulating the combinatory behaviour of a multi-parametric system which contains different 2-parametric subsystems, whose independent or interrelated behaviours can be simulated by means of functions of complex numbers. If these functions of complex numbers are cyclic, such a selective operator describes the behaviour of a multi-parametric system which contains an appropriate ensemble of 2-parametric subsystems with cyclic behaviours.
APPENDIX 2. HYPERBOLIC NUMBERS AND SUMS OF GENETIC PROJECTORS
Let us now show a connection between hyperbolic numbers and the sums of (8*8)-projectors s0+s1, s0+s4, s0+s5, s1+s4, s1+s5, s2+s3, s2+s6, s2+s7, s3+s6, s3+s7, s4+s5, s6+s7 from Fig. 7, constructed on the base of the Rademacher (8*8)-matrix R8 (in Fig. 9 these sums were marked by red color). Each of these 12 sums can be decomposed into two matrices j2k and j2k+1 (k = 0, 1, 2, …, 11), whose set is closed under multiplication and has a multiplication table which coincides with the multiplication table of the basic elements of hyperbolic numbers (Fig. 47). It means that these matrices j2k and j2k+1 represent the basic elements of hyperbolic numbers in corresponding 2-dimensional planes of an 8-dimensional vector space. Sets of matrices a2k*j2k + a2k+1*j2k+1 (here a2k and a2k+1 are real numbers; each matrix j2k plays the role of the unit matrix inside the appropriate set a2k*j2k + a2k+1*j2k+1) represent hyperbolic numbers inside these 2-dimensional planes of the 8-dimensional space.

Fig. 47. The decomposition of each of the (8*8)-matrices s0+s1, s0+s4, s0+s5, s1+s4, s1+s5, s2+s3, s2+s6, s2+s7, s3+s6, s3+s7, s4+s5, s6+s7 from Fig. 7 into a set of two matrices j2k and j2k+1 (k = 0, 1, …, 11), whose set is closed under multiplication and gives the multiplication table of hyperbolic numbers (on the right).
APPENDIX 3. ANOTHER TENSOR FAMILY OF GENETIC HADAMARD MATRICES
Hadamard matrices are well-known in noise-immunity coding, quantum mechanics, etc. Their rows are Walsh functions, which are widely used in radiocommunication for code division in systems with many channels, for example in cellular standards such as IS-95, CDMA2000 or UMTS. Walsh functions and the corresponding series and transforms find various applications in physics and engineering, in particular in digital signal processing. They are used in speech recognition, in medical and biological image processing, in digital holography, and in other areas.
Above we have described a variant of the relation of the molecular-genetic system with the special tensor family of Hadamard matrices H4, H8, etc. (Fig. 1 and 38). That variant uses the phenomenological fact of the existence of triplets with strong and weak roots; in other words, it is based on a specificity of the degeneracy of the genetic code. This Appendix shows a second variant of a relation of the genetic alphabets with another tensor family of Hadamard matrices. This new family of genetic Hadamard matrices is based only on properties of the genetic alphabet A, C, G, T and does not depend on the degeneracy of the genetic code. In the author's opinion, this new variant is more interesting and fundamental for further use in future genetic research. Fig. 48 shows the beginning of this new tensor family of Hadamard matrices P(n) = [1 1; -1 1](n) together with the genetic matrices [C T; A G](m) (m = 1, 2, 3) of monoplets, duplets and triplets, with their black-and-white mosaics, which coincide with the mosaics of the Hadamard matrices. From this point of view, the letters A, C, G are identical to each other and the letter T is opposite to them. Correspondingly, this binary-oppositional division inside the DNA alphabet can be reflected by the symbol "+1" for each of the letters A, C, G and by the opposite symbol "-1" for the letter T. Concerning the genetic matrices [C T; A G](n), this approach leads to a simple algorithm for assigning a sign «+1» or «-1» to each of the multiplets (monoplets, duplets, triplets, etc.): each multiplet is considered as a product of the signs «+1» or «-1» of its letters (A = C = G = +1, T = -1). For example, the triplet CTG has the sign «-1» because 1*(-1)*1 = -1; the triplet CTT has the sign «+1» because 1*(-1)*(-1) = +1, etc. Inside the genetic matrices [C T; A G](n) in Fig. 48, all multiplets with the symbol «+1» are denoted by black color and all multiplets with the symbol «-1» are denoted by white color. As a result we obtain the connection of the genetic matrices [C T; A G](n) with the Hadamard matrices P(n) = [1 1; -1 1](n) on the base of fundamental molecular-genetic properties, which can be used in genetic computers of living organisms.
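The sign-assignment algorithm just described is easy to state in code. Below is a minimal Python sketch written for this text (not taken from the cited works): it builds the symbolic tensor powers [C T; A G](n), applies the rule A = C = G = +1, T = -1, and checks that the resulting ±1 mosaic satisfies the defining Hadamard property H*H^T = 2^n*I. The identification with the particular family P(n) = [1 1; -1 1](n) additionally depends on the row/column numbering convention of the tensor power, so only the Hadamard property itself is verified here.

```python
import numpy as np

# Binary opposition inside the DNA alphabet: A = C = G -> +1, T -> -1.
SIGN = {"A": 1, "C": 1, "G": 1, "T": -1}
BASE = [["C", "T"], ["A", "G"]]              # the kernel matrix [C T; A G]

def plet_sign(plet):
    """Sign of a multiplet = product of the signs of its letters."""
    s = 1
    for ch in plet:
        s *= SIGN[ch]
    return s

def genomatrix(n):
    """Symbolic tensor power [C T; A G](n); entry (i, j) is an n-plet."""
    M = [[""]]
    for _ in range(n):
        M = [[cell + BASE[i][j] for cell in row for j in range(2)]
             for row in M for i in range(2)]
    return M

print(plet_sign("CTG"), plet_sign("CTT"))    # -1 +1, as in the examples above

for n in (1, 2, 3):
    H = np.array([[plet_sign(p) for p in row] for row in genomatrix(n)])
    # The resulting ±1 mosaic is a Hadamard matrix: H * H^T = 2^n * identity.
    assert np.array_equal(H @ H.T, (2 ** n) * np.eye(2 ** n, dtype=int))
```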
APPENDIX 4. ABOUT SOME APPLICATIONS IN ROBOTICS
Turning once more to tensornumbers (first of all, to tensorcomplex numbers), which were described above, one can note their possible application in some tasks of robotics where movement control of a group of robots is needed. In this case a special class of multiparametric systems with a tensornumber organization is under consideration. Control and encoding in these systems can be organized so that each subsystem of a representative set may be selectively controlled and coded independently of the other subsystems. An example of such a multi-parametric system is a group of robots, each of which moves along a certain trajectory in its plane in accordance with a program given by a matrix operator whose components are functions of time. The entire set of these individual operators is incorporated into a single matrix operator, whose multiplication with another matrix operator of a similar structure generates a new matrix operator endowed with the same property of independent motion control of each robot in a new regime. (This is the problem of collective motion control of a set of robots, each of which can move quite independently of the others, because the management of its movement is carried out by its "personal" sub-operator within a general matrix operator; collective restructuring of all sub-operators is then possible by means of a simple multiplication of the general operator with a matrix operator of a similar structure.)
Consider an example of collective management of an 8-parametric system, which has a tensorcomplex type of organization and which consists of four 2-parametric subsystems (four robots), the status of each of which may change over time regardless of the status of the other three subsystems. This management can be carried out using an (8*8)-matrix M(t), which is an operator of the tensorcomplex type; all its 8 components a0, a1, ..., a7 are functions of time t (Figure 53). A state of the whole system inside its configuration space during time can be characterized by an 8-dimensional vector [x0, x1, x2, ..., x7], which is determined by the operator M(t).
In this example, the behavior of each of the four 2-parametric subsystems is graphically depicted by means of the movement of a point inside one of the four coordinate planes (x0, x4), (x1, x5), (x2, x6), (x3, x7) of the configuration space of the system; this behavior can be determined entirely independently of the behavior of the other three subsystems. To specify the path and pace of movement of each point along its individual trajectory, which is defined parametrically, it is only necessary to set the functions an(t) of the operator M(t).
The motion in the plane (x0, x4) occurs along a cardioid; in the plane (x1, x5), along a "petal clover"; in the plane (x2, x6), along a "5-petal rose"; and in the plane (x3, x7), along a logarithmic spiral (the corresponding parametric expressions are set through the functions an(t)).

Fig. 54. The example of motion trajectories for 4 sub-systems of the 8-parametric system (explanation in text).
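Since Figure 53 is not reproduced here, the following Python sketch only illustrates the idea under an explicit assumption: each pair of components (a_k, a_{k+4}) is taken to act as the complex number a_k + i*a_{k+4} in its own plane (x_k, x_{k+4}), so that a single (8*8) operator drives four independent planar trajectories. The particular curve equations (cardioid, clover, rose, spiral) are illustrative choices, not the author's exact ones.

```python
import numpy as np

def block_operator(a):
    """Assemble an (8x8) operator from 8 time-dependent components a[0..7].

    Assumption (Figure 53 is not reproduced here): the pair (a[k], a[k+4])
    acts as the complex number a[k] + i*a[k+4] in the plane (x_k, x_{k+4}),
    so each 2x2 block controls one subsystem independently of the others.
    """
    M = np.zeros((8, 8))
    for k in range(4):
        M[np.ix_([k, k + 4], [k, k + 4])] = [[a[k], -a[k + 4]],
                                             [a[k + 4], a[k]]]
    return M

def components(t):
    """Illustrative parametric curves for the four subsystems (one per plane)."""
    cardioid = (1 - np.cos(t)) * np.array([np.cos(t), np.sin(t)])
    clover   = np.cos(3 * t)   * np.array([np.cos(t), np.sin(t)])
    rose5    = np.cos(5 * t)   * np.array([np.cos(t), np.sin(t)])
    spiral   = np.exp(0.1 * t) * np.array([np.cos(t), np.sin(t)])
    a = np.empty(8)
    for k, (px, py) in enumerate([cardioid, clover, rose5, spiral]):
        a[k], a[k + 4] = px, py
    return a

x_start = np.zeros(8)
x_start[:4] = 1.0                      # unit "real" coordinate in every plane

# Sampling M(t) @ x_start traces all four trajectories at once;
# changing one curve does not disturb the other three subsystems.
trajectory = np.array([block_operator(components(t)) @ x_start
                       for t in np.linspace(0, 2 * np.pi, 400)])
plane_0 = trajectory[:, [0, 4]]        # the cardioid in the plane (x0, x4)
```

Under this assumed block layout, the product of two such operators again has the same form (each plane multiplies like an ordinary complex number), which mirrors the collective-restructuring property described above.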
Progranulin (PGRN) is an autocrine growth factor that exerts crucial roles within cartilage tissue; however, the molecular mechanisms underlying PGRN-mediated cartilage homeostasis remain elusive. In the present study, we investigated the role of PGRN in regulating chondrocyte homeostasis and its therapeutic potential for managing osteoarthritis (OA). We found that PGRN levels are significantly increased in human cartilage in mild OA and that its expression is decreased in the cartilage in severe OA. In vitro , treatment of primary rat chondrocytes with recombinant PGRN significantly enhanced the levels of collagen type II a 1 chain (COL2A1) and aggrecan, and attenuated TNF a -induced up-regulation of matrix metallopeptidase 13 (MMP13) and ADAM metallopeptidase with thrombospondin type 1 motif 5 (ADAMTS5) in chondrocytes. These effects were abrogated in SIRT1 2 / 2 cells,
Osteoarthritis (OA) is the most prevalent disorder in joint disease, characterized by progressive loss of articular cartilage, synovial inflammation, and osteophyte formation (1), causing joint pain and disability in older adults. A series of pathological changes are implicated in OA development, including loss of chondrocyte cellularity, an imbalance between anabolism and catabolism of chondrocytes (1,2), increased apoptosis (3), and bone regeneration of subchondral bone. Although it is well established that multiple risk factors have been involved in OA development, including age, sex, joint injury, and mechanical and genetic factors (4), much remains to be explored for its underlying molecular mechanisms.
Progranulin (PGRN), also known as proepithelin, granulin/ epithelin precursor, and PC cell-derived growth factor, is a 593 amino acid autocrine growth factor which is involved in a variety of physiological and disease processes such as inflammation (5,6), wound healing (7), tumorigenesis (8), and bone regeneration (9). Interestingly, several previous studies indicated that PGRN could act as a cartilage growth factor and exert a crucial role in cartilage homeostasis. For instance, PGRN-deficient mice showed more severe degeneration of articular cartilage compared with control mice, whereas intra-articular injection of recombinant PGRN dramatically improved OA score and attenuated cartilage matrix loss in surgically induced OA mice (10). Consistently, it is reported that PGRN could inhibit the degradative processes that occur in OA and RA patients by disrupting the interaction between cartilage oligomeric matrix protein and ADAMTS7/ADAMTS12 (11). Additionally, several clinical investigations indicated that both serum and protein levels of PGRN were significantly higher in patients with OA and rheumatoid arthritis (RA) (12)(13)(14). Collectively, these observations support an important regulatory role for PGRN in cartilage homeostasis.
Although the regulatory role of PGRN in cartilage homeostasis has been identified, the intracellular events responsible for PGRN-mediated protective role against OA progression needed to be elucidated in more detail. It is reported that sirtuin-1 (SIRT1), an NAD 1 -dependent histone deacetylase, exerts protective roles in human chondrocytes through enhancing the expression of cartilage anabolic markers such as collagen type II a 1 chain (COL2A1) (15) and aggrecan (16) while inhibiting apoptosis (17). Furthermore, inhibition of SIRT1 in human chondrocytes leads to OA-like gene expression changes (18), and cartilage-specific SIRT1 knockout mice show accelerated OA progression (19). We recently showed that SIRT1 facilitates growth plate chondrogenesis via deacetylating PERK and attenuating the PERK-eIF-2a-CHOP axis of the unfolded protein response pathway (20). Of note, treatment with PGRN in podocytes has been shown to increase SIRT1 expression and activity, resulting in decreased level in acetylation of PGC-1a and FoxO1 (21). However, whether and how This article contains supporting information. ‡ These authors contributed equally to this work. * For correspondence: Shufang Wu, shufangw@hotmail.com; Yangjun Zhu, hhyycskzyj@163.com.
However, whether and how PGRN modulates SIRT1 expression and activity in articular chondrocytes has not been elucidated yet. Based on all these findings, we hypothesized that PGRN facilitates cartilage homeostasis by up-regulating SIRT1 expression and activity. In the current study, PGRN promoted SIRT1 expression and activity and thereby initiated a cascade of target genes involved in the anabolism and catabolism of chondrocytes, which clearly demonstrates the relationship between PGRN and SIRT1 in chondrocytes. To the best of our knowledge, this is the first study to demonstrate that induction of SIRT1 expression and activity in articular chondrocytes is critical for PGRN-mediated cartilage homeostasis.
PGRN expression in human articular cartilage of OA
To gain insight into the role of PGRN during OA development, we obtained normal cartilage of the femoral head from patients with femoral neck fracture without joint disease, the lateral femoral condyle with mild OA, and the medial femoral condyle with severe OA. Safranin-O staining of cartilage was weaker in the cartilage of lateral femoral condyle (mild OA group) than in the normal group, and the cartilage of medial femoral condyle (severe OA group) was barely stained with safranin-O and was severely degenerated (Fig. 1a). Immunohistochemistry analysis showed that PGRN levels were increased in human cartilage in mild OA, with PGRN-positive cells mainly observed in the superficial zone, while its expression was decreased in severe OA (Fig. 1b). Moreover, the levels of PGRN in human articular cartilage were further confirmed by Western blotting and real-time PCR (Fig. 1, c and d). Similarly, the expression of PGRN receptor EphA2 was increased in mild OA cartilage but decreased in severe OA cartilage (Fig. S1a).
PGRN positively regulates cartilage gene expression and up-regulates SIRT1 in cultured chondrocytes
Because PGRN is known as a growth factor that promotes cell cycle progression in many cellular systems (22), we sought to determine whether PGRN affects cartilage-specific gene expression in cultured chondrocytes. Therefore, we tested the expression of cartilage gene markers, including COL2A1 and aggrecan, in chondrocytes isolated from rat articular cartilage in the presence of recombinant PGRN (0-200 ng/ml). As expected, graded concentrations of PGRN stimulated COL2A1 and aggrecan expression and secretion in a dose-dependent manner, with 100 ng/ml being the lowest concentration producing significant stimulation (Fig. 2, a-e and Fig. S2a). In addition, 50 ng/ml and 100 ng/ml recombinant PGRN increased the level of the PGRN receptor EphA2 in chondrocytes (Fig. S1b). The well-accepted finding that overexpression or activation of SIRT1 promotes cartilage gene expression in human chondrocytes (15), together with recent findings that PGRN regulates SIRT1-PGC-1α/FoxO1 signaling in podocytes (21), led us to explore whether SIRT1 was involved in PGRN-mediated anabolism of chondrocytes. PGRN enhanced SIRT1 expression and activity in a dose-dependent manner, with higher concentrations (100 and 200 ng/ml) causing a statistically significant up-regulation (Fig. 2, f-h and Fig. S2b). Interestingly, inhibition of SIRT1 by SIRT1 siRNA (the knockdown efficiency was validated by real-time PCR and Western blotting) (Fig. 2, i and j and Fig. S2c) dramatically repressed the PGRN-mediated up-regulation of cartilage genes (Fig. 2, k-o and Fig. S2d), indicating that PGRN may promote COL2A1 and aggrecan expression and secretion via up-regulation of SIRT1.
PGRN decreases the acetylation level of SOX9 and promotes SOX9 nuclear translocation
Because several studies reported that SIRT1 promotes cartilage-specific gene expression through deacetylation of SOX9 (15,16), we hypothesized that PGRN increased the expression of COL2A1 and aggrecan through SIRT1-mediated SOX9 deacetylation. Co-immunoprecipitation assays showed that PGRN could significantly reduce the acetylation level of SOX9 (Fig. 3a), whereas this effect was abrogated in SIRT1−/− chondrocytes (Fig. 3b). Given that the SOX9 acetylation state may affect SOX9 nuclear entry (16), we postulated that PGRN increased SIRT1 expression and activity to reduce the acetylation level of SOX9 and thereby promoted nuclear translocation of SOX9. As expected, PGRN significantly increased the nuclear localization of SOX9, whereas co-treatment with SIRT1 siRNA reversed this stimulatory effect of PGRN (Fig. 3, c and d).
PGRN inhibited TNFα-induced MMP13 and ADAMTS5 expression through SIRT1-mediated deacetylation of P65
To confirm whether PGRN affects the catabolism of chondrocytes, we examined the effect of PGRN on the expression of matrix metallopeptidase 13 (MMP13) and ADAM metallopeptidase with thrombospondin type 1 motif 5 (ADAMTS5). However, graded concentrations of PGRN (0-200 ng/ml) had a minor effect on the expression of MMP13 and ADAMTS5 at both the protein and mRNA levels (Fig. 4, a and b and Fig. S3a). To explore the role of PGRN in cellular models of OA, chondrocytes were treated with 10 ng/ml TNFα for 24 h. However, no changes were observed in either PGRN expression or secretion upon stimulation with TNFα, as detected by real-time PCR and ELISA (Fig. 4, c and d). In addition, TNFα inhibited the activity of SIRT1, whereas a combination of PGRN with TNFα partially abrogated this inhibition (Fig. 4e). Stimulation with TNFα significantly induced MMP13 and ADAMTS5 expression, while the addition of PGRN abolished the chondrocyte catabolism induced by TNFα (Fig. 4, f and g and Fig. S3b). Considering the protective role of SIRT1 against OA-related catabolic stimuli in cultured chondrocytes (23), we reasoned that PGRN may suppress TNFα-induced chondrocyte catabolism through up-regulation of SIRT1. Indeed, under stimulation with TNFα, PGRN significantly inhibited MMP13 and ADAMTS5 expression, whereas this suppression was largely nullified in SIRT1−/− chondrocytes (Fig. 4, h and i and Fig. S3c).
It is well accepted that TNFα can induce NF-κB-P65 activation and up-regulate the levels of various MMPs and ADAMTSs (24) in chondrocytes. Therefore, to explore the downstream mechanism that regulates chondrocyte catabolism, we assessed the acetylation level of P65 at Lys-310 in cultured chondrocytes with or without SIRT1 siRNA. TNFα increased both the total P65 and the acetylated P65 expression in chondrocytes (Fig. 5a). PGRN reduced the TNFα-induced acetylation level of P65, whereas a combination of PGRN with SIRT1 siRNA abolished this inhibition induced by PGRN (Fig. 5a). Simultaneously, PGRN inhibited TNFα-induced nuclear translocation of P65, whereas this effect was largely lost in SIRT1−/− chondrocytes (Fig. 5, b-d). A previous study reported that SIRT1 inhibits MMP13 expression by reducing LEF1 transcriptional activity, and LEF-1 is known to bind to the MMP13 promoter and transactivate its expression (25). We then used ChIP assays to determine whether the SIRT1/P65 axis acts on the regulatory sequences of the ADAMTS5 promoter. As shown in Fig. 5e, PGRN inhibited the TNFα-induced enrichment of P65 on the ADAMTS5 promoter, with this inhibition being neutralized by co-treatment with SIRT1 siRNA, indicating that PGRN may regulate the transcriptional activity of ADAMTS5 through the SIRT1/P65 axis. Collectively, our data indicated that PGRN could inhibit TNFα-induced chondrocyte catabolism by enhancing the level of SIRT1, thereby deacetylating P65 and suppressing its nuclear accumulation.
Effects of endogenous PGRN ablation on cartilage gene expression
To determine the effect of endogenous PGRN, we used siRNA against PGRN in cultured chondrocytes; knockdown was validated by the reduction of PGRN protein and mRNA expression measured by Western blotting and real-time PCR (Fig. 6, a and b and Fig. S4a). Meanwhile, PGRN siRNA also resulted in a considerable decrease of PGRN secretion into the culture medium (Fig. 6c). Knockdown of PGRN inhibited the anabolism of chondrocytes, as evidenced by a decrease in COL2A1 and aggrecan expression and secretion (Fig. 6, d-g), whereas it did not affect the levels of catabolic markers, including MMP13 and ADAMTS5 (Fig. 6h and Fig. S4b).
Discussion
PGRN has been purified and identified as a growth factor from conditioned tissue culture media by several independent laboratories (26)(27)(28), suggesting that it may act in an autocrine manner. Additionally, PGRN is expressed in both growth plate chondrocytes and articular cartilage chondrocytes, and its level is significantly increased throughout cartilage development, whereas cartilage-specific ablation of PGRN results in a sharp reduction in skeletal length (29). Our study also showed that recombinant PGRN significantly enhanced the levels of COL2A1 and aggrecan, whereas down-regulation of endogenous PGRN inhibited COL2A1 and aggrecan expression and secretion. Therefore, it is reasonable to speculate that PGRN functions effectively in an autocrine manner in chondrocytes. Because PGRN is an autocrine growth factor, we first determined the level of endogenous PGRN in OA cartilage samples. We found that the expression of PGRN was increased in the cartilage of lateral femoral condyles with mild OA, whereas its level was decreased in the cartilage of medial femoral condyles with severe OA. Although PGRN may increase as an adaptive response to protect chondrocytes against various stresses during the initial degenerative phase, it may be that elevated PGRN levels are not sufficient to completely neutralize the loss of articular cartilage, and failure to adapt may lead to decreased PGRN levels and further progression of cartilage degeneration. Furthermore, the level of PGRN tended to be higher in the superficial zone of the cartilage, where chondrocytes may undergo more stress compared with the deep zone. We also observed that the PGRN receptor EphA2 was decreased in severe OA cartilage, which may be another reason why endogenous PGRN produced in OA could not ameliorate cartilage degradation.
Other endogenous autocrine factors in addition to PGRN also exert crucial roles within cartilage tissue. For instance, most bone morphogenetic proteins (BMPs), which are members of the transforming growth factor-β superfamily, play a protective role during OA development (30). Among the fibroblast growth factor family, FGF1, FGF2, and FGF8 may act as negative regulators of chondrocyte homeostasis, whereas FGF9 and FGF18 protect articular cartilage from degradation during OA progression (31). As with most anabolic growth factors, including BMPs and IGF-1, PGRN promotes chondrocyte proliferation and hypertrophy during growth plate chondrogenesis, an essential process for both long bone growth and bone fracture healing. Although BMP2 was the first to be tried in bone fracture healing given its promoting effect on new bone formation, its significant side effect of ectopic ossification hampers its further clinical application. Therefore, PGRN may be a novel therapeutic approach for application to bone fracture healing (32). Unlike other growth factors, however, PGRN still plays a protective role in the progression of OA, which is characterized by abnormal chondrocyte hypertrophy in articular cartilage followed by cartilage degradation. In the present study, we found that under the stimulation of TNFα, PGRN significantly inhibited MMP13 and ADAMTS5 expression, with such suppression being largely nullified in SIRT1−/− chondrocytes.

Figure 2. Effects of recombinant PGRN on chondrocyte anabolism and SIRT1 expression. Chondrocytes isolated from newborn Sprague-Dawley rat articular cartilage were cultured in the presence of graded concentrations of recombinant PGRN (0-200 ng/ml) for 48 h. a, at the end of the culture period, col2a1 mRNA level was examined by real-time PCR. One-way ANOVA and Dunnett's multiple comparison test. b, quantification of secreted COL2A1 in the conditioned medium of chondrocytes treated with PGRN. One-way ANOVA and Dunnett's multiple comparison test. c and d, aggrecan expression and secretion were detected by real-time PCR and ELISA, respectively. One-way ANOVA and Dunnett's multiple comparison test. e, COL2A1 and aggrecan protein expression were detected by Western blotting. f-h, the expression and activity of SIRT1 in chondrocytes treated with PGRN. One-way ANOVA and Dunnett's multiple comparison test. Chondrocytes transfected with control siRNA or SIRT1 siRNA were cultured in the absence or presence of 100 ng/ml PGRN. SIRT1i-1, SIRT1i-2, and SIRT1i-3 indicate three independent siRNAs against SIRT1. i and j, the protein and mRNA levels of SIRT1 were detected by Western blotting and real-time PCR. One-way ANOVA and Tukey's multiple comparison test. k and l, COL2A1 expression and secretion were detected by real-time PCR and soluble collagen quantification assay. One-way ANOVA and Tukey's multiple comparison test. m and n, aggrecan expression and secretion were detected by real-time PCR and ELISA. One-way ANOVA and Tukey's multiple comparison test. o, the protein levels of COL2A1 and aggrecan were analyzed by Western blotting. Data are expressed as mean ± S.D. in each scatterplot. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
Although PGRN exerts an anti-inflammatory function in inflammatory arthritis murine models, proteolytic cleavage of this precursor protein by serine proteases and metalloproteinases gives rise to individual 6-kDa granulin units, which are actually pro-inflammatory (33). Thus, the function of PGRN during inflammation is complex, with intact PGRN having anti-inflammatory properties, whereas granulins have been shown to promote inflammation. Furthermore, the function of PGRN might be tissue specific. Contrary to its positive effects on cartilage homeostasis, both our findings (34) and other studies (35,36) indicate that PGRN is a proinflammatory adipokine and is involved in adipose insulin resistance, suggesting that PGRN may be a negative factor in the regulation of energy metabolism. Unlike other anabolic growth factors (BMPs, insulin-like growth factors, fibroblast growth factors, etc.), whose binding receptors have already been identified, the membrane receptor of PGRN has not yet been clarified. Some studies reported that PGRN action is not mediated through TNFR (37,38), whereas more studies suggested that PGRN exerts its anti-inflammatory effect through antagonizing TNFα by binding to TNF receptors (5,6). However, in our preliminary study, neither TNFR1 nor TNFR2 mediated PGRN function on cartilage homeostasis and SIRT1 expression (data not shown). However, we found that the expression pattern of EphA2 was similar to that of PGRN in OA cartilage and that recombinant PGRN increased the EphA2 level in chondrocytes, implicating a potential involvement of EphA2 in the mediation of PGRN function. This finding may add value to the body of conflicting reports on the functional PGRN receptor in chondrocytes. Consistent with what we have found, other groups indicated that PGRN could bind to EphA2 (39) and sortilin (40) in urothelial carcinoma cells and neurons, respectively. Thus, further investigation is warranted to fully elucidate how PGRN acts through its membrane receptor to regulate cartilage homeostasis.
Among the related publications, little is known about the downstream events mediating PGRN in the regulation of cartilage homeostasis, apart from the finding that PGRN promotes chondrogenesis through activation of the Erk1/2 pathway in vitro (10,29). Although this experimental evidence has shed light on the signaling pathways involved, it remains an exciting prospect to determine the one or more signaling pathways ultimately mediating the effects of PGRN on cartilage homeostasis. Here our study provides a novel mechanism of PGRN action in maintaining cartilage homeostasis. In our preliminary study, we first examined autophagy activity in chondrocytes cultured with PGRN, because our previous studies showed that PGRN disturbed the autophagic balance in adipocytes and hepatocytes, respectively (34,41). However, no statistical changes were observed in the expression of the autophagy-related proteins LC3-II and p62. The discrepancies may reflect cell-type specificities, including differentiation state. As aging is a major risk factor of osteoarthritis, our latest study revealed a protective effect of SIRT1, a known anti-aging factor, on chondrogenesis through the PERK-eIF-2α-CHOP axis (20). More recently, Zhou et al. showed that PGRN increases SIRT1 expression, resulting in decreased acetylation of PGC-1α and FoxO1 in podocytes treated with high glucose. However, the crosstalk between PGRN and SIRT1 in chondrocytes remained unknown. In the current study, the expression and activity of SIRT1 were dramatically increased in the presence of PGRN, and the PGRN-mediated chondroprotective function was largely impaired in SIRT1−/− cells, indicating that PGRN mediates its effects, at least in part, by regulating the SIRT1 level. Although PGRN did not affect SOX9 expression (data not shown), we found that PGRN could decrease the acetylation level of SOX9 and subsequently increase the nuclear accumulation of SOX9 in chondrocytes. Because PGRN has no inherent deacetylase activity, its effect on the acetylation level and nuclear localization of SOX9 was thought to be mediated through SIRT1, which, in turn, promoted COL2A1 and aggrecan expression. In a subsequent study, we have shown that the effects of PGRN on SOX9 acetylation and nuclear translocation were significantly diminished after silencing SIRT1 expression, further supporting that these PGRN-mediated effects depend on up-regulation of SIRT1.
TNFα is well accepted to act as a critical mediator of pathogenesis in cartilage degeneration and arthritis; therefore, TNFα inhibitors or antagonists have been extensively studied for the therapy of RA (42) and OA (43). In our study, although PGRN alone did not affect the levels of catabolic markers, a combination of PGRN with TNFα inhibited the expression of catabolic markers induced by TNFα. Moreover, TNFα inhibited SIRT1 activity, whereas co-treatment with PGRN reversed these inhibitory effects induced by TNFα. These findings are similar to a previous report that TNFα cleaved the C-terminal domain of SIRT1 and generated a smaller 75-kDa fragment, resulting in reduced SIRT1 activity (44). Given that TNFα is known to activate the NF-κB signaling pathway, which in turn induces the expression of various MMPs and ADAMTSs (24), it is plausible to hypothesize that the PGRN-mediated inhibition of TNFα-induced chondrocyte catabolism is mediated by regulation of NF-κB-P65. Although PGRN did not much affect the total P65 protein level, it decreased the TNFα-induced acetylation of Lys-310 of P65, whereas transfection with SIRT1 siRNA almost abolished this inhibitory effect in PGRN-treated chondrocytes. Because the acetylation status of Lys-310 of P65 has been well established as an important regulatory mechanism of the transcriptional activity of P65 (45,46), we then determined whether PGRN affected P65 nuclear translocation by confocal microscopy. As expected, PGRN reduced the translocation of P65 into the nucleus in chondrocytes treated with TNFα, whereas this effect was impaired in SIRT1−/− cells.
Although Zhao and his colleagues thought that PGRN may mainly act as an antagonist of TNFα (10), we hereby propose that PGRN inhibits the chondrocyte catabolism induced by TNFα through activation of SIRT1, which subsequently prevents P65 nuclear translocation by deacetylating P65, thereby decreasing the expression of MMP13 and ADAMTS5.
In conclusion, our study suggested that PGRN promotes cartilage-specific gene expression and inhibits the TNFα-induced expression of cartilage-degrading enzymes through activating SIRT1-SOX9/NF-κB-P65 signaling (Fig. 6i), indicating that PGRN may be a promising therapeutic target for managing OA as well as other cartilage degenerative diseases.
Experimental procedures
OA and normal cartilage sample obtaining and processing

OA human articular cartilage tissues were obtained from six patients (mean ± S.D. age 75.2 ± 5.7 years) undergoing total knee-joint replacement surgery. Normal human cartilage tissues were harvested from six age-matched patients (mean ± S.D. age 69.8 ± 7.1 years) undergoing surgery for femoral neck fracture without a history of joint-related disease (inflammatory arthritis, OA, microcrystalline arthritis, or osteonecrosis).
Safranin-O and fast green staining
Cartilage tissues were fixed in 4% paraformaldehyde overnight and decalcified with 10% EDTA for 2 weeks, and 5-µm-thick sections from each cartilage tissue were obtained and stained with safranin-O and fast green. Briefly, sections were stained with hematoxylin for 2 min, followed by fast green staining for 3 min and safranin-O staining for 3 min.
Immunohistochemistry
To detect PGRN or EphA2 expression in articular cartilage, 5-µm-thick cartilage sections from each sample were obtained. Sections were immunostained with SPlink Detection Kits (Beijing Zhongshan Biotechnologies, SP-9001). Briefly, sections were treated with 0.25% trypsin for 15 min at 37°C for antigen retrieval, followed by 3% H2O2 for 10 min, and then were blocked using 5% goat serum for 15 min at room temperature (RT). Afterward, sections were incubated with rabbit polyclonal antibodies against PGRN at a dilution of 1:200 (Beijing Bioss Biotechnologies, bs-0823R) or rabbit monoclonal antibodies against EphA2 at a dilution of 1:200 (Cell Signaling Technology, 6997) overnight at 4°C. After three rinses with PBS, sections were incubated with rabbit secondary antibodies for 15 min at RT. Finally, sections were counterstained with hematoxylin. The percentage of designated gene-positive cells was calculated as the number of positive cells per grid divided by the total number of cells per grid, quantified by Image-Pro Plus. The grid circumscribed a portion of the cartilage analyzed through a 10× objective and generally contained an average of 50 cells. For each sample, the fraction of positive cells in three distinct grid locations was calculated and averaged.

Figure 5. Effects of recombinant PGRN on P65 acetylation and nuclear translocation. Chondrocytes were treated with or without 10 ng/ml TNFα, transfected with control siRNA or SIRT1 siRNA, and cultured in the absence or presence of 100 ng/ml PGRN. a, chondrocytes were lysed and the level of acetylation of P65 was analyzed by Western blotting. One-way ANOVA and Tukey's multiple comparison test. b, subcellular localization of P65 was analyzed by Western blotting. c and d, immunofluorescence staining and statistical analyses of P65 nuclear translocation in chondrocytes. Scale bar, 50 µm. One-way ANOVA and Tukey's multiple comparison test. e, ChIP analysis for recruitment of P65 to the ADAMTS5 promoter in the C28/I2 human chondrocyte cell line. One-way ANOVA and Tukey's multiple comparison test. Data are expressed as mean ± S.D. in each scatterplot. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
siRNA transfection
Chondrocytes were transfected with three siRNAs with distinct sequences targeting SIRT1 or PGRN, respectively. All siRNAs were designed and synthesized by GenePharma Co.
Western blotting
Whole-cell or cartilage tissue extracts were prepared with RIPA buffer. Proteins were separated on 8-15% SDS-PAGE gels; separated proteins were transferred onto PVDF membranes (Millipore) and were probed with the following primary antibodies: rabbit monoclonal antibodies against SIRT1 (Cell Signaling Technology, 9475), PGRN (Abcam, ab227816), and NF-κB-P65 (Cell Signaling Technology, 8242); rabbit polyclonal antibodies against COL2A1 (Abcam, ab34712), aggrecan (Abcam, ab3778), MMP13 (Abcam, ab39012), ADAMTS5 (Abcam, ab41037), and acetyl-NF-κB P65 (Lys-310) (Cell Signaling Technology, 3045); and mouse monoclonal antibodies against GAPDH (Santa Cruz Biotechnology, sc-365062). Finally, the blots were visualized by an ECL detection system (Millipore) with a horseradish peroxidase-conjugated secondary antibody. To expose the optimal aggrecan epitope, 10 µg of chondrocyte lysate was pretreated with 0.01 units of chondroitinase ABC (Sigma, C3667) for 2 h at 37°C. A representative blot from three or four independent experiments is presented for each protein. The relative levels of protein expression were calculated from densitometric scans with ImageJ software and were normalized to the GAPDH levels from three or four independent experiments. For each independent experiment, the relative levels of protein expression were calculated and averaged from three replicates.

Figure 6. Effects of endogenous PGRN on cartilage gene expression. Chondrocytes were transfected with control siRNA or PGRN siRNA for 48 h. a and b, the protein and mRNA levels of PGRN were measured. PGRNi-1, PGRNi-2, and PGRNi-3 indicate three independent siRNAs against PGRN. One-way ANOVA and Tukey's multiple comparison test. c, PGRN secretion was detected by ELISA in the conditioned medium of chondrocytes. One-way ANOVA and Tukey's multiple comparison test. d and e, COL2A1 expression and secretion were detected by real-time PCR and soluble collagen quantification assay. One-way ANOVA and Tukey's multiple comparison test. f and g, aggrecan expression and secretion were detected by real-time PCR and ELISA. One-way ANOVA and Tukey's multiple comparison test. h, levels of MMP13 and ADAMTS5 were analyzed by Western blotting. i, model depicting the molecular mechanism by which PGRN plays an important role in maintaining chondrocyte homeostasis. Data are expressed as mean ± S.D. in each scatterplot. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
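The relative-quantification step described for the immunoblots (target band normalized to GAPDH and averaged across independent experiments) amounts to simple ratio arithmetic; the minimal sketch below uses purely hypothetical ImageJ intensity values.

```python
import numpy as np

# Hypothetical ImageJ densitometry readings (arbitrary units) for one target
# protein and the GAPDH loading control across three independent experiments.
target_intensity = np.array([1520.0, 1385.0, 1610.0])
gapdh_intensity = np.array([2010.0, 1950.0, 2080.0])

# Relative level = target band / GAPDH band, then averaged over experiments.
relative_levels = target_intensity / gapdh_intensity
print(f"mean relative level: {relative_levels.mean():.2f} +/- {relative_levels.std(ddof=1):.2f}")
```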
RNA extraction and real time PCR
Total RNA from cultured chondrocytes or cartilage tissue was isolated with TRIzol reagent (Invitrogen, 15596-026) according to the manufacturer's instructions. The recovered RNA was further processed using the RevertAid First Strand cDNA Synthesis kit (Thermo Fisher, K1621) to produce cDNA in accordance with the manufacturer's instructions. The cDNA products were directly used for PCR or stored at -80°C for later analysis. Real-time quantitative PCR was performed in an MJ Mini Real-Time PCR Detection System using SYBR Premix Ex Taq II (Takara, RR047A). Primer sequence information is available upon request. Each experiment was performed in duplicate, and experiments were repeated three or four times independently. A dissociation curve analysis was conducted for each qPCR. Expression levels of the target gene were evaluated using a relative quantification approach (2^-ΔΔCt method) against GAPDH levels.
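As an illustration of the relative quantification above, the 2^-ΔΔCt arithmetic can be written out as follows; the Ct values are hypothetical and the snippet is only a sketch of the calculation, not the authors' analysis pipeline.

```python
def fold_change_ddct(ct_target_treated, ct_gapdh_treated, ct_target_control, ct_gapdh_control):
    """Relative expression (fold change) by the 2^-ddCt method, normalized to GAPDH."""
    dct_treated = ct_target_treated - ct_gapdh_treated   # dCt in the treated sample
    dct_control = ct_target_control - ct_gapdh_control   # dCt in the control sample
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: e.g., COL2A1 vs. GAPDH in PGRN-treated vs. untreated chondrocytes.
print(fold_change_ddct(22.1, 18.0, 24.0, 18.2))  # > 1 indicates up-regulation
```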
Immunoprecipitation
Cytoplasmic lysate (200 µg) was incubated for 2 h at 4°C with the corresponding antibodies coupled to 20 µl of packed protein A + G Sepharose beads (Santa Cruz Biotechnology, sc-2002). Immune complexes were resolved by SDS-PAGE and immunoblotted with the indicated antibodies. To analyze the level of SOX9 acetylation, chondrocyte lysates were immunoprecipitated using an anti-SOX9 antibody (Abcam, ab185230), and the immunoprecipitated proteins were then run on SDS-PAGE and immunoblotted with anti-SOX9 and anti-acetyl-lysine antibodies (Santa Cruz Biotechnology, sc-32268), respectively. The relative level of SOX9 acetylation was calculated from densitometric scans with ImageJ software and was normalized to the SOX9 levels from three or four independent experiments. For each independent experiment, the relative level of SOX9 acetylation was calculated and averaged from three replicates.
Immunofluorescence
Primary cultured chondrocytes were seeded on 6-well chamber slides and fixed in 4% paraformaldehyde for 15 min at RT. Cells were then permeabilized with 0.2% Triton X-100 in PBS. After incubation in 5% goat serum, the permeabilized chondrocytes were incubated with anti-SOX9 (1:200) or anti-NF-κB-P65 (1:200) antibodies. For secondary reactions, a species-matched Cy3-labeled secondary antibody at a dilution of 1:200 was used for 1 h at 37°C. Cell nuclei were stained with DAPI. Fluorescent images were collected on a Leica confocal microscope (SP5 II). The percentage of cells with nuclear SOX9 and P65 in slides was quantified by Image-Pro Plus. The data represent the percentage of cells from four independent experiments. For each independent experiment, the percentage of cells in five distinct grid locations was calculated and averaged.
SIRT1 deacetylase activity assay
To detect SIRT1 activity in cultured chondrocytes, SIRT1 was initially immunoprecipitated with a SIRT1 antibody (Cell Signaling Technology, 8469). Subsequently, SIRT1 deacetylase activity was measured by fluorometric SIRT1 Assay Kit (Sigma, CS1040), according to the procedure recommended by the manufacturer. Fluorescent intensity was measured at 460 nm (excitation 380 nm) and concentration was calculated using a standard curve.
Collagen quantification and analysis
Collagen quantification was performed using fluorometric Soluble Collagen Quantification Assay Kit (Sigma, CS0006) following the manufacturer's protocol. Fluorescent intensity was measured at 465 nm (excitation 375 nm) and concentration was calculated using a standard curve.
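Both fluorometric assays above convert raw fluorescence into a concentration through a standard curve; a minimal linear-fit sketch is given below, with entirely hypothetical standards.

```python
import numpy as np

# Hypothetical standard curve: collagen standards (ug/ml) vs. fluorescence (a.u.).
standard_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
standard_fluor = np.array([120.0, 480.0, 850.0, 1600.0, 3150.0])

# Fit fluorescence = slope * concentration + intercept, then invert for unknown samples.
slope, intercept = np.polyfit(standard_conc, standard_fluor, 1)
sample_fluor = np.array([950.0, 1410.0])
sample_conc = (sample_fluor - intercept) / slope
print(sample_conc)  # estimated concentrations of the unknowns
```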
ELISA
Collected conditioned media from chondrocytes were analyzed by ELISA using rat ELISA kits according to the manufacturers' instructions. PGRN- and aggrecan-specific ELISA kits were obtained from Jiangsu MEIMIAN Industrial Co., Ltd. (MM-70087R2 and MM-0772R2).
ChIP
ChIP analysis was performed using a SimpleChIP Enzymatic Chromatin IP Kit (Cell Signaling Technology, 9002) following the manufacturer's protocol. The chromatin sample was subjected to immunoprecipitation with anti-P65 or control normal rabbit IgG. Real-time PCR was performed with primer sets that amplify the region (−478/−328) containing the NF-κB motif (−424/−415) within the 5′ region of genomic human ADAMTS5 (NC_000021.9), based on previous reports (47). The primers used were as follows: forward, 5′-CAA AGG GGA AAA ACT TGC GG-3′; reverse, 5′-AAT TGG GGT TGG CTA ACC TTG-3′. Values obtained from each immunoprecipitation are expressed as a percent of the total input chromatin from three independent experiments.
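The percent-of-input values reported for the ChIP-qPCR can be computed from the Ct values of the immunoprecipitate and the diluted input; the sketch below assumes a 2% input fraction and roughly 100% amplification efficiency, neither of which is stated in the text.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.02):
    """Percent-input for ChIP-qPCR: adjust the input Ct to 100% input, then
    express the IP signal relative to it (assumes ~100% PCR efficiency)."""
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)  # 2% input -> subtract log2(50)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Hypothetical Ct values for the ADAMTS5 promoter amplicon.
print(percent_input(ct_ip=29.5, ct_input=27.0))
```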
Ethics approval
Animal care was approved by the Animal Experiment Administration Committee of the School of Medicine of Xi'an Jiaotong University. Human samples were collected after obtaining written informed consent, as approved by the Ethics Committee of Xi'an Jiaotong University, and the study was conducted in compliance with the ethical principles of the Declaration of Helsinki.
Statistics
Statistical analysis was performed with SPSS 17.0 software (SPSS Inc., Chicago, IL). All experiments were repeated three or four times independently, and data are presented as mean ± S.D. A Shapiro-Wilk normality test was performed to evaluate the Gaussian distribution of the data. All groups of data were assessed for homogeneity of variance using the Fisher test. Differences between two groups were analyzed by unpaired two-tailed Student's t tests. Differences among more than two groups were analyzed by one-way analysis of variance (ANOVA) and Tukey's multiple comparison test. p values less than 0.05 were considered statistically significant.
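A minimal sketch of that testing sequence, written in Python (SciPy/statsmodels) rather than SPSS, is shown below; the group values are placeholders, and the exact options used by the authors' SPSS workflow may differ.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements for three hypothetical treatment groups (n = 4 each).
groups = {"control": [1.00, 1.10, 0.90, 1.05],
          "TNF": [0.55, 0.60, 0.50, 0.65],
          "TNF+PGRN": [0.90, 0.85, 0.95, 0.80]}

# Shapiro-Wilk normality check per group, then one-way ANOVA across groups.
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(name, "Shapiro-Wilk p =", round(p, 3))
f_stat, p_anova = stats.f_oneway(*groups.values())
print("one-way ANOVA p =", p_anova)

# Tukey's multiple comparison test for the pairwise differences.
all_values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(all_values, labels))
```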
Data availability
All data are contained within the manuscript and supporting information.
Author contributions-D. F., R. W., W. F., H. L., and S. W. data curation; D. F., X. K., and S. W. formal analysis; D. F., X. K., Y. Z., and S. W. investigation; D. F., X. K., H. C., and K. Z. methodology; D. F. and X. K. writing-original draft; X. K. and S. W. funding acquisition; S. W. conceptualization. Conflict of interest-The authors declare that they have no conflicts of interest with the contents of this article. | 2020-08-05T13:06:28.205Z | 2020-08-03T00:00:00.000 | {
"year": 2020,
"sha1": "458939bfe674ef3abe39fd41c0aeb5706d34495b",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/article/S0021925817500198/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3732debae2872f3175fafee3c118b64cc2b389c6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
119239898 | pes2o/s2orc | v3-fos-license | Room-Temperature Superfluidity in Graphene Bilayers
Because graphene is an atomically two-dimensional gapless semiconductor with nearly identical conduction and valence bands, graphene-based bilayers are attractive candidates for high-temperature electron-hole pair condensation. We present estimates which suggest that the Kosterlitz-Thouless temperatures of these two-dimensional counterflow superfluids can approach room temperature.
Introduction-Electron-hole pair (exciton) condensates were first proposed [1,2] as possible ordered states of solids more than forty years ago but have proved difficult to realize experimentally. Progress has been made recently with the discovery [3,4] of equilibrium exciton condensation below T ∼ 1 K in the quantum Hall regime, the identification [5] of spontaneous coherence effects in cold optically excited exciton gases, and studies of dynamic condensation [6] of polaritons in non-resonantly pumped optical microcavities. In the weak-coupling limit exciton condensation is a consequence of the Cooper instability [2] of solids with occupied conduction band states and empty valence band states inside identical Fermi surfaces. Bilayer exciton condensates are counterflow superfluids with unusual electrical properties [4,7,8,9,10,11] which have so far been studied experimentally mainly in the quantum Hall regime. In this Letter we point out that superfluidity is likely to persist to remarkably high temperatures in graphene-based bilayers. Graphene is a particularly attractive candidate for room-temperature bilayer exciton condensation because it is atomically two-dimensional, because it is a gapless semiconductor, and because its two-dimensional massless Dirac band structure implies nearly perfect particle-hole symmetry and stiff phase order. We consider a system with two graphene layers embedded in a dielectric medium and gated above and below as illustrated schematically in Fig. 1. Each layer has two Dirac-cone bands centered at inequivalent points in its Brillouin zone. The top and bottom gates can be used to control the electric fields E_ext both above and below the bilayer. When the two fields are equal the bilayer is neutral, but charge is transferred from one layer to the other. The Fermi level lies in the graphene conduction band of one layer (the n-type layer) and in the valence band of the other layer (the p-type layer). The particle-hole symmetry of the Dirac equation ensures perfect nesting [12] between the electron Fermi spheres in the n-type layer and their hole counterparts in the opposite layer, thereby driving the Cooper instability. The condensed state establishes spontaneous long-range coherence between the two graphene layers. Our main interest here is in providing an estimate of the maximum possible Kosterlitz-Thouless (KT) temperature T_KT of these two-dimensional counterflow superfluids [4]. We use a two-band model [13] in which the occupied valence band of the n-type layer and the empty conduction band of the p-type layer are neglected. Our T_KT estimate is constructed from mean-field (Hartree-Fock) theory calculations [14] of the temperature-dependent phase stiffness of the ordered state.
Our main result is the normal-to-superfluid phase boundary depicted in Fig. 2. The KT temperature is plotted as a function of the separation between the layers d and the electric field E_ext outside the bilayer. We estimate that superfluidity can survive at room temperature under favorable experimental conditions. The nonmonotonic dependence of T_KT on d at fixed E_ext follows from a competition between the increasing carrier density and the decreasing strength of interlayer electron-hole interactions with increasing d. At small d the phase stiffness is limited by the carrier concentration, which increases with d. At large d, the KT temperature is limited by the same fermion-entropy effects which are responsible for the Bardeen-Cooper-Schrieffer (BCS) critical temperature of weak-coupling superconductors.
Two-band mean-field theory-In the band eigenstate representation our band-Hamiltonian is
v is the band quasiparticle velocity, τ is a Pauli matrix vector which acts on the which-layer pseudospin, and V_g = e E_ext d is the gate-induced potential difference between the two layers.
Spontaneous interlayer coherence is induced by interlayer Coulomb interactions. In the mean-field description the interlayer interaction reorganizes the low-energy fermionic degrees of freedom into quasiparticles which are phase-coherent linear combinations of the single-layer states. The mean-field theory Hamiltonian can be written in a pseudospin effective-field form, Eq. (1) [15]. Because of the model's particle-hole symmetry, ∆^0 vanishes. The pseudospin effective field ∆_k in Eq. (1) solves the self-consistent equation (2), in which A is the area of a graphene layer, ∆^⊥_k = (∆^x_k, ∆^y_k), n is a unit vector parallel to ∆_k, g = 4 accounts for the spin and valley degeneracy, and f_d(x) = tanh(x/2T) is the difference between the occupation numbers of the negative-energy and positive-energy quasiparticles. The Coulomb matrix element V_{k,p} of the intralayer interactions in the eigenstate basis involves the dielectric constant ε characterizing the embedding media and the angle φ_k = tan^−1(k_y/k_x). The corresponding matrix element of the interlayer interaction is V_{k,p} exp(−|k − p|d). All energies are measured relative to the Dirac-point chemical potential of the balanced bilayer [16]. Note that each spin and valley pairs independently and that electron-hole condensation is indifferent to spin-valley space rotation in either layer.
The interaction strength in a graphene monolayer is usually characterized by the dimensionless effective fine-structure constant, α = e²/(εℏv). This constant naturally appears in Eq. (2). Interestingly, the self-consistent equations (2) admit solutions with non-zero chirality J of the gap function ∆^⊥: ∆^⊥_k = ∆^⊥_k (cos(Jφ_k), sin(Jφ_k)). However, the critical temperature of the zero-chirality ground state is higher than the corresponding T_c of the states with non-zero chirality, so the chiral solutions are unlikely to be physically relevant. We focus on the J = 0 solutions hereafter.
In the normal state, there is no interlayer coherence so ∆^⊥ vanishes. The intralayer Hartree-Fock potential then follows from the self-consistent solution for ∆_z. The main effects of electron-electron interactions in this case are to increase the bare quasiparticle velocity [17] and to screen the external bias voltage. Screening reduces the amount of charge transfer and therefore reduces the normal-state Fermi momentum. As illustrated in Fig. 3, the energy bands change qualitatively in the condensed state because interlayer interactions induce coherence between the two layers and open an energy gap. Linearized gap equation-The mean-field theory phase boundary between the normal phase and the superfluid phase is obtained by solving the linearized gap equation obtained by linearizing Eq. (2) with respect to ∆^⊥. The kernel M of the linearized gap equation is obtained by solving the self-consistent equation for ∆_z in the normal phase. The normal phase is stable provided that all the eigenvalues of M are smaller than one. By numerically evaluating M for various interlayer distances and external fields we find the mean-field phase diagram T_c^MF(d, E_ext) (not shown). Phase stiffness-In two-dimensional superfluids the critical temperature is often substantially overestimated by mean-field theory and is ultimately limited by entropically driven vortex and antivortex proliferation at the KT temperature. We estimate T_KT by using mean-field theory to calculate the phase stiffness (superfluid density) ρ_s(T). In parabolic band systems, this procedure yields reasonable estimates of T_KT in both BCS and BEC limits. The phase stiffness is most easily calculated by evaluating the counterflow current j_Q = (e/ℏ) ρ_s Q at small exciton momentum Q. Put formally, we evaluate the expectation value of the counterflow current operator with the density matrix defined by the mean-field Hamiltonian, in which ∆^⊥_kQ is the finite-momentum pairing potential. Placing Q along the x direction we find that ∆^0_k → (1/2)Qv cos φ_k, which yields an explicit expression for the Dirac-model counterflow current j^D_Q. This expression for j^D_Q has an ultraviolet divergence and fails to vanish in the normal state (∆^⊥ → 0). Both properties are pathologies of the Dirac model. When the two Fermi circles are shifted in opposite directions at finite Q they are asymmetric with respect to the momentum-space origin. As a consequence an ultraviolet cutoff at some momentum magnitude yields a finite counterflow current. This current would vanish if the same calculation were performed using a microscopic model with integrations over the full Brillouin zone. As long as ∆^⊥ is small compared to graphene's π-band width, a condition that is very easily satisfied, the anomalous ultraviolet contribution to ρ_s(T) is identical in the normal and in the condensed states. It follows that the physical counterflow current is obtained from the Dirac-model counterflow current j^D by subtracting its normal-state value. Following this prescription, we conclude that the last term in Eq. (9) can be neglected. Note that the zero-temperature phase stiffness is purely a normal-state property, just as in BCS theory. Indeed an identical result is obtained in the BCS theory of a parabolic band system when ρ_s is expressed in terms of the Fermi energy. An alternative approach for estimating ρ_s(T), which also accounts for the intralayer interactions, is to evaluate the density matrix in Eq. (7) using the self-consistent mean-field equations with finite pairing momentum.
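The Kosterlitz-Thouless criterion referred to above as Eq. (6) is not reproduced in this text; its standard universal-jump form, together with a zero-temperature stiffness of E_F/4π (inferred here from the bound T_KT ≤ E_F/8 quoted in the next paragraph, with temperature expressed in energy units), would read:

```latex
% Assumed form of Eq. (6) and of the zero-temperature stiffness (both inferred, not quoted)
T_{KT} = \frac{\pi}{2}\,\rho_s\!\left(T_{KT}\right), \qquad
\rho_s(T) \le \rho_s(0) = \frac{E_F}{4\pi}
\;\Longrightarrow\;
T_{KT} \le \frac{E_F}{8}.
```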
As explained above, the physical counterflow current is obtained by subtracting j^D_Q(∆^⊥ = 0) from j^D_Q. The KT temperatures which follow from this procedure and Eq. (6) are depicted in Fig. 2. Since ρ_s(T) is a decreasing function of d it follows from Eqs. (6) and (11) that T_KT ≤ E_F/8. In our calculations we find that this inequality approaches an equality when k_F d is small. Consequently, the increase in T_KT with d at small d in Fig. 2 simply follows the increase in E_F ∼ e E_ext d/2. Discussion-The high transition temperatures we predict deserve comment. They are larger than those of typical superconductors because condensation is driven by Coulomb interactions over the full band width, rather than by phonon-mediated interactions between quasiparticles in a narrow shell around the Fermi surface. In this sense exciton condensation is more akin to ferromagnetism, which is also driven by Coulomb interactions and can survive to very high temperatures. The temperatures at which exciton condensation can be achieved in graphene bilayers are immensely higher than those which might be possible in semiconductor bilayers because more carriers can be induced by external electric fields when the semiconductor has no gap, because the Fermi energy increases more rapidly with carrier density for Dirac bands than for parabolic bands, and because graphene layers are atomically thin - eliminating the layer thickness effects which substantially weaken Coulomb interaction in semiconductor quantum well bilayers. The numerical estimates reported in Fig. 2 were obtained using a coupling constant appropriate for a SiO2 dielectric.
The optimal dielectric for high exciton condensation temperatures should have a high dielectric breakdown field and a low dielectric constant, suggesting that a suitable wide-gap material is likely the optimal choice.
Screening and other beyond-mean-field induced-interaction effects are difficult to describe. In the case of weakly interacting atomic gases induced-interaction effects can [18] either increase or decrease T_c, depending on the number of fermion flavors g. For the present Coulomb interaction case, a static Thomas-Fermi screening approximation with normal-state screening wavevectors reduces interaction strengths very substantially when spin and valley degeneracies (g = 4) are included. Mean-field-theory critical temperatures are reduced by a factor of ∼e^g at small d in this approximation and by a larger factor at large d. On the other hand, when the screening wavevectors are evaluated in the condensed state there is little influence on T_KT at small k_F d, both because the large gap weakens screening and because T_KT is proportional to the Fermi energy and not to the interaction strength in this limit. All this leads us to suspect that at low temperatures there is a first-order phase transition as a function of layer separation d between condensed and electron-hole plasma states, similar to the transitions studied experimentally [19] in quantum Hall exciton condensates and theoretically [20] in parabolic band bilayers.
Because of spin and valley degrees of freedom, the exciton pairing we describe in this work is SU(4) symmetric; crudely speaking the system has four identical superfluids simultaneously. We therefore anticipate interesting consequences of slightly unequal electron and hole densities, similar to anticipated effects associated with the spin degree-of-freedom in normal exciton condensates [21,22]. Because of this sensitivity, front and back gates which can control the electric fields on opposite sides of the bilayer independently are highly desirable in experimental searches for graphene bilayer exciton condensation.
Our finding that T_KT ∼ 0.1 E_F in the limit of strong interactions between conduction band electrons and valence band holes is partially supported by experimental studies [23] of fermionic cold atoms in the strong-interaction unitary limit. It implies that T_KT should approach room temperature when E_F is larger than ∼0.3 eV (n larger than ∼10^13 cm^−2) and d is smaller than ∼2 nm. Experimental detection of spontaneous coherence through one of its characteristic transport anomalies [4] will be necessary to construct a quantitatively reliable phase diagram.
"year": 2008,
"sha1": "730830e6b9bbd08b4db156533a56a305ebf4221b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0802.3462",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "730830e6b9bbd08b4db156533a56a305ebf4221b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
213635064 | pes2o/s2orc | v3-fos-license | Yield, maturation cycle, and estimates of genetic parameters of Robusta coffee genotypes under irrigation in the Cerrado
Eighty-five Coffea canephora genotypes were evaluated for three crop seasons under irrigation in the Cerrado of the Distrito Federal, Brazil, to characterize the population regarding phenotype, yield and maturation cycle. The experiment was conducted through the Basic Model of Repeatability without Design methodology. Regarding the cycles, genotypes were divided into four groups with the following periods, in days, from resumption of irrigation to the cherry stage: very early (243-255), early (256-267), medium (268-280), and semilate (281-293). For yield, the repeatability obtained was approximately 33%, a medium value that represents the proportion of the permanent phenotypic variance in relation to the total phenotypic variance. The average repeatability obtained surpassed 59%, and accuracy, with three crop seasons, reached 77%. The selection gains obtained may be higher than 38%. There are promising genotypes for cultivation in the region under study. The mean repeatability and the accuracy obtained favor selection based on phenotypes.
INTRODUCTION
The Coffea canephora species, though high yielding, is less planted than Coffea arabica in Brazil, and is practically restricted to the states of Espírito Santo and Rondônia. C. canephora is better adapted to the soil and climatic conditions of tropical regions of low altitudes and high air temperatures. Thus, it is not commonly grown in the Cerrado (tropical savanna) of Central Brazil, where it should be established through selection of genotypes adapted to a climate with especially low nighttime temperatures in the winter and to the cultivation system of the region, specifically to mechanized harvesting.
When cultivated at high altitudes, C. arabica usually performs better than C. canephora, and the opposite is perceived in warm lands at lower altitudes (Rodrigues et al. 2016). Partelli et al. (2011) quote several authors reporting that low air temperatures limit the geographic distribution of coffee, especially when monthly averages are below 15-16 °C; photosynthesis is already affected below 18 °C, and severe cold (chilling and daytime temperatures remaining around 15 °C and nighttime temperatures below 10 °C) has serious consequences on yield. However, the authors also clarify that the plants have a certain capacity for acclimatization to cold.
The Cerrado has been very conducive to coffee growing when technologies such as irrigation and mechanized farming practices, especially mechanized harvesting, are used (Fernandes et al. 2012). The same authors state that a significant limitation for C. canephora in the Cerrado is associated with low temperatures in high-altitude areas. However, due to considerable genetic diversity within this species (Fonseca et al. 2006), different mechanisms of tolerance to low temperatures are used (Partelli et al. 2011). From this, it can be inferred that selection of materials adapted to cultivation in the central region of Brazil is possible.
Because coffee is a biennial crop, coffee plants have vegetative growth and maturation of reproductive buds for the next year's harvest throughout the reproductive cycle. The planting of cultivars with a late or very late cycle is not recommended in regions with milder air temperatures, for harvest may coincide with the beginning of flowering for the next harvest, which can damage flowers (or buds), as well as risk frost on green fruit. Although frost rarely occurs in Central Brazil, the use of mechanized harvesting, widely diffused in this region, may result in yield loss for the next harvest, due to damage caused to flowers or floral buds in late maturity cultivars, especially when water stress technology is not adopted to promote uniform flowering. Thus, study and differentiation of the maturation cycle of the genotypes under assessment is of paramount importance. Furthermore, according to the same author, the highest yielding cultivars are those of medium or late cycle. Therefore, identification of high yielding materials with early cycles would be of great value, especially to reduce the risk of rainfall during harvest.
Repeatability is a genetic parameter that expresses the proportion of the total variance that is explained by the variation contributed by the genotypes and by the permanent alterations attributed to the common environment acting on the genotypes. Through the study of repeatability, the number of measurements needed to predict the real value of a genotype can be estimated. Repeatability provides the maximum value that heritability in the broad sense can achieve, expressing the proportion of the phenotypic variance that is attributed to genetic differences confounded with the permanent effects acting on the genotype (Cruz et al. 2012). Thus, through repeatability, one can compare distinct individuals by means of data collected over time, which is quite useful when considering a highly heterozygous, heterogamous species like coffee, without the need to reproduce clones of a population of many distinct genotypes. This allows evaluation of a large number of materials in a reduced space (Della Bruna et al. 2012). Besides, since it is related to the minimum number of measurements necessary for accurate selection of superior genotypes, repeatability and maximum broad-sense heritability allow quantifying the maintenance of clone superiority over time (Rocha et al. 2015).
Perennial plant species, such as coffee, have peculiar biological aspects, for instance, a long reproductive cycle, accentuated annual oscillation of production (resulting in a biennial cycle), overlapping of generations, trait expression over several years, and differences in days to maturity and yield longevity. These characteristics lead to some consequences, such as use of the selected genotypes for several years, use of evaluations repeated in each individual over time, and reduction in the survival rate of individual plants over the time of experiment. This reduction tends to generate unbalanced data for use in estimation of genetic parameters and in prediction of additive and genotypic values. Due to these agronomic peculiarities, coffee breeding is difficult, and the use of special methods to estimate genetic parameters and to predict genetic values is recommended (Oliveira et al. 2011). Thus, the standard analytical procedure that has been recommended for studies in quantitative genetics and also for selection in perennial plants is the mixed model methodology. This approach allows accurate and unbiased prediction of the genetic values even under imbalance, and it also facilitates simultaneous use of information on the individual and the family and of measures repeated over time, providing more accurate estimates of the components of genetic variation and of individual genetic values (Pereira et al. 2013).
The objective of this study was to use repeatability to evaluate the yield and the maturation cycle of C. canephora genotypes irrigated in the Cerrado and determine the genetic and phenotypic parameters of the population and the different maturation cycles of these genotypes in order to select high-yielding genotypes for the region.
In previous years, the area had been planted to annual crops. The trial was established in April 2009, with a spacing of 3.5 m between rows and 1.0 m between plants. The 85 C. canephora genotypes studied were obtained from natural breeding in an isolated experimental field of the cultivar Robusta Tropical (EMCAPER 8151) from the Capixaba Research and Rural Extension Company (Empresa Capixaba de Pesquisa e Extensão Rural) -EMCAPER. A center pivot sprinkler irrigation method was adopted. The irrigation management criterion was based on monitoring the climate, and the periodicity of irrigation was every five days, according to the Cerrado Irrigation Monitoring Program (Rocha et al. 2008). To synchronize flowering, irrigation was suspended on July 1 and resumed when at least 80% of the flower buds reached the E4 stage, with a final date for return to irrigation on September 4 of each year. In the years in which the experiment was performed, the return to irrigation always occurred on the final date. The practice of subjecting plants to water deficit breaks flower bud dormancy and thus concentrates flowering and fruit maturation in each plant.
In the experimental field, there were about 3,500 genotypes planted without replication. From the harvest data of the 2011/12, 2012/13, and 2013/14 crop years, 85 materials were selected with a minimum production of seven liters per plant in each of the three years and production variation lower than 25%. According to Silva et al. (2018), lower biennial variation is as important as yield potential for the selection of clones with higher yield stability that maintain their superiority over time.
These materials were used in statistical analysis, through repeatability, to obtain the genetic parameter values, phenotypic values, and determination of yield. Preliminary observations, especially those performed by Carneiro et al. (2013), allowed this pre-selection and non-selection of late maturation materials, because of the risk of a later harvest operation compromising production in the following year by causing damage to flowers and flower buds. The standard deviation and the coefficient of variation of the yield of each material during the three harvests were calculated.
The maturation cycles were determined through weekly observation of the genotypes, following the scale developed by Pezzopane et al. (2003), in which a change in level was considered only when at least 80% of the fruit was at the same maturation stage. That way, considering the cherry stage as ideal for harvest, the time between the resumption of irrigation and the cherry stage, in days, was counted, ensuring that each material reached the cherry stage. The difference in days to reach the cherry stage between the earliest material (243 days) and the latest (293 days) was 50 days. Thus, three intervals were made in the mean values (13, 25, and 38 days, approximately), forming four maturation groups, which were denominated very early (243-255 days), early (256-267), medium (268-280), and semi-late (281-293). The basic model of repeatability without design is used when repeated data are taken from individual plants with no defined experimental design. The statistical model is y = Xm + Wp + e where y is the data vector, m is the vector of the effects of measurement (assumed as fixed) added to the overall average, p is the vector of the permanent effects of plants (genotypic effects plus environment effects considered to be permanent, assumed to be random), and e is the error or residue vector (random). Capital letters represent the incidence matrices for the referred effects.
The following values were then obtained: coefficient of repeatability, which, in this case, is considered equal to heritability; permanent phenotypic variance among plants (genotypic variance plus permanent environmental variance from one crop season to the next); temporary environmental variance (environmental variance of each crop season); individual phenotypic variance; average repeatability of m crops or repeated measures; accuracy of selection based on the average of m harvests or repeated measurements; and the gain from selection for yield in liters per plant. Table 1 shows the yield of the genotypes in each of the years evaluated, their averages, coefficients of variation of yield, cycle in days from return to irrigation to the cherry stage, and cycle classification.
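The average repeatability over m measurements and the corresponding selective accuracy listed above follow from the individual repeatability through the usual expressions rm = m·r / [1 + (m − 1)·r] and accuracy = √rm; the sketch below simply reproduces these formulas with the rounded value r ≈ 0.33 quoted in the abstract, as an illustration of the arithmetic rather than of the mixed-model software actually used.

```python
def average_repeatability(r, m):
    """Repeatability of the mean of m repeated measurements (crop seasons)."""
    return m * r / (1.0 + (m - 1.0) * r)

def selective_accuracy(r, m):
    """Accuracy of selection based on the mean of m measurements."""
    return average_repeatability(r, m) ** 0.5

r = 0.33  # approximate individual repeatability of yield reported in this study
for m in (1, 3, 4, 5):
    print(m, round(average_repeatability(r, m), 3), round(selective_accuracy(r, m), 3))
```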
The highest yields in each of the harvests were obtained by genotypes 82 (20 L), 40 (20 L), and 74 (19 L) in the 2012, 2013, and 2014 harvests, respectively. The overall average yield was 10.79 L per harvest, and the average number of days to reach the cherry stage was 268.6. The average standard deviation of yield was 1.82%, while the coefficient of variation of the yield values over the three years was 16.83%. The genotypes with the highest yield averages over the three harvests were 82, with an average of 17.67 L per harvest, and 40, with an average of 16.0 L. The main difference between the two was that genotype 82 had a more stable yield, with variation of less than 12% over the 3 years. This factor, considering the desirability of materials with low bienniality, can be of great value. Furthermore, genotype 82 was classified as early, with a cycle of 263 days from return to irrigation to the cherry stage, whereas genotype 40 took 279 days. From the aspect of yield stability, some genotypes stood out through very low variation (less than 10%), or even had no variation. The results obtained from the repeatability analysis in each of the situations tested are shown in Table 2.
The permanent phenotypic variance among plants, the variance regardless of temporary environmental effects, i.e., the effects of the environment in each crop season, represented 30% of the total phenotypic variation. This factor shows the lower effect of temporary variations on the response of the materials, or greater adaptation to local conditions. Individual repeatability, which in this case can be considered equivalent to heritability, reached a value over 0.38 (Table 2). According to Resende (2009), repeatability values between 0.30 and 0.60 are considered medium. However, considering the low number of repetitions (only 3) and that coffee is a perennial species, these values of repeatability will certainly be more consistent with a higher number of repetitions, or as the cycles of selection and evaluation of genotypes progress.
In addition, when considering the average repeatability (rm), the value rises to 55.5%, a value, according to the same author, very close to the values considered high for this parameter. Ramalho et al. (2016), working with robusta coffee in the state of Rondônia, obtained coefficients of repeatability of 0.43 in an analysis of four harvests, while they obtained 0.427 for heritability in the broad sense. The proximity of these two values is noteworthy, which corroborates the idea that the two parameters may be considered equivalent. Another factor to be considered is that yield is highly influenced by environment.
Repeatability can be interpreted as the maximum value of heritability in the broad sense, since it expresses genotypic variance added to the permanent effects of environment. In this sense, the repeatability values obtained in this study are close to those perceived by Ramalho et al. (2016) working with robusta coffee in the state of Rondônia. In addition, Mistro et al. (2008) observed values for the repeatability coefficient between 0.26 and 0.63, similar to those presented here. It should be noted, however, that these authors used six and nine repetitions, respectively, always with two plants per plot, whereas here, only one repetition was used, with one plant per plot, and harvests for three consecutive years.
The values of accuracy in relation to the number of measurements are shown in Table 3. Fonseca et al. (2004) cited that four successive harvests would be sufficient for selection of robusta coffee genotypes for yield with an accuracy of 80%, a value also considered quite satisfactory by Resende (2009), for whom selective accuracy between 0.7 and 0.9 is considered high. Pereira et al. (2013) obtained estimated accuracy of 0.84 in three harvests in an experiment with four repetitions in two years. In the experiment described here, the accuracy obtained in three harvests was approximately 75%, and estimated at 79% with four harvests, very close to the values obtained by the authors cited. Selective accuracy depends on the heritability and repeatability of the trait, the quantity and quality of information, and the procedures used to predict genetic values. Selective accuracy is a measure associated with accuracy in selection, that is, it refers to the correlation between predicted genetic values and true genetic values of individuals, and the greater the accuracy in an individual assessment, the greater the confidence in the assessment and in the predicted genetic value of the individual. Accuracy is also the main element of genetic progress that the breeder can change in order to maximize genetic gain, and refers to the correlation between the true genotypic value of the genetic material and the estimated or predicted value (Lopes et al. 2018). Working with the repeatability coefficient in peach, Della Bruna et al. (2012) obtained a variation of the coefficient of repeatability between 0.22 and 0.29 for the fruit yield per plant trait, with estimates of accuracy (Acm) between 60.0 and 75.0 %. The repeatability (0.30) and the accuracy (75%) obtained in this experiment are therefore within the values observed for other fruit plants. Pedro Junior et al. (2013), working with lemon tree cultivation, obtained coefficients of individual repeatability (r) around 0.1, reaching 0.35 in the average of 5 harvests. The accuracy obtained by these authors was 58% in one harvest and 84% in five harvests, values near those shown here, 54.2% and 82.1% in one and five harvests, respectively. It is noted, therefore, that an increase in the number of measurements allows an increase in the accuracy of selection. However, these increases become smaller and less significant each year, and it can be inferred that from the fifth harvest on, increases in accuracy would not justify the necessary input of resources.
The variability of the population allows expressive gains from selection of superior genotypes, up to approximately 40% of the mean of the population under study. Ramalho et al. (2016) obtained estimates of selection gains of 43.8% in the first year, working with robusta coffee in the state of Rondônia. Therefore, the results presented here are also close to those found by those authors.
The prediction of permanent phenotypic value is extremely relevant for it allows selection of individuals that will be cultivated in an environment similar to the one under evaluation, and thereby the additive, dominance, and permanent effects of environment are capitalized. In addition, it is possible to select genotypes for vegetative propagation outside the environment in which they were evaluated, and thus the additive and dominance effects are capitalized (Della Bruna et al. 2012). The phenotypic values of the individuals, as well as the selection gain inherent to each of them, are shown in Table 4.
Thus, there is genetic variability within the population studied in relation to yield and maturation cycle. Promising genotypes were found for production under irrigation within the population of Coffea canephora studied in the Cerrado region of Brazil. Materials have been identified for utilization in crosses to maximize heterotic effects and gene complementarity within a breeding program for robusta coffee under irrigation management.
"year": 2019,
"sha1": "b37e591a4f238c6ecc867129af44de12d7b8f6fd",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/cbab/v19n4/1984-7033-cbab-19-04-387.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fcd3d9a2ed226afcfc0ecb5f083f7891df4158b7",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
15455336 | pes2o/s2orc | v3-fos-license | Human Papillomavirus-16 Infection in Advanced Oral Cavity Cancer Patients Is Related to an Increased Risk of Distant Metastases and Poor Survival
Background Human papillomavirus (HPV) is an oncogenic virus causing oropharyngeal cancers and resulting in a favorable outcome after the treatment. The role of HPV in oral cavity squamous cell carcinoma (OSCC) remains ambiguous. Objective This study aimed to examine the effect of HPV infection on disease control among patients with OSCC following radical surgery with radiation-based adjuvant therapy. Patients and Method We prospectively followed 173 patients with advanced OSCC (96% were stage III/IV) who had undergone radical surgery and adjuvant therapy between 2004 and 2006. They were followed between surgery and death or up to 60 months. Surgical specimens were examined using a PCR-based HPV blot test. The primary endpoints were the risk of relapse and the time to relapse; the secondary endpoints were disease-free survival, disease-specific survival, and overall survival. Results The prevalence of HPV-positive OSCC was 22%; HPV-16 (9%) and HPV-18 (7%) were the genotypes most commonly encountered. Solitary HPV-16 infection was a poor predictor of 5-year distant metastases (hazard ratio, 3.4; 95% confidence interval, 1.4–8.0; P = 0.005), disease-free survival (P = 0.037), disease-specific survival (P = 0.006), and overall survival (P = 0.010), whereas HPV-18 infection had no impact on 5-year outcomes. The rate of 5-year distant metastases was significantly higher in the HPV-16 or level IV/V metastasis group compared with both the extracapsular spread or tumor depth ≥11-mm group and patients without risk factors (P<0.001). Conclusions HPV infections in advanced OSCC patients are not uncommon and clinically relevant. Compared with HPV-16-negative advanced OSCC patients, those with a single HPV-16 infection are at higher risk of distant metastases and poor survival despite undergoing radiation-based adjuvant therapy and require a more aggressive adjuvant treatment and a more thorough follow-up.
Introduction
Human papillomavirus (HPV) is a well-known oncogenic virus often observed in patients who have had a favorable outcome after the treatment of oropharyngeal cancers [1][2][3][4][5][6][7][8][9][10][11]. The causative mechanism is unclear but may be partially related to the radiosensitivity of the primary tumor or the less aggressive nature of tumors that are small at presentation (i.e., primary tumor [T]: T1-T2) [1,8,9,11]. HPV-positive tumors usually coincide with more regional lymph node (N) metastases (i.e., N2-N3) [4,[7][8][9]11], though some patients with HPV-positive tumors show fewer nodal metastases [6]. Analyses of failure patterns are important for the post-treatment surveillance of early disease recurrence because the only chance for survival in patients with recurrent tumors is the early detection of lesions that can serve as the targets of salvage therapy. The majority of previous studies have focused on the correlations between HPV infection and various measures of survival; few studies have addressed the failure patterns at local, regional, and distant sites [8,11].
However, the role of HPV in oral cavity squamous cell carcinoma (OSCC) remains ambiguous because of the relatively small number of recorded OSCC patients in comparison with the larger population of oropharyngeal cancer [3,6,9,10,[12][13][14][15][16]. In southern Asia, OSCC is an endemic cancer with an etiology that is distinct from that seen in the United States and Europe. Generally, OSCC patients with resectable tumors, but without distant metastasis, undergo radical surgery as the primary treatment in southern Asia. In the case of advanced OSCC (T4 lesion, lymph node metastasis, margin status of ≤4 mm, or extracapsular spread [ECS]), postoperative radiotherapy (RT) or concomitant chemoradiation therapy (CCRT) is used for adjuvant therapy [17,18].
Several questions about the role of HPV in advanced OSCC patients who require adjuvant therapy following radical surgery in southern Asia remain unanswered. For example, what is the incidence of HPV infections among OSCC patients? Are the clinical and biological behaviors of HPV in OSCC the same as those in oropharyngeal SCC? Are different treatment strategies and follow-up protocols appropriate in HPV-positive OSCC patients? Does HPV infection affect the outcomes of postoperative adjuvant therapy? To answer these questions, we studied a large cohort of patients with previously untreated OSCC who underwent radical surgery with or without adjuvant therapy; in particular, we focused on the impact of HPV infections on the outcomes of radiation-based adjuvant therapy for advanced OSCC. Accordingly, this study aimed to test the hypothesis that HPV infections among advanced OSCC patients are associated with a decreased risk of disease relapse, including local recurrence, neck recurrence, and distant metastasis, and therefore improve the rates of survival, including disease-free survival (DFS), diseasespecific survival (DSS), and overall survival (OS).
Patients
The Institutional Review Board at Chang Gung Memorial Hospital approved this study, which complied with the Declaration of Helsinki. All participants provided written informed consent. The inclusion criteria were as follows: a histological diagnosis of OSCC, the presence of a previously untreated tumor scheduled for radical surgery with neck dissection (ND), the absence of other suspected distant metastatic lesions detected by imaging, and a willingness to undergo imaging-guided biopsy or exploratory surgery if necessary. The exclusion criteria included a refusal or inability to undergo radical surgery.
Between 2004 and 2006, 333 patients were prospectively included in this study. All patients consented to and participated in the long-term outcome survey program of the Head and Neck Oncology Group at the Chang Gung Memorial Hospital. All participants underwent an extensive presurgical evaluation that included a medical history and a complete physical examination, flexible fiberoptic pharyngoscopy, a complete blood count, routine blood biochemistry, CT or MRI scans of the head and neck, chest radiographs, bone scans, and liver ultrasonography. Cancer staging was performed according to the 2002 American Joint Committee on Cancer 6th edition staging criteria [19].
All patients underwent radical excision of the primary tumor with ≥1 cm gross safety margins (both peripheral and deep margins). Classic radical or modified NDs (level I-V) were performed in the patients with clinically positive lymph node disease. Supra-omohyoid NDs (level I-III) were performed in clinically node-negative patients. Most of the uncomplicated patients underwent surgery alone except for those who unexpectedly had close margins ≤4 mm and/or positive lymph nodes as identified by pathological examinations. In this study, the subjects who underwent adjuvant therapy were considered as advanced OSCC patients. The indications for postoperative RT (60-66 Gy) included pathological T4 tumor, a positive lymph node, or a close margin ≤4 mm. ECS or multiple lymph node metastases were the reasons for the administration of CCRT with 50 mg/m² cisplatin biweekly plus 800 mg daily oral tegafur and 60 mg leucovorin, or 30 mg/m² weekly cisplatin [17,18].
Clinicopathologic Characteristics
Patient data were extracted from medical records and classified according to our previously identified risk factors for OSCC, which were described in detail elsewhere [20]. The clinical and pathologic characteristics of interest included sex, age of disease onset, alcohol drinking, betel quid chewing, cigarette smoking, tumor subsite, differentiation, pathological T-status, pathological N-status, pathological stage, ECS, level IV/V metastases, treatment mode, and patient status at the last follow-up. Tumor subsite was determined by direct oral inspection and confirmed by pathological examination. Local recurrence was defined as a positive biopsy in the area of the primary tumor after a radical surgery as determined by a negative post-treatment screen. A neck recurrence was defined as a positive cytology/biopsy in the cervical lymphatic region after primary surgery. An incident distant metastasis was identified through biopsy or by imaging, as verified by our tumor board.
HPV Detection
Excised tumor samples were collected during radical surgery. DNA was extracted from paraffin-embedded tumor samples using a Lab Turbo 48 automatic nucleic acid extraction system and a Lab Turbo Virus Mini Kit LVN500 (Taigen, Taipei, Taiwan). Finally, 50 µL of DNA solution was eluted, and 1 µL was used as the PCR template. HPV infection was diagnosed in subjects using PCR on the HPV L1 gene. HPV DNA was amplified with MY11/biotinylated GP6+ primers, which targeted the L1 region and produced a 192-bp DNA fragment. The PCR reaction volume was 25 µL, which included a 2-µL aliquot of purified DNA. In the positive cases, the HPV L1 gene was genotyped using an HPV Blot kit ( ). HPV type-specific probes were immobilized on a nylon membrane, which was used for reverse blot hybridization to detect HPV DNA in a single assay. The HPV types were determined using a visual assessment protocol provided by the manufacturer [21][22][23].
Study Endpoints
The primary endpoint was time to disease relapse including local recurrence, neck recurrence, and distant metastasis. The secondary endpoints were DFS, DSS, and OS. DFS was calculated as the date of primary surgery to the date of disease relapse. DSS was calculated as the date of primary surgery to the date of death caused by a disease recurrence, and OS was defined as the time period between primary surgery and death caused by any reason.
Statistical Analysis
Follow-up visits continued until December 2011. All patients received follow-up examinations for at least 60 months after surgery or until death. The procedure used for selecting the optimal cutoff values for clinicopathological factors has been previously described (20). Five-year local control, neck control, distant metastasis, DFS, DSS, and OS rates were computed using the Kaplan-Meier method (log-rank test). Univariate and multivariate analyses were used to identify independent predictors of 5-year outcomes. Independent prognostic factors were identified using multivariate Cox regression analysis with a forward selection procedure. Statistical analyses were performed using the SPSS software (version 17.0; SPSS Inc., Chicago, IL, USA). A two-sided P value <0.05 was considered statistically significant.
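A minimal sketch of the survival analysis described above, using the Python lifelines package instead of SPSS, is given below; the data frame, column names, and follow-up values are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months to event or censoring, event indicator, HPV-16 status.
df = pd.DataFrame({
    "months": [60, 54, 60, 48, 36, 24, 18, 12, 60, 30, 42, 8],
    "event":  [0,  1,  0,  0,  1,  1,  1,  1,  0,  1,  0,  1],
    "hpv16":  [0,  0,  0,  1,  1,  1,  0,  1,  0,  1,  0,  1],
})

# Kaplan-Meier estimates by HPV-16 status and a log-rank comparison of the two curves.
kmf = KaplanMeierFitter()
for status, subset in df.groupby("hpv16"):
    kmf.fit(subset["months"], subset["event"], label=f"HPV-16 = {status}")
result = logrank_test(df.loc[df.hpv16 == 1, "months"], df.loc[df.hpv16 == 0, "months"],
                      event_observed_A=df.loc[df.hpv16 == 1, "event"],
                      event_observed_B=df.loc[df.hpv16 == 0, "event"])
print("log-rank p =", result.p_value)

# Cox proportional-hazards regression; additional covariates would enter as extra columns.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```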
The median duration of follow-up for the advanced OSCC patients was 58 months (mean, 47 months; range, 2-95 months). At the time of the analysis, 81 of the 173 patients (47%) were alive, and 92 (53%) were dead (59 due to the primary cancer, 20 due to other cancers, and 13 due to non-cancer causes). Twenty-five patients (15%) developed local recurrences, 31 (18%) had neck recurrences, and 35 (20%) experienced distant metastases. A total of 47 patients (27%) exhibited local and/or neck recurrence, salvage therapy was performed in 29 individuals (62%) and 21 (72%) dead at the time of the analysis.
We calculated the 5-year DFS, DSS, and OS rates among the OSCC patients according to HPV status. Compared with the HPV-negative patients, those with HPV-positive tumors had similar rates of DFS (P = 0.212; Fig. 1A) and DSS (P = 0.210; Fig. 1B), but had a less favorable OS (P = 0.041; Fig. 1C) regardless of the treatment modality. When the entire cohort was further analyzed according to the need for treatment, the impact of HPV on 5-year DFS and DSS was not significant in either the uncomplicated or the advanced group (all P > 0.05; Fig. 1D & Fig. 1E). Among the uncomplicated and advanced OSCC patients, the HPV-positive cases seemed to have a shorter time to death than those without detectable HPV, although these differences were not statistically significant (P = 0.075 & 0.112, respectively; Fig. 1F).
To shed more light on the influence of the HPV genotype on the study endpoints, we further classified the study participants according to different HPV statuses (i.e., solitary HPV-16 infection, solitary HPV-18 infection, HPV-16 and/or HPV-18 infection, high-risk HPV infection, and HPV infection). The 5-year outcomes were calculated in both the advanced and the uncomplicated OSCC groups. There were no significant differences in time-to-recurrence or time-to-death among the HPV subgroups in the uncomplicated OSCC group (all P > 0.05; data not shown). Table 1 shows that solitary HPV-16 infection was associated with a significantly higher rate of distant metastases (56% vs. 19%, P = 0.007), lower DSS (37% vs. 68%, P = 0.025), and lower OS (25% vs. 53%, P = 0.028) in advanced OSCC patients. HPV-16 and/or HPV-18 infection was significantly related to an increased rate of distant metastases (43% vs. 18%, P = 0.031). In contrast, solitary HPV-18 infection, high-risk HPV infection, and HPV infection (Fig. 1) did not show a statistically significant association with 5-year control and survival rates.
For further comparison, we divided the HPV infections into 3 subgroups according to genotype frequency (i.e., HPV-negative [n = 135], solitary HPV-16 infection [n = 16], and solitary HPV-18 infection [n = 12]). Among the advanced OSCC patients, single HPV-16 infection was unrelated to local control or neck recurrence. Solitary HPV-16 infection was associated with a higher rate of distant metastases than the HPV-negative status (56% vs. 20%, P = 0.009; Fig. 2A). Despite postoperative adjuvant therapy, HPV-16 infection seemed to have a negative, although not statistically significant, impact on DFS compared with HPV-negative status (38% vs. 62%, P = 0.062; Fig. 2B). Moreover, advanced OSCC patients with HPV-16 infection had significantly lower DSS and OS rates than HPV-negative subjects (DSS: 37% vs. 67%, P = 0.025; OS: 25% vs. 53%, P = 0.026; Fig. 2C & Fig. 2D). Nevertheless, the differences in time-to-relapse and time-to-death between HPV-18 and HPV-negative cases were not statistically significant (Fig. 2). Table 2 demonstrates the clinical/pathological characteristics associated with lack of HPV, HPV-16, and HPV-18 infections among the advanced OSCC patients. The patients who were HPV-positive (with either HPV-16 or HPV-18) had a higher rate of poor differentiation than those who were HPV-negative (P = 0.013). The HPV-18 group seemed to have a slightly lower rate of pT3-4 than both the HPV-negative and HPV-16 groups (P = 0.069). The HPV-16 group seemed to have a higher rate of distant metastases than the HPV-negative and HPV-18 groups (P = 0.064). The remaining endpoints (i.e., local recurrence, neck recurrence, relapse, secondary primary tumors, death, and cause of death) did not differ significantly among the study groups (all P > 0.05).
Combining HPV-16 with Traditional Prognostic Factors in Advanced OSCC Patients
In the univariate analysis, compared with the HPV-16-negative patients, the advanced OSCC patients with solitary HPV-16 infections had a significantly higher rate of distant metastases (56% vs. 19%, P = 0.007) and markedly lower DSS and OS rates (DSS: 37% vs. 68%, P = 0.025; OS: 25% vs. 53%, P = 0.028). The local control and neck control status were similar in both groups (local control: 87% vs. 84%, P = 0.774; neck control: 77% vs. 82%, P = 0.726). The multivariate analyses of important risk factors, including solitary HPV-16 infection, pN2, level IV/V metastases, ECS, tumor depth ≥11 mm, and lymphatic invasion, are shown in Table 3. Table 4 presents the demographic, clinical, pathological, and therapeutic characteristics of the 16 advanced OSCC patients who were infected with HPV-16. Eight of these patients underwent CCRT due to ECS (100%) and had significantly higher risks of distant metastases than those that underwent RT (75% vs. 13%, P = 0.041). Moreover, 75% of the 16 patients were dead at the end of the study; in decreasing order of frequency, the causes of death were the disease itself (56%), other cancers (6%), and other reasons (13%). Accordingly, the major impact of HPV-16 on 5-year survival was due to the failure of treatment at distant sites resulting in death; DSS in patients with distant metastases was significantly lower than that in patients without distant metastasis (22% vs. 100%, P = 0.003). Of note, none of the 16 patients had level IV/V metastases (Tables 1 and 3).
Discussion
The null hypothesis of the present study was rejected; that is, HPV infections did not decrease the risk of disease relapse and were not associated with better survival among advanced OSCC patients undergoing radical surgery and radiation-based adjuvant therapy (Fig. 1 & Fig. 2). By contrast, advanced OSCC patients with a solitary HPV-16 infection were 3 times more likely to develop distant metastases and were 2-3 times more likely to die earlier (in terms of DFS, DSS, and OS) compared with HPV-negative patients (Table 3). Among the OSCC patients, including uncomplicated and advanced cases, the prevalence of HPV infection was 21.3%. The HPV-positive patients had similar rates of DFS and DSS, but a lower OS rate, compared with the HPV-negative patients (Fig. 1). Radical surgery seemed to be sufficiently effective for uncomplicated OSCC regardless of the HPV status, whereas radiation-based adjuvant therapy was unsatisfactory in treating HPV-16-positive advanced OSCC. In contrast to HPV-18 infection, HPV-16 infection had a negative impact on distant metastases, DFS, DSS, and OS despite immediate postoperative adjuvant therapy (Fig. 2). Our findings are surprisingly different from the recent HPV-related outcome surveys in the field of head and neck cancers that have shown positive clinical impacts of HPV on DSS or OS [1-10, 13, 14, 24, 25]. Moreover, only a few studies have assessed failure patterns and time to relapse [1, 8, 9], especially with regard to cancers of the oral cavity [9]. This lack of attention to OSCC might be because some studies had relatively low rates of detecting HPV (<10%), small sample sizes, or included patients with cancers in different subsites (i.e., both the oral cavity and oropharynx) [3, 4, 10, 12-15, 24, 26-28]. For this reason, different treatment modalities were used (surgery vs. RT/CCRT), ultimately leading to different outcomes. Other possible explanations are that the enrolled patients came from different regions and were exposed to different carcinogens (e.g., betel quid chewing), different cultural norms (e.g., habitual oral sex behavior), and different genetic backgrounds (e.g., HLA typing) [29]. The evidence regarding the clinical impact of HPV in OSCC patients remains inconclusive [6, 12, 13, 28, 30]. HPV-seropositive heavy smokers or heavy drinkers are at a significantly higher risk of having OSCC than HPV-seronegative heavy smokers or drinkers [30]. Accordingly, it is reasonable that most of the HPV-positive OSCC patients smoke cigarettes and/or drink alcohol (Table 2). Based on Maxwell's findings, cigarette smoking remarkably increases the risk of local recurrence and distant metastases among HPV-positive oropharyngeal cancer patients [31]. Smoking may induce genetic mutations, facilitate the integration of HPV DNA into the host genome, and cause somatic gene errors. The sophisticated relationships among these oncogenic agents, including tobacco smoking and HPV, and tumor control are particularly difficult to clarify when most of the advanced OSCC patients within Taiwan have been exposed to tobacco and/or alcohol regardless of the HPV status. However, HPV-16 infection is an important, independent predictor of worse outcome among advanced OSCC patients, even those who underwent extensive operations and received adjuvant therapy.
To our knowledge, longstanding betel quid chewing can damage the HPV-infected epithelium of the oral cavity and can potentially lead to a significant accumulation of chemicals, which may also influence the carcinogenic effect of HPV and probably results in clinical and biological differences between HPV-positive OSCC and HPV-positive oropharyngeal cancer. Therefore, additional studies are needed to examine the significance of HPV infection in the presence or absence of betel nut chewing; this knowledge may help to elucidate the genetic alterations and the molecular pathways that may underlie the observed survival differences.
HPV-18 infection is uncommon (<10% of all HPV infections) in oropharyngeal cancer [11] but is frequently found (32%, n = 12) in advanced OSCC. However, there was no difference in relapse or survival between patients with and without solitary HPV-18 infections. In this context, we further focused our study on HPV-16. Previous studies have reported that HPV-positive oropharyngeal cancers are associated with poorly differentiated histology, T1-T2 disease, N2-N3, and radiosensitivity [1-11]. By contrast, we observed that HPV-16-positive advanced OSCC cases have a similar status in terms of T-staging/N-staging and a remarkably higher incidence of distant metastases within two years of radical surgery (56%, 7/16) compared with HPV-negative cases. Even after adjuvant therapy, the advanced OSCC patients with solitary HPV-16 infection still had a relatively higher risk of early distant metastases (Fig. 2A). We believe that the possible survival benefit of HPV-16 might be diminished by oral habits or reduced by surgery; however, early diagnosis and adequate radical surgery are still the most important measures in the control of OSCC tumors.
In addition to solitary HPV-16 infection, we further showed that level IV/V metastases, ECS, and tumor depth ≥11 mm are independent risk factors for 5-year distant metastases in advanced OSCC. Four subgroups of distant metastases were thus created (Fig. 3A). We previously demonstrated that OSCC patients with ECS have a higher potential for distant metastases than other groups of patients [32]. In this study, distant metastases were found in 6 of the 7 OSCC patients with level IV/V metastases. Among the HPV-16-positive advanced OSCC patients, 6 of the 8 cases with ECS developed distant metastases (Table 3 and Fig. 3B). Accordingly, more intensive and specific treatments should be administered in OSCC patients with level IV/V metastases or HPV-16 with ECS, such as taxane-based chemotherapy regimens as an adjuvant strategy immediately following radical surgery; transitions to alternative, palliative treatments; biotherapy; and anti-angiogenesis strategies utilized during the postoperative recovery period [33,34].
Several caveats of this study merit comment. First, a potential limitation of our report is the use of specific PCR assays for detecting the HPV L1 gene. Because PCR amplification of HPV DNA is a very sensitive technique, we ruled out the possibility of laboratory artifacts and the presence of environmental virions by performing all amplifications in duplicate and with the use of two different PCR assays. We acknowledge that reverse transcriptase-PCR assays for E6 and E7 transcripts may be more reliable for the detection of oncogenic HPV infections. Moreover, the lack of available tumor marker data (e.g., p16, p53, and epidermal growth factor receptor) does not allow us to draw any conclusion on the activity of viral oncogenes. To further characterize the possible mechanisms by which HPV-16 infections could be related to the risk of distant metastases and death, the measurement of E6 and E7 expression and HPV-associated biomarkers will be required in future studies.
In conclusion, HPV infection does not represent a favorable prognosticator in OSCC patients who receive radical surgery, regardless of radiation-based adjuvant therapy. Notably, advanced OSCC patients with solitary HPV-16 infection require priority adjuvant treatment and follow-up due to an increased risk of early distant metastases and death. In particular, patients with level IV/V metastases or HPV-16 infection with ECS require a more intensive therapeutic protocol. Our findings suggest that different types of HPV infections present distinct clinical and biological challenges among advanced OSCC patients, and at least one type, HPV-16, is associated with unfavorable outcomes in individuals who have received conventional adjuvant treatment. The significance of HPV-16 infection should be further studied in future translational and clinical research. | 2016-05-04T20:20:58.661Z | 2012-07-12T00:00:00.000 | {
"year": 2012,
"sha1": "41440ac48f31caf8d5e374672595360169a60987",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0040767&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "41440ac48f31caf8d5e374672595360169a60987",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14277314 | pes2o/s2orc | v3-fos-license | Wood Anatomy and Relationships of Montinia
Quantitative and qualitative data are presented on wood anatomy of root, basal stem, and upper stem of Montinia caryophyllacea. Anatomy of twig wood of Kaliphora madagascariensis is compared with these results, as is the data of Ramamonjiarisoa (1980) on wood of the genus Grevea. Features common to the three genera in wood as well as other portions of the plant are reviewed; these are consistent with the interpretation that Montinia and Kaliphora are very close to each other, despite previous positions of the two genera in Saxifragaceae and Cornaceae, respectively. Grevea, although more distant from Montinia than is Kaliphora, can be included in a trigeneric family, Montiniaceae. The ordinal position of Montinia is reviewed in terms of Rosales (Saxifragales) and Myrtales. Resemblances between Montiniaceae and rosalean families are fewer than resemblances between Montiniaceae and myrtalean families. Montiniaceae have (in part) storied wood structure, a feature not found in Rosales or Myrtales sensu stricto. However, storied structure does occur in Thymeleaceae, which have sometimes been placed in or near Myrtales. Both Montiniaceae and Thymeleaceae might be regarded as outliers of Myrtales, closer to that order than to others. The presence of iridoid compounds in Montinia complicates the task of placement.
INTRODUCTION
The genus Montinia consists of a single species, M. caryophyllacea Thunb., native to southern Africa and Angola (Mendes 1978). Montinia has most frequently been included within Cunoniales, Hydrangeales, Rosales, or Saxifragales (concepts much alike, differing mostly in the ordinal name employed) under the families Saxifragaceae (Engler 1928; Thorne 1983), Grossulariaceae (Cronquist 1968, 1981), or Escalloniaceae (Hutchinson 1973). Dahlgren's (1980) placement of Montinia as a separate family, Montiniaceae, under Cornales is much like these concepts, because Dahlgren's Cornales contains Columelliaceae, Escalloniaceae, and Hydrangeaceae. A departure from this concept was originated by Milne Redhead (1955), who first segregated Montiniaceae but claimed the family should be located in Myrtales between Onagraceae and Oliniaceae. In a study of African and Malagasy Saxifragaceae (sensu lato), Ramamonjiarisoa (1980) came to a similar conclusion. The present study is concerned with reviewing the ordinal placement of Montinia on the basis of excellent material of wood collected by the author in 1973.
If Montiniaceae are recognized as a segregate family, the problem of its generic composition is also a matter worthy of review on the basis of wood anatomy. Grevea, which consists of three species from Madagascar and West Africa (Letouzey 1977), was included by Milne Redhead (1955) in Montiniaceae, although it had been assigned to Saxifragaceae by Baillon (1884) and had been similarly treated by subsequent authors. Hutchinson's (1973) inclusion of Grevea in Escalloniaceae is typical; Takhtajan (1987) includes Grevea as the only genus of Montiniaceae other than Montinia, in accord with the Milne Redhead treatment.
Kaliphora, a monotypic genus from Madagascar, was described by Hooker (1867) under Cornaceae. This treatment was followed by such authors as Cronquist (1968) and Hutchinson (1973). Capuron (1969a) stressed similarity between Grevea and Kaliphora with respect to embryological and seed features. Nevertheless, Takhtajan (1987) placed Montinia and Grevea in Montiniaceae, but relegated Kaliphora to a neighboring family, Melanophyllaceae, under the suborder Escalloniinae of Hydrangeales. Ramamonjiarisoa (1980) contributed evidence that Grevea, Kaliphora, and Montinia should be grouped as Montiniaceae. For example, all three genera have tripartite unilacunar nodes (Ramamonjiarisoa 1980). The three genera are also alike by being dioecious and epigynous and by having ascending ovules and capsular fruits (Capuron 1969a, 1969b). Tetramery characterizes Kaliphora and Montinia, whereas Grevea is trimerous. The three genera lack tannins. Stems of Grevea have pith bundles, both phloic and collateral, whereas the other two genera lack them (Ramamonjiarisoa 1980). Vessels of Kaliphora and Montinia are reported to have vestured pits, whereas those of Grevea are claimed to have vestured walls (Ramamonjiarisoa 1980). Details of pollen structure can be cited as suggestive of affinity between the three genera (Erdtman 1952; Hideux 1972; Hideux and Ferguson 1976). Pollen structure suggests that grouping the three genera as Montiniaceae is defensible; the data also suggest that the case for inclusion of Montiniaceae in a rosalean order is weaker than had been thought earlier. The ordinal position of Montiniaceae is therefore subjected to a review in the present paper.
This material on Montiniaceae is offered at this time so that data may be used in a forthcoming volume of the second edition of Metcalfe and Chalk's "Anatomy of the Dicotyledons."
MATERIALS AND METHODS
Wood samples were available in dried form. Portions were boiled, stored in 50% aqueous ethyl alcohol, and sectioned on a sliding microtome. Sections were stained with safranin and counterstained with haematoxylin or fast green. Some sections were examined with an ISI WB-6 scanning electron microscope to demonstrate the nature of crystals and the presence of vestured pits in vessels. Macerations were prepared with Jeffrey's fluid and stained with safranin.
Cellular dimensions in the descriptions below are based on 25 measurements each, except for wall thickness of vessels and of libriform fibers, and diameter of libriform fibers; for these three features, typical expressions were selected for measurement. Vessel diameter is the diameter of the lumen at the widest point. Number of vessels per mm² is based on vessel number, not number of vessel groups per unit area. Vessels per group is calculated on the basis that a solitary vessel = 1.0, a pair of vessels in contact = 2.0, etc. Terminology for wood features follows the recommendations of the IAWA Committee on Nomenclature (1964).
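A small sketch of the arithmetic behind these two counting conventions follows; the group sizes and transection area below are invented purely for illustration and are not measurements from this study.

```python
# Minimal sketch of the quantitative conventions described above.
# "Vessels per group": a solitary vessel counts as a group of 1, a pair in
# contact as a group of 2, etc.; the mean is total vessels / number of groups.
group_sizes = [1, 1, 3, 2, 4, 1, 2]   # hypothetical vessels in each observed group
vessels_per_group = sum(group_sizes) / len(group_sizes)

# "Vessels per mm^2": individual vessels (not groups) divided by the area surveyed.
n_vessels = sum(group_sizes)
area_mm2 = 0.16                       # hypothetical transection area in mm^2
vessels_per_mm2 = n_vessels / area_mm2

print(f"mean vessels per group: {vessels_per_group:.2f}")
print(f"vessels per mm^2: {vessels_per_mm2:.0f}")
```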
Specimens documenting the woods studied are located in the herbarium of the Rancho Santa Ana Botanic Garden. The wood samples of Montinia were taken from a mature plant, in which the basal stem was 14 mm in diameter.
ANATOMICAL RESULTS
Descriptions rather than tabular summaries are offered here to achieve an economical presentation. The wood of Montinia caryophyllacea is described completely for the root; wood data for basal stem and upper stem wood may be assumed to be the same except for the features (mostly quantitative) given under these latter headings. The wood of Kaliphora madagascariensis Hook. was taken from an herbarium specimen.
MONTINIA CARYOPHYLLACEA, root (near juncture with stem), Carlquist 4536 (Fig. 1, 2, 6).-Growth rings present, weakly demarcated (semi-ring porous) by means of vessel diameter; vessels may be wider a little after the beginning of the growth ring, as indicated in Fig. 1. Mean number of vessels per group, 3.48. Vessels grouped mostly in the form of radial multiples (Fig. 1). Mean vessel diameter, 42 µm. Mean number of vessels per mm², 87. Mean vessel element length, 262 µm. Mean vessel wall thickness, 3.5 µm. Perforation plates simple. Lateral wall pitting of vessels composed of alternate circular pits, both on vessel-vessel and on vessel-parenchyma contacts; pits about 4 µm in diameter. Vesturing present inconspicuously in pits of vessels. Imperforate tracheary elements all libriform fibers (careful examination of well-stained material reveals only simple pits). Mean libriform fiber diameter at widest point, 23 µm. Mean libriform fiber length, 493 µm. Mean libriform fiber wall thickness, 3.7 µm. Libriform fibers nonseptate. Tracheids (presumably vasicentric tracheids) effectively absent, only a single tracheid seen in macerations. Axial parenchyma vasicentric scanty, forming incomplete sheaths near vessels; often only a single parenchyma cell adjacent to a vessel. Axial parenchyma in strands of two cells. Rays both multiseriate and uniseriate, the former more abundant (Fig. 2). Ray cells larger than those of stem (compare Fig. 2, 3). Mean multiseriate ray height, 440 µm. Mean width of multiseriate rays at widest point, 2.7 cells. Mean uniseriate ray height, 120 µm. Ray cells as seen in radial section mostly square; upright and procumbent cells about equal in number, but fewer than square cells. Ray cell wall thickness about 2 µm. Pits among ray cells simple or bordered, the latter often on tangentially oriented walls. Starch abundantly present in ray cells (Fig. 6) and libriform fibers. Small rhomboidal or prismatic crystals (Fig. 6, arrows) present in ray cells, often more than one per cell. Storied structure evident in wider libriform fibers (a mild form of fiber dimorphism is present), vessel elements, and axial parenchyma.
MONTINIA CARYOPHYLLACEA, stem.-Mean vessel diameter, 26 µm. Mean number of vessels per mm², 83. Mean vessel element length, 269 µm. Mean vessel wall thickness, 4.1 µm. Vesturing in lateral wall pits of vessels composed of a small number of warts (Fig. 7). Mean libriform fiber diameter at widest point, 29 µm. Mean libriform fiber length, 520 µm. Mean libriform fiber wall thickness, 4.8 µm. Mean multiseriate ray height, 511 µm. Mean width of multiseriate rays at widest point, 2.4 cells. Mean uniseriate ray height, 172 µm. Ray cell size (Fig. 3) smaller than in roots (Fig. 2). Ray cells as seen in radial section about equally commonly upright, square, or procumbent. Crystals common in ray cells, often one per cell (Fig. 3). Crystals range from large and rectangular, about half the cell diameter in length, to smaller or more slender, more than four times as long as wide (Fig. 4, 5). Many crystals obviously dual, with the two halves forming mirror images of each other (Fig. 5). Storied structure more evident in wider libriform fibers (Fig. 3, lower left) than in narrower ones (remainder of Fig. 3).
KALIPHORA MADAGASCARIENSIS, upper stem, Fosberg 52604 (Fig. 8, 9).-Growth rings not evident on sample studied, although more than one year's secondary xylem present. Mean number of vessels per group, 1.55. Vessels solitary or in radial multiples (Fig. 8). Mean vessel diameter, 37 µm. Mean number of vessels per mm², 133. Mean vessel element length, 310 µm. Mean vessel wall thickness, 2.5 µm. Perforation plates simple. Lateral wall pitting of vessels composed of alternate circular pits about 2.5 µm in diameter, both on vessel-vessel and vessel-parenchyma interfaces. Grooves interconnecting pit apertures inconspicuously present. Vesturing present in vessel pits. Thin-walled tyloses present in a few vessels. Imperforate tracheary elements all libriform fibers, pits apparently simple; splits extending from pit apertures in fiber walls. Mean libriform fiber diameter at widest point, 22 µm. Mean libriform fiber length, 486 µm. Mean libriform fiber wall thickness, 3.0 µm. Axial parenchyma scarce, vasicentric, in strands of two cells. Rays both multiseriate and uniseriate (Fig. 9), about equal in frequency. Mean multiseriate ray height, 443 µm. Mean width of multiseriate rays at widest point, 2.1 cells. Mean uniseriate ray height, 196 µm. Ray cells as seen in radial section mostly square or upright; procumbent cells present in central portions of multiseriate rays. Starch present in ray cells. Crystals absent in rays. Storying vaguely apparent in some areas of libriform fibers.
Examination of radial sections which extend to pith in all four of the above categories demonstrates that primary xylem tracheary elements, as far as can be determined, all have simple perforation plates.
SYSTEMATIC CONCLUSIONS
For systematic conclusions to be valid, the roster of features presented must be accurate. There are a few disagreements between the features reported by Ramamonjiarisoa (1980) and those in the account above. Despite having excellent preparations with sensitive haematoxylin staining of primary walls, I was unable to find bordered pits in imperforate tracheary elements, which must therefore be termed libriform fibers rather than fiber-tracheids as reported by Ramamonjiarisoa (1980). Ramamonjiarisoa also reports tracheids in Montinia and Kaliphora. Although I observed a single tracheid (presumably vasicentric) in a maceration of Montinia wood, tracheids are so scarce as to be effectively absent in my material. In sections of various woods, one can occasionally obtain the appearance of tracheid presence (and tracheids have been reported in the literature accordingly) where a libriform fiber faces a vessel, and thus the lateral wall pitting of the vessel is seen within the fibriform outlines of the libriform fiber, but that is an interface between two different cell types rather than a tracheid. Ramamonjiarisoa's report of tracheid presence in Montinia and Kaliphora may be valid, but if so, tracheids are so infrequent as to be negligible, as they are in a scattering of other dicotyledons. Ramamonjiarisoa (1980) mentions occasional septate fibers in Montinia, although not in Kaliphora. I was unable to find septate fibers in either genus; Ramamonjiarisoa's claim of septate fibers in Montinia may refer to axial parenchyma (which she did not find in Montinia), which is subdivided into strands of two cells in that genus, causing a resemblance to septate fibers. Ramamonjiarisoa (1980) does not mention storying in wood of Montinia or Kaliphora; my material clearly showed it, at least in some places within the secondary xylem. Storying may be expected to increase with age, so that my relatively large stems of Montinia may have been more favorable for demonstration of storying. In all other respects, our accounts on wood of the two genera agree.
Because wood of Montinia resembles that of Kaliphora and Grevea, one can construct a description of features common to all three genera, with exceptions as noted in parentheses. Such a description follows. Wood diffuse porous to semi-ring porous. Vessels solitary or grouped into radial multiples. Perforation plates simple, even in primary xylem. Lateral wall pitting of vessels composed of alternate circular bordered pits. Pits vestured (Kaliphora, Montinia) or vessel walls vestured (Grevea). Imperforate tracheary elements are libriform fibers, with simple pits commoner on radial than on tangential walls. Vasicentric tracheids present in very small numbers or absent. Axial parenchyma vasicentric scanty, not common, composed of strands of two cells. Multiseriate rays as common as uniseriate rays, more common, or (some species of Grevea) uniseriate rays absent. Procumbent cells present to various degrees; rays Heterogeneous Type IIB of Kribs (1935) in Kaliphora and Montinia or Homogeneous Type I or II (Grevea). Multiseriate rays chiefly biseriate, but wider in some species of Grevea. Rhomboidal crystals, some with mirror-image duality, present in ray cells of Montinia (not reported in Grevea or Kaliphora). Wood storied (Montinia) or vaguely so (Kaliphora), or storying apparently absent (Grevea). Intraxylary phloem absent, but pith bundles, either phloic or collateral, present in Grevea.
The above summation shows that Kaliphora and Montinia are closely related. Certain embryological data also connect Kaliphora to Montinia (Capuron 1969a), and the two genera should be grouped as Montiniaceae despite Takhtajan's (1987) inclusion of Kaliphora in a neighboring family, Melanophyllaceae. Wood data place Grevea further from Montinia than Kaliphora is from Montinia, but Grevea is includable in Montiniaceae on the basis of wood data, as suggested by Takhtajan (1987). The three genera are linked by having tripartite unilacunar nodes (Ramamonjiarisoa 1980) and, as mentioned earlier, epigyny, dioecy, and lack of tannins. The three genera can also be said to share pollen features of phyletic significance, such as sphaeroidal grain shape, isopolar nature, and presence of three colpae provided with transverse furrows with "nexinous thickenings" (Erdtman 1952;Hideux and Ferguson 1976). Recognition of Montiniaceae as a trigeneric family seems a defensible taxonomic concept on the basis of present information. In this regard, one notes that phytogeography is suggestive, since the three genera are limited to Africa and Madagascar.
With respect to ordinal position of Montiniaceae, the view of various authors that it should be placed near Saxifragaceae can be critically examined in terms of anatomy and other features. Epigyny, characteristic of Montiniaceae, occurs in such rosalean genera as Ribes. Dioecy, another feature of Montiniaceae, is much rarer in the order. However, relative frequency of features is not as important as the constellation of features that are likely indicators of relationship. Wood of Rosales shows many primitive characteristics, but simple perforation plates occur in a number of genera-very likely an evolutionary adaptation to seasonal availability of water. However, even in those genera of Rosales with simple perforation plates in secondary xylem, scalariform perforation plates may be found in primary xylem, because primary xylem tends to be a refuge for primitive features (Bierhorst and Zamora 1965). Montiniaceae, on the contrary, have simple perforation plates in primary xylem. Vestured pits have not been reported in vessels of Rosales (note: Fabaceae is considered here as sapindalean, in agreement with recent views). The photograph of wartiness in vessels of Ixerba of the Escalloniaceae (Meylan and Butterfield 1978) may represent merely a chance deposition of a secondary plant product, for those authors characterize vessel walls as smooth in Ixerba, and also figure that condition. Imperforate tracheary elements of Rosales are mostly tracheids; fiber-tracheids occur in a few genera, whereas Montiniaceae have libriform fibers (Metcalfe and Chalk 1950; terminology in accordance with IAWA Committee on Nomenclature 1964). Axial parenchyma of Rosales is mostly apotracheal (diffuse); scanty vasicentric axial parenchyma, the type found in Montiniaceae, occurs in only a few genera of Hydrangeaceae, such as Philadelphus (Metcalfe and Chalk 1950). The strands of axial parenchyma are two cells long in Montiniaceae, but mostly longer in Rosales. Rays of Rosales are mostly taller than those of Montiniaceae, and conform to more primitive types. Storied wood structure, reported here in Montiniaceae, is absent in Rosales unless Greyiaceae are regarded as rosalean. Intraxylary phloem has not been reported in rosalean genera (Metcalfe and Chalk 1950). However, iridoid compounds have been reported in Montinia (Dahlgren, Jensen, and Nielsen 1977). Iridoid compounds are characteristic of some cornalean (rosalean, saxifragalean) families such as Alseuosmiaceae, Escalloniaceae, and Hydrangeaceae (Jensen, Nielsen, and Dahlgren 1975), but have not been reported in Myrtales.
The idea of Milne Redhead (1955) that Montiniaceae are closer to Onagraceae than to rosalean families is worthy of consideration in view of the lack of congruence between Montiniaceae and rosalean families with respect to wood anatomy. Montiniaceae should be compared not merely with Onagraceae, but with other Myrtales as well. Vliet and Baas (1984), who offer a valuable summary of wood of Myrtales, unfortunately did not consider Montiniaceae in their survey, nor did Dahlgren and Thorne (1984) take Montiniaceae into account in their attempt to circumscribe the order Myrtales. Tetramery and epigyny, characteristic of Kaliphora and Montinia, typify Onagraceae and some other myrtalean families (Dahlgren and Thorne 1984). Perforation plates in secondary xylem of Myrtales are predominantly simple (Vliet and Baas 1984), as they are in Montiniaceae. Of greater significance, however, is that myrtalean families such as Combretaceae, Lythraceae, and Melastomataceae have simple perforation plates in primary xylem as well (Bierhorst and Zamora 1965:701), as in Montiniaceae; such specialized xylem characterizes only a small number of dicotyledon families. Vestured pits, a noteworthy feature of Kaliphora and Montinia, are cited as one of the ordinal features of Myrtales (Vliet and Baas 1984), but they occur in only a scattering of other dicotyledonous orders. Although tracheids occur in some myrtalean families, such as Penaeaceae (Carlquist and DeBuhr 1977), libriform fibers occur in others, such as Onagraceae (Carlquist 1975). Mirror-image double crystals, a peculiar feature of Montinia, have been reported in Onagraceae and Melastomataceae but very few other families (Carlquist 1988). Storied wood structure, present in Montiniaceae, has not been reported in the families of Myrtales in the narrow sense utilized by Dahlgren and Thorne (1984), but a special case discussed below is worthy of consideration. Intraxylary phloem is absent in Kaliphora and Montinia, but pith bundles are present in Grevea. Pith bundles of this sort have been reported in numerous genera of Melastomataceae (see Metcalfe and Chalk 1983:209).
The fact that storied wood structure is absent in Myrtales (Vliet and Baas 1984) but present in Montiniaceae is worthy of further consideration because storied wood structure has been reported from only a limited number of orders of dicotyledons, such as Malvales and Sapindales. However, storied wood structure does occur in two families that have been referred to Myrtales by some authors, Elaeagnaceae and Thymeleaceae (Metcalfe and Chalk 1950, 1983). Thymeleaceae are regarded as myrtalean by Cronquist (1984), although Dahlgren and Thorne (1984) demur. Vliet and Baas (1984) do find xylary grounds for regarding Thymeleaceae as myrtalean: Thymeleaceae have vestured pits and intraxylary phloem, two features regarded as diagnostic of Myrtales. Perhaps Thymeleaceae should be regarded as an outlier of Myrtales, closer to Myrtales than to other orders; Vliet and Baas (1984) do not find evidence from wood anatomy to support the idea offered by Dahlgren and Thorne (1984) that Thymeleaceae is closer to Euphorbiales than to Myrtales. If Thymeleaceae is an outlier of Myrtales, perhaps Montiniaceae is another such outlier, which may not fit within the tight grouping of the families in the narrow sense of Myrtales, but which may be includable in a Myrtales defined more loosely. This concept is the one to which Milne Redhead (1955) and Ramamonjiarisoa (1980) subscribe. The presence of iridoid compounds (absent in Myrtales and in Thymeleaceae) in Montinia, however, is troubling. The chromosome number of Montinia, n = 34 (Goldblatt 1976), does not appear helpful: it is an unusual number not found (or if so, very rarely) in the families and orders most prominently mentioned as possibly containing Montiniaceae. One might expect that even if one can define a particular order well, there may be families that are peripheral to such orders, and one is tempted to exclude them so that the order can be defined with more numerous features and thus circumscribed more clearly. However, ultimately such families must be placed. Such families as Barbeyaceae, Daphniphyllaceae, and Sabiaceae come to mind: no matter what orders, respectively, ultimately receive these families, the definitions of those orders will have to be widened (unless one opts for monofamilial orders, in which case the problem is merely transferred to a superordinal level). | 2017-02-17T08:44:35.884Z | 1989-01-01T00:00:00.000 | {
"year": 1989,
"sha1": "00e740502e5738c9ffce80582c375c5a530914f9",
"oa_license": "CCBY",
"oa_url": "http://scholarship.claremont.edu/cgi/viewcontent.cgi?article=1446&context=aliso",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "00e740502e5738c9ffce80582c375c5a530914f9",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
233861805 | pes2o/s2orc | v3-fos-license | Fire Safety External Audit Procedure
The paper examines the fire safety audit, as well as the algorithm, main functions, and aims of its performance. The existing procedure for fire risk evaluation is investigated. A methodology was developed for the external audit of safety systems in public mass-attendance facilities. The paper also covers new types of outsourced fire safety services aimed at ensuring fire safety as a fundamental aspect of guaranteed safe operation, the economic stability of a company, and confidence in the future.
Introduction
The fire safety status of buildings and structures is defined by the relevant norms approved by the laws of the Russian Federation. A facility is considered safe when all fire safety requirements are complied with and the risk of fire occurrence and flame propagation is reduced to a minimum. The article addresses the absence of a developed procedure for the expert examination of fire safety conditions at public facilities under protection.
The novelty of the results lies in the development of an algorithm for external fire safety audits performed by specialized companies, grounded in the available literature on the subject. An indicative range of questions to be covered while rendering such services is proposed.
General terms and designation
A specific procedure - the independent evaluation of fire risk values, or fire safety audit - is conducted to establish the compliance of a capital construction facility with fire safety requirements and universally accepted legislative documents. A fire safety audit includes a set of actions which ensure the facility's level of protection against ignitions [3].
The main target of an independent evaluation of fire risk is to estimate fire risk and develop actions aimed at increasing the level of protection of human health, material assets, and territories from fire and its consequences.
This procedure plays a key role in facility operation because the set of actions taken within its framework covers a comprehensive range of essential processes:
1. The fire safety analysis of a facility, established by norms, which includes:
• examination of the facility according to the established norms;
• evaluation of the space-planning and structural solutions of the building;
• performance evaluation of the fire safety systems of the building;
• collection of data regarding possible locations of fire occurrence, including categories of rooms according to their fire and explosion hazards;
• evaluation of organizational and technical arrangements to ensure the fire safety of the facility.
2. Definition of design values of fire risk and their comparison with the allowed regulated values (see the sketch below).
3. Control of the correspondence of the technical condition of a building with the requirements of regulations and standards.
4. Elaboration of measures to ensure fire safety while using equipment and carrying out technological processes.
Most facilities subject to repair or rearrangement require a fire safety audit for further safe operation as well as minimization of the expenses needed for its assurance. Aside from such buildings and structures, carrying out this procedure is efficient for:
• facilities with high attendance;
• facilities in the critical infrastructure category;
• facilities neighboring industrial facilities of categories A, Б, В according to fire and explosion hazards.
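For item 2 of the list above, a minimal sketch of how a calculated fire-risk value might be compared with a regulated limit is given below. The threshold shown (10^-6 per person per year) is an assumption based on common Russian practice and is not a figure taken from this paper; the room names and risk values are likewise hypothetical.

```python
# Minimal sketch: comparing design fire-risk values with a regulated limit.
# The limit used here (1e-6 per year) is an assumed normative value.
ALLOWED_INDIVIDUAL_RISK = 1e-6   # per person per year (assumption)

def fire_risk_ok(calculated_risk: float,
                 allowed_risk: float = ALLOWED_INDIVIDUAL_RISK) -> bool:
    """Return True if the design fire-risk value satisfies the regulated limit."""
    return calculated_risk <= allowed_risk

# Hypothetical design values for two rooms of an audited facility.
for room, risk in {"assembly hall": 4.2e-7, "kitchen": 2.8e-6}.items():
    status = "compliant" if fire_risk_ok(risk) else "requires corrective measures"
    print(f"{room}: {risk:.1e} per year -> {status}")
```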
Fire safety audit procedure
Expert organizations are authorized to perform the independent evaluation of fire risk values. This procedure provides protection against corruption of state fire authorities and allows a company to save its budget by complying with all fire safety rules. The basis for commencing an independent evaluation of fire risks is the execution of a contract with the owner or renter of the facility. The fire safety audit of buildings and structures is performed in several stages, which are regulated by legislative documents.
The analysis of design, technical, and other documentation is performed in the course of the fire safety audit. All plans and drawings are examined to identify "vulnerabilities" of the system that ensures fire safety. First and foremost, a record is made regarding:
• the absence of emergency exits, or the reduced capacity of emergency exits and evacuation routes;
• malfunctioning or incomplete fire-fighting systems;
• violations of the requirements intended to prevent fire propagation or the generation of a fire outbreak (absence of fire partitions, non-conformity of the fire-resistance degree of a building or the fire-resistance ratings of bearing structures, etc.).
At this stage, if needed, the elements of the fire-fighting systems are tested: smoke dampers, fire extinguishing complexes, fire staircases, etc. Apart from that, structures and finishing materials are tested, and the required calculations are made to define the possible risks of fire occurrence and fatalities.
It is noteworthy that while evaluating the fire risk of buildings with various fire-resistance degrees and with designated safe areas, it is necessary to specify the value of the "average frequency of fire occurrence considering the functional profile of buildings along with their main fire and technical properties" [13]. Intermediate results of the facility examination are discussed with the client in order to make decisions regarding the organization of measures and actions which ensure the fire safety of the facility. The final result of the fire audit service is the development and issuance of a conclusion of the independent evaluation of fire risks, stating the findings and results of the work done. The conclusion includes data regarding possible risks of fire occurrence and fatalities, the estimated value of fire damage, and other information.
The conclusion is certified with the stamp of the expert organization and the signatures of the specialists that carried out the procedure. The conclusion is also mandatorily registered with the regional authorities of the Russian Ministry of Emergency Situations (EMERCOM), with the assignment of an individual number. The issued conclusion of an independent evaluation of fire risk influences the risk category of the facility under protection: in standard situations, it is recommended to reduce the risk category, which consequently reduces the frequency of scheduled inspections by state fire inspectors.
Fire safety audit: forms, targets, tasks, functions
Currently, the evaluation of the compliance of a building with fire safety requirements is carried out in various forms based on the needs of society and the economy. Each room and public area of a facility is subject to the risk of fire occurrence if all hazardous fire factors are not identified and relevant safety measures are not taken [2].
Commitment to complying with fire safety rules helps to preserve human lives and health, property, and material assets.
The fire safety of a facility is considered ensured when all fire safety requirements are fully met and the fire risk is minimal. An independent evaluation of fire risk is required in order to give an expert evaluation of the compliance of the facility under protection.
Fire risks cannot be fully eliminated, but they can be minimized through fire safety checks performed by relevant auditing organizations.
An audit can be carried out for a company, real estate property, fire-fighting means and systems, equipment, and products. Expert organizations are entitled to carry out fire safety audits and issue conclusions.
A fire safety audit is performed to become aware of the real state of affairs in a company, to increase the safety level of a facility, and to identify and rectify violations and faults of the fire safety system of the facility under protection in due time. Company management defines the time frame of the fire risk evaluation as it deems appropriate.
The investigation evaluates whether management scrupulously follows the fire safety instructions regulated by the legislative framework of the Russian Federation. The documents which characterize fire safety are analyzed for that purpose. Buildings are examined in order to identify the level of compliance with fire hazard requirements and the routes of possible start and propagation of a potential fire. Measures to rectify violations are also recorded.
Close cooperation between the service provider and the client is required to render an efficient fire audit service. As per the evaluation rules, the interaction between the client and the fire auditor consists of the client providing the auditor with documents which indicate the fire hazard conditions of the facility, as well as allowing the employees of the expert organization, or other individuals specified by the expert organization, access to examine the territory of the facility under protection.
Article 144, Part 2 of Federal Law No. 123-ФЗ dated 22/07/2008, "Technical regulation of fire safety requirements", defines the procedure for evaluating the compliance of a facility under protection with the established fire safety requirements by means of an independent evaluation of fire risk [1].
The main functions of a fire safety audit may include:
• identification of the possible occurrence and development of fire and the impact of fire hazards on people and material assets;
• examination of the operability and proper functioning of the fire protection, fire-fighting, public address, ventilation, and smoke removal systems available in the building;
• development of an optimum safety diagram and maximum protection measures depending on the identified fire risks;
• calculation of fire risks;
• analysis of the documentation available at the facility (fire safety certificates, designs of fire safety systems, act of operability of fire protection systems, etc.) for completeness and compliance with the regulatory requirements of fire safety.
Let us review in more detail the algorithm of the fire safety audit procedure for a building (a checklist sketch is given after step 6).
1. Receipt and review of the client's request for an independent evaluation of fire risk. The request for a fire safety audit shall contain:
• data of the owner or renter of the facility (building) to be audited;
• constituent documents and details;
• the name of the facility and its actual data: address, total area of buildings, fire safety class, and number of people in each room.
After review of the request, a contract is executed with an expert organization, which prescribes the timeframe, price and payment procedure, work terms, and algorithm for submission of the conclusion, and the auditing services are paid for.
2. Documentation analysis. At this stage, the client provides the expert organization with the design, procedural, and technical documentation that characterizes the fire safety of the building.
3. Audit of the facility under protection.
The building is examined to obtain unbiased information regarding fire safety conditions at the facility under protection. The possibility of fire occurrence and development is identified. The impact of fire hazards on people and material assets is evaluated. An evaluation is also made of the compliance of the building with the fire safety norms and requirements established in the Russian Federation.
4. Additional expert appraisals and investigations, and an independent calculation of fire risk. Additional fire risk calculations can be made, and the required investigations and expert appraisals can be performed by employees of the expert organization or by assigned specialists of a fire test laboratory.
5. Results processing and preparation of a report. At this stage, the results of all measures taken are processed, documentation on the independent fire risk evaluation is prepared, a conclusion is drawn regarding the compliance of the building with the regulated fire safety requirements, and a conclusion is made as to whether the terms of compliance with these requirements are respected.
6. Issuance of the conclusion. Based on the results of the fire safety audit, a conclusion is prepared which mandatorily specifies the details of the parties involved in the audit, data for all the facilities involved in the audit, and the methods and means of fire hazard evaluation. In case the fire safety requirements are not complied with, measures (a rectification plan) are developed to ensure fulfillment of the conditions which will allow the facility under protection to comply with the fire safety requirements.
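The six stages above could be tracked by an expert organization as a simple per-facility checklist; the sketch below is illustrative only, and the stage names are paraphrases of the steps in this paper rather than regulatory wording.

```python
# Minimal sketch: representing the six audit stages as a checklist that can
# be tracked per facility; field and stage names are illustrative.
from dataclasses import dataclass, field

AUDIT_STAGES = [
    "request received and contract executed",
    "design and technical documentation analyzed",
    "facility examined on site",
    "additional appraisals / fire-risk calculation performed",
    "results processed and report prepared",
    "conclusion issued and registered with EMERCOM",
]

@dataclass
class FireSafetyAudit:
    facility: str
    completed: set = field(default_factory=set)

    def complete(self, stage: str) -> None:
        # Only stages from the defined workflow may be marked as done.
        if stage not in AUDIT_STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.completed.add(stage)

    def remaining(self) -> list:
        return [s for s in AUDIT_STAGES if s not in self.completed]

audit = FireSafetyAudit(facility="shopping centre")
audit.complete(AUDIT_STAGES[0])
audit.complete(AUDIT_STAGES[1])
print(audit.remaining())
```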
Requirements to expert organizations and changes in legislation
As of 25 November 2018, EMERCOM Order No. 660 dated 25/11/2009, according to which all expert organizations providing independent evaluation of fire risks (hereafter IER) had to be accredited, is no longer valid.
Currently, there is no document which would require accreditation. This is also confirmed by a clarification letter of the EMERCOM of Russia.
Each expert organization providing independent evaluation of fire risk shall employ an expert for whom the organization is the primary place of employment [7].
Apart from that, one of the core activities of an expert organization shall specifically be the independent evaluation of fire risk (fire safety audit).
IER experts shall pass a qualification. Each expert shall pass the qualification to prove possession of the specific knowledge of fire safety required for an independent evaluation of fire risk. This requirement is established by the Decree of the RF Government "On qualification of officials engaged in fire risk evaluation" [11].
In order to pass the qualification, an expert shall meet the following requirements:
• higher or secondary professional education majoring in "Fire safety";
• work experience in the field of fire safety of not less than 5 years.
The official Russian EMERCOM website displays "The list of questions for the qualification exam to check whether officials engaged in fire risk evaluation that applied for qualification have the specific knowledge in the field of fire safety required for IER".
The list of questions for the IER qualification exam contains 890 questions pertinent to fire safety requirements, the functioning of fire-fighting systems, the approval of design documentation with state fire authorities, etc.
The questions are compiled in accordance with item 16 of the "Qualification rules for officials engaged in fire risk evaluation" [12].
Fire safety outsourcing
In recent years, so-called fire safety outsourcing has gained popularity: the independent expert examination of the fire safety status of facilities under protection (fire safety audit) by external experts.
In modern business it is hard to find the time and competent specialists to arrange well-managed work ensuring fire safety. The most suitable solution in this case is to employ fire safety outsourcing. (Outsourcing is the transfer of certain production functions to another company specialized in the relevant field.)
Such a service is rendered for organizations and companies with the aim of ensuring fire safety as a fundamental aspect of guaranteed safe operation, the economic stability of a company, and confidence in the future [10].
Outsourcing can be full or partial, as desired by the client [4].
The outsourcing scope (full scope) includes:
• performing the functions of a fire safety inspector;
• carrying out a fire safety analysis of the company;
• developing documentation: orders, instructions, decrees, and other local regulations that establish the required fire safety arrangements in the company;
• preparing a fire safety declaration for the company;
• developing and/or correcting fire evacuation plans;
• organizing fire safety briefings and a basic fire safety course and monitoring their attendance by all employees;
• consulting and informing employees of the company, including its manager, regarding fire safety issues;
• organizing and performing fire drills with employees;
• participating in the investigation, registration, and recording of fire cases, ignitions, fatalities, and injuries caused by fire, and estimating fire-caused damage to company property;
• monitoring that fire safety means and systems, including primary fire-fighting means, are kept operable;
• testing internal fire hose reel cabinets for water supply and rewinding fire hoses;
• preparing reports on the actions taken to ensure fire safety and work plans for the company manager;
• conducting fire-fighting public awareness campaigns;
• analyzing fire safety violations committed by employees and other individuals at the facility under protection;
• monitoring the execution of improvement orders given by state fire authorities;
• optimizing the costs of fire safety measures.
The following questions can be raised at the discretion of experts during an external fire safety audit [8, 9]:
• reviewing correspondence received from state fire authorities and preparing written data for the client regarding compliance with the regulatory requirements and federal legislation pertinent to the fire safety measures specified in an improvement order;
• preparing substantiated objections to fire authorities (data, letters) regarding the discontinuation of ungrounded (illegitimate) items of an improvement order issued by state fire authorities;
• analyzing the legitimate requirements of a fire authorities' improvement order and preparing suggestions for the client on executing the improvement order at lower cost or with compensating regulatory and technical fire safety requirements.
The results of the analysis of the fire safety system, with an indication of the legitimacy of applying the requirements and possible solution options, are shown in Table 2. An example entry from Table 2: violation - the kitchen is not separated with fire partitions; requirement - SP 4, item 5.2.6: food units as well as parts of a building, groups of rooms, or separate production, storage, and technical rooms (laundry rooms, ironing rooms, workshops, storage rooms, switchboard rooms, etc.) in class Ф1.1 and Ф1.2 facilities, except for category В4 and Д rooms, should be separated by fire walls not lower than type 2 (type 1 partitions) and floors not lower than type 3 (in degree 1 fire-resistance buildings, type 2 floors); possible solution option 2) - replacement of fire doors with water curtains while developing the PSTS.
Conclusions
Therefore, the advantage of performing the various fire safety analyses is that the owner of a building (room) is provided with objective, reliable, and unbiased information regarding the compliance of the facility's protection with valid fire safety norms and rules, along with detailed recommendations which allow the protection of the building to be enhanced and the risk category of the facility under protection to be lowered. This work provides a methodology for the external fire safety audit and reviews the algorithm, main functions, and aims of its performance. The existing procedure for fire risk evaluation is investigated, as are the most relevant issues of organizing fire safety outsourcing.
The result of the investigation is a newly developed procedure for obtaining objective and reliable information regarding the compliance of facility protection with the relevant fire safety norms and rules, together with detailed recommendations which allow the fire protection of a building to be enhanced and the risk category of the facility under protection to be lowered. | 2021-05-07T00:04:30.661Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "89503776b33f4183ec42b536b5af36c7eddbc836",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/666/5/052011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "16329e5e2cd67e9630292475cd575211baaf941b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
236279996 | pes2o/s2orc | v3-fos-license | Semi-Parametric Models for Longitudinal Data Analysis
Longitudinal studies are widely used in various fields, such as public health, clinical trials and financial data analysis. A major challenge for longitudinal studies is the repeated measurements from each subject, which cause time-dependent correlations within subjects. Generalized Estimating Equations (GEE) can deal with correlated outcomes for longitudinal data through marginal effects. Our proposed model will be based on GEE, with a semi-parametric approach, to provide a flexible structure for regression models: coefficients for parametric covariates will be estimated and nuisance covariates will be fitted with kernel smoothers for the non-parametric part. The profile kernel estimator and the seemingly unrelated kernel estimator (SUR) will be used to obtain consistent and efficient semi-parametric estimators. We provide simulation results for estimating semi-parametric models with one or multiple non-parametric terms. Financial market data is a major component of data analysis; thus, we focus on the financial market in the application part. Credit card loan data will be used with the payment information for each customer across six months to investigate whether gender, income, age, or other factors significantly influence payment status. Furthermore, we propose model comparisons to evaluate whether different models should be fitted for different subgroups of consumers, such as males and females.
Introduction
For statistical scientific studies, experiment designs depend on the different types of system under study and the different goals for research. Longitudinal studies allow for the investigation of change over different time points and the effects of different factors on the change. One distinctive feature of longitudinal studies is repeated measurements at different time points within each subject (or cluster), which considers the time series correlation. For example, the financial market plays an important role in daily life. Financial institutions, such as commercial banks, investment banks, insurance companies, and brokerages are major players trading in financial markets. Most financial data analysis involves time series because time is valuable, and we want to track the temporal tendency of subjects. Therefore, once we obtain time changing measurements for each subject, as well as covariates, we can conduct longitudinal studies for financial data analysis.
A variety of longitudinal models have been applied in financial analysis. Petersen [1] pointed out that previous research focuses mainly on three major methods: the Fama-MacBeth procedure (Fama and MacBeth) [2] estimates, dummy variables in each cluster such as the fixed effect model and adjustments within cluster correlation such as Generalized Estimating Equations (GEE). Different methods should be applied depending on different interests. For a subject specified effect, the Generalized Linear Mixed Model (GLMM) will provide a nice estimator for individual subjects. When covariates are involved in general factor or policy, GEE can be applied to investigate the relationship between the response and covariates.
In order to capture the complex relationship in longitudinal data analysis, semi-parametric and nonparametric models have been developed for financial data analysis in longitudinal studies. Sam and Jiang [3] propose a non-parametric estimator for a short rate diffusion process with yields in longitudinal structure. In this paper we will introduce a class of semi-parametric regression models with GEE, which provide a flexible structure for longitudinal data analysis. Simulation studies will be conducted to compare the performance of our proposed models with other types of models. The semi-parametric regression models will be applied to credit card loan data and models for different subgroups will be examined.
Different estimation methods have been developed for non-parametric and semi-parametric regression models when observations of the response are independent. For non-parametric regression models, kernel estimation methods based on local likelihoods and splines based on penalized likelihoods can be used; for semi-parametric regression models, partial linear models, which specify the mean of the outcome variable as a parametric function with respect to some covariates and non-parametric functions with respect to other covariates, can be used. More specifically, local polynomial kernels, smoothing splines, regression splines, and penalized splines have been introduced as non-parametric and semi-parametric regression estimation methods. Local polynomial kernels provide a different weight for neighborhood observations. Smoothing splines fit the non-parametric function through a spline function with a set of covariates. Regression splines model the non-parametric part with spline basis functions and a small number of knots, and penalized splines apply the penalty of smoothing splines to regression splines.
For longitudinal data analysis, non-parametric and semi-parametric regression should be able to deal with within-subject correlation for repeated measurements. Estimating-equation-based methods and likelihood-based methods can be used for non-parametric and semi-parametric regression with kernel and spline smoothing. Lin and Carroll [4] proposed a kernel GEE estimator through local polynomial kernel estimating equations as an extension of the generalized linear model. Unlike the parametric GEE developed by Zeger and Liang [5], the kernel GEE estimator is consistent only under restrictive conditions and cannot reach the efficiency bound when within-subject association is accounted for. Wang [6] provided the seemingly unrelated kernel (SUR) estimator, which achieves both consistency and efficiency when within-subject association is considered. For likelihood-based settings, spline smoothing includes the generalized smoothing spline estimator, P-splines, and regression splines, and the smoothing spline estimator has a close relationship with linear mixed models.
Whether semi-parametric regression is applied in marginal models or in linear mixed models depends on the goal. For semi-parametric regression in marginal models, several estimation methods have been developed to deal with within-subject correlations. Lin and Carroll [7] developed profile-kernel estimating equations, which estimate the parametric part by a profile method and the non-parametric part by the kernel GEE with local polynomial kernels mentioned above. The profile-kernel estimator is consistent only when within-subject correlation is ignored, and it is not semi-parametric efficient even when the within-subject correlation is ignored in the non-parametric part. Wang, Carroll, and Lin [8] used the SUR kernel model for the non-parametric part while retaining the profile method for the parametric part, providing an estimator that is both consistent and semi-parametric efficient. For semi-parametric linear mixed models, the profile SUR kernel method and spline methods can also be used to fit the model.
The rest of this article is organized as follows. In Section 2, we present mathematical details for the semi-parametric model and the semi-parametric kernel estimating equations. Different estimators based on different approaches will be fully developed with closed form solutions, such as the kernel average estimator (Lin and Carroll) [7] and the SUR kernel estimator (Wang, Carroll, and Lin) [8]. In Section 3, we show a simulation study that follows the models in Section 2. Results with estimated coefficients and overall fitting mean square errors for parametric and semi-parametric estimators will be provided, showing the difference between parametric and semi-parametric models. For each model, we display two setups with separate training and testing datasets. Section 4 is the data application part. A data description is provided first, showing details of the predictor and response variables in the credit card loan dataset. We fit an overall model first and then provide results when fitting models separately based on different levels of factors. Section 5 presents the conclusion and discussion, where we provide a summary and discuss some challenges encountered when fitting semi-parametric models.
Models
In this section we propose semi-parametric models for GEE. We present the local polynomial kernel GEE estimator and the seemingly unrelated kernel estimator, which are the two main tools for model fitting. Differences in consistency and efficiency between these estimators when accounting for within-subject association will also be discussed.
Profile-Kernel Estimating Equations with Two Kernel Smoothers
We follow Lin and Carroll's [7] method, using a backfitting algorithm to calculate the profile-kernel estimator, which has three steps in general: for a given β and the other kernel smoother terms, we estimate one of the non-parametric terms using non-parametric estimating equations. After estimating that non-parametric term, we estimate the remaining kernel smoother terms, and once all non-parametric terms are estimated, a traditional generalized estimating equation can be used to obtain the β estimator.
Suppose we have a semi-parametric model with two kernel smoother terms,
E(Y_ij | X_ij, Z_ij) = X_ij'β + θ_1(Z_1,ij) + θ_2(Z_2,ij),
where Z_ij = (Z_1,ij, Z_2,ij) denotes the covariates for the non-parametric part and X_ij the covariates for the parametric part.
Step 1: Given β and the current estimate of θ_1(Z_1), solve the kernel estimating equation for θ_2(z), with the responses and means stacked into subject-level vectors and the identity link function used. K_h(Z_2,ij − z) denotes the kernel weight assigned to the target value z for the i-th subject.
The working covariance involves a scale parameter φ, known weights, and R_1, an invertible working correlation matrix for θ_2(z); structures such as AR(1) or exchangeable correlation forms can be used for R_1.
Through the estimating equations, the local average kernel GEE estimator of θ_2(z) has a closed form solution.
Step 2: After obtaining the estimate of θ_2(Z_2) and given β, we calculate θ_1(z) from another estimating equation; again, the local average kernel GEE estimator of θ_1(Z_1) has a closed form solution.
Step 3: After estimating the non-parametric parts θ_1(Z_1) and θ_2(Z_2), we estimate β by solving the adjusted generalized estimating equations, where R_3 is a working correlation matrix. Following Fan and Li [9], the estimating equation has a closed form solution for β under the identity link, where X is the covariate matrix and Y is the response variable.
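As a concrete illustration, the back-fitting scheme above can be sketched in a few lines of Python. This is a minimal sketch assuming an identity link, a working independence correlation structure, and local-constant (Nadaraya-Watson) kernel averages; the paper's closed-form solutions with general working correlation matrices R_1, R_2 and R_3 are more involved, and all function and variable names here are illustrative. X is assumed to contain an intercept column so that the centered smoothers remain identifiable.

```python
import numpy as np

def kernel_weights(z, z0, h):
    # Gaussian density kernel weights K_h(z - z0), matching the kernel used in the simulations
    u = (z - z0) / h
    return np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h)

def local_average(z, resid, grid, h):
    # Local-constant (Nadaraya-Watson) smoother of partial residuals evaluated on a grid
    return np.array([np.sum(kernel_weights(z, z0, h) * resid) /
                     np.sum(kernel_weights(z, z0, h)) for z0 in grid])

def profile_kernel_fit(y, X, z1, z2, h1, h2, n_iter=20):
    """Backfitting profile-kernel estimator (identity link, working independence)."""
    n, p = X.shape
    beta = np.zeros(p)
    f1 = np.zeros(n)
    f2 = np.zeros(n)
    for _ in range(n_iter):
        # Step 1: given beta and theta_1, smooth partial residuals on z2
        f2 = local_average(z2, y - X @ beta - f1, z2, h2)
        f2 -= f2.mean()                      # center for identifiability (intercept sits in X)
        # Step 2: given beta and theta_2, smooth partial residuals on z1
        f1 = local_average(z1, y - X @ beta - f2, z1, h1)
        f1 -= f1.mean()
        # Step 3: given both smoothers, update beta by least squares (profile step)
        beta = np.linalg.lstsq(X, y - f1 - f2, rcond=None)[0]
    return beta, f1, f2
```

Under working independence with the identity link, Step 3 reduces to a least-squares fit of the partial residuals on X, which mirrors the closed form attributed to Fan and Li [9].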
Profile SUR Kernel Estimator with Two Kernel Smoothers
Following Wang, Carroll, and Lin's [8] method, we propose the SUR kernel estimator for the semi-parametric model with two kernel smoother terms. Again, a back-fitting three-step iteration can be used for the estimation.
Step 1: Given β and the current estimate of θ_1(Z_1), solve the SUR kernel estimating equation for θ_2(z); a closed form solution with the identity link can be obtained.
Step 2: Given β and the estimate of θ_2(Z_2) obtained from the last step, the estimator of θ_1(z) can be calculated from another kernel equation, which also has a closed form solution.
Step 3: After obtaining the estimators of the two kernel smoothers, we calculate β by solving the adjusted estimating equation and update β accordingly. We then run full iterations through these back-fitting steps until convergence.
Simulation Results
In this section, simulations are conducted to compare different estimation methods. Bias, standard deviation, and mean square error of the estimators will be used to evaluate the performance of the different approaches in parametric and semi-parametric models. Different scenarios based on the local polynomial kernel GEE estimator and the SUR estimator will be used to show when, and for which unbiased estimator, the smallest standard deviation is achieved under given conditions. For estimating the non-parametric part, the Gaussian density kernel will be used to construct kernel weights in the non-parametric smoother, and the least-squares cross-validation method (Silverman) will be used to select the bandwidth parameter h, which is critical for kernel regression models. In this simulation part, we focus on the estimation of β and the overall fit of the different estimators. The mean and standard deviation of the estimated β will be displayed. The overall fitting performance of the different approaches will be examined based on the mean square error. A training dataset and a test dataset will be used to evaluate the performance of the semi-parametric and parametric models.
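To make the bandwidth-selection step concrete, the sketch below implements least-squares (leave-one-out) cross-validation for a Gaussian-kernel smoother over a candidate grid. It is an illustrative simplification: the CV criterion here treats observations as independent, whereas the study data are clustered within subjects, and the candidate bandwidth grid is an assumption.

```python
import numpy as np

def nw_smooth(z_train, y_train, z0, h):
    # Nadaraya-Watson estimate at z0 with a Gaussian kernel
    w = np.exp(-0.5 * ((z_train - z0) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

def select_bandwidth(z, y, candidates):
    """Least-squares (leave-one-out) cross-validation over a bandwidth grid."""
    best_h, best_cv = None, np.inf
    for h in candidates:
        cv = 0.0
        for i in range(len(z)):
            mask = np.arange(len(z)) != i          # leave observation i out
            pred = nw_smooth(z[mask], y[mask], z[i], h)
            cv += (y[i] - pred) ** 2
        if cv < best_cv:
            best_h, best_cv = h, cv
    return best_h
```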
Semi-Parametric Model with One Kernel Smoother
Consider a model with a non-parametric part and a linear part in the form
Y_ij = θ(Z_ij) + X_ij'β + ε_ij,   (3.1)
where i denotes the i-th subject and j denotes the j-th time point. In the equation, θ(.) is a kernel smooth function, Z_ij denotes the covariates in the non-parametric part, X_ij denotes the covariates in the parametric part, and β is the coefficient vector. In this simulation, data are generated with the following set-up:
• Each run has 100 subjects, each subject with 4 or 10 time points, and 200 replicates.
• ε_i = (ε_i1, …, ε_im)' is a vector that follows a multivariate normal distribution with mean zero and correlation matrix R, an AR(1) correlation matrix with ρ = 0.3 and ρ = 0.7 in the two scenarios, respectively.
For estimating the semi-parametric model in (3.1), a semi-WI estimator with the independent working correlation matrix R = I and a semi-True estimator with the true working correlation matrix will be used in different scenarios. Parametric estimating approaches, based on three parametric models (para1-para3), will also be used in this simulation; a sketch of the data-generating process is given below.
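The following Python sketch shows one way to generate data consistent with this set-up. The covariate distributions, the true β, and the choice θ(z) = sin(4z) (borrowed from the first setup of the two-smoother model in Section 3.3) are assumptions, since the exact simulation settings are not fully reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_corr(m, rho):
    # AR(1) correlation matrix with entries rho^|j-k|
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def generate_data(n_subjects=100, m=4, rho=0.3, beta=(1.0,)):
    """Simulate longitudinal data from model (3.1): Y_ij = theta(Z_ij) + X_ij'beta + eps_ij."""
    beta = np.asarray(beta)
    R = ar1_corr(m, rho)
    ids, Y, X, Z = [], [], [], []
    for i in range(n_subjects):
        z = rng.uniform(0, 1, size=m)                    # non-parametric covariate (assumed uniform)
        x = rng.normal(size=(m, beta.size))              # parametric covariates (assumed normal)
        eps = rng.multivariate_normal(np.zeros(m), R)    # AR(1)-correlated errors
        theta = np.sin(4 * z)                            # assumed smooth function
        ids.append(np.full(m, i))
        Y.append(theta + x @ beta + eps)
        X.append(x)
        Z.append(z)
    return (np.concatenate(ids), np.concatenate(Y),
            np.vstack(X), np.concatenate(Z))
```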
Local Kernel Estimator with One Kernel Smoother
In this section, we first show the results of the local polynomial kernel GEE estimator with one kernel smoother for semi-parametric regression and of other estimators for parametric regression under the various scenarios discussed in Section 3.1. Table 1 shows the β estimates in the semi-parametric model (3.1) and the parametric models (para1-para3). The results for the two setups show that the standard errors of the β estimates based on semi-parametric estimators are at least 3 times smaller than the standard errors of the estimators from the three parametric models (para1-para3). Similarly, we found that the standard errors for ρ = 0.7 are lower than those for ρ = 0.3 in the parametric estimators and the semi-parametric estimator for the first setup, while in the second setup, standard errors are higher when ρ = 0.7. For the overall fitting mean square errors in the semi-parametric model (3.1) and the parametric models (para1-para3), the mean square errors of the parametric estimators are larger than those of the semi-parametric estimators for both the training and test datasets. The gain is larger under the second setup. The fitted non-parametric function obtained by the profile kernel estimator is shown in the corresponding figure: the black line shows the true value, the blue line shows the fitting result using the independence working correlation matrix, and the red line shows the fitting result using the true working correlation matrix. The three lines almost overlap, which indicates that different working correlation matrices deliver similar results in the non-parametric fitting part. Table 2 shows the β estimates with the Gaussian density kernel and ten time periods. The results are similar to those with four time periods; however, the standard deviations of the semi-parametric estimators are smaller than when only four time points are involved, which indicates that the semi-parametric estimator gains more efficiency relative to the parametric estimators when we have longer time periods. Moreover, we found that the standard errors for ρ = 0.7 are slightly higher than those for ρ = 0.3 in the parametric estimators. For the overall fitting mean square errors with the Gaussian density kernel and ten time periods, the results show that the mean square training and test errors for the semi-parametric approach are much lower than those of the parametric estimates in all cases. Among the parametric cases, the polynomial model performs best, but still much worse than the semi-parametric fit. Furthermore, the mean square errors of the semi-parametric estimators are smaller than when only four time points are involved, which indicates that the semi-parametric estimator gains more accuracy relative to the parametric estimators when we have longer time periods. Finally, we found that the mean square errors in the training and testing datasets for ρ = 0.7 are slightly lower than those for ρ = 0.3 in the semi-parametric estimators. When we extend the time periods to ten, the coefficient estimators of the second setup have less bias and smaller standard deviation compared to the first setup. The semi-parametric estimator still gains more when we have longer time periods. The results in the two tables show that semi-parametric estimators with a stronger correlation, a longer time period, and a more complicated pattern in the non-parametric part benefit more compared to parametric estimators under the same scenarios.
According to the conclusion in Lin and Carroll [7] and the results from our simulation, WI estimators perform better than estimators computed with the true correlation structure, which conflicts with the properties of the GEE estimator. Another approach, proposed by Wang [6], which delivers the estimator with the highest efficiency when fitted with the true within-subject association, is presented in the next part.
The SUR Estimator with One Kernel Smoother
The SUR estimator (Wang) [6] is presented in this part for the semi-parametric regression, running a simulation that follows the setups in the first part. Again, different setups are applied in the simulation. Table 3 shows the β estimates in the semi-parametric model (3.1) and the parametric models (para1-para3). The results show that the standard errors of the β estimates based on semi-parametric estimators are at least three times smaller than the standard errors of the estimators from the three parametric models (para1-para3), and among the semi-parametric estimators, semi-True has smaller standard errors than semi-WI. Similarly, we found that the standard errors for ρ = 0.7 are not higher than those for ρ = 0.3 in the parametric and semi-parametric estimators. Among the parametric models, the polynomial model (para) has the smallest mean square error for the test dataset. Similar to the kernel estimation in the last part, in the second setup the overall fitting accuracy of the semi-parametric model (3.1) gains more relative to the parametric models (para1-para3).
Semi-Parametric Model with Multiple Kernel Smoothers
Consider another model with two kernel smoothers in the non-parametric part,
Y_ij = θ_1(Z_1,ij) + θ_2(Z_2,ij) + X_ij'β + ε_ij,   (3.2)
where, as before, i denotes the i-th subject and j denotes the j-th time point. In the equation, θ_1(.) and θ_2(.) are kernel smooth functions, and Z_1 and Z_2 denote the covariates in the non-parametric part. In this simulation, data are generated with the following set-up:
• Each run has 100 subjects, each subject with four time points, and 200 replicates.
• The first setup uses θ_1(Z_1) = sin(4 × Z_1) in the first non-parametric term and θ_2(Z_2) = sin(4 × Z_2) in the second non-parametric term; the second setup uses θ_1(Z_1) = exp(2/Z_1) in the first non-parametric term and θ_2(Z_2) = exp(2/Z_2) in the second non-parametric term.
We use the same settings for the working correlation matrix and the parametric models as in the last part: a semi-WI estimator with the independent working correlation matrix R = I and a semi-True estimator with the true working correlation matrix will be used in estimating the non-parametric part. Estimators based on three parametric models and one generalized additive model will also be used in this simulation. Table 4 shows the β estimates in the semi-parametric model (3.2) and the parametric models (para4-para6). The results in Table 4, based on the Gaussian kernel density, show that the standard errors of the β estimates based on semi-parametric estimators are at least three times smaller than the standard errors of the estimators from the three parametric models (para4-para6). The overall fitting mean square errors in the semi-parametric model (3.2) and the parametric models (para4-para6) show that, for both the training and test datasets, the mean square errors of the parametric estimators are higher than those of the semi-parametric estimators. Again, the results in this table show that semi-parametric estimators with a stronger correlation and a more complicated pattern in the non-parametric part benefit more compared to parametric estimators under the same scenarios. Compared to models with one kernel smoother, the MSE of the parametric models (para4-para6) increased at least four times, while the MSE of the profile kernel GEE model (semi-WI and semi-True) increased about two times, indicating that the profile kernel GEE estimator is more robust in terms of MSE than the parametric models.
Application
Credit card loan data are a major type of financial data owned by banks and other financial institutions and play an important role for longitudinal data analysis as we discussed in the introduction: for each subject, which is the customer, we have records of monthly payment history for multiple time points. The semi-parametric models and the GEE method can be applied to this dataset and first we give a detailed description of a credit card loan dataset. Our main purpose for this application is to investigate which factors will influence the customer's payment status by using different approaches and to explore the difference between parametric estimators and semiparametric estimators.
Description of the Dataset (Statistics and Data Analysis)
The dataset used in this application comes from the UCI (University of California, Irvine) Machine Learning Repository website [10], with 30000 subjects and eight variables. A basic summary of statistics for those eight variables is as follows: 1 Bill amount: the amount to be paid by each customer for the current month, with minimum -339603 and maximum 1664089. A negative number shows there are credits from last month.
2 Payment amount: the amount the customer paid for the current month, with minimum 0 and maximum 1684259.
3 PAY: a categorical variable with values from -2 to 8 (11 categories), denoting how many delayed periods the customer had. A negative number shows that payment was made before the due day. 4 LIMIT BAL: the limit amount for each customer, with minimum 10000 and maximum 1000000. 5 SEX: 1 denotes male and 2 denotes female. 6 EDUCATION: the education level of each customer: 1 denotes graduate school; 2 denotes university; 3 denotes high school; 4, 5, and 6 denote others. 7 MARRIAGE: marital status: 1 denotes married; 2 denotes single; 3 denotes others.
From the eight variables, two response variables can be constructed to address our main concerns. The first response variable, called remaining amount, is the difference between the bill amount and the payment amount, showing whether the customer made full payment or not. The second response variable is the delayed payment status derived from PAY: PAY = 1 denotes that there is a delay, no matter how long the delay, and PAY = 0 denotes no delay, meaning payment was made duly or before the due day. Five variables remain as predictors: gender, education, marriage, age, and limit balance.
Primary parametric GEE regression will be conducted as the first step for analyzing credit card loan data. For example, after fitting a linear GEE regression with the response variable remaining amount and five predictors we discussed in the last paragraph, we get a result that four predictors are statistically significant with p-values less than 0.05, while the variable age is not statistically significant. In our semi-parametric models, the four significant predictors can be used in the parametric part, while the variable age will be treated as a non-parametric covariate. Different semi-parametric models will be estimated with different working correlation matrices, and the results from semi-parametric models will be compared with the results from parametric models.
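The analyses in this paper were run in SAS, but for illustration, a marginal GEE model of this kind can be sketched in Python with statsmodels. The file name and the column names (id, remaining, SEX, EDUCATION, MARRIAGE, LIMIT_BAL, AGE) are placeholders for the long-format credit card data described above, with one row per customer-month.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per customer-month, "id" identifies the customer
df = pd.read_csv("credit_card_long.csv")

# Linear marginal model for the remaining amount under three working correlation structures
for cov in [sm.cov_struct.Independence(),
            sm.cov_struct.Exchangeable(),
            sm.cov_struct.Autoregressive()]:
    model = smf.gee("remaining ~ C(SEX) + C(EDUCATION) + C(MARRIAGE) + LIMIT_BAL + AGE",
                    groups="id", data=df,
                    family=sm.families.Gaussian(), cov_struct=cov)
    result = model.fit()
    print(type(cov).__name__)
    print(result.summary())
```

Swapping the cov_struct argument is all that is needed to compare the independence, exchangeable, and AR(1) working correlation structures considered in the analysis.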
Using Remaining Amount as Response Variable
In this part, the remaining amount defined in Section 4.1 is used as the response variable to explore the relationship between the amount of owed payments and predictors such as gender, education level, limit balance, marital status, and age. Two parametric GEE models are fitted. Table 5 shows the estimation results for the first parametric model (Para1) and the second parametric model (Para2) using different working correlation matrices. Based on the signs of the estimated coefficients, we found that, relative to male consumers, female consumers have a smaller remaining amount. Relative to consumers with graduate degrees, customers with only college degrees or high school degrees have a larger remaining amount. Relative to married customers, customers with single marital status tend to have a larger remaining amount. The predictor limit balance has small but positive coefficients, indicating a positive correlation with remaining amount in both parametric model setups; age has a p-value larger than 0.05, showing that age is not statistically significant in either parametric model.
Different working correlation matrices, such as independence, exchangeable, and AR(1), are used in the parametric models. The parameters estimated under these three working correlation matrices are quite similar across the different settings for the association between time periods.
Since age is not significant with parametric patterns such as linear and quadratic terms, we consider semi-parametric models with a kernel smoother on the predictor age, investigating the changes in the estimated coefficients for the other predictors fitted with linear patterns and examining whether semi-parametric models offer an advantage over purely parametric models.
The semi-parametric model with a kernel smoother on the predictor age was estimated using different working correlation matrices. The estimated coefficients are similar to those from the parametric models. Based on the signs of the estimated coefficients, we found that, relative to male consumers, female consumers have a smaller remaining amount. Relative to consumers with graduate degrees, customers with only college degrees or high school degrees have a larger remaining amount. Relative to married customers, the semi-parametric model shows that customers with single marital status tend to have a larger remaining amount, and the coefficient for the predictor single (0.086) is higher than the coefficient in the parametric models (0.048 in Para3). The predictor limit balance has a coefficient of 0.002, which indicates that limit balance has a positive correlation with remaining amount.
Using Payment Status as Response Variable
In this part, the payment status defined in Section 4.1, i.e., whether the client defaults, is used as the response variable. We would like to explore the relationship between whether the customer defaults on the bills and predictors such as gender, education level, limit balance, marital status, and age. We consider a parametric GEE model for the binary outcome with a linear predictor (Para4), where p is the probability of default. Table 6 shows the estimation results for the parametric model (Para4) using different working correlation matrices. Based on the signs of the estimated coefficients, we found that, relative to male consumers, female consumers have a lower probability of default. Relative to consumers with graduate degrees, customers with college degrees or high school degrees have a higher probability of default. Relative to married customers, customers with single marital status tend to have a higher probability of default. The predictor limit balance has a negative coefficient, which indicates that limit balance has a negative correlation with the probability of default, and age also has a negative correlation with the probability of default. Different working correlation matrices, such as independence, exchangeable, and AR(1), are used in Para4. The parameters estimated under these three working correlation matrices are quite similar.
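A corresponding binary-outcome GEE can be sketched as follows. The logit link is an assumption here (the paper only states that p is the probability of default), and df refers to the same hypothetical long-format data frame as in the earlier sketch, with a 0/1 column named default.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Marginal model for the default indicator; a logit link via the Binomial family is assumed
model = smf.gee("default ~ C(SEX) + C(EDUCATION) + C(MARRIAGE) + LIMIT_BAL + AGE",
                groups="id", data=df,          # df: the long-format data from the previous sketch
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```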
Results and Discussion: Gender Analysis
In this section, we evaluate the difference between models for male customers and female customers.
Following the overall analysis in Section 4.2, three parametric models and one semi-parametric model are fitted for analysis, and we used two different outcomes: remaining amount and payment status as the response variable. Estimated coefficients for all models are reported for the purpose of exploring the difference among the fitted models for different gender. We provide mean square error as the evaluation measurement for the comparison of parametric and semiparametric models when using remaining amount as response variable.
Using Remaining Amount as Response Variable
Model Setups: The remaining amount defined in Section 4.1 is used as the response variable to explore the relationship between the amount of owed payments and predictors such as education level, limit balance, marital status, and age for male and female customers. As in the overall analysis in Section 4.2, three parametric GEE models are fitted for male and female customers separately. Table 7 shows the estimation results for the first parametric model (Para1) using different working correlation matrices for males and females separately. We found that, unlike in the overall analysis, different working correlation matrices identify different significant variables: for males, the variables single and age are significant when using the independence working correlation matrix, but not when using the exchangeable or AR(1) working correlation matrix; for females, age is not significant only when using the independence working correlation matrix. Relative to those with graduate degrees, male and female customers with only college degrees or high school degrees have a larger remaining amount. Furthermore, male customers with a high school degree have a slightly larger remaining amount than those with college degrees under the independence, exchangeable, and AR(1) working correlation matrices. On the other hand, female customers with a high school degree have a smaller remaining amount than those with a college degree under all three types of working correlation matrices. Table 8 shows the estimation results for the second parametric model (Para2), which includes a quadratic form of age, using different working correlation matrices for males and females separately. We found that, again unlike in the overall analysis, different working correlation matrices identify different significant variables: for males, the quadratic form of age is significant only when using the independence working correlation matrix; for females, the quadratic term is significant when using the exchangeable working correlation matrix or the AR(1) structure. Although the quadratic form of age is significant, it has a tiny impact on the response variable remaining amount because the coefficient is nearly zero. For male and female customers, again, relative to those with graduate degrees, customers with only college or high school degrees have a larger remaining amount. The marriage factor single is significant for males under the independence working correlation matrix, but it is not significant under any working correlation matrix for female customers. The results from the two parametric models show that age may not be significant under some working correlation matrices, or may have a tiny effect with parametric patterns such as quadratic terms. We therefore consider semi-parametric models with a kernel smoother on the predictor age, investigating whether semi-parametric models offer an advantage for male or female customers over purely parametric models. Table 9 shows the estimation results for the semi-parametric model (Semi) with a kernel smoother on the predictor age using different working correlation matrices. The estimated coefficients are similar to those from the parametric models. Based on the signs of the estimated coefficients, we found that males and females have the same coefficient directions for all predictors. Relative to consumers with graduate degrees, male and female customers with only college degrees or high school degrees have a larger remaining amount.
Relative to married customers, the semi-parametric model shows that customers with single marital status, whether male or female, tend to have a larger remaining amount.
Using Payment Status as Response Variable
In this part, the payment status defined in Section 4.1 is used as the response variable to evaluate the difference between male and female customers in whether they default on their bills. Predictors such as education level, limit balance, marital status, and age are used in our models. In particular, we would like to investigate whether male and female customers should be fitted with different models.
We first consider three parametric GEE models with a linear form of age, fitted for males and females separately, where p is the probability of default. Table 10 shows the estimation results for the parametric model (Para4) using different working correlation matrices. Relative to male and female consumers with graduate degrees, customers with college degrees or high school degrees have a higher probability of default. For male customers, relative to married customers, those with single marital status tend to have a higher probability of default. The predictor limit balance has a negative coefficient for both males and females, which indicates that limit balance has a negative correlation with the probability of default. For males, age is not significant under any type of working correlation matrix, but for females, age is significant, with negative coefficients for the probability of default, under all types of working correlation matrices. Different working correlation matrices, such as independence, exchangeable, and AR(1), are used in Para4 for both males and females. The coefficients estimated under these three working correlation matrices are quite similar.
We applied the same approach to the education analysis, using two parametric models and one semi-parametric model to examine the impact of different education levels: customers with a high school degree versus those with a university or graduate degree. When remaining balance is the response variable, the results from the parametric models show that age may be significant but has a tiny effect under parametric patterns. When we apply the semi-parametric models with a kernel smoother on the predictor age to investigate whether there is a difference between customers with different education levels, the estimated coefficients show some similarities to those from the parametric models. Based on the signs of the estimated coefficients, we found that, relative to male consumers, female consumers have a smaller remaining amount. The predictor limit balance has a positive coefficient, which indicates that limit balance has a positive correlation with remaining amount. Except for single, all other predictors are significant in all models under any of the three working correlation matrices. When payment status is the response variable, we only used the parametric model and found that, in all models, female customers tend to have a lower probability of default because of the negative coefficients. The predictor limit balance has a negative coefficient, which indicates that limit balance has a negative correlation with the probability of default. Age is not significant for customers with high school degrees but is significant for customers with university or graduate degrees. The variable single is not significant in any model under any type of working correlation matrix.
We also applied the same approach to the marriage analysis, using two parametric models and one semi-parametric model to examine the impact of different marital statuses: single customers versus married customers. When remaining balance is the response variable, the results from the two parametric models show that age may not be significant under some working correlation matrices or may have a tiny effect with parametric patterns such as quadratic terms. Again, when we apply the semi-parametric models with a kernel smoother on the predictor age to investigate whether there is a difference between customers with different marital statuses, the estimated coefficients show some similarities to those from the parametric models. Based on the signs of the estimated coefficients, we found that limit balance, university, and graduate are significant and positively correlated with remaining amount, while the variable female is significant and negatively correlated with remaining amount for both single and married customers. When payment status is the response variable, we again only used the parametric model and found that the results are very similar to those of the education analysis.
Conclusion and Discussion
In summary, the simulation results show that semi-parametric estimators are more robust, with smaller standard errors, compared to parametric estimators. For overall fitting, semi-parametric models have smaller mean square errors. Furthermore, semi-parametric estimators with a stronger correlation, a longer time period, and a more complicated pattern in the non-parametric part benefit more compared to parametric estimators under the same scenarios.
In the application part, we ran the analysis on the credit card loan data, and the results show that the parametric estimators display a clearer pattern when some features are treated with kernel smoothers. We recommend applying a parametric model first, identifying the non-significant features, and modeling them with kernel smoothers. By modeling with a semi-parametric structure, we find different behavior for customers of different gender, education level, and marital status.
In a semi-parametric GEE study, estimating the working correlation matrix is critical when using real-world data. One challenge comes from the application part, namely the data source. Most financial datasets used in longitudinal studies reach the individual level, which conflicts with the privacy policies of most institutions in the United States. Our dataset, which comes from the UCI website, is based on credit information in Taiwan. In the future, we would like to apply our model to other available credit loan datasets or other types of financial datasets in the United States.
When using semi-parametric models, another challenge arises from the application part. Based on the evaluation metrics, such as Mean Square Error and predictive accuracy, we observed that the advantage of the semiparametric model with kernel smoother is not huge. We would like to use semi-parametric models in other financial datasets with a longitudinal perspective to investigate whether our semi-parametric models with kernel smoother will be better than any other types of parametric models for other financial data.
The third challenge comes from the scheme of the semi-parametric approach. When assigning the nonparametric term in the semi-parametric approach, most applications in biological datasets use previous experience. In our approach, we used age as a non-parametric term because it is not significant under several parametric GEE approaches. We would like to try other continuous variables and create a robust approach for identifying which variable should be used as a non-parametric term.
The last challenge is a traditional issue for the GEE approach: estimating the working correlation matrix. In a semi-parametric GEE study, estimating the working correlation matrix is critical but more difficult than in the parametric GEE approach. Fan, Huang, and Li [11] proposed an estimation procedure that uses a profile weighted least squares approach to estimate the working correlation matrix. We would like to try this approach in the future to investigate whether this estimation method provides more efficient semi-parametric estimators with a fully specified working correlation matrix when applied to financial datasets.
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license. | 2021-07-26T00:06:09.855Z | 2021-06-08T00:00:00.000 | {
"year": 2021,
"sha1": "1f3ab2ad87b243660f8edfa5eb1b12515b1db2eb",
"oa_license": null,
"oa_url": "https://doi.org/10.12691/jfe-9-3-1",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "88621e478e3d097fed303da11c1a382155be4467",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
218908543 | pes2o/s2orc | v3-fos-license | Use of web-based game in neonatal resuscitation - is it effective?
Background Knowledge and skills decline within months post simulation-based training in neonatal resuscitation. To empower ‘Millennial’ learners to take control of their own learning, a single-player, unguided web-based Neonatal Resuscitation Game was designed. The present study investigates the effectiveness of the game on retention of resuscitation knowledge and skills. Methods The study evaluated 162 healthcare professionals who attended simulation-based training in neonatal resuscitation. Following standard simulation-based training, participants were assigned to either a gaming group (Gamers) with access to the web-based Neonatal Resuscitation Game or a control group (Controls) with no access to the game. Although Gamers were given access, game utilization was completely voluntary and at will. Some Gamers chose to utilize the web-based game (Players) and others did not (Non-players). Knowledge and skills in neonatal resuscitation were assessed upon completion of training and 6 months post-training using a multiple-choice question test and a manikin-based skills test. Changes in scores were compared statistically between Gamers vs Controls, Players vs Controls, and Players vs Controls + Non-players using two-sample t-tests. Results At the final assessment, declines in knowledge scores were seen in all groups. Mean change from baseline in knowledge and skill performance scores at 6 months, adjusted for baseline skill performance and MCQ test scores, did not differ significantly between Players vs Controls and Players vs Controls + Non-players. Conclusion The web-based game in its current format may not be effective in facilitating retention of knowledge and technical skills in neonatal resuscitation.
Background
The first minute after the birth of a newborn, also termed the golden minute, is a period of anxiety for parents and healthcare providers. The newborn undergoes rapid, significant physiological changes to adapt to the extra-uterine environment [1]. Most newborns go through the transition with minimal or no assistance and initiate spontaneous respiration within 10-30 s of birth. However, approximately 10% require drying and stimulation, 3% initiate respiration after positive pressure ventilation (PPV), 2% require intubation to support respiratory function and 0.1% require chest compressions and/or adrenalin [2][3][4]. Failure to provide the necessary assistance at this critical time could result in birth asphyxia, with long-term complications and even death [5].
Access to neonatal resuscitation is essential in all clinical settings where babies are born and cared for. Although neonatal resuscitation is inexpensive and cost-effective, studies indicate that the need for it may not be recognised or anticipated and hence not initiated, or that the methods used are inadequate or wrong [6]. Training programs in neonatal resuscitation have shown positive changes toward anticipating and preparing for neonatal resuscitation and improved resuscitation performance [7,8]. The introduction of standardised formal neonatal resuscitation training programs has resulted in reduced early neonatal mortality in low- and middle-income countries [9]. However, knowledge and skills acquired deteriorate with time, especially when the learning attained is not used or exercised for extended periods of time [10]. Mosley and Shaw [11] report deterioration of skills within 3 to 5 months post attendance at a newborn life support course.
The challenge in resuscitation retraining is to find an effective and efficient way to provide practical, hands-on experience to ensure that essential knowledge and skills are sustained once attained by an individual and/or team. Studies of computer technology-based simulation, in particular games, have shown acceleration of learning, increases in motivation and support for the development of higher-order cognitive thinking skills [12,13]. However, the role of computer-simulated games supporting self-practice in neonatal resuscitation has not been studied. Computer-based simulated games allow learning by trial and error in a risk-free environment, while maintaining a high level of realism [14].
'Serious games' are defined by Bergeron [15] as "interactive computer applications, with or without significant hardware components, created for the purpose of imparting knowledge or skills (such as procedural techniques, decision-making, problem-solving etc.), and which incorporate an element of scoring as well as challenging goals and engaging design" (p.122). This instructional design is known to enjoy the same advantages as simulation in that it enhances patient safety, is adaptable to specific learning objectives, standardises training at reduced operating costs and has wide accessibility [16]. Through text, sounds and pictures incorporated into the game, learning occurs as the learner reflects on an outcome relative to decisions made, performs abstract conceptualization and participates in active experimentation of complex subject matter [13,17,18]. Learning is reinforced with gameplay repetition as learners develop mental reasoning skills to solve unprecedented problems [19].
The opportunity to rehearse using a web-based game can potentially improve patient safety without involving the use of high-fidelity manikins and expensive simulation laboratory facilities that increase the cost of training. Evidence of possible benefits of game-based learning in healthcare education and the lack of relevant games in neonatal resuscitation to facilitate self-practice in advanced neonatal resuscitation led us to develop a single-user, unguided web-based game in neonatal resuscitation. An evaluation was performed to assess the game's effectiveness and suitability with respect to its designated purpose and application. By measuring learning outcomes, improvements in neonatal resuscitation knowledge and skills would assure users of game viability and convince learners of the value of a web-based game as a medium for self-practice. Educators and game developers may be guided by the results of outcome measures in justifying and recommending web-based games as an effective training tool. This paper reports the findings of a study on the effectiveness of a web-based game as an aid for retention of knowledge and technical skills in neonatal resuscitation subsequent to 'standard' simulation-based training in neonatal resuscitation.
Methods
The study is considered an educational audit and hence exempted from ethical review by the SingHealth Centralized Institutional Review Board (CIRB). Given the lack of prior data, a 'medium' effect size of 0.5 was targeted as a clinically meaningful difference between the comparative groups. A sample size of 64 per group was used, based on a two-sample t-test, to detect an effect size of 0.5 with 80% power at α = 0.05 [20].
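For context, the stated sample size can be reproduced with a standard power calculation for a two-sample t-test; the snippet below is an illustrative check using statsmodels rather than the reference [20] used by the authors.

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample t-test: solve for n per group at effect size 0.5, alpha 0.05, power 0.80
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(round(n_per_group))   # approximately 64, matching the study's target of 64 per group
```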
A. Participant Selection and Intervention
2) Interventions: Gamers received access to the web-based simulation game 3 months after completion of structured training. The timeframe was decided based on a report by Mosley & Shaw [11], which showed that knowledge and skills decline as early as 3 months post-training. A user-specific password was emailed to the participants in the experimental group (Gamers) and reminders to access the game were sent monthly via email. A revised edition of the game is available at https://resuscitation.i-maginary.eu/ and details of the web-based game product are attached in Additional file 1: Appendix I.
1) Outcome Measurements and Assessment Process:
A study coordinator uninvolved in the data analysis monitored the assignment, frequency and duration of each gameplay and assisted in scheduling the follow-up assessments. Participants were not informed of the exact date of the follow-up assessment at enrolment. Email communication was made with the Directors of the Clinical Departments and the Nursing Division to excuse the respective study participants from their shifts. To minimise disruption and inconvenience to all participants, arrangements were made in advance of the targeted test date(s). At assessments, participants completed a questionnaire detailing demographics, history of prior experience in resuscitation, and additional learning in resuscitation between training and evaluation.
A separate team of neonatal resuscitation workshop instructors not involved in the study methodology conducted the assessments. Knowledge in resuscitation was measured at two time-points using two different sets of multiple-choice questions: baseline, performed immediately post-training; and final assessment, conducted 6 months post-training. The scores from sixty MCQs were quantified and compared as a percentage of the total.
Evaluators blinded to group allocation attended standardization training on the steps, conduct and grading of the skill test prior to the start of the evaluation process. To evaluate the technical skills of neonatal resuscitation, participants were presented with a standardised scenario involving a term baby, and performance in three basic key skills: 1) initial steps of resuscitation inclusive of airway manoeuvres, 2) bag-mask ventilation, and 3) chest compression was assessed using the Laerdal newborn task-trainer manikin (Laerdal Medical, Texas, USA). Baseline and final manikin-based skill tests were performed using different scenarios involving term infants. A checklist (Additional file 1: Appendixes IIa and IIb) was used to grade participants across the criteria required for the performance of key technical skills. The choice of appropriate equipment and the correct use of equipment were scored on a scale of 0 to 2. An incorrect choice or a persistently wrong application was scored as 0, and a correct choice or a consistently correct application received a score of 2. Tests were administered individually and in private by an evaluator.
2) Data Analysis: The baseline and final MCQ scores and skill performance scores of Controls were collated and contrasted against those of Gamers and Players. Knowledge and skill retention was measured by the differences between the mean percentage point changes in MCQ scores and skill performance test scores obtained at the baseline and final assessments. Per-protocol analysis was performed to provide an estimate of the efficacy of the web-based game intervention. In addition, an 'as-assigned' analysis comparing Players against pooled Controls and Non-players was performed.
Continuous variables were summarised as mean and standard deviation, or median and range where the data are skewed, and categorical variables as frequencies and percentages. In univariate analysis, mean or median values of continuous participant characteristics, MCQ and skill performance test scores were compared between study groups using a two-sample t-test, and proportions were compared using Fisher's exact test. Differences in MCQ and skill assessment scores were compared between Gamers and Controls after adjustment for baseline MCQ and skill performance test scores. Only participants who completed assessments at baseline and final testing were included in the final analyses. Statistical significance was set at p < 0.05. All analyses were performed using SAS v9.4 [21].
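The analyses were run in SAS v9.4; purely for illustration, the two comparisons named above can be sketched in Python with SciPy. The arrays and the 2x2 table below are made-up placeholder values, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical score changes (baseline to 6 months) for two groups
controls = np.array([-4.1, -2.3, -5.0, -1.2, -3.8])
gamers   = np.array([-6.2, -3.9, -7.1, -2.5, -5.4])

# Two-sample t-test on mean change in scores
t_stat, p_val = stats.ttest_ind(controls, gamers)

# Fisher's exact test on a 2x2 table (e.g., pass/fail counts by group)
table = [[30, 23], [20, 30]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(t_stat, p_val, odds_ratio, p_fisher)
```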
A. Study Enrolment
In all, 162 healthcare professionals attended the course over the period October 2016 to July 2017. Upon completion of the workshop training, 143 (88.3%) of course participants (74 controls, 69 gamers) consented to participate in the study and completed the baseline assessment. Fifty-three (71.6%) controls and 50 (72.5%) gamers completed the final full tests at 6 months ±2 weeks post-training. Of the 50 experimental participants, 27 (54%) visited the game website, of whom only 16 (32%) attempted the gameplay (Fig. 1). Declines were observed in the skill performance scores for all study groups (Fig. 2). The most significant decline was observed for Players (p = 0.016) while Controls and Non-players showed smaller, non-significant decline with p = 0.079 and p = 0.449, respectively (Fig. 2). Chest compression exhibited a statistically significant decline even among Players (p = 0.012).
1) Control vs intervention based on group assignment (Controls vs Gamers, and Controls + Non-players vs Players)
E. Efficacy and effectiveness of web-based game on retention of knowledge and technical skills: Per-protocol analysis measures the efficacy of the web-based game on retention of knowledge and technical skills in neonatal resuscitation. The mean knowledge test scores for Players compared with Controls are reported in Table 2. Adjusting for baseline skill levels, the decline in total skill and sub-skill performance scores from baseline to the final assessment remained greater for Players compared with Controls (Table 2). The per-protocol analysis reflects a decline in the technical skills of resuscitation in the Players, but bias due to the exclusion of participants who did not access gameplay cannot be ruled out. The 'as-assigned' analysis comparing Players vs Controls + Non-players yielded similar changes in knowledge, total skill and sub-skill performance from baseline to 6 months post-training (Table 3).
Modified intention-to-treat (ITT) analysis, including study participants who completed both the baseline and final tests, showed that the use of the web-based game resulted in no significant differences between the Gamers (Players and Non-players) and Controls in the mean knowledge test scores and technical skill performance scores, both total and sub-skills, at the final assessment (Table 4). After adjusting for the baseline skill performance levels of participants, the change in total skill and sub-skill performance from baseline to 6 months post-training exhibited no statistically significant differences between Controls and Gamers.
Discussion
Our study demonstrated significant declines in both knowledge and technical skills at 6 months following a structured neonatal resuscitation course attended by participants assigned to the control and web-based game intervention groups. In the control group, it was observed that technical skills encompassing total performance and sub-skills exhibited greater declines on average (-3.1 to -6.5 percentage points) than theoretical knowledge (-2.6 percentage points) 6 months after standard simulation-based training. Dunn et al. [7] compared knowledge and skill levels in 166 nurses at initiation and after 6 months. At the 6-month follow-up, none of the participants in either group, 'controls' or 'trained', passed the skill test, while 85% of the 'trained' group obtained a passing mark of 80% on MCQ tests, compared to 23% of the controls, suggesting faster deterioration in skills than in knowledge.
Our findings of diminished retention of knowledge and technical skills in all study groups over time are consistent with the literature. Although all studies differed with respect to the targeted population, the format of initial neonatal resuscitation training, and in measurements and analytical methods on outcomes studied, retention of knowledge and skills exhibited similar patterns of decline within 3 to 6 months after attendance of structured formal training. Differences in degree of decline in retention among studies can be explained by variation in study populations, differences in clinical practice among study populations post-training, and variation in criteria used at testing.
A. Effect of web-based game on retention of knowledge
In our study, knowledge in all groups declined over the 6-month study period. Despite higher final mean knowledge scores for Players compared to Controls (p = 0.072), and for Players compared to Controls + Non-players (p = 0.046), the changes in mean knowledge scores from baseline to the final evaluation for Players compared with (i) Controls and (ii) Controls + Non-players were not statistically significant. There are no comparable serious games that teach advanced neonatal resuscitation. Video games and VR games developed to reinforce theoretical knowledge and basic neonatal resuscitation skills have evaluated learners' attitudes toward the game, but reports on educational and clinical outcomes are lacking. Of 14 nursing students enrolled in e-baby, a computer-based game designed to teach the assessment of oxygenation in preterm babies, for 2 weeks, 57% reported the game to be easy to use and 72% were motivated to use it for learning [22,23]. Umoren, Bucher, Purkayastha, Kshatriya, & Avanigadda [24] evaluated the use of Electronic Helping Babies Breathe (eHBB), a computer game developed to practise knowledge in neonatal resuscitation and skills in administering PPV, in 24 healthcare workers in resource-limited settings and reported that the game was educational, easy to use and enabled 'learning without stress'. Umoren et al. [25] evaluated the use of NRP e-SIM, a simulator game developed to reinforce cognitive skills and the steps of neonatal resuscitation before a hands-on NRP course [26,27], in a randomised trial involving 255 NRP providers. Findings showed better accuracy in the performance of several steps of the NRP algorithm in the group randomised to pre-NRP course preparation with e-SIM + NRP textbook compared to the textbook-only group. However, there was no difference in the time needed to perform key NRP steps, inclusive of time to start PPV and chest compressions. The effect of repeated e-SIM practice on knowledge and/or skill retention was not addressed in the study.
However, studies on the associations between hours of simulation-based practice in paediatric cardiopulmonary resuscitation training and learning outcomes indicate a dose-response relationship, with more practice producing higher outcome gains and minor variation among learners in the time required to achieve a level of mastery [28][29][30]. Findings showed that the percentage of healthcare provider performance rated 'excellent' increased from 26 to 65% with the use of low-dose, high-frequency booster training [29]. A study that compared 6 min of monthly practice on a voice advisory manikin to no practice showed that monthly practice resulted in improved skills over baseline [30]. It is also unknown whether repeated tests over a longer duration of gameplay would have a longer-term impact.
The modified ITT analysis comparing final assessment scores showed no difference in the change in mean knowledge score from baseline to 6 months for either Controls or Gamers (Players and Non-players), suggesting that the web-based game, as presented to the study participants, may not be effective in motivating self-practice or increasing knowledge retention in neonatal resuscitation.
B. Effect of web-based game on Skill Performance
In this study, the total skill performance and performance at chest compression were statistically significantly lower at the final test for Players compared to Controls and compared to Controls + Non-players. Modified ITT analyses showed similar patterns of decline in performance scores in both total and sub-scales in both the Control and Gamers groups, with the larger decline from baseline to final test in the Players sub-group (p = NS). These tests were conducted using low-fidelity manikins; hence, it may be more difficult for the participants to suspend disbelief compared to a real-life situation. However, both the baseline and final tests were done using similar manikins. A study by Campbell, Barozzino, Farrugia, & Sgro [31] that compared the use of low- and high-fidelity manikins on knowledge and skill acquisition pre- and post-training showed a greater but statistically non-significant improvement in written test scores and shorter times to intubation in the group trained using the high-fidelity manikin. Hence, it remains unclear whether the use of a low-fidelity manikin influences scores in this study.
There is no comparable game in neonatal resuscitation, but a report on the use of MicroSim, a computer game that simulates medical emergencies and procedures designed to support retraining in Advanced Life Support, showed no impact on retention of skills [32]. In the study, 20% of participants elected not to use the intervention program during the one-year study, and of those who accessed the MicroSim program, less than 50% of users completed it. Study participants reported lack of social interaction and lack of motivation to use the tool as reasons for the absence of a significant effect with the use of MicroSim.
Social interaction during training facilitates learning and sharing of resources [33], provides opportunities for learners to draw on their peers to fill gaps in their own knowledge [34], provides learners with emotional support, and makes learning more active and engaging [35,36]. Our single-user, unguided web-based game provides no opportunity for collaborative interaction and hence lacks this source of extrinsic motivation. Motivation has several effects on learning and learner behaviour. A motivated learner is more likely to initiate and persevere at a task, increasing effort for activities related to their goals [37] and achieving better performance. Elements of fun, challenge, feedback, and rewards embedded in a web-based neonatal resuscitation game can entice and sustain users at gameplay repeatedly and for prolonged sessions, leading to better performance. The significant decline in mean percentage scores in the total and sub-skill performance tests among participants who practiced using the web-based game suggests that not only is retention absent, but exposure to gameplay may have resulted in negative transfer of skills and hence interfered with skill performance at retest [38]. Negative transfer occurs when previous learning or experience inhibits or interferes with performance in a new context [39]. This may occur where there are differences or discontinuities in the spatial location and timing of movements between practice in the web-based game scenario and the 'real' scenario, leading game users to modify their performance in a negative way. In addition, if game users have made a connection between skills viewed in the game and those skills differ from the skills required in real life, the result will be negative transfer. Both factors can occur when games are used to train skills in resuscitation, and they may account for the inappropriate transfer of skills associated with gameplay. In our study, the negative transfer noted in the small cohort of 16 participants raises concern, but in the absence of statistical significance the results cannot be generalised. Further investigation involving a larger sample size is needed.
C. Study Limitations and Strengths
The use of controls, standardised MCQs and manikin-based tests, blinding of evaluators, and standardisation of skill evaluation among evaluators strengthen the scientific and statistical validity of our study.
Although the demographics of the study groups are similar, unrecognised differences in clinical practice and variations in exposures to sentinel events involving newborn resuscitations among study participants between the two time-points (i.e. baseline and final evaluations) may occur leading to change in resuscitation practices, hence confounding outcomes measured.
The high drop-out rate of 28% in all study groups at final evaluation led to a reduction in sample size. With only 23% of the enrolled cohort accessing the game, the power of the study declined leading to risk of a type 2 error. Next, measuring the learning outcomes at a single time-point of 6 months post-training further restricted the observation on skill performance and MCQ tests. In addition, the MCQ tests and the checklist used were not formally validated. Factors such as learner motivation to train using this approach, variation in entry levels of participant skill in the use of computer-based technology and resource use that may interfere with the outcomes measured were also not evaluated in our study.
Conclusions
In summary, this study reaffirms the results of previous studies that knowledge and technical skills associated with neonatal resuscitation decline 6 months post-training; hence, practice and more frequent retraining are necessary to maintain knowledge and technical skills in neonatal resuscitation. The evaluation suggests that the game in its current format, made available to learners on a voluntary basis 3 months after the standard simulation-based training, may not be effective in supporting self-practice or in aiding retention of knowledge and technical skills in neonatal resuscitation. However, given the relatively low statistical power and internal validity concerns of the study, the statistical evidence supporting conclusions about the relationships between use of the web-based game and its effect on knowledge and skill retention is weak [40]. More evidence is needed for a complete evaluation of game effectiveness. | 2020-05-28T09:13:57.508Z | 2020-05-26T00:00:00.000 | {
"year": 2020,
"sha1": "30bfbef764e9acd8549f336d084bad3dbc470b35",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-020-02078-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c75c8b4a2a6978f8b31ec99eca27c517a274d77",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
265034561 | pes2o/s2orc | v3-fos-license | Multi-region calcium imaging in freely behaving mice with ultra-compact head-mounted fluorescence microscopes
ABSTRACT To investigate the circuit-level neural mechanisms of behavior, simultaneous imaging of neuronal activity in multiple cortical and subcortical regions is highly desired. Miniature head-mounted microscopes offer the capability of calcium imaging in freely behaving animals. However, implanting multiple microscopes on a mouse brain remains challenging due to space constraints and the cumbersome weight of the equipment. Here, we present TINIscope, a Tightly Integrated Neuronal Imaging microscope optimized for electronic and opto-mechanical design. With its compact and lightweight design of 0.43 g, TINIscope enables unprecedented simultaneous imaging of behavior-relevant activity in up to four brain regions in mice. Proof-of-concept experiments with TINIscope recorded over 1000 neurons in four hippocampal subregions and revealed concurrent activity patterns spanning across these regions. Moreover, we explored potential multi-modal experimental designs by integrating additional modules for optogenetics, electrical stimulation or local field potential recordings. Overall, TINIscope represents a timely and indispensable tool for studying the brain-wide interregional coordination that underlies unrestrained behaviors.
INTRODUCTION
A fundamental goal of neuroscience is to understand the neural basis of behavior and cognition, which involves the coordinated activity of multiple brain structures distributed across cortical and subcortical areas [1-3]. Several studies have leveraged multi-site electrophysiological recordings to reveal the brain-wide circuit-level interactions underlying complex brain functions [4-7]. Compared to electrophysiological approaches, calcium imaging has indispensable advantages in research requiring cell-type specificities, precise spatial positions, or long-term tracking of individual neurons. However, due to the technical constraints of imaging systems, most multi-region calcium imaging data were collected in the superficial brain areas of head-stabilized animals [8-11], limiting the applicability of certain behavior assays or the exploration of targeted deep brain regions. Thus, it is highly desirable to have techniques capable of imaging multiple cortical and subcortical regions spanning different depths (i.e. thalamus, hippocampus, and cortex) simultaneously during unrestrained behaviors, especially in mice [12].
Prior investigations have exploited flexible optical fibers to conduct light into and out of the deep brain, enabling the recording of neural dynamics while ensuring unrestricted animal behavior [13-15]. This fiber-based recording approach can be easily extended to multiple implantations for multisite recording [16,17], although the collected signals are averaged from many neurons in a volume. In addition, dense coherent optical fibers can be bundled for cellular-resolution imaging [13,14], and multi-region neuronal recording is possible when multiple fiber bundles are implanted [18]. However, these fiber-optic microscopes hold several inherent drawbacks that hinder their application, such as limited spatial resolution, small field of view (FOV), low transfer efficiency of fluorescence, and unpleasant susceptibility to animal motion [19]. Moreover, a complicated, well-designed commutator is needed to relieve torsional strain within the bundle as the mouse moves [14]. Miniature head-mounted fluorescence microscopy is another class of imaging modality supporting cellular-resolution recording of neural activity in freely moving animals [20-22]. In contrast to the fiber-optic microscope, it integrates all its optical components within a small lightweight housing carried by the animal. This design yields superior performance in terms of optical sensitivity, field of view, attainable resolution, mechanical flexibility, cost and portability [19]. The past decade has witnessed great advances in exploring the cellular dynamics underlying behavior, cognition and sensation [21,23,24]. To enable multi-area recording, several works have customized designs to achieve large FOVs [25-29]. However, this strategy is mainly suitable for superficial cortical areas because implanting large GRIN lenses into subcortical areas may cause severe brain damage. Since the targeted regions are usually spatially distant and at different depths, simply increasing the FOV is not flexible enough to handle all experimental scenarios. Thus, a more practical solution is to image each brain region with a separate head-mounted microscope. However, the implantation of more than two microscopes on the mouse brain remains challenging due to constraints in available head space and the significant weight of the equipment [30,31]. Recently, an optimized design reduced the weight from ∼1.9 g to 1.0 g [27], but the device size is still not ready for multiple-region recording on small animals such as mice and songbirds.
Here, we report the development of an open-source ultra-compact head-mounted microscope, named TINIscope, that has substantially reduced size and weight (0.43 g, Table 1), achieved through extreme optimization in optical, electrical and mechanical designs. As standalone equipment, TINIscope reaches a new level of miniaturization in head-mounted microscopes, reducing the burden added to smaller or developing animals like juvenile songbirds. Based on the TINIscope design, we further systematically developed an experimental paradigm, including multi-site implantations and a commutator for untangling, for multi-region calcium imaging in freely behaving mice. In proof-of-concept experiments, we achieved simultaneous neural activity recording of over 1000 neurons in four subregions of the hippocampus in freely moving mice and extracted neural activity with spatially modulated properties in these regions. A detailed analysis of recorded neural activity revealed clustered neuronal populations spanning all four subregions. The compact design of TINIscope enables a flexible combination with both optogenetic and electrophysiological tools to achieve a more versatile experimental design. In our experiments, we simultaneously collected individual neuronal activities in four hippocampal subregions in response to optogenetic or electrical stimulations in the anterior cingulate cortex (ACC). In addition, we jointly recorded calcium signals and local field potentials (LFPs) in each hippocampal subregion and analyzed population activity patterns concurrent with ripple onsets. These initial efforts suggest that TINIscope can be applied to recording neural activity in multiple brain regions at the single-cell level as well as integrating with a variety of techniques to become a multifunctional tool for exploring neural mechanisms.
Design of the TINIscope system for multi-region recording
The key component of our multi-region recording system is the ultra-compact TINIscope mounted on the animal's head. It uses a GRIN lens as its objective and a short-pass dichroic mirror to reflect emitted fluorescent light to an image sensor located on the side of the 3D-printed main body of the device (Fig. 1A, Supplementary Fig. S1). The fundamental optical pathway of TINIscope is the same as that of classical head-mounted microscopes [19,21], whereas essential modifications were made to enable multi-region imaging with three design principles: compatibility with multiple implantations, compactness and lightness. First, the design of TINIscope deviates from the conventional setup, with the placement of the CMOS sensor on the side and LED excitation on the top (Fig. 1A, Supplementary Fig. S1). In the conventional setup, accommodating multiple pieces of equipment on the head of the mouse, especially in closely located areas, is challenging due to the spatially conflicting large CMOS sensors in the upright direction. Since the LED excitation pathway is much smaller, flipping the excitation and emission pathways can alleviate this issue. Thus, we kept the illumination arm straight while using the dichroic mirror to reflect emission fluorescence to the side-placed CMOS imaging sensor. When four TINIscopes are implanted on the mouse brain, we can always rotate them to spatially distribute their CMOS imaging sensors. In practice, we designed a virtual simulation pipeline for arranging equipment to guide our surgical implantation (Supplementary Note 1).
Second, TINIscope incorporates an ultra-compact light pathway to match the selected smaller LED device (1.3 mm × 1.7 mm) and CMOS imaging sensor (pixel size of 3 μm). In the illumination arm, a half-ball lens (HL) with a diameter of 2 mm is used to collect excitation light from the LED. This design not only matches the small LED size but also allows for a shorter distance between the HL and the GRIN lens (GL), resulting in a further reduction in equipment size. Zemax optical modeling showed that an HL-GL distance of 6 mm can illuminate a 0.63 mm-diameter area (Supplementary Fig. S1A). Additionally, TINIscope employs CMOS sensors with a small pixel size (3 μm), allowing a shortened pathway with a smaller magnification factor (∼2.4) that can achieve comparable spatial resolution and a reasonably smaller FOV (∼450 μm × 450 μm) compared to those of existing systems (Supplementary Fig. S1B-S1G). Third, the image sensor directly transmits serialized data without utilizing a serializer chip, which was a major component in previous head-mounted scopes, thereby simplifying the peripheral circuit of the PCB (Fig. 1B, Supplementary Fig. S2). Furthermore, other components, such as the power supply chip and oscillator, were removed from the circuit, leaving only essential capacitors and one resistor on the board (Fig. 1B right and Supplementary Fig. S2A, B). A customized HDI rigid-flex PCB was subsequently developed to further reduce the size of the head-mounted PCB and enable flexible connectivity to the DAQ board (Supplementary Note 2). Following this, mechanical modifications were implemented to enhance the compatibility of the head-mounted microscope with multi-site implantations (Fig. 1C, D and Supplementary Note 3).
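As a rough check of how the numbers quoted above fit together, the stated pixel size, magnification and field of view can be related with simple arithmetic. The sketch below is only a back-of-the-envelope illustration; the sensor pixel count and the exact optical layout are not given here, so nothing beyond the quoted values should be read into it.

```python
# Back-of-the-envelope check of the TINIscope optical parameters quoted above.
# The three input values are taken directly from the text; everything derived
# from them is illustrative only.

pixel_size_um = 3.0    # CMOS pixel size quoted in the text
magnification = 2.4    # approximate magnification quoted in the text
fov_sample_um = 450.0  # field of view at the sample plane (one side)

# Effective pixel pitch projected back to the sample plane
sample_pixel_um = pixel_size_um / magnification               # ~1.25 um per pixel

# Number of sensor pixels needed to span the stated FOV
pixels_across_fov = fov_sample_um * magnification / pixel_size_um   # ~360 pixels

print(f"Sample-plane pixel pitch: {sample_pixel_um:.2f} um")
print(f"Pixels spanning a {fov_sample_um:.0f} um FOV: {pixels_across_fov:.0f}")
```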
In TINIscope, multiple image sensors are connected to the data acquisition (DAQ) board through flexible PCB cables. To mitigate the possibility of entanglement caused by the motion of the mouse, a commutator was developed (Fig. 1E). The commutator employs a stepper motor to rotate an electrical slip ring and unravel any twisted cables according to their shapes monitored by a camera (Supplementary Fig. S4).
With such a configuration, the head-mounted module of TINIscope weighs only 0.43 g, which is substantially smaller and lighter than all other existing devices while still preserving image quality (Fig. 1F, Table 1 and Supplementary Video S1). Meanwhile, our quantitative comparisons substantiated that the installation of multiple TINIscopes had no notable impact on the mobility of mice (Supplementary Fig. S5). Furthermore, multisession studies (days 1, 3, and 5) verified the feasibility of imaging and registering the same population of neurons with TINIscope (Supplementary Fig. S6). This compact design makes it possible to implant up to four devices into a mouse brain (Fig. 1F, Supplementary Fig. S3D).
Recording of four hippocampal subregions with TINIscope
The hippocampus is an essential brain area involved in spatial navigation, learning and memory, and mental disorders. Subregions of the hippocampus have been shown to be functionally and anatomically distinct. We therefore conducted proof-of-concept experiments to investigate hippocampal neural coding in freely behaving mice with TINIscope, aiming to validate the feasibility of simultaneous four-region recording in a mouse (Fig. 2A). Four subregions of the bilateral hippocampi, including the right intermediate hippocampus (RiHP), right dorsal hippocampus (RdHP), left dorsal hippocampus (LdHP), and left intermediate hippocampus (LiHP), of 8-week-old mice were selected for infection with an adeno-associated virus (AAV2/9-hSyn-GCaMP6s) to express the calcium-sensitive fluorescent protein GCaMP6s. Three weeks later, a GRIN lens was implanted in each of these regions. Viral infection and lens implantation were later verified through examination of brain slices (Fig. 2B). The subsequent experiments presented in the main text employed the same recording sites and basic experimental procedures.
After two weeks of recovery and adaptation, simultaneous cellular-resolution imaging of neural activities was conducted using four separate TINIscopes while the mouse was moving freely in various behavioral paradigms (Fig. 2F, I and Supplementary Video S2). Subsequently, the CNMF-e [32] method was employed to identify neurons and extract their temporal traces from the raw videos (Fig. 2C, D and Supplementary Video S2). The field of view (FOV) of each region was ∼450 μm × 450 μm and contained a few hundred neurons (Fig. 2E, n = 20 sessions from 6 mice; the minimum numbers of recorded neurons in RiHP, RdHP, LdHP, and LiHP were 137, 121, 249, and 208, respectively, while the maximum numbers were 868, 1030, 1046, and 962, respectively).
To elucidate the spatial encoding properties of the hippocampus, experiments were conducted in a T-maze and a modified open field arena. During the T-maze test, a water-deprived mouse was required to enter the starting zone located at the end of the vertical arm to initiate a trial. Upon entering the starting zone, an LED would be illuminated, and one of the two ends of the horizontal arms would be randomly designated as the available option for water. The mouse was then allowed to freely explore the T-maze until it obtained a reward, which signified the completion of a trial (Fig. 2F). We employed mutual information analysis to identify neurons exhibiting spatial tuning. All recorded regions contained spatially tuned neurons with a proportion of approximately 1/5. Figure 2F-H shows a mouse with 22/167, 49/231, 41/259 and 83/340 spatially tuned neurons in the RiHP, RdHP, LdHP, and LiHP, respectively.
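The mutual information analysis used to flag spatially tuned neurons is not spelled out in detail above, so the sketch below illustrates one common discretized approach: bin the arena, compute the mutual information between binarized activity and the occupied spatial bin, and compare it against circularly shifted (time-shuffled) traces. The binarization rule, bin count, number of shuffles and significance threshold are all assumptions, not the authors' exact choices.

```python
import numpy as np

def spatial_mutual_information(activity, positions, n_bins=10):
    """Discretized MI (bits) between binarized activity and binned 2D position.
    activity: (T,) calcium trace; positions: (T, 2) x/y coordinates."""
    active = (activity > activity.mean() + activity.std()).astype(int)
    x_edges = np.linspace(positions[:, 0].min(), positions[:, 0].max(), n_bins)
    y_edges = np.linspace(positions[:, 1].min(), positions[:, 1].max(), n_bins)
    pos_state = np.digitize(positions[:, 0], x_edges) * (n_bins + 2) + np.digitize(positions[:, 1], y_edges)
    mi = 0.0
    for s in np.unique(pos_state):
        p_s = np.mean(pos_state == s)
        for a in (0, 1):
            p_sa = np.mean((pos_state == s) & (active == a))
            p_a = np.mean(active == a)
            if p_sa > 0:
                mi += p_sa * np.log2(p_sa / (p_s * p_a))
    return mi

def is_spatially_tuned(activity, positions, n_shuffles=500, alpha=0.05, rng=None):
    """Call a neuron spatially tuned if its MI exceeds the (1 - alpha) quantile of
    MI values obtained from circularly shifted (time-shuffled) copies of its trace."""
    rng = np.random.default_rng(rng)
    observed = spatial_mutual_information(activity, positions)
    null = [spatial_mutual_information(np.roll(activity, rng.integers(1, len(activity))), positions)
            for _ in range(n_shuffles)]
    return observed > np.quantile(null, 1 - alpha)
```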
During the modified open field test, we introduced steps of varying heights and a small piece of chocolate into the standard open field arena, providing the mouse with a dynamic and changing environment for exploration (Fig. 2I). The mouse, equipped with four recording devices, was able to freely navigate and explore different contexts (Supplementary Video S2), demonstrating that the lightweight TINIscope had no obvious impacts on behavior. This further supports the suitability of TINIscope for recording neural activity across multiple brain regions. Figure 2I-K shows the same mouse with 33/137, 68/318, 46/249, and 112/379 spatially tuned neurons in an open field environment in stages 2 and 3. These results indicate that the hippocampus exhibits widespread encoding of spatial information across subregions.
Multi-modal experiment design with TINIscope
To understand how brain-wide neural activity supports complex behaviors, it is usually necessary to employ multiple complementary modalities for recording or manipulating neuronal activity during behavior. The compact dimensions and lightweight construction of TINIscope facilitate an effortless combination of optical or electrophysiological modules, without incurring significant weight or size constraints (Fig. 3A, Supplementary Videos S3 and S4). To demonstrate this advantage of TINIscope, we first implanted two optical fiber cannulas and four recording devices in the bilateral ACC with ChrimsonR expression (Supplementary Fig. S7A) and four hippocampal subregions with GCaMP6s expression, respectively, to investigate their induced neuronal activity in the hippocampus (Fig. 3B).
We observed an increase in the mean fluorescence of each region, as well as the activity of certain individual neurons following stimulation (Fig. 3 C, D and Supplementary Fig. S7), in agreement with the reported ACC-hippocampal connections [33 ,34 ].
To further validate the neural activity induced by this optogenetic stimulation and to explore the possibility of integrating TINIscope with the electrical stimulation module, we bilaterally implanted electrodes in the ACC and recording devices in four subregions of the hippocampus (Fig. 3E-G, Supplementary Fig. S8). Each stimulation cycle started with 1 second of 30 Hz, 300 μA current stimulation and then rested for 60 seconds. The mean fluorescence of all regions significantly increased after stimulation (Fig. 3G and Supplementary Fig. S8D, E, P < 0.001, Wilcoxon matched-pairs sign rank test). After extracting all neuronal activity, we identified stimulation-responsive neurons by comparing their mean activities 2 seconds before and after stimulation (Fig. 3G). We repeated these experiments on two mice and observed that all imaged regions contained a proportion of neurons that responded to stimulations on either side of the ACC (Fig. 3H). These results suggest that TINIscope is ideal for seamless combination with established brain manipulation methods, enabling comprehensive investigation of functional connectivity within neural circuits.
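The text describes stimulation-responsive neurons as those whose mean activity differs between the 2 s before and after stimulation; a minimal sketch of that comparison is given below. The per-neuron use of a Wilcoxon matched-pairs test (mirroring the test named above for the mean-fluorescence comparison), the frame rate and the significance level are assumptions rather than the authors' exact criteria.

```python
import numpy as np
from scipy.stats import wilcoxon

def stimulation_responsive(traces, stim_frames, fps=30, window_s=2.0, alpha=0.05):
    """Flag neurons whose mean activity in the 2 s after stimulation exceeds the
    2 s before it, across stimulation trials.
    traces: (n_neurons, T) array; stim_frames: iterable of stimulation-onset frames."""
    w = int(window_s * fps)
    responsive = np.zeros(traces.shape[0], dtype=bool)
    for i in range(traces.shape[0]):
        pre = np.array([traces[i, t - w:t].mean() for t in stim_frames])
        post = np.array([traces[i, t:t + w].mean() for t in stim_frames])
        if np.any(post != pre):               # Wilcoxon is undefined for all-zero differences
            _, p = wilcoxon(post, pre, alternative="greater")
            responsive[i] = p < alpha
    return responsive
```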
Due to the slower response of neuronal calcium signals compared to electrical signals, the high-frequency component of neural activity is lost in calcium imaging recording. To overcome these limitations, a viable approach is to simultaneously perform calcium imaging and electrode recording at the same site. To investigate the feasibility of using TINIscope for calcium imaging in conjunction with electrode recordings in the same brain region, we co-implanted the GRIN lens and electrodes in four hippocampal subregions (Fig. 3I). TINIscope provided calcium signals with single-cell resolution, while the electrodes enabled the acquisition of electrical signals with high temporal resolution. With this configuration, we successfully conducted electrophysiological recordings concurrently with TINIscope imaging at all four recording sites (Supplementary Fig. S9A, B). By filtering the LFPs, we were able to reliably detect signal patterns such as sharp-wave ripples (SWRs), a distinct waveform that propagates in the hippocampus [35,36], in all recorded regions (Fig. 3J, Supplementary Fig. S9C, D). In conjunction with the results from TINIscope, we were able to investigate the firing patterns of neurons within multiple regions coinciding with SWRs (Fig. 3K, Supplementary Fig. S9C, E). The spatial organization of these synchronized neurons may yield valuable insights into the formation and propagation of SWRs.
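A minimal sketch of how SWR candidates might be pulled out of the LFP by filtering in the 150-250 Hz band, as described above. The envelope threshold (3 SD) and minimum event duration are illustrative assumptions, not the authors' exact detection criteria.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(150, 250), thresh_sd=3.0, min_dur_s=0.02):
    """Return (start, stop) sample indices of candidate sharp-wave ripples."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, lfp)                  # zero-phase band-pass 150-250 Hz
    envelope = np.abs(hilbert(filtered))            # instantaneous amplitude
    z = (envelope - envelope.mean()) / envelope.std()
    above = z > thresh_sd
    # find contiguous supra-threshold segments
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        stops = np.r_[stops, len(above)]
    min_len = int(min_dur_s * fs)
    return [(s, e) for s, e in zip(starts, stops) if e - s >= min_len]
```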
Decoding mouse position from extracted neuronal activity
In our previous experiments, we observed a substantial amount of spatially modulated neurons in the recorded hippocampal subregions. Thus, we further explored how spatial information is distributed in those regions. To address this inquiry, we trained a machine learning model to decode mouse locations from neuronal activities [37]. The LSTM algorithm with specific hyperparameters (units = 200, dropout = 0.25, number of training epochs = 10) was selected because it has empirically higher decoding accuracy for hippocampus data. Prior to decoding, the extracted calcium traces were temporally aligned with mouse locations, and silent periods at the start or end of trials were manually removed. The rest of the data were equally split into 10 folds, with 9 folds used for training an LSTM decoder and the last fold used for calculating the prediction error, following a 10-fold cross-validation procedure (Fig. 4A).
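The decoding pipeline can be sketched as follows, using the hyperparameters quoted above (an LSTM with 200 units, dropout 0.25, 10 training epochs, the previous N = 5 frames of population activity, and 10-fold cross-validation). The deep-learning framework, loss, optimizer and batching below are assumptions; only the quoted hyperparameters come from the text.

```python
import numpy as np
from tensorflow import keras

def make_windows(traces, positions, n_back=5):
    """Stack the previous n_back frames of population activity for each time point.
    traces: (T, n_neurons); positions: (T, 2)."""
    X = np.stack([traces[t - n_back:t] for t in range(n_back, len(traces))])
    y = positions[n_back:]
    return X, y

def build_decoder(n_back, n_neurons):
    model = keras.Sequential([
        keras.layers.Input(shape=(n_back, n_neurons)),
        keras.layers.LSTM(200, dropout=0.25),   # hyperparameters quoted in the text
        keras.layers.Dense(2),                  # predicted x, y position
    ])
    model.compile(optimizer="adam", loss="mse")  # loss/optimizer are assumptions
    return model

def cross_validated_error(traces, positions, n_back=5, n_folds=10, epochs=10):
    X, y = make_windows(traces, positions, n_back)
    fold_idx = np.array_split(np.arange(len(X)), n_folds)
    errors = []
    for test in fold_idx:
        train = np.setdiff1d(np.arange(len(X)), test)
        model = build_decoder(n_back, traces.shape[1])
        model.fit(X[train], y[train], epochs=epochs, verbose=0)
        pred = model.predict(X[test], verbose=0)
        errors.append(np.mean(np.linalg.norm(pred - y[test], axis=1)))
    return np.mean(errors)
```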
By examining the decoding results from a representative fold, we observed that across both T-maze and open field paradigms, the integration of neuronal activities across multiple brain regions resulted in superior prediction of mouse locations compared to utilizing activities from a single brain region alone (Fig. 4B, D and Supplementary Video S5). To determine the chance level of decoding error, the same procedure was performed on location-shuffled data, where the location vector was flipped in time and randomly cycle-shifted by at least 2000 frames. All decoding results exhibited significantly better performance above the chance level (Fig. 4C and E).
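The chance-level estimate described above (time-reversing the position vector and applying a random circular shift of at least 2000 frames before re-running the same decoding procedure) can be sketched as below; the only values taken from the text are the minimum shift and the idea of flipping the vector in time.

```python
import numpy as np

def shuffle_positions(positions, min_shift=2000, rng=None):
    """Time-reverse the position vector and apply a random circular shift of at
    least `min_shift` frames; the shuffled positions are then decoded with the
    same cross-validated procedure to estimate the chance-level error."""
    rng = np.random.default_rng(rng)
    flipped = positions[::-1]
    shift = int(rng.integers(min_shift, len(positions)))
    return np.roll(flipped, shift, axis=0)
```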
During the T-maze exploration, we observed that neuronal activities from a single brain region accurately decoded the mouse's position, with a median prediction error approximately equal to the length of the mouse body. However, during a more random and complex exploration within the open field, relying solely on neuronal activities from a single brain region was no longer sufficient to accurately decode the mouse's position. Instead, the participation of neuronal activities from multiple recorded regions significantly improved the decoding accuracy, enabling a more precise estimation of the mouse's position (n = 10 folds in one mouse, cross-validation, Wilcoxon's rank-sum test, Fig. 4C and E, with median errors of 6.13 ± 0.57 cm in RiHP, 4.94 ± 0.28 cm in RdHP, 4.46 ± 0.26 cm in LdHP, 4.03 ± 0.32 cm in LiHP, and 3.62 ± 0.26 cm with all data in the T-maze test, and 9.41 ± 0.53 cm in RiHP, 7.69 ± 0.70 cm in RdHP, 6.98 ± 0.45 cm in LdHP, 6.95 ± 0.42 cm in LiHP, and 5.84 ± 0.43 cm with all data in the open field test). Overall, these results suggest that the encoded spatial information is distributed in a larger area than the FOV range of a single scope, consistent with previous studies [38].
Multiple-region recording reveals distributed activity patterns
Dynamic activation of neuronal assemblies is considered a key mechanism underlying cognition and behavior [36,39,40]. TINIscope provides a great opportunity for studying behavior-relevant assemblies that span multiple brain regions. Hence, we proceeded to investigate the formation of neuronal assemblies among the recorded neurons and examine their correlation with the spatial information of the mouse.
In the T-maze and open field tests, we analyzed potential assembly dynamics by identifying synchronous calcium events (SCEs), clustering them, and assigning single neurons to putative assemblies (Fig. 5A-D and Supplementary Fig. S10A-C). Assemblies were identified using the procedure described in an earlier hippocampus study [36]. This method first detected SCEs where at least M neurons were active (value of z-scored calcium trace > 1) for more than 500 ms. M was determined by traversing a wide range of candidate values and selecting the one that maximized the number of detected SCEs (note: a small value of M results in longer but fewer SCEs). Once detected, each SCE used a binary vector to indicate the status of all neurons within that SCE. Then, a two-stage robust K-means clustering algorithm (Euclidean distance) was applied to cluster these SCE vectors. In stage 1, 100 conventional K-means runs were applied to find K cluster centers, while in stage 2, these 100 cluster centers were clustered again using K-means clustering, yielding stable cluster centers. The value of K was chosen between 2 and 10, and the one with the best average silhouette value was used. Afterwards, each neuron was defined as participating in an SCE cluster if the neuron showed a significantly higher activation rate (above the 95th percentile of reshuffled data) in the SCEs of that cluster. In this way, each SCE cluster was associated with a cell assembly comprised of neurons participating in that cluster.
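A condensed sketch of the SCE detection and two-stage clustering just described is given below. It follows the stated recipe (z-scored activity > 1 for more than 500 ms with at least M co-active neurons, 100 K-means runs whose centres are re-clustered, K chosen by silhouette score), but the frame rate, the search over candidate values of M, and the final assembly-assignment step (the 95th-percentile participation test) are either assumed or omitted here for brevity.

```python
import numpy as np
from scipy.stats import zscore
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def detect_sces(traces, min_active, fps=30, min_dur_s=0.5):
    """Return a binary matrix (n_SCE, n_neurons): periods longer than min_dur_s in
    which at least `min_active` neurons have z-scored activity above 1.
    traces: (n_neurons, T). `min_active` plays the role of M in the text."""
    active = zscore(traces, axis=1) > 1.0
    above = active.sum(axis=0) >= min_active
    min_len = int(min_dur_s * fps)
    sces, start = [], None
    for t, flag in enumerate(np.r_[above, False]):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_len:
                sces.append(active[:, start:t].any(axis=1).astype(int))
            start = None
    return np.array(sces)

def two_stage_kmeans(sce_vectors, k, n_runs=100, seed=0):
    """Stage 1: 100 K-means runs collect candidate centres; stage 2: re-cluster the
    centres to obtain stable centres, then assign each SCE to the nearest one."""
    centres = np.vstack([KMeans(k, n_init=1, random_state=seed + r).fit(sce_vectors).cluster_centers_
                         for r in range(n_runs)])
    final = KMeans(k, n_init=10, random_state=seed).fit(centres)
    return final.predict(sce_vectors)

def best_k(sce_vectors, k_range=range(2, 11)):
    """Choose K between 2 and 10 by the average silhouette value."""
    scores = {k: silhouette_score(sce_vectors, two_stage_kmeans(sce_vectors, k)) for k in k_range}
    return max(scores, key=scores.get)
```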
SCEs were grouped afterwards based on the assemblies they activated. We proceeded to generate plots of mouse locations corresponding to the occurrences of these neuronal assemblies, without any prior knowledge of their actual locations. In the T-maze task, assemblies 1-3 exhibited a clear preference for specific locations, whereas assembly 4 did not show such a preference (Fig. 5E). Notably, these neuronal assemblies were spatially distributed throughout all the recorded brain regions (Fig. 5F). The assemblies associated with explicit location preference displayed recurring firing sequences, while the other assemblies did not exhibit such recurring patterns (Fig. 5G). Similar results were obtained in the open field experiment (Supplementary Fig. S10D-F), suggesting a brain-wide dynamic population encoding of environmental information.
DISCUSSION
The neuroscience community has an urgent demand for tools that can perform concurrent cellular-resolution calcium imaging in multiple brain regions of freely behaving animals. In this work, we have demonstrated the applicability of TINIscope to meet these requirements, enabling fundamentally new explorations of the brain-spanning coordinated cellular dynamics that underlie sensation, cognition and action. TINIscope represents the lightest and smallest head-mounted miniature microscope by a large margin. In the context of single-region recordings, employment of TINIscope will yield a remarkable decrease in the burden placed on very small and developing animals when compared to other microscopes within its class.
To achieve the goal of multi-region recording, a comprehensive system design is necessary, which involves more than simply reducing the size and weight of the microscope.In this work, considerable efforts have been directed toward optimizing the optical path, housing, and baseplates to enable the spatial arrangement of multiple TINIscopes.Additionally, a virtual simulation workflow was developed to guide our surgical implantation.Our demonstration experiments successfully recorded four closely situated hippocampal subregions with severe spatial limiting issues, thereby indicating support for nearly any combination of four brain regions in mice.To avoid potential entanglements of flexible PCBs, we also developed a customized commutator for our experimental system.Moreover, we developed our data acquisition system and GUI software to streamline data collection.The entire system's design is open source and can be further customized to include additional functionalities such as e-focus, dual-color imaging, in situ optogenetics stimulation, and volumetric imaging [41 ].The technical approaches proposed for existing miniature microscopes, such as μTlens for z-focusing [42 ], dual excitation light sources [43 ], interleaved readout of CMOS sensors, and micro lens array for volumetric imaging [44 ], can be adapted for TINIscope.The primary challenges lie in integrating these techniques into the ultra-compact design of TINIscope and customizing the necessary electronic components to support them.
The investigation of numerous neuroscience questions necessitates the use of multimodal combinations of recording and manipulating tools.Our TINIscope provides a much-needed toolkit for addressing neuroscience questions regarding multiregion interactions during specific brain functions, with its ultra-compact design facilitating integration with other tools.We validated its easy combination with classic tools for optogenetics, electrical stimulation and LFP recordings, providing insights into factors related to imaged calcium activity.The family of head-mounted fluorescence microscopes is rapidly expanding [20 ], with TINIscope being a new member featuring the smallest size and lightest weight.We expect to see future experimental designs combining TINIscope with other members noted for large FOV [25 -28 ] or multi-photon imaging [45 -48 ].
Given the rapid advances in our anatomical knowledge of brain-wide neuronal connectivity [49,50], TINIscope is a timely and valuable tool for enhancing our understanding of functional neuronal connectivity during complex behaviors and cognitive tasks performed in unrestrained conditions. We expect that our technique will help pave the way toward understanding brain circuit function from a more holistic perspective, in conjunction with novel computational methods to extract insights from this wealth of information [2,51-53].
MATERIALS AND METHODS
For detailed materials and methods, please see the supplementary data.All animal experiments were
Figure 1 .
Figure 1.Tightly integrated neuronal imaging fluorescence microscope (TINIscope).(A) Section diagram of TINIscope.BL, blue LED; HL, half-ball lens; ExF, excitation filter; DM, dichroic mirror; GL, GRIN objective lens; CL, convex lens; EmF, emission filter; CC, CMOS camera.(B) Photos of an image sensor with HDI rigid PCB and layout of stacked layers (from top to bottom layer: red, brown, cyan and blue) with blind and buried vias (bicolor).(C) Photo of TINIscope and a nickel shown for scale.(D) Dimensions of TINIscope.(E) Schematic diagram of the experimental system with TINIscope.(F) Top: photo of a mouse with four head-mounted TINIscopes.Bottom: simultaneously recorded images of 4 hippocampal subregions.
Figure 2 .
Figure 2. Simultaneous calcium imaging of 4 hippocampal subregions in different behavioral experiments.(A) Diagram of 4-region recordings in mice.(B) Fluorescence images of brain slices around the four recorded hippocampal subregions.The white dashed lines indicate GRIN lenses.(C) Calcium traces of example neurons from the RiHP (yellow), RdHP (blue), LdHP (magenta), and LiHP (cyan).(D) The spatial contours of example neurons in (C) and the corresponding maximum intensity map (MIP) of the background-subtracted videos.(E) Number of identified neurons in all sessions and their averages ( n = 20 sessions, 6 mice).(F) Paradigm of the T-maze task.The water reward was given at a random end in each trial.Red dot: start point; blue droplet: reward side; gray droplet: non-reward side.(G) The proportion of spatially modulated cells during the T-maze task.(H) Place fields of example spatially modulated cells in the T-maze box.(I-K) same as (F-H) but the mouse was in an open field exploration experiment.The mouse explored freely in an open field with changing environments.
Figure 3 .
Figure 3. Combination of optogenetic/electrophysiological modules and TINIscope.(A) Top: diagram of potential experimental paradigms that combine multiple-TINIscope imaging and other technique modules, as listed in the dashed box (optogenetics, electrical stimulation and electrophysiological recordings).Bottom: photos of mice carrying 4 TINIscopes together with two electrical stimulating electrodes (left) or four extracellular recording electrodes (right).(B) Diagram investigating the ACC-HP circuit by combining optogenetics with 4-region TINIscope imaging.(C) Mean of the background-subtracted fluorescence signals averaged over all left-ACC stimulation trials ( n = 21 trials, 1 mouse, t = 0 indicates the stimulation onset).(D) Normalized activity of example ACC-responsive neurons in different trials (top) and the corresponding mean activity over trials (bottom).The traces were normalized using the estimated noise level and centered around the value at the onset of stimulation (t = 0).(E-G) same as (B-D) but replacing optogenetic stimulation with electrical stimulation ( n = 42 trials, 2 mice).(H) Left: spatial footprints of neurons responding to left (red) or right (green) ACC stimulation, respectively; right: same as the left but with the other mouse.(I) Top: illustration of joint calcium imaging and LFP recording in four hippocampal subregions.Bottom: photo of the electrode-lens complex.(J) Raw LFP and filtered signals (150-250 Hz) when all four regions show SWRs together.Right, spectrograms of LFP signals in four regions.(K) Synchronous calcium traces (left) and their spatial footprints (right) concurrent with the SWR in (J).Shaded areas in (C, D, F, J) correspond to the mean ± s.e.m.
Figure 4 .
Figure 4. Decoding mouse position from extracted neuronal traces.(A) Schematic illustration of machine-learning-based decoders for predicting mouse position from neuronal activity.Bottom right: the extracted calcium traces were split into 10 folds, and one testing fold was sequentially chosen to calculate the decoding error with the decoder trained from the remaining 9 folds.Decoding the position at each time point requires the population neuronal activity of the previous N (here, N = 5) frames.The LSTM network was used as the decoding model in this work.(B) Example of decoded mouse positions (color lines) on testing data and the true position (black lines) in the T-maze experiment.The number of temporal bins in this testing fold is 2670 (bin size = 100 ms).(C) The mean decoding error using different sets of neuronal traces in the T-maze experiment.Gray data points correspond to the chance level decoding errors where the mouse positions were randomly shuffled.*** P < 0.001, Mann-Whitney test, n = 10 folds, 1 mouse.## P < 0.01, # P < 0.05, Wilcoxon matched-pairs sign rank test, n = 10 folds, 1 mouse.The results show the mean ± s.e.m. (D and E) same as (B and C) but the mouse was in the open field experiment (temporal bins = 2059).
Figure 5 .
Figure 5. Neuronal assemblies during the T-maze task. (A) Detection of SCEs from extracted calcium traces. Left: activities of the example neurons were z-scored and thresholded at 1 standard deviation (SD); right, neuronal activity of all neurons participating in an SCE. The vertical lines indicate the onset of an SCE. (B) Raster plot of neuronal participation in all SCEs. (C) Correlation maps of the detected SCEs (left) and the identified neuronal assemblies (right). The SCE IDs were ordered to match the found assemblies. The SCEs to the left column were not involved in any assembly, while the SCEs to the right column were involved in multiple assemblies. (E) The mouse's position when SCEs associated with a specific assembly occurred. (F) The number of neurons in each region (color coded as labels below) and their spatial footprints in different assemblies. (G) Neurons associated with each assembly showed repeated activation sequences. Left is the concatenated calcium traces, as illustrated in the top diagram. Only time intervals within ±10 second windows of all SCEs associated with an example assembly (assembly 3, including neurons above the red line) are preserved. Neurons were grouped by their associations with different assemblies. Within each assembly, neuron IDs were sorted according to their peak intensity time. Right, repeated activation sequences of neurons associated with each assembly.
Table 1 .
Key feature comparison of TINIscope and other miniature fluorescence microscopes. Notes: 1. The specifications were from its website (http://miniscope.org/index.php/Main_Page). 2. The dimensions of miniScope and Featherscope were obtained from their open-source housing files without the assembled PCBs. 3. Data from a specification sheet of the nVista system, Inscopix. | 2023-11-07T14:16:40.751Z | 2023-11-02T00:00:00.000 | {
"year": 2023,
"sha1": "adaed4fd6a9ac9b4c83769747545cace13b26be2",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nsr/advance-article-pdf/doi/10.1093/nsr/nwad294/53629712/nwad294.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "27f219bda72b417e9edd2deb39fc31ae0c14aaee",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
119104446 | pes2o/s2orc | v3-fos-license | b ->d Penguins: CP Violation, General Lower Bounds on the Branching Ratios and Standard Model Tests
With the wealth of new data from the B-factories, b -> d penguin decays become available for study, in addition to their b -> s counterparts that have proven an indispensable tool for the exploration of new-physics effects in flavour physics. A prominent example of the b -> d penguin transitions is $\bar B^0_d \to K^0 \bar K^0$. We show that this decay can be characterized in the Standard Model by a surface in the observable space of the direct and mixing-induced CP asymmetries and the branching ratio. The form of this surface, which is theoretically clean, implies a lower bound for the branching ratio that has recently been confirmed experimentally. If future measurements of the CP asymmetries yield a point away from the SM surface, this would be an interesting signal of new physics. We point out that the hadronic parameters in $\bar B^0_d \to K^0 \bar K^0$ that parameterize the position on the SM surface are related to hadronic parameters in the B -> pi K system. The fact that the branching ratio of $\bar B^0_d \to K^0 \bar K^0$ is very close to its lower bound yields interesting implications for B -> pi K even without knowledge of the CP asymmetries of $\bar B^0_d \to K^0 \bar K^0$. The mechanism that produces the lower bound for $\bar B^0_d \to K^0 \bar K^0$ is actually much more general; we derive lower bounds for various other b -> d penguin-induced processes, including B -> rho gamma and $B^\pm \to K^{(\ast)\pm} K^{(\ast)}$. Some of these theoretical lower bounds are very close to the current experimental upper bounds.
Introduction
Flavour-changing neutral-current (FCNC) processes, possible in the Standard Model (SM) only through loop diagrams, are an extremely important probe for new physics (NP). The good agreement between experiment and theory in processes induced by b → s FCNCs has already put important constraints on physics beyond the SM. Due to the excellent work of the B-factories, we are now entering the era where b → d penguin-induced processes - typically suppressed by a factor of 20 with respect to the corresponding b → s penguin transitions - can be used to test the SM more rigorously than was possible before.
The flavour structure of the SM, more specifically the order of magnitude of the individual elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, allows us to derive certain relationships between different observables in b → d-induced decays, and between b → s- and b → d-related observables. These relationships allow us to test the SM in those cases where the corresponding observables have already been measured and to make predictions where observations are still missing.
$\bar B^0_d \to K^0 \bar K^0$: CP Violation and the Branching Ratio
In the SM, we can write the amplitude for the decay $\bar B^0_d \to K^0 \bar K^0$ as
$$A(\bar B^0_d \to K^0 \bar K^0)=\lambda_u^{(d)}P_u^{KK}+\lambda_c^{(d)}P_c^{KK}+\lambda_t^{(d)}P_t^{KK},$$
where the $\lambda_q^{(d)} \equiv V_{qd}V_{qb}^\ast$ are CKM factors, and the $P_q^{KK}$ denote the strong amplitudes of penguin topologies with internal $q$-quark exchanges, which receive tiny contributions from colour-suppressed electroweak (EW) penguins and are fully dominated by QCD penguin processes. Eliminating $\lambda_t^{(d)}$ with the help of the unitarity relation $\lambda_u^{(d)}+\lambda_c^{(d)}+\lambda_t^{(d)}=0$ of the CKM matrix, we can write the amplitude as
$$A(\bar B^0_d \to K^0 \bar K^0)\propto P_{tc}^{KK}\left[1-\rho_{KK}\,e^{i\theta_{KK}}\,e^{i\gamma}\right],$$
where $P_{tc}^{KK} \equiv P_t^{KK}-P_c^{KK}$, and $\rho_{KK}e^{i\theta_{KK}}$ is a function of the $P_q^{KK}$ that we treat as an unknown hadronic parameter.
The direct and mixing-induced CP asymmetries of this channel, together with its CP-averaged branching ratio, span the observable space illustrated in Fig. 1; in the SM, all three observables are determined by the hadronic parameters $(\rho_{KK},\theta_{KK})$ and the weak phases, so they are confined to a two-dimensional surface.
Figure 1: The surface in observable space. In the SM, any measurement of the three observables has to lie on this surface, which is theoretically clean. Sufficiently accurate measurements of the branching ratio will give strong constraints on possible values for the asymmetries.
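To illustrate how the SM surface arises, the sketch below evaluates the three observables from the amplitude parametrization given above as the hadronic parameters are varied. The sign and normalization conventions for the asymmetries, as well as the numerical values of γ and the mixing phase φ_d, are illustrative assumptions and are not taken from this work.

```python
import numpy as np

def kkbar_observables(rho, theta, gamma, phi_d):
    """Illustrative observables for the K0 K0bar mode in the parametrization
    A(B0bar) ~ 1 - rho*exp(i*theta)*exp(+i*gamma); conventions are assumptions."""
    A    = 1 - rho * np.exp(1j * theta) * np.exp(-1j * gamma)   # B0 decay amplitude
    Abar = 1 - rho * np.exp(1j * theta) * np.exp(+1j * gamma)   # B0bar decay amplitude
    lam = np.exp(-1j * phi_d) * Abar / A
    a_dir = (1 - abs(lam) ** 2) / (1 + abs(lam) ** 2)           # direct CP asymmetry
    a_mix = 2 * lam.imag / (1 + abs(lam) ** 2)                  # mixing-induced CP asymmetry
    # CP-averaged rate factor (up to normalization by |P_tc|^2 and CKM prefactors)
    rate = 0.5 * (abs(A) ** 2 + abs(Abar) ** 2)
    return a_dir, a_mix, rate

# Sweeping rho and theta with gamma and phi_d fixed traces out the SM surface.
gamma, phi_d = np.deg2rad(65), np.deg2rad(43)   # illustrative input values only
surface = [kkbar_observables(r, t, gamma, phi_d)
           for r in np.linspace(0, 1.5, 30) for t in np.linspace(0, 2 * np.pi, 60)]
```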
The form of the surface implies a theoretical lower bound for $B$ that can be converted into a lower bound for BR($B^0_d \to K^0\bar K^0$) using input from b → s penguin decays (see [1] for details). With the help of this lower bound, the recent measurement of $B_d \to K^0\bar K^0$ [3] was correctly predicted in [1]. Using the latest experimental input and the central values of the factorizable SU(3)-breaking parameters, we update the bound in (3) of [1] to BR($B^0_d \to K^0\bar K^0$) $> 1.43^{+0.17}_{-0.25}$, nicely consistent with the old result and the recent measurements (see Table 1).
We observe that the measured BR($B^0_d \to K^0\bar K^0$) is right at the lower theoretical bound (bottom of the surface in Fig. 1). This implies a value of $\rho_{KK}$ significantly different from 0, with a small phase $\theta_{KK}$; $\rho_{KK}$ can be related to a hadronic B → πK parameter through $\rho_c = \epsilon\rho_{KK}$, where $\epsilon \equiv \lambda^2/(1-\lambda^2) = 0.053$. This quantity is usually neglected. However, a value of $\rho_c \sim 0.05$, as suggested by BR($B^0_d \to K^0\bar K^0$), would be rather welcome in the analysis of the B → πK system [4].
General Lower Bounds on the Branching Ratios of b → d Penguin Processes
The mechanism that provided the lower bound on BR($B^0_d \to K^0\bar K^0$) is actually more general. We will now first use it to derive lower bounds on $b \to d\gamma$ processes, and then discuss the general $b \to d$ penguin case. The amplitude for the decay $\bar B \to \rho\gamma$ can be written as
$$A(\bar B \to \rho\gamma)=c_\rho\,\lambda^3 A\,P_{tc}^{\rho\gamma}\left[1-\rho_{\rho\gamma}\,e^{i\theta_{\rho\gamma}}\,e^{i\gamma}\right],$$
where $c_\rho = 1/\sqrt{2}$ and 1 for $\rho = \rho^0$ and $\rho^\pm$, respectively, and $A = |V_{cb}|/\lambda^2$. Moreover, $P_{tc}^{\rho\gamma} \equiv P_t^{\rho\gamma} - P_c^{\rho\gamma}$, where $P_t^{\rho\gamma}$ and $P_c^{\rho\gamma}$ are matrix elements of operators from the standard weak effective Hamiltonian (see [2] for details). $\rho_{\rho\gamma}e^{i\theta_{\rho\gamma}}$ is again a hadronic parameter that we will treat as essentially unknown. Let us now use the information offered by the $b \to s$ counterpart of our $b \to d$ transition, which is well measured and takes an amplitude of the following form:
$$A(\bar B \to \bar K^\ast\gamma)\propto P_{tc}^{K^\ast\gamma}\left[1+\epsilon\,\rho_{K^\ast\gamma}\,e^{i\theta_{K^\ast\gamma}}\,e^{i\gamma}\right],$$
where $\epsilon$ was introduced above. The ratio of the corresponding CP-averaged BRs is then given by
$$\frac{{\rm BR}(\bar B \to \rho\gamma)}{{\rm BR}(\bar B \to \bar K^\ast\gamma)}=\epsilon\,c_\rho^2\,\frac{\Phi_{\rho\gamma}}{\Phi_{K^\ast\gamma}}\left|\frac{P_{tc}^{\rho\gamma}}{P_{tc}^{K^\ast\gamma}}\right|^2\frac{1-2\rho_{\rho\gamma}\cos\theta_{\rho\gamma}\cos\gamma+\rho_{\rho\gamma}^2}{1+2\epsilon\rho_{K^\ast\gamma}\cos\theta_{K^\ast\gamma}\cos\gamma+\epsilon^2\rho_{K^\ast\gamma}^2},$$
where $\Phi_{\rho\gamma}$ and $\Phi_{K^\ast\gamma}$ denote phase-space factors. Although $\rho_{K^\ast\gamma}e^{i\theta_{K^\ast\gamma}}$ is here strongly suppressed by $\epsilon$, we can straightforwardly include the corresponding corrections by using the flavour-symmetry relation $\rho_{\rho\gamma}e^{i\theta_{\rho\gamma}} = \rho_{K^\ast\gamma}e^{i\theta_{K^\ast\gamma}} \equiv \rho e^{i\theta}$. Treating then $(\rho, \theta)$ as completely free parameters, we can derive a lower bound on this ratio, which can be converted into a lower bound for $\bar B \to \rho\gamma$ through (3.3) and the measured value of the $\bar B \to K^\ast\gamma$ branching ratio.
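The lower-bound mechanism can also be illustrated numerically: for the CP-averaged ratio written in terms of the common hadronic parameters (ρ, θ), a brute-force scan shows that the hadronic factor can never drop below a γ-dependent floor, which approaches sin²γ for ε → 0. The functional form and the value of γ used below follow from the parametrization sketched above and are illustrative, not the exact expressions of Refs. [1,2].

```python
import numpy as np

eps, gamma = 0.053, np.deg2rad(65)   # eps = lambda^2/(1 - lambda^2); gamma value illustrative

def cp_averaged_ratio(rho, theta, eps, gamma):
    """Hadronic factor of BR(B -> rho gamma)/BR(B -> K* gamma): the CP-averaged
    |amplitude|^2 ratio with a common rho*exp(i*theta), as parametrized above."""
    num = 1 - 2 * rho * np.cos(theta) * np.cos(gamma) + rho ** 2
    den = 1 + 2 * eps * rho * np.cos(theta) * np.cos(gamma) + (eps * rho) ** 2
    return num / den

# Scan the hadronic parameters over a wide range to locate the floor of the ratio.
rho = np.linspace(0, 3, 601)[None, :]
theta = np.linspace(0, 2 * np.pi, 721)[:, None]
U = cp_averaged_ratio(rho, theta, eps, gamma)
print(f"min over (rho, theta): {U.min():.3f}   vs   sin^2(gamma) = {np.sin(gamma)**2:.3f}")
```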
For comparison, we also show the bounds that result from neglecting the SU(3)-breaking corrections ("naïve bound"). The bounds are consistent with the experimental results for $B^0_d \to \rho^0\gamma$ and $B^+ \to \rho^+\gamma$, although of course the well-known isospin-breaking puzzle of the Belle result remains. In a similar way we can also derive theoretical lower limits for other b → d penguin decays. We list bounds for $B^\pm \to K^{(\ast)\pm} K^{(\ast)}$ - together with the respective b → s decay that was used for the bound - in Table 1 (experimental data are taken from [5]); some more channels, including also $B^\pm \to \pi/\rho^\pm\,\ell^+\ell^-$ modes, are discussed in [2]. For the currently most interesting decays, the theoretical predictions and measurements by BaBar and Belle are also plotted in Fig. 2. It will be interesting to confront our bounds with future data. | 2014-10-01T00:00:00.000Z | 2005-11-29T00:00:00.000 | {
"year": 2005,
"sha1": "6294626395cf6986fd2e4ba40ed4cf72afe7b503",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0f9bf9b04b2bc0ccf79440f79de556c7d211fc13",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
53061475 | pes2o/s2orc | v3-fos-license | ECTOPIC BEATS: HOW MANY COUNT?
Premature atrial and ventricular contractions, or ectopic beats, are frequently detected on routine electrocardiogram monitoring. They are often considered to be benign with no pathological significance; however, the literature suggests that higher ectopic burdens may have clinical importance. This paper reviews the current literature and provides the treating physician with an understanding of when ectopic beats should be deemed significant and when treatment may be appropriate.
INTRODUCTION
Premature atrial and ventricular contractions, or ectopic beats, are frequently detected on routine electrocardiogram (ECG) monitoring. They are often considered to be benign with no pathological significance; however, data suggest that higher ectopic burdens may have clinical importance.
PREMATURE ATRIAL COMPLEXES
Worldwide, stroke is a leading cause of mortality and the burden of disease on healthcare services is steadily increasing. 1 Approximately 30-40% of ischaemic strokes are cryptogenic in nature with no clear pathological cause; 2 it is thought that many could be secondary to subclinical or asymptomatic atrial fibrillation (AF). There is growing evidence that premature atrial complexes (PAC) may be associated with the development of AF, and therefore with an increased risk of stroke. Haïssaguerre et al. 3 investigated the link between PAC and the onset of AF. They identified individuals with frequent episodes of paroxysmal AF and mapped which rhythms commonly preceded the onset of AF. The trigger was often a PAC originating from a pulmonary vein, and radiofrequency ablation of this area of ectopic activity led to decreased recurrence in arrhythmic activity. 3 Haïssaguerre et al.'s 3 findings prompted further research toward the link between higher burdens of PAC and the risk of developing AF. Wallmann et al. 4 recruited patients who had suffered an acute ischaemic stroke without prior documented AF and performed 7-day Holter monitoring at baseline, 3 months, and 6 months. Patients were then grouped according to their total burden of PAC over the 7-day period. A higher burden of PAC was classified as ≥70 within the first 24-hour period. In individuals with a higher burden of PAC, 26% had AF, which was five times more than those in the low-burden group. 4 The Copenhagen Holter study 5 investigated AF prevalence and its effect on morbidity and mortality and was one of the largest studies of its kind in healthy individuals. Investigators contacted all men aged 55 years, and all men and women aged 65, 70, and 75 years in two different areas of the city. Individuals with previous cardiovascular ill health were excluded, leaving 678 participants who went on to complete 48-hour ambulatory ECG monitoring. In 2010, Binici et al. 6 used these data to explore the link between excessive atrial ectopy and the primary endpoints of death or stroke, and the secondary endpoint of AF. Excessive atrial ectopic activity was defined as ≥30 PAC per hour or a single run of ≥20. Over a median follow-up period of 76 months, it was found that excessive PAC were associated with a >60% increase in the risk of death or stroke, and a 2.7-fold increase in the development of AF.
Using a more clinically applicable approach, Larsen et al. 7 took data from the same study in an attempt to further clarify the link between higher burdens of atrial ectopy and stroke with a longer follow-up period (median: 14.4 years). Individuals were also risk stratified using the CHA2DS2-VASc scoring system (congestive heart failure, hypertension, aged >75 years, diabetes, previous stroke or transient ischaemic attack, vascular disease, aged 65-74 years, female sex). 8 It was found that those with increased atrial ectopic activity had an increased adjusted risk of stroke (hazard ratio [HR]: 2.02; 95% confidence interval [CI]: 1.17-3.49), and a significantly higher risk of stroke (p=0.0002) was identified in subjects with excessive PAC (≥30 per hour or a run of ≥20). Subjects with both excessive PAC and a CHA2DS2-VASc score >2 had an absolute risk of stroke equal to 2.4% per year. 7 The intervention arm of the EMBRACE trial 9 was used to investigate the prevalence of subclinical AF in patients who had suffered either a transient ischaemic attack or cryptogenic stroke. Recruits underwent 24-hour ambulatory ECG monitoring and, if AF was not detected on initial monitoring, they were then assigned to 30-day external loop recording. The overall 90-day AF detection rate was 16%, and the probability of detecting AF increased with higher atrial ectopic activity. Patients with <100 PAC/24 hours had a probability of AF detection of <9%, whereas the probability increased to 40% in those with a burden of >1,500 PAC/24 hours. 9 This again serves to highlight the need for much longer periods of monitoring, especially in patients with higher PAC burdens.
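The burden definitions used in these studies are simple count-based rules, which makes them easy to state explicitly; the sketch below encodes the Copenhagen Holter definition of excessive atrial ectopy and the higher-burden cut-off used by Wallmann et al. Only the thresholds come from the text; the function names and example numbers are illustrative.

```python
def excessive_atrial_ectopy(pac_count, monitoring_hours, longest_pac_run):
    """Copenhagen Holter definition: >=30 PAC per hour on average, or a single
    run of >=20 consecutive PAC, during the monitoring period."""
    per_hour = pac_count / monitoring_hours
    return per_hour >= 30 or longest_pac_run >= 20

def high_pac_burden_wallmann(pac_count_first_24h):
    """Wallmann et al.: a 'higher burden' is >=70 PAC within the first 24 hours."""
    return pac_count_first_24h >= 70

# Example: 48-hour Holter with 1,600 PAC and a longest run of 12 beats
print(excessive_atrial_ectopy(1600, 48, 12))   # True (33.3 PAC per hour)
```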
PREMATURE VENTRICULAR COMPLEXES
Higher burdens of premature ventricular complexes (PVC) post myocardial infarction are associated with a poorer prognosis. 10 Traditionally it has been thought that the use of anti-arrhythmic therapy in such patients may reduce the risk of sudden cardiac death; however, findings from CAST 11 demonstrated that suppression of ventricular ectopy with Class 1a anti-arrhythmic agents was in fact associated with higher rates of death due to their proarrhythmic properties.
In patients with underlying structural heart disease, PVC can trigger ventricular arrhythmia, 12 but in individuals with structurally normal hearts they are often considered a benign process that does not require treatment or intervention. 13 However, Engel et al. 14 showed the presence of ventricular ectopics on resting ECG to be significant. Patients with resting ventricular ectopy had a significantly increased risk of all-cause and cardiovascular mortality. They also categorised patients by heart rate and showed mortality to increase with heart rate and to double in the presence of PVC. 14 This increased adrenergic drive was proposed as a possible mechanism for incidental heart failure in tachycardia-driven PVC states.
The presence of PVC has been linked with incidental heart failure. In the ARIC study, Agarwal et al. 15 found that participants who at baseline had no heart failure or coronary artery disease had an increased risk of incidental heart failure if PVC were present on baseline ECG. Further work by Agarwal et al. 16 demonstrated that the presence of PVC was associated with nearly a two-fold risk of systolic heart failure.
A high frequency of PVC may result in left ventricular systolic dysfunction. In 2010, Baman et al. 17 sought to quantify what burden of ventricular ectopy was associated with an increased risk of developing an ectopic-induced cardiomyopathy. 17 They took patients with persistent ventricular ectopy despite best medical therapy, and calculated their PVC burden and left ventricular ejection fraction (LVEF) pre and post ablation. PVC-induced cardiomyopathy was defined as an improvement in LVEF of ≥15%. The investigators identified that a total PVC burden of >24% per 24 hours was associated with an increased risk of developing a cardiomyopathy, with almost 80% sensitivity and specificity for the diagnosis of PVC-induced dilated cardiomyopathy. However, the minimal burden of PVC seen with cardiomyopathy was 10% over a 24-hour period. 17 The effect that a PVC has on ventricular filling and contractility, as well as the reversal in left ventricular systolic function seen post ablation, could make this a possible mechanism for left ventricular dysfunction in higher burdens of PVC. 18

Penela et al. 19 further highlighted the importance of suppressing ventricular ectopy in patients with systolic function poor enough to mandate primary prevention implantable cardioverter defibrillator (ICD) implantation. Patients with high ectopic burdens who met the criteria for ICD implantation had the ICD withheld and instead underwent ventricular ectopic ablation. They were followed up at 6 and 12 months. The investigators found that the LVEF had increased from a baseline of 28% ± 4% to 42% ± 12% at 12 months after PVC ablation. 19 This emphasises the need for consideration of PVC ablation in those patients with an indication for ICD and the potential for LVEF to recover, such that the indication for ICD implantation can be reassessed at 12 months.
Dukes et al. 20 further investigated the association between PVC burden and myocardial dysfunction. Participants with normal LVEF and no history of heart failure were studied; 1,139 were randomly assigned to 24-hour ambulatory ECG monitoring. Baseline echocardiography was performed and 842 participants went on to have repeat echocardiography after 5 years. Over the study period, it was shown that a two-fold increase in PVC burden from baseline was associated with a statistically significant greater chance of reduction in LVEF. Patients with known systolic dysfunction and higher burdens of ventricular ectopy showed a higher incidence of congestive cardiac failure (HR: 1.08; 95% CI: 1.03-1.17), and higher burdens of ventricular ectopy were also associated with increased mortality. 20

The use of ambulatory monitors in many studies is reflected in clinical practice, as patients are often monitored for either 24 or 48 hours. Loring et al. 21 demonstrated that 75% of patients that reach a PVC burden of ≥20% will do so within 24 hours of monitoring. However, only 53% of patients who reached a PVC burden of 10% did so in the same 24-hour timeframe. 21 The yield continued to increase throughout the 14 days of monitoring, thus leaving the possibility that almost half of this 10% PVC burden group may go undetected within a 24-hour monitoring period.
Increased ventricular ectopic activity is often seen during exercise stress testing; however, its clinical significance is poorly understood. Identifying the relationship between adverse outcomes and frequency of exercise-induced PVC has proven difficult. Schweikert et al. 22 reported greater thallium perfusion defects with higher ectopic burdens, but these findings have not been shown to correspond with angiographic severity of disease. 22 Other studies have suggested that whilst a causal link between exercise-induced ectopy and coronary artery disease does not exist, it may be a marker for increased risk of exercise-induced ventricular arrhythmia. 23

Jouven et al. 24 performed exercise testing in 6,106 asymptomatic male volunteers and measured ventricular ectopic burden. Excessive ventricular ectopic activity was defined as >10% of all ventricular depolarisations during a 30-second ECG recording or a run of ≥2 consecutive PVC. Frequent ventricular ectopy was identified in 138 participants and, over a 23-year follow-up, was associated with an increased risk of death from a cardiovascular cause (relative risk: 2.67; 95% CI: 1.76-4.07).
Frolkis et al. 25 retrospectively examined a large cohort of 29,244 patients who had previously undergone exercise testing. Frequent ventricular ectopy was defined as >7 PVC per minute, or the presence of bigeminy, trigeminy, ventricular tachycardia, or fibrillation. Frequent ventricular ectopy was identified in 3% of individuals during exercise and 2% during recovery. Over a mean follow-up of 5.3 years, frequent ventricular ectopic activity in recovery was associated with a higher rate of death when compared with ectopy seen on exercising (11% versus 5%; HR: 2.4; 95% CI: 2.0-2.9; p<0.001). An assessment of LVEF had been made in 6,421 participants, and a higher proportion of patients with ventricular ectopy during recovery had a LVEF of <40% (27% versus 18%).
Morshedi-Meibodi et al. 26 retrospectively examined ventricular ectopic activity in 2,885 individuals who had undergone exercise testing as part of the Framingham offspring study. Their definition of excessive ventricular ectopic burden differed from Jouven et al.'s 24 as only 0.1% of participants would have met the required standard. They instead used a model based on the median number of ventricular ectopics measured in participants whilst exercising, which was one ectopic every 4.5 minutes (0.22 ectopics/minute), and excessive activity was seen in 792 (27%) participants. Their primary endpoint was 'hard' cardiovascular disease, which included angina symptoms, myocardial infarction, and sudden cardiac death. It was concluded that there was no association between high PVC burden and any of the 'hard' cardiovascular disease endpoints; however, there was an increase in all-cause mortality over a follow-up period of 15 years.
DISCUSSION
AF and atrial flutter are the most common arrhythmias associated with ischaemic stroke. There is now growing evidence supporting an association between atrial ectopic activity and the development of atrial arrhythmias. Individuals with high atrial ectopic burdens are at a greater risk of stroke and death, possibly due to co-existing AF. Patients with AF and a CHA2DS2-VASc score of two have an annual stroke risk of 2.2% and should be appropriately anticoagulated unless contraindicated. Data presented in this review suggest that individuals with a high PAC burden of >30 per hour and a CHA2DS2-VASc score of two also have an increased risk of stroke. Randomised trials are needed to assess whether anticoagulation (and the inherent risks that come with it), suppression of atrial ectopics, or modification of other risk factors decrease stroke risk in patients with high PAC burden. Clinical suspicion of undiagnosed AF should be higher in patients with a high CHA2DS2-VASc score, an atrial ectopic burden of ≥30/hour, or an episode of ≥20 PAC. More extended periods of ambulatory monitoring, including implantation of loop recorders, may increase the chance of diagnosing asymptomatic AF. 27

As many as 50% of cases of congestive cardiac failure are labelled as idiopathic, yet many may be secondary to excessive ventricular ectopic activity. 28 In a meta-analysis, Zang et al. 29 demonstrated an improvement in LVEF following ablation of PVC. The mean burden of PVC referred for an ablation was 24.0% and the overall increase in LVEF post ablation was 7.7%. 29 Patients with high PVC burdens should therefore be considered for regular echocardiographic assessment, and those who show deterioration in LVEF should be assessed for treatment with catheter ablation. The significance of ectopic burden during exercise testing, or in the recovery period, is as yet unknown. Data do suggest a link to all-cause mortality over long follow-up periods, but there is no current evidence to suggest that suppression of PVC in this setting would be beneficial. As such, catheter ablation of PVC in this setting is an untested area and, in symptomatic patients, clinicians may first wish to employ a less invasive approach, such as anti-arrhythmic therapy.
CONCLUSION
Higher burdens of PAC are associated with a greater risk of developing AF. High frequency of PVC is associated with a reduction in LVEF and subsequent heart failure, which may be reversible with ablation. In all cases, longer periods of monitoring allow a greater yield of information and better prediction of those higher risk patients. Longer periods of monitoring of ≤2 weeks should become commonplace. | 2020-04-23T09:03:38.065Z | 2017-10-12T00:00:00.000 | {
"year": 2017,
"sha1": "a31e56316271b1ef323c9a47f264274cfb58d8ac",
"oa_license": "CCBYNC",
"oa_url": "https://emjreviews.com/wp-content/uploads/sites/2/2017/10/Ectopic-Beats.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "acd8e274214dbac4b382b96f89cbe93dc994a8a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
245119266 | pes2o/s2orc | v3-fos-license | Evaluation of the effects of sound exposure and low field electromagnetism on growth and antibiotics susceptibility of some microorganisms
Background: The emergence of antibiotic-resistant microorganisms has been largely associated with drug misuse, drug abuse, and indiscriminate disposal of drugs; however, the interactions between some environmental factors and antibiotic resistance by microorganisms have been understudied. In this study, the effects of sound and electromagnetic field on the growth and antibiotic susceptibility of selected microorganisms were investigated. Results: The microorganisms used were Gluconobacter oxydans ATCC 19357, Rhodobacter sphaeroides ATCC 17023, Citrobacter freundii ATCC 33128, Yersinia pestis ATCC 11953, Bacillus subtilis 6633, Acetobacter aceti ATCC 15973, Escherichia coli ATCC 25922, Pseudomonas aeruginosa ATCC 9027, Streptococcus pyogenes ATCC 19613, Klebsiella pneumoniae ATCC 25955, Staphylococcus aureus ATCC 25923 and Serratia marcescens ATCC 14766. The antibiotics used were Ciprofloxacin 5 μg, Imipenem 10 μg, Ampicillin 10 μg, Ceftazidime 30 μg and Tetracycline 30 μg for Gram-negative bacteria, and Pefloxacin 10 μg, Gentamycin 10 μg, Ampicillin + Cloxacillin 30 μg, Cefuroxime 20 μg, Amoxacillin 30 μg, Ceftriaxone 25 μg, Ciprofloxacin 10 μg, Streptomycin 30 μg, Co-trimoxazole 30 μg, and Erythromycin 10 μg for Gram-positive bacteria, respectively. Acoustic treatment had varying effects on the antibiotic susceptibility profiles of all test bacterial cultures. Before exposure, P. aeruginosa had the highest zone of inhibition of 34 ± 3.4 mm, while B. subtilis had the least inhibition zone of 12 ± 2.8 mm. After exposure to acoustic treatment at 5000 Hz/90 dB for 72 h, C. freundii had the highest zone of inhibition of 32 ± 0.7 mm and the least zone of 11 ± 1.4 mm was observed in P. aeruginosa. At 1125 Hz/80 dB after 72 h, R. sphaeroides had the highest zone of 34 ± 0.7 mm while A. aceti had the least zone of inhibition of 10 ± 0 mm. Electromagnetic flux treatment of 15 min showed E. coli to be the most inhibited, with a growth rate of 0.08 log cfu/mL; antibiotic testing showed G. oxydans to have the highest zone of inhibition of 28 ± 3.5 mm, and the least zone was observed in B. subtilis with a zone of 13 ± 2.8 mm. Conclusion: This study showed that environmental factors such as sound and electromagnetic flux (EMF) could interfere with the physiology of bacteria, including resistance/susceptibility to antibiotics. However, further investigation will be needed to understand the full mechanisms of action of sound and electromagnetic field on bacteria.
Audible sound has a wide distribution in the natural world; nearly all life forms are surrounded by different kinds of sound and interact with them, and these sounds have reportedly increased over the years (Sarvaiya and Kothari 2015; Kumar et al. 2018; Vitkauskaite and Grubliauskas 2018). Sound waves can create mechanical stress in organisms exposed to them, and detection of such physical stress is a mechanism common to mounting a response against a broad range of environmental stresses. The possibility that the mechanical stress created by sound triggers the expression of stress response genes in the exposed organism has also been reported (Walley and Dehesh 2010; Robinson et al. 2021).
In recent times, the misuse and abuse of antibiotics in therapy have brought antibiotic susceptibility and resistance, and the correlation between them, into sharp focus (Crofts et al. 2017; Li et al. 2020). Furthermore, the mechanisms of antibiotic activity on bacterial cells, as well as cellular resistance to antibiotics, are still subjects of active and intense scientific research (Abushaheena et al. 2020; Li et al. 2020). However, the biophysical factors that may affect antibiotic efficacy on microorganisms have been underexplored, and with the increasing human population and anthropogenic activities accompanied by a gradual increase in the volume of sound waves and electromagnetic flux in our environment (Kumar et al. 2018; Vitkauskaite and Grubliauskas 2018; Movahedi et al. 2019; Robinson et al. 2021), these factors can have great physiological effects on microbial growth and response to antibiotics, hence the need to properly investigate this phenomenon. In this study we report the effect of sound of different frequencies and decibels, and of electromagnetic radiation, on the growth and antibiotic susceptibility of some selected Gram-positive and Gram-negative bacteria. This physiological study of bacterial response to sound waves and electromagnetic flux with respect to antibiotic susceptibility can shed more light on the interaction between biophysics and antibiotic efficacy, and can also form a baseline physiological account of the influence of gradually increasing sound pollution as well as electromagnetic flux proliferation on the emergence and persistence of antibiotic resistance.

The test organisms, including Staphylococcus aureus ATCC 25923 and Serratia marcescens ATCC 14766 (the full list is given above), were purchased from the American Type Culture Collection (ATCC), and they were stored at 4 °C on Nutrient Agar (Oxoid, UK) slants and sub-cultured fortnightly.
Effect of sound of different frequencies on growth and antibiotic sensitivity pattern
Exposure of the different organisms to the acoustic treatments was done in an anechoic chamber constructed locally, as described by Gu et al. (2010), using polyurethane foam of density 1 g/cm³, wood and cotton fabric material in a cylindrical shape. The cylinder was 15 inches in height and 12 inches in diameter and had a rectangular orifice of 5.6 inches by 6 inches (Plate 1). Each of the test organisms was inoculated into sterile nutrient broth and incubated for 24 h, 48 h and 72 h, respectively, and thereafter exposed to acoustic treatment (produced using a remotely powered Bluetooth speaker placed directly above the inoculated nutrient broth containing the test organisms) for exposure times ranging between 24 and 72 h at frequencies/decibels of 5000 Hz/90 dB and 1125 Hz/80 dB, and growth was observed on a Jenway spectrophotometer at 540 nm. Samples without sound treatment served as the control group. Incubation was done at 37 ± 1 °C and growth was also checked on the spectrophotometer after 24-72 h of incubation (Gu et al. 2010). Antibiotic susceptibility testing was done before and after the acoustic exposure experiment as described earlier.
Effect of electromagnetic field on growth and antibiotic susceptibility pattern
To investigate the effect of the electromagnetic field on the growth and antibiotic susceptibility pattern of the different microorganisms, actively growing bacterial cultures were inoculated into sterile nutrient broth and incubated at 37 °C for 24 h, and these were then exposed to a low electromagnetic field as described by Garuba et al. (2020). The tubes of the bacteria to be exposed were placed in the middle of the coil using supports, to obtain a homogeneous and higher magnetic field strength; the ends of the solenoid were connected to a variac fed from the mains (220 V). The magnetic field intensity was measured by means of a handheld Gauss/Tesla meter, which confirmed its homogeneity throughout the volume to within ± 5% at the ends. Equal volumes of the bacterial culture were exposed for 15 min at their mid-to-late exponential phase (maximum rate of active growth) to the electromagnetic field at an intensity of 5 mT; after exposure, the growth rate was measured in a spectrophotometer and compared with the control (Mona et al. 2013).
Statistical analysis
Results obtained in this study were subjected to analysis of variance (ANOVA), and separation of means was carried out by Duncan's multiple range test (Duncan 1955).
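As a rough illustration of the analysis-of-variance step described above, the sketch below runs a one-way ANOVA on hypothetical inhibition-zone replicates. Duncan's multiple range test is not bundled with SciPy, so the post-hoc mean separation reported in the paper is only indicated here, not reproduced; all values and group labels are invented for illustration and are not data from this study.

```python
from scipy import stats

# Hypothetical zone-of-inhibition replicates (mm) for one antibiotic:
# control vs. two acoustic treatments (values are illustrative only).
control  = [18.0, 17.5, 18.5]
sound_5k = [22.0, 21.0, 23.0]   # 5000 Hz / 90 dB
sound_1k = [15.0, 16.0, 14.5]   # 1125 Hz / 80 dB

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, sound_5k, sound_1k)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, a post-hoc mean-separation test (Duncan's multiple range
# test in the paper) would then be applied to identify which groups differ.
```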
Results
Twelve bacteria were subjected to two different acoustic treatments (5000 Hz/90 dB and 1125 Hz/80 dB) for durations of 24-72 h, after which the growth and antibiotic susceptibility pattern of each isolate were measured. The results showed that all the bacterial cultures at 5000 Hz/90 dB had increased growth (p < 0.05) compared with the control, except E. coli, K. pneumoniae and B. subtilis, which decreased at 72 h of acoustic exposure (Fig. 1). Similarly, exposure to sound of 1125 Hz/80 dB for 72 h had varied effects on the growth of all the organisms investigated, except for S. marcescens and A. aceti, which declined after 72 h of exposure (Fig. 2). Results of the antibiotic susceptibility patterns after the cultures were exposed to sound treatment at 5000 Hz/90 dB and 1125 Hz/80 dB are presented in Tables 1, 2, 3 and 4. The results revealed that exposure of S. marcescens to acoustic treatment at 5000 Hz/90 dB showed increased resistance (p < 0.05) to Ciprofloxacin at 24 h (30 mm) and at 72 h (31 mm), and it maintained its resistance profile to Tetracycline and Ampicillin throughout the exposure duration. P. aeruginosa, which had an initial resistance profile, became susceptible to Ampicillin after 72 h (11 mm), while it remained resistant to Tetracycline after treatment. G. oxydans, which was initially susceptible, became resistant to Ceftazidime after 24 h of exposure, while A. aceti became resistant to Ampicillin after exposure to treatment; B. subtilis showed increased susceptibility to Ampicillin after 24 h (27 mm) at p < 0.05 and then became resistant after 48 and 72 h of exposure, respectively, as shown in Tables 1 and 2. Similarly, under acoustic treatment at 1125 Hz/80 dB, R. sphaeroides showed increased susceptibility at 72 h (p < 0.05) to Ceftazidime (28 mm) and also to Imipenem (34 mm), respectively. A. aceti, which had an initial susceptibility profile (16 mm), became resistant to Ampicillin at 48 h and 72 h of exposure; exposure of S. aureus to acoustic treatment showed increased resistance to Pefloxacin, Gentamycin, Ampicillin, Amoxacillin and Streptomycin throughout the exposure duration, as shown in Tables 3 and 4.
Plate 1 Schematic and Pictorial Representation of the Anechoic chamber used in this study
Exposure of the test organisms to an electromagnetic flux of 5 mT for 15 min was carried out, and the growth alongside the antibiotic susceptibility pattern was investigated. The results presented in Tables 5 and 6 and Fig. 3 showed that Y. pestis, K. pneumoniae and A. aceti had an increased growth rate (p < 0.05) compared to the control, while the other test bacteria had a decrease in growth compared to the control (Tables 5 and 6).
Electromagnetic exposure was also found to have varying effects on the antibiotic susceptibility pattern of the organisms under investigation. G. oxydans showed increased susceptibility to Tetracycline (18 mm), and Y. pestis also showed an increase in susceptibility to Ceftazidime (19 mm) after electromagnetic treatment. B. subtilis became resistant to Ampicillin, and S. aureus became resistant to Gentamycin, Ampicillin, Cefuroxime and Amoxacillin, respectively. S. pyogenes was observed to have increased susceptibility to Pefloxacin (20 mm), Gentamycin (17 mm), Ampicillin (13 mm), Ceftriaxone (19 mm), Ciprofloxacin (23 mm), Streptomycin (23 mm), Co-trimoxazole (24 mm) and Erythromycin (24 mm), respectively, as shown in Tables 5 and 6.
Discussion
In the natural world, nearly all life forms are surrounded by different kinds of sound and electromagnetic fields from different sources (Dãnet 2005), and there are few reports documenting the interactions between microorganisms, sound and electromagnetic fields (Gu et al. 2016; Movahedi et al. 2019; Robinson et al. 2021). However, the data provided in the literature are not adequate for elucidating the physiological responses arising from interactions between biological systems and these physical factors (which occur either naturally or as a result of anthropogenic activities) (Gu et al. 2010; Said-Salman et al. 2019). Furthermore, the use of physical signals such as sound waves by microorganisms for their intercellular communication has not received due attention (Chandra et al. 2018).
In this study, the effects of a low electromagnetic field and acoustic sound on the growth and antibiotic susceptibility patterns of twelve bacteria were investigated, and some organisms were observed to respond to sound stimulation with a positive effect on growth, whereas sound waves had an inhibitory effect on the growth of some other microorganisms. This suggests that the action of sound on microbial growth showed obvious frequency peculiarities. This peculiarity has been previously reported by Kothari et al. (2018). Similarly, Souvik et al. (2018) investigated the effect of three types of sound frequencies between 20 Hz and 20 kHz on the growth and the antibiotic susceptibility pattern of E. coli. The results indicated that E. coli growth increased when exposed to a sound frequency below 1 kHz (432 Hz), while at frequencies above 1 kHz poor growth was recorded. The altered growth rate observed has been suggested to be due to an altered movement of key molecules across the cell membrane of the organism, as sound has been reported to increase membrane permeability in some microorganisms (Sarvaiya and Kothari 2015). The increased growth observed could be a result of an adaptive response of the test bacteria to the sound exposure, with an influence on microbial growth due to increased membrane permeability, while the reduced growth observed could be a result of mechanical stress created by the sound exposure after a long duration, which could trigger the stress response of the bacteria and hence result in the death of the organisms (Shah et al. 2016; Sarvaiya and Kothari 2017; Kothari et al. 2018). This is because sound waves have been reported to travel through any medium giving rise to sonic vibrations, which are thought to be sensed by the test microbial population through mechanosensory receptors (which act as safety valves against osmotic shock, and in higher organisms participate in sensing touch and sound waves) (Kung et al. 2010; Ward et al. 2014; Shah et al. 2016). The microbial population may then modulate its behaviour in accordance with the magnitude and duration of the sonic vibration by generating appropriate physiological responses (Martinac 2012), such as the opening of mechanosensitive ion channels, which in turn affects the movement of certain key ions across the cell membrane, ultimately resulting in an altered pattern of growth and metabolism (Vitkauskaite and Grubliauskas 2018). The antibiotic susceptibility pattern of the bacteria investigated in this study was also found to vary considerably, with some becoming more susceptible while others became resistant, as previously reported (Sarvaiya and Kothari 2017; Souvik et al. 2018). The altered pattern of antibiotic sensitivity of the test organisms has been suggested to result from an altered transport (increased uptake) of the antibiotic used through the cell membrane via the mechanosensitive channel, which responds to mechanical stress by changing its conformation between an open and closed state; this response is dependent on the frequency of sound, type of microorganism and exposure duration (Sarvaiya and Kothari 2017). Non-ionising, non-thermal, extremely low frequency electromagnetic fields (ELF-EMF) have been reported to produce a number of biological effects during their interaction with biological systems (Belyaev et al. 2011; Nguyen et al. 2015; Taheri et al. 2017a; Movahedi et al. 2019). In this study, a decrease in growth rate was observed after exposure to an electromagnetic field of 5 mT.
This observation is in accordance with Inhan-Garip et al. (2011), who carried out an experiment on three Gram-negative and three Gram-positive bacteria and observed a decrease in growth of all test cultures after exposure to electromagnetic flux. The reduced cell growth observed could be a result of an increased level of reactive oxygen species prompted by exposure to the electromagnetic field, which may have affected macromolecular synthesis and caused protein injury, ultimately leading to a decrease in bacterial growth rate, as suggested by Souvik et al. (2018). After exposure to the electromagnetic field, some of the bacteria were found to have developed resistance to antibiotics they were previously susceptible to, while some became more susceptible. Chief among the several factors affecting antimicrobial susceptibility are the physical properties of the magnetic field, such as frequency and duration of radiation, the structure of the bacterial cell wall, and the nature of the peptidoglycan in the cell wall of Gram-positive and Gram-negative bacteria, specifically with changes in the bacterial cell wall proteins, as suggested by Taheri et al. (2017a) and Said-Salman et al. (2019).

Table 3 Antibiotics susceptibility profile for selected Gram-negative bacteria as read in mean diameter of zones of inhibition (mm) after exposure to acoustic treatment at 1125 Hz/80 dB compared with control
Mechanisms of acquired susceptibility or acquired resistance after exposure to EMF are reported to be largely linked to the membrane potential of the bacterial cell wall, through which these waves can have an effect on antibiotic susceptibility (Torgomyan et al. 2011). Furthermore, increased antibiotic susceptibility may be due to the interaction of electromagnetic fields with the water molecules in the cell. An increase in the permeability of the bacteria, which might keep ion channels in the bacterial wall open, together with efflux pumps and ion channels in the cell wall, could also play an important role in the absorption of antibiotics. Interaction between the electromagnetic waves and the antibiotic molecules (affecting the hydrophilicity or hydrophobicity of the antibiotic, the antibiotic activity and the antibiotic molecule size), thereby altering the charge distribution on the antibiotic molecule, has also been reported to affect the transportation of antibiotics across the cell membrane (Taheri et al. 2017b). So far, the process of antibiotic resistance (with accelerated increase through various anthropogenic activities) seems to proceed unabated. Results of this study can further help in understanding the various mechanisms of antibiotic resistance, thereby adding valuable angles to the development of effective antibiotic therapy for the treatment of infectious diseases. Based on the findings in this work, the roles of mechanosensitive channels present on bacterial membranes in permissively or selectively admitting antibiotic molecules across the membrane barriers can be further studied. The effects of electromagnetic flux and sound waves demonstrated in this study could be incorporated into the evaluation of mechanosensitive channels as pharmacological targets for novel antibiotic design. It has recently been identified that the antibiotic streptomycin can act as an unlocking agent of large mechanosensitive channels in bacteria, as the drug uses the channels as a route of entry into the cytoplasm (Blount and Iscla 2020). Further research has also identified the potential promiscuity of mechanosensitive channels in some species in allowing access of multiple antibiotics into the cytoplasm depending on their size and vibrational effect (Wray et al. 2019). Earlier reports have identified that bacteria possess the potential to respond specifically to vibrational patterns that induce ionic flow and electrical charges on the membrane, which most likely are unique to each species (Norris and Hyland 1997). It was shown that sound-induced extracellular vibrations could be detected as unique acoustic signals that influence intracellular vibrations linked to the movements of intracellular organelles such as molecular motors, cytoskeletons, and the chromosomal package in a specific pattern (Reguera 2011).
Electromagnetic flux has also been identified as particularly important in modulating microbial metabolism in ways that can be detrimental or beneficial to cellular proliferation depending on the specifics of the flux frequency and intensity (Beretta et al. 2019). Specific examples with respect to Pseudomonas and Enterobacter species showed that zero magnetic fields induced antibiotic resistance in these bacteria in a magnetosensitive pattern (Creanga et al. 2004). This clearly means that an understanding of the exact electromagnetic and sound frequencies that control membrane mechanosensation and intracellular vibrations could be utilised to elicit species-specific microbial responses to sound and electromagnetic flux. To consolidate the evidence on the effects of sound, there have been practical observations of acoustic frequencies emitted by a B. subtilis strain detected at a range of 8-43 kHz (Matsuhashi et al. 1998), while a lower range of 0.9-1.6 kHz was observed for Saccharomyces cerevisiae cells (Pelling et al. 2004). Apart from cellular vibrational patterns, it is understood that sound waves are propagated by the mere vibration of particles within a medium (Reguera 2011); this therefore means that there can also be distinct sound patterns within certain microbial ecological environments. These sound patterns could be further studied as signals and frequencies and built upon to determine the specific responses of such bacteria to modulated sounds, a process that can be harnessed in designing therapeutics targeting sound sensitivity in bacteria. With the exposition of this knowledge, it could then be easier to design adjuvants or small-sized, frequency-specific agonists that can be used to complement existing antibacterial drugs in a synergistic fashion against target bacteria, especially drug-resistant phenotypes. Data generated in this work can also potentially enrich aspects of sonobiology and sonogenetics for further studies of microbial membrane and intracellular behaviour. Studies involving recently applied biomolecular ultrasound techniques have elucidated species-specific acoustic modulation of bacterial membranes and intramolecular materials for higher-resolution imaging and characterisation of intracellular activities (Maresca et al. 2018). Results from this work could be further expounded upon to provide a basis for deeper experimentation on variable species behaviour, towards direct sonar scanning and imaging of bacteria in the future.
Industrially, the information from this study can be useful for strain improvement of microorganisms for enhanced production of desired microbial metabolites where applicable. The current study can also be applied in advancing the techniques of sonoporation in bacteria, as a combined application of electromagnetic pulses and sound can be specifically made to target viable cells. Pores formed on the cells can be sites of DNA entry in transformation experiments, as well as points for seepage of viable intracellular metabolites or molecules (Song et al. 2007). These data could also be a basis for the design of more effective sonobioreactors (Chisti 2003) and electrobioreactors (Mohtasham et al. 2016) for bioprocess optimisation. Ultimately, this new research area should open a new frontier for multidisciplinary work at the interfaces of molecular biology, biochemistry and biophysics, especially bioacoustics and sonobiology.
Despite the evaluation of the potential applications of the data in this study, it is important to note that specific experiments linking measured cell membrane mechanosensation with permissiveness to antibiotics, as well as with growth patterns, were not fully elucidated here. We have, however, extrapolated potential reasons explaining the behaviour of the bacterial cells under the influence of sound vibrations and electromagnetic flux. In line with this, it is therefore necessary to further investigate mechanosensory biology, along with transcriptomic and/or metabolomic profiling of sound- and low-electromagnetic-field-stimulated cultures using different microorganisms, in order to provide meaningful understanding in the area of cell-sound and cell-EMF interactions.
Conclusions
In this study, it has been shown that physical factors such as sound and electromagnetic flux (EMF) that abound in the environment interfere with the physiology of bacteria, either by enhancing or inhibiting growth as well as the response to antibiotics. This further points to the fact that these factors can affect biomass accumulation and the synthesis of intracellular molecules via a range of pathways, and that certain frequencies and amplitudes may favour some microbial species over others. This work also identifies the need for deeper investigations into the biomechanistic approaches microbial cells utilise in balancing membrane mechanosensitivity under the influence of sound and electromagnetic frequencies. The desired frequencies that could be applied in modulating cellular activities positively or negatively, for applications in more efficient antimicrobial therapies and bioproduction processes, are a major point of reference going forward.
Abbreviations: ATCC: American Type Culture Collection; EMF: Electromagnetic field. | 2021-12-13T14:22:42.109Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "2db7fc1a789faa6ecf511c69b0b02798f7702a9f",
"oa_license": "CCBY",
"oa_url": "https://bnrc.springeropen.com/track/pdf/10.1186/s42269-021-00674-z",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "acc4c53f886f5a2944e30000a7811fbb63039751",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
150371350 | pes2o/s2orc | v3-fos-license | Adaptability, Stability and Multivariate Selection by Mixed Models
The aim of this work was to estimate the adaptability and stability of grain yield per hectare and percentage of crude protein of maize grains combined in an index, and to establish a multicharacter selection through mixed models based on an objective character and 15 auxiliary traits. The trials were conducted in the 2013/2014 agricultural year in four growing environments of the Rio Grande do Sul, BR state. The experimental design was randomized blocks arranged in a factorial scheme, being four growing sites × 15 single cross maize hybrids, arranged in three repetitions. The genotypic index, composed by the grain yield and the crude protein percentage in the grains, is the best selection strategy to achieve maize superior genotypes. The multivariate genotypes selection, considering grain yield and crude protein, is efficient. The genotypes FORMULA TL®, AS1656PRO®, P30F53Hx®, LG6304YG® and 30F53 are more adapted and stable for grain yield and percentage of crude protein, in the conditions of this study. The mixed models were efficient to employ the multicharacter selection and to contribute for maize genetic breeding.
Introduction
Maize (Zea mays L.) is one of the most produced cereals in the agribusiness scenario. Its importance is justified by its wide utilization in animal nutrition, with 70% of the production in the form of silage or feed formulation, and even ethanol production in some countries such as the USA [1]. Maize cultivation covers a wide range of growing environments. However, the genotypes may present differential behaviors as a function of environmental modifications, known as genotype × environment interaction (G × E). The G × E interaction has implications for breeding programs of any species, from the evaluation to the recommendation of cultivars. Therefore, the study of this interaction is extremely important in order to find alternatives to minimize its effects, mainly by identifying genotypes highly responsive to environmental improvement, which are characterized by broad adaptability, predictable behavior and good stability [2].
Currently, breeding programs seek to identify high-yielding genotypes and subsequently focus their selection strategies on grain quality; thus, the search for the ideal genotype that combines productive and qualitative superiority demands elevated financial and labor resources from the breeding program, as well as suitable selection strategies [3] [4] [5] [6]. An alternative to minimize this obstacle is multivariate selection, which aims to select a set of traits simultaneously. In this way, the selection index proposed in [7] [8] is used, which consists of a linear function of the predicted phenotypic or genotypic values of the characteristics weighted by coefficients estimated in order to maximize the correlation between the index and the true genetic values [9]. Therefore, genetic gain may be maximized when compared to direct selection, or selection performed individually for each trait [10]. Selection indexes have been successfully used in species of agronomic interest such as popcorn [11] [12], baby-corn and green corn [13]. However, there are few studies involving the selection of maize genotypes with high grain yield and protein content simultaneously.
Given the lack of information regarding multivariate selection in maize, this work aimed to estimate the adaptability and stability of grain yield per hectare and percentage of crude protein of maize grains combined in an index, and to establish a multicharacter selection through mixed models based on an objective character and 15 auxiliary traits.
Materials and Methods
The trials were conducted in the 2013/14 agricultural year, in four growing environments of the Rio Grande do Sul-BR state (Table 1). The climate for all growing environments is classified by Köppen as Cfa subtropical [16]. The experimental design was randomized blocks arranged in a factorial scheme, being four growing environments × 15 single cross maize hybrids, arranged in three repetitions. The genotypes used were: 1) 2A106, 2) 30F53, …, 13) MAXIMUS VIP3®, 14) DEFENDER VIP® and 15) IMPACTP VIP3® (Table 2). The experimental units were composed of four lines of five meters length, spaced 0.5 meters apart, totaling 10 m² [17]. A no-tillage system was used for all growing environments, with a population of 80 thousand plants per hectare. Base fertilization consisted of 300 kg·ha⁻¹ of NPK in the formula (10-20-20), and 135 kg·ha⁻¹ of N in the amidic form was applied as topdressing at the V4 and V6 vegetative stages. The management of weeds, pests and diseases was carried out preventively, in order to reduce interference in the experiment's results.
The traits of interest were measured in the useful area of each experimental unit, which was composed of the two central lines, discarding 0.5 m from each edge. The measured traits were: spike diameter (SD), in millimeters (mm); spike length (SL), in centimeters (cm); spike mass (SM), in grams (g); cob diameter (CD), in millimeters (mm); cob mass (CM), in grams (g); spike insertion height (SH), in meters; number of rows with grains in the spike (NRG), in units; plant height (PH), in meters (m); number of grains per row in the spike (NGR), in units; prolificity (PRO), in units; mass of a thousand grains (MTG), in grams (g); grain yield (GY), in kg·ha⁻¹ [5] [17]; and percentage of crude protein (CP) and mineral material (MM) in the grains [18].
The phenotypic index (PI) was generated from the product of the grain yield per hectare and the percentage of crude protein in the grains of each genotype [19], where: PI = phenotypic index combining grain yield per hectare and percentage of crude protein in the grains; GY = grain yield per hectare; CP = percentage of crude protein in the grains; GY_S = standard deviation of grain yield; CP_S = standard deviation of crude protein. Equal relative economic weights were attributed to both traits (GY and CP), i.e., this phenotypic index was taken as the objective character.
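The explicit expression for PI is not reproduced in the extracted text. Based on the definitions above (the product of the two traits, each scaled by its standard deviation), a plausible form is shown below; this reconstruction is offered only as an illustration of the stated definition and may differ in detail from the authors' original equation.

```latex
\mathrm{PI} \;=\; \frac{\mathrm{GY}}{\mathrm{GY}_S}\times\frac{\mathrm{CP}}{\mathrm{CP}_S}
```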
where: PI = phenotypic index combining grain yield per hectare and percentage of crude protein in the grains; GY = grain yield per hectare; CP: percentage of crude protein in the grains; GY S = standard deviation of grain yield; CP S : standard deviation of crude protein. Equal relative economic weights were attributed to both traits (GY and CP), i.e., this phenotypic index was taken as objective character. where: y, b, g, ge, and e are the data vectors. The model fixed effects are given by the average of the blocks through the sites, aleatory genotypic effects, aleatory G × E interaction effects, respectively. X, Z and W are matrices of incidence for b, g and ge, respectively [20]. The joint selection by PI, and the genotype's stability and adaptability were based on the statistic called harmonic mean of the relative performance of predicted genotypic values (HMRPGV) [20]. In this model, the interaction free predicted genotypic values consider all growing environments, are given by u + g, where u refers to the average of all environments. The predicted values for each trait in the univariate form were used in the genotypic selection index exemplified below. In addition, the genotypic correlation was obtained between the analyzed traits to elaborate the selection index. All the analyses were performed through Selegen software (Reml/Blup) [21]. The predicted genotypic values were used for estimating the pair to pair joint correlation between growing environments. The predicted genetic values for each trait from the univariate analysis may be used to compose the selection indexes considering one objective character and the others as auxiliaries [22], being PI (GY × CP) the objective trait, and the other 15 traits, GY, CP, CD, NRG, MTG, CM, PH, SD, SL, SH, NGR, PRO, SM, SGM and MM, considered auxiliaries, a selection index may be derived using this 16 information simultaneously: where o g is the standardized genotypic value of the objective character, and ai g is the standardized genotypic values of the auxiliary traits. The index's weighting coefficients ( i b ) are given by [22]: a a a a a g g g g ga g r r r r r r r r r r r r r r r r r P r r r r Sim r Vector of genetic covariance between the predicted genetic value of the objec- Thus, the variance of the index is given by: Consequently, the accuracy of GI is given by the root of reliability.
Results and Discussion
The deviance analysis revealed significance at 5% probability by the chi-square test. The genetic parameters estimated for the traits of interest (Table 3) indicate that the G × E interaction for these traits expressed simple effects; in other words, although there was differentiated behavior, the genotype classification was not substantially altered as a function of the different tested environments [30]. The coefficient of genotypic variation (CVgi) ranged from 2.15% to 12.91%, indicating the presence of genetic variation for the evaluated traits. Research has established that coefficients of genotypic variation of higher magnitude allow genetic gains in genotype selection [20]. Regarding the coefficient of experimental variation (CVe), low magnitudes were observed, which reflects the suitable experimental conditions and reliable estimates. The coefficient of relative variation (CVr) ranged from 0.09 (SGM) to 1.96 (GY); traits with a higher contribution of the genotypic value to the total variation may be less influenced by environmental effects [31].
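The coefficients reported above follow the usual quantitative-genetics definitions, which the extracted text does not state explicitly; they are reproduced below for clarity and should be read as the standard forms rather than a quotation of the authors' equations (here σ²g and σ²e are the genotypic and residual variance components and x̄ is the overall trait mean).

```latex
\mathrm{CV}_{gi} = \frac{100\,\sqrt{\hat{\sigma}^2_g}}{\bar{x}}, \qquad
\mathrm{CV}_{e} = \frac{100\,\sqrt{\hat{\sigma}^2_e}}{\bar{x}}, \qquad
\mathrm{CV}_{r} = \frac{\mathrm{CV}_{gi}}{\mathrm{CV}_{e}}
```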
The genetic correlations between growing environments, obtained pairwise and referring to the PI objective character, were all low [29], revealing elevated dissimilarity among environments and indicating the absence of breeding zones; therefore, the selection strategies must be conducted exclusively within each one (Table 4). Studies [32] with maize open-pollinated varieties grown in 15 environments in the Goiás state, BR, evidenced the formation of two groups of stable environments over the agricultural years studied, and a reduction of 16% of the environments currently used. Research [33] stratified the environments regarding maize lodging and breaking; thus, when considering these traits, the experimental network can be reduced because the genotypes do not present differential responses as a function of environmental variations.

Besides the recommendation of the best genotypes through the interaction-free genotypic values (u + g), a general recommendation for all environments of the experimental network can be made by capitalizing on the mean interaction (u + g + gem) among environments (Table 5). This ordering is greatly relevant for plant breeding because it considers the mean performance of the genotypes in the environments of the experimental network. The gains with selection through u + g + gem were superior to the gains achieved through u + g (Table 5) due to the average performance increment of each genotype in the four environments. Therefore, the use of the mixed models methodology and the REML/BLUP procedure allows access to important effects to guide genetic selection by the breeder.

Table 5. Ordering of maize hybrids through genotypic values free from genotype × environment interaction effects (u + g), genotypic values plus the mean effect of interaction (u + g + gem), and predicted gains for the objective character or phenotypic index (PI), in the joint analysis among environments.

The orderings for PI through the predicted genotypic value (u + g), the genotypic value plus the mean interaction (u + g + gem), stability (HMGV), adaptability (RPGV), and stability, adaptability and grain yield simultaneously (HMRPGV*GY) were compared (Table 6). The index (GI) with an objective character (PI) and 15 auxiliary traits was elaborated according to the methodology of global optimization and multivariate BLUP initially derived by Viana and Resende [22] for use with three characters. In this study, the approach was expanded to genotype selection using 16 characters, a pioneering work in this sense (Table 7).

Note: gain related to the overall mean through HMRPGV*GY (best hybrid): 62%. The underlined hybrids are the best five according to the ordering of mean genotypic effects (u + g + gem) in the selection among environments, also present in the selection ordering for stability, adaptability, and stability and adaptability (15 of 15, 100%). Coincidence (five best genotypes) between PI and GI: 80%.
The GI is composed of the PI objective character, which combines grain yield per hectare and percentage of crude protein in the grains, together with the 15 auxiliary traits optimally weighted by their accuracies, heritabilities and genetic correlations.
All these factors are adequately considered in the weighting coefficients (Table 7), which are higher the greater the correlations of the auxiliary traits with the objective character [22]. The selective accuracy of GI was 0.63, about 2.1 times that of the PI objective character considered individually (accuracy of 0.30).
Selective accuracy refers to the correlation between the true genotypic value and the value predicted from the experimental information [29]. The use of this parameter is considered ideal for choosing the best selection method, mainly because the genetic gain is directly proportional to the accuracy, i.e., the higher the accuracy, the better the precision of selection [34].
A change of position was verified between the genotypes selected by PI and
GI, with a coincidence of 80% among the five best maize hybrids. Therefore, the ordering generated by GI should be used for the final recommendation of the genotypes, since it is a more accurate index than PI, as it aggregates information on the auxiliary traits, their genotypic correlations with the objective character, the genotypic values and the selection reliability. In addition to the accuracy increment, the GI character provided higher genetic gains than PI: the use of GI increased the genetic gain by 2.33% through the selection of the best genotype, and by 1.72% through the selection of the three best ones. In genetic breeding programs, there is an inherent difficulty in selecting superior genotypes for traits under low genetic control, due to the great effect that the environment exerts on the genotypes' phenotypic variation. Therefore, the use of auxiliary traits becomes a viable practice to improve the efficiency of the process of selecting superior maize genotypes.
Conclusions
1) The genotypic index, composed by the grain yield and the crude protein percentage in the grains, is the best selection strategy to achieve maize superior genotypes.
2) The multivariate genotypes selection, considering grain yield and crude protein, is efficient.
3) The genotypes FORMULA TL®, AS1656PRO®, P30F53Hx®, LG6304YG® and 30F53 are more adapted and stable for grain yield and percentage of crude protein, in the conditions of this study.
4) The mixed models were efficient to employ the multicharacter selection and to contribute for maize genetic breeding. | 2019-04-27T08:34:15.123Z | 2017-12-04T00:00:00.000 | {
"year": 2017,
"sha1": "82323aaa17afaa3a9c737c863566c00bc03e31ce",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=81044",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "82323aaa17afaa3a9c737c863566c00bc03e31ce",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257095493 | pes2o/s2orc | v3-fos-license | An Investigation of Effect of Stand-Off Distance on the Material Removal Characteristics and Surface Generation in Fluid Jet Polishing
Fluid jet polishing (FJP) is a versatile polishing process that has many advantages compared to other polishing processes. Stand-off distance (SOD) is one of the key parameters in fluid jet polishing. However, relatively little research work has been carried out to investigate the effect of SOD on material removal characteristics and surface generation in FJP. In this paper, a systematic investigation of the effect of SOD on the tool influence function and surface topography in FJP was conducted. Experiments were designed for FJP of two kinds of materials, corresponding to ductile and brittle materials: nickel copper (NiCu) alloy and BK7 optical glass, respectively. In this study, the SOD was varied from 2 to 35 mm. Analysis and discussion were made of its effect on the shape of the TIF, the material removal rate, and the surface topography. It is interesting to note that the TIF shape becomes a Gaussian-like shape at large SOD both on NiCu and BK7, which provides a novel way to optimize the TIF in FJP. The variations of the material removal rate and surface roughness versus SOD on NiCu and BK7 were also determined from the experimental results. Moreover, the surface topographies of NiCu and BK7 were characterized from the results measured with the white light interferometer and scanning electron microscope. The outcome of the study provides a better understanding of the material removal characteristics and surface generation mechanism in FJP.
Introduction
Fluid jet polishing (FJP) [1,2] is a versatile polishing process; it is a working mode of the abrasive water jet machining process [3][4][5] with a working fluid pressure of less than 20 bar [6]. It is also a non-contact polishing process [7], which is beneficial for the polishing of freeform surfaces. Fähnle et al. [1] first used this method for polishing optical lenses. In FJP, abrasives and water are mixed adequately by mechanical stirring in a tank, and pumped at a low pressure to a nozzle, as shown in Fig. 1a. The generated jet impinges the target surface vertically, leading to material removal, as shown in Fig. 1b. Compared to traditional polishing methods, FJP has many advantages, including the ability to polish various freeform surfaces, less tool wear and temperature increase during the polishing process, the ability to generate small tool influence functions (TIF) which dramatically reduce the edge effect, and a wide range of material removal rates through controlling the polishing parameters [2,3,6]. Hence, it has become an appealing method for polishing freeform surfaces in a number of materials, and an increasing amount of attention has been paid to this promising polishing process.
After the development of FJP, Fähnle et al. [8][9][10] and Booij et al. [11] carried out a series of studies on material removal characterization and experimental optimization of the polishing process for optical lenses. Fang et al. [12,13] studied the surface roughness, material removal, and edge effect of the FJP process. Tsai et al. [14] designed a Taguchi experiment to investigate and optimize the FJP polishing conditions when SKD61 mold steel is polished. Zhu et al. [15] did a series of experimental studies focusing on polishing hard-brittle materials by FJP. Li et al. [16,17] optimized the tool influence function (TIF) of FJP so as to obtain a Gaussian-like shape TIF for corrective polishing, and adopted FJP to remove the tool marks left by single-point diamond turning. Beaucamp et al. [6] proposed a computational fluid dynamics model for FJP based on a multiphase turbulent flow computational method, which is able to achieve arithmetic roughness (Ra) down to 1 nm on electroless nickel-plated optical dies while removing the diamond turning marks. A super-finished anisotropic surface texture with a root mean square (RMS) value of 0.28 nm was obtained for future hard X-ray telescopes through combining FJP and bonnet polishing by Beaucamp et al. [18].

Recently, Beaucamp et al. [19] introduced ultrasonic cavitation into the FJP process to generate micro-bubbles, and these micro-bubbles can boost the material removal rate by up to 380% without causing any degradation of the surface finish. Cao et al. [20,21] built a material removal and surface generation model for FJP based on theoretical and experimental studies. Shiou et al. [22] proposed an innovative rotary multi-jet polishing process to improve the surface roughness of Zerodur optical glass. To enhance the polishing efficiency of FJP without degradation of the surface finish, the authors [23,24] proposed a multi-jet polishing process. The multi-jet polishing process was also utilized for the fast fabrication of structure array surfaces [25]. Recently, they also built a three-dimensional universal tool influence function model for FJP, which can be used for modeling of the vertical impinging mode and the oblique impinging mode of FJP [26]. However, our understanding of material removal characteristics and surface generation mechanisms of FJP is far from complete, especially regarding the effect of the stand-off distance (SOD, see Fig. 1b) on this process. The SOD is one of the key factors affecting the FJP process. Current investigations of the SOD have mainly focused on the material removal rate and surface roughness. Booij [2] only carried out experimental research on the effect of SOD on the material removal rate, but no clear relationship was found. Li [27] studied the effect of the SOD on the material removal rate and surface quality of the polished surface. Shi et al. [28] did a study on the effect of SOD on the shape of TIF and material removal rate in FJP. However, little attention was paid to the effect of SOD on the surface topography.
In this paper, a systematic investigation of the effect of the SOD in FJP on ductile and brittle materials was carried out. The investigation focuses not only on the material removal rate and surface generation but also on the effect on the shape of the TIF and the polished surface topography. It provides a deeper understanding of the material removal characteristics in fluid jet polishing.
Experimental Design
To investigate the effect of the SOD in the FJP process systematically, a series of experiments were conducted on both ductile and brittle materials. Nickel copper (NiCu) alloy, one of the important and widely used mold-making materials, was used to represent the ductile metal materials, while BK7 optical glass was used to represent the brittle materials. The experiments were carried out on a ZEEKO IRP200 7-axis polishing machine, as shown in Fig. 2, to generate the tool influence function (TIF, also called the "footprint") on these two materials with various SOD. Table 1 summarizes the polishing conditions. To reduce the effect of the impinging angle, all the experiments adopted an impinging angle of 90 degrees. The polishing conditions for these two groups of experiments are the same. To minimize the effect of the initial surface error on the results, single-point diamond turning was conducted on the NiCu surface before the experiment, while a polished BK7 surface was adopted as the workpiece, as shown in Fig. 3. As summarized in Table 1, the SOD varies from 2 to 35 mm, since the fluid flow becomes divergent when the SOD is larger than 35 mm, as shown in Fig. 2c. Three TIFs were generated under each condition for the evaluation. 4000# silicon carbide abrasive (average size about 3 μm, from FUJIMI Corporation, Japan) was used for the polishing experiments on both NiCu and BK7.
Figures 4 and 5 show the contours of the generated TIFs on NiCu and BK7 surfaces, respectively, measured with a Zygo Nexview white light interferometer. Their sectional profiles have also been extracted and are shown under each TIF contour. The shapes of the generated TIFs are almost rotationally axisymmetric, except for some deviations induced by the wear of the nozzle. It is interesting to note that the material removal distribution appears as a ring shape on both NiCu and BK7 at small SOD, and its sectional profile looks like a "W" shape. There is almost no material removal at the center of the TIF when the SOD is small. With increasing SOD, the material removal at the center zone gradually becomes larger. It is interesting to note that the sectional TIF shape changes to a Gaussian-like or "U" shape when the SOD is larger than 30 mm in the case of NiCu, which is deepest at the center and shallower towards the edge. This phenomenon is also observed in the case of BK7 when the SOD is larger than 25 mm. It is well known that a Gaussian-like TIF is conducive to the convergence of the surface form in the sub-aperture corrective polishing process [29]. A small SOD (usually smaller than ten times the nozzle diameter) is usually adopted in the practical polishing process, and the sectional TIF is a "W" shape when the nozzle impinges the target surface vertically. Hence, many researchers have tried to optimize the TIF shape to obtain a Gaussian-like shape through various means. Booij [2] put a stationary helix into the cylindrical nozzle to generate this kind of TIF for the first time, but the fabrication and assembly of this kind of helix are very difficult. Fang et al. [30] attempted to generate the Gaussian-like TIF by obliquely impinging the target surface from four or six positions. Li et al. [16] obtained the Gaussian-like TIF by rotating an oblique nozzle. Shi et al. [31] adopted submerged jet polishing to generate a Gaussian-like TIF and successfully implemented corrective figuring on optical components. Hence, this study attempts to provide an alternative and simplified way to generate a Gaussian-like TIF for FJP.
Analysis of the Effect of the Stand-Off Distance on the Material Removal Rate
The material removal rate was evaluated in terms of the peak-to-valley (PV) depth and the volume removal rate (VRR) of the TIF, as shown in Fig. 6. The variation trends of PV and VRR versus SOD are found to differ when polishing NiCu and BK7, which are ductile and brittle materials, respectively.
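To make the two metrics concrete, the short sketch below estimates the PV depth and the VRR from a gridded depth map of a single footprint. It is only an illustration: the function name, the synthetic ring-shaped footprint, the pixel size and the dwell time are assumptions chosen for demonstration and are not taken from the experiments in this paper.

import numpy as np

def tif_removal_metrics(depth_map_um, pixel_size_mm, dwell_time_min):
    """Estimate PV depth (um) and volume removal rate (mm^3/min) of one TIF.

    depth_map_um   : 2-D array of removal depth in micrometres (positive = removed)
    pixel_size_mm  : lateral size of one interferometer pixel, in mm
    dwell_time_min : dwell time used to generate the footprint, in minutes
    """
    pv_um = float(np.nanmax(depth_map_um) - np.nanmin(depth_map_um))
    # Removed volume = sum(depth * pixel area); convert um * mm^2 to mm^3
    volume_mm3 = np.nansum(depth_map_um) * 1e-3 * pixel_size_mm ** 2
    return pv_um, volume_mm3 / dwell_time_min

# Hypothetical "W"-shaped (ring) footprint, typical of a small SOD
y, x = np.mgrid[-64:64, -64:64] * 0.01                 # 0.01 mm pixels
r = np.hypot(x, y)
demo_tif = 2.0 * np.exp(-((r - 0.4) / 0.15) ** 2)      # ~2 um deep ring at r = 0.4 mm
print(tif_removal_metrics(demo_tif, pixel_size_mm=0.01, dwell_time_min=1.0))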
In the case of polishing NiCu, the PV value increases with increasing SOD and reaches its maximum when the SOD is about 10 mm, as shown in Fig. 6a. The abrasives impact each other after gathering inside the nozzle and ejecting, leading to energy loss; this is why the PV value of the TIF is small when the SOD is smaller than around 10 mm. Moreover, when the SOD is around 10 mm, the abrasive velocity becomes the largest, which accounts for the largest PV value. The PV value starts to decrease when the SOD is larger than 10 mm, until it reaches a minimum at about 23 mm, induced by the obstruction from the surrounding air. When the SOD is larger than 23 mm, the PV value becomes stable. It is interesting to note that the VRR has a positive relationship with the SOD. With an increase of the SOD, the size of the footprint becomes larger, which leads to an increase of the VRR, as shown in Fig. 6b.
In the case of polishing BK7, both the PV and the VRR become larger with increasing SOD until the SOD is about 23 mm, and both start to decrease slightly with a further increase of the SOD. This is different from the trend on NiCu. This phenomenon may be due to the fact that the abrasive velocity decreases with increasing SOD, so that the velocity of some abrasives is no longer large enough to cause material removal on BK7. Figure 7 shows the measured arithmetic roughness (R_a) of the surface, which varies with the SOD. The arithmetic roughness of the surface was measured by a Zygo Nexview white light interferometer. Considering that the surface roughness at the central zone and the circular zone of the generated footprint is different, as shown in Fig. 7a, the surface roughness of these two zones was measured separately. For NiCu, it is interesting to note that the SOD has little effect on the surface roughness of the circular zone. The R_a value at the central zone is smaller than that of the circular zone when the SOD is small. Moreover, the R_a value at the central zone becomes larger with increasing SOD until it equals the R_a of the circular zone, at an SOD of about 12 mm. This is due to the fact that the initial surface roughness after diamond turning is quite small (about 5 nm), and little material removal is generated at the central zone when the SOD is smaller than 12 mm, as shown in Fig. 4. When the SOD is larger than 12 mm, the R_a values at both the circular and central zones of the footprint stop increasing and converge to a single value (42 nm in this case, with a fluctuation of ± 5 nm), as shown in Fig. 7b.
Analysis of the Effect of the Stand-Off Distance on Surface Topography
During the surface generation process, the surface roughness is dominated by the material removal region, which is the circular zone of the TIF. Hence, this implies that the SOD has almost no effect on the surface roughness of the NiCu surface in fluid jet polishing, and the same may hold for other ductile metal surfaces. Table 2 shows the surface topography at the central zone of NiCu measured by the Zygo Nexview white light interferometer and a TESCAN VEGA3 scanning electron microscope (SEM). Since NiCu is a ductile material, ductile erosion, which includes cutting and ploughing, is the main material removal mechanism, as shown by the SEM results in Table 2. The surface topography is almost the same under different SOD, which corresponds to the surface roughness results shown in Fig. 7b.
Hence, a large SOD can be adopted when fluid jet polishing ductile metal materials, as long as the fluid jet is stable. For example, an SOD between 30 mm and 35 mm can be adopted for polishing the NiCu surface in this study; it not only delivers the highest material removal rate but also generates the Gaussian-like TIF.
For BK7, the roughness of the polished surface increases gradually with increasing SOD. Fractures can hardly be seen when the SOD is small, and pits start to appear when the SOD reaches about 12 mm. As shown in Table 3, more and more pits appear when the SOD is larger than 12 mm, which corresponds to the changing trend of the surface roughness. When the SOD is larger than 12 mm, pits are clearly observed, which may be induced by brittle fracture. As shown in Fig. 7b, the growth rate of the R_a value of BK7 is larger between SOD values of 12 and 25 mm than in the other stages. This may be explained by a transition stage between ductile and brittle material removal. When the SOD is larger than 25 mm, the material may be removed mainly in brittle mode, and the number of fracture cracks increases gradually with larger SOD, as can be seen from the results obtained by the white light interferometer and the SEM shown in Table 3.
When the SOD is smaller than 8 mm, the material may be removed mainly in ductile mode. Hence, the SOD should not be larger than 8 mm under the polishing conditions of this study; this value may differ slightly under different polishing conditions and for different materials. Even though the TIF shape can also become Gaussian-like at large SOD, as shown in Fig. 5, such an SOD cannot be used for the polishing of brittle materials because of the large surface roughness, which is also observed in the sectional surface profiles shown in Fig. 5.
Conclusions
In this study, a systematic investigation of the effect of the stand-off distance (SOD) on the tool influence function and the surface topography in fluid jet polishing was carried out. The main conclusions are as follows:
1. The shape of the tool influence function changes from a ring shape to a Gaussian-like shape with increasing SOD on both ductile and brittle materials.
2. The suitable SOD in FJP is found to be different when polishing ductile and brittle materials, based on the analysis of the material removal rate and surface topography. When using a 1.1-mm nozzle provided by Zeeko Ltd., the SOD is preferred to be 25-35 mm for FJP on NiCu, while it is preferred to be about 8 mm for FJP on BK7, under the polishing conditions described in this paper.
3. The material removal mode when polishing BK7 can be divided into three stages with increasing SOD: ductile removal mode, a transition mode between ductile and brittle removal, and brittle removal mode. Hence, the SOD should be controlled to be smaller than a certain value so as to retain the ductile material removal mode; this value is about 8 mm under the polishing conditions described in this paper.
4. The conclusions drawn for NiCu and BK7 may also be applicable to some other ductile materials (e.g., mold steel, nickel, etc.) and brittle materials (e.g., tungsten carbide, sapphire, etc.), respectively. The specific values given in this paper may differ under different polishing conditions, such as a different nozzle diameter, abrasive size, or fluid pressure.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2020,
"sha1": "dea24ddc8c39137502ba9cac1320e75b64aec3a6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41871-020-00066-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "dea24ddc8c39137502ba9cac1320e75b64aec3a6",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
Automated Dynamic Mascon Generation for GRACE and GRACE-FO Harmonic Processing
Commonly used mass-concentration (mascon) solutions estimated from Level-1B Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On data, provided by processing centers such as the Jet Propulsion Laboratory (JPL) or the Goddard Space Flight Center (GSFC), do not give users control over the placement of mascons or inversion assumptions, such as regularization. While a few studies have focused on regional or global mascon optimization from spherical harmonics data, a global optimization based on the geometry of geophysical signal as a standardized product with user-defined points has not been addressed. Finding the optimal configuration with enough coverage to account for far-field leakage is not a trivial task and is often approached in an ad-hoc manner, if at all. Here, we present an automated approach to defining non-uniform, global mascon solutions that focus on a region of interest specified by the user, while maintaining few global degrees of freedom to minimize noise and leakage. We showcase our approach in High Mountain Asia (HMA) and Alaska, and compare the results with global uniform mascon solutions from range-rate data. We show that the custom mascon solutions can lead to improved regional trends due to a more careful sampling of geophysically distinct regions. In addition, the custom mascon solutions exhibit different seasonal variation compared to the regularized solutions. Our open-source pipeline will allow the community to quickly and efficiently develop optimized global mascon solutions for an arbitrary point or polygon anywhere on the surface of the Earth.
Introduction
Time-variable gravity measurements from the Gravity Recovery and Climate Experiment (GRACE) and the GRACE Follow-On (FO) missions have enabled an unprecedented analysis of mass change on the surface of the Earth since April 2002 [1]. The GRACE and GRACE-FO satellites measure changes in the Earth's gravitational field by measuring changes in inter-satellite distance using a microwave ranging system in the K/Ka band range (KBR), as well as an experimental laser ranging interferometer (LRI) on the GRACE-FO satellite pair [2]. The resulting time-variable gravity product is provided with a monthly temporal resolution and a spatial resolution of roughly 300 km [1,3].
Time-variable gravity measurements by GRACE/GRACE-FO can be represented by spherical harmonic solutions [4], which are provided as Level-2 data products by the mission Science Data System (SDS) centers (Center for Space Research at the University of Texas, Austin (CSR), Jet Propulsion Laboratory (JPL), and The German Research Center for Geosciences (GFZ)). Spherical harmonics are global by nature, and spread power globally [5]. Given the limited resolution of the data at degree and order 60 (∼330 km), along with the random noise that increases as a function of spherical harmonic degree [6], these solutions require post-processing techniques in order to obtain regional estimates of mass change with minimal noise and leakage [7].
One particular technique to isolate the regional mass change from the global gravity solutions is to use mascons, or mass concentrations. This method was developed at the JPL to produce maps of the lunar surface [8]. Mascons are a way of calculating if a region is in a state of mass surplus or mass deficit at any given time as compared to an initial state [9,10]. Mascons were used with both the Level-1b range-rate data and the Level-2 spherical harmonic solutions. Global mascon solutions from the Level-1 data are provided to the scientific community by the JPL [11], CSR [12], and the Goddard Space Flight Center (GSFC) [13,14]. For end users, customized regional mascon solutions can also be created using the Level-2 spherical harmonic solutions [15][16][17][18][19]. Reference [10] used a filtering and smoothing approach to calculate the mass balance of Indian water basins from Level-2 harmonics. Later studies by Jacob and colleagues used a least-squares mascon approach to calculate the mass balance of regionally defined non-uniform regions in North America [16] and the world's glaciated regions [20]. This approach was further improved by [15,21] by using regional uniform spherical caps that minimize the leakage of the mascon solutions. References [17,18] used a non-uniform regional spherical cap approach to adapt the mascon configuration to the geophysical characteristics of the region of interest. Another approach involves the use of point-mass approximations to calculate gravitational variations at the orbit altitude of the GRACE satellites. This method was implemented by [22,23] and further improved by [24]. Ran and colleagues propagated the full covariance matrix of the spherical harmonics, and carefully adjusted mascon sizes to ensure spectral consistency in the solutions, without the need for regularization. Reference [25] then examined the optimal number of mascons in Greenland for various temporal spans (trend, inter-annual and monthly variations, and climatological variations). The point-mass approach was further improved by [26], who modeled the approach as a Taylor expansion with higher degree terms to improve the noise levels of the solution.
Mascons, as a GRACE estimation technique that directly relates range-rate data to the mascon solution, involve less information loss than the determination of mascons from spherical harmonics [11]. While the global range-rate mascon solutions offer convenience and ease-of-use, they provide minimal control to the user in terms of the assumptions and corrections that go into the solutions. For example, many mascon solutions are heavily regularized, which may or may not suit the needs of communities focused on different regions [27]. The JPL mascons, for example, use an a priori covariance matrix based on a number of models for the sources of mass change across the globe, as well as a temporal Kalman filter for the monthly solutions [11]. In addition, the user has little control of the choice of corrections that go into the solutions, such as the Glacial Isostatic Adjustment (GIA) model, any additional atmospheric or pole-tide corrections, or the removal or incorporation of any additional fields (such as hydrology) in the mascon inversion. To appropriately compare or incorporate additional mass fields (e.g., raster grid), the data have to be converted to spherical harmonics, truncated to the same degree and order as GRACE, and fitted to the mascons. Utilizing this spherical harmonic mascon approach gives researchers the freedom to treat GRACE and non-GRACE data equally for proper validation and comparison.
More importantly, while a few studies have examined the optimal number of mascons (e.g., point-mass estimations for different temporal scales [25] or regional variable-size configurations [17,18]), routinely used global mascon solutions, such as those provided by the JPL or GSFC, provide no flexibility in the mascons' locations. This is particularly important in applications that are close to the limiting resolution of the satellites, for which geophysically meaningful signals may need to be treated separately. Mascon estimation requires the intra-mascon mass change to be close to uniform [20], which needs to be taken into consideration in the placement of mascons to the extent possible. Furthermore, the separation of adjacent glacial or hydrological basins requires a careful assessment of the configuration of mascons and the corresponding sensitivity kernel to demonstrate what is being sampled by adjacent mascons [17,18,20].
These problems can be addressed by custom-defined regional mascon configurations that rely on an inversion of the spherical harmonic solutions (e.g., [15,19]). For example, [17] defined non-uniform spherical cap mascons (i.e., circular domes on a sphere, such as a stitched soccer ball), where the position and the size of the caps were dependent on the signal-to-noise ratio and the geometry of the geophysical signal. Thus, the authors were able to extract the mass balance of key ice sheet drainage basins at the limit of the GRACE satellite resolution. These regionally defined mascon solutions do not cover the entire globe, partly due to the difficulty of defining a non-uniform global grid on a sphere while maintaining the local configuration of the mascons. The regional nature of these solutions, however, poses several potential problems. First, any mass change not accounted for by the mascons may leak into the region of interest. Therefore, the far-field signal has to be accounted for by either an ad-hoc correction to the solutions, or an ad-hoc sparse distribution of mascons in regions that are expected to have a large change in mass and leak into the region of interest. Second, regional configurations are prone to more leakage at the boundaries of the mascon grid. The locally customized Antarctic configurations of [17,18] divert this boundary leakage into the ocean. Given that the ocean signal has already been removed from GSM GRACE harmonics, this ocean ringing does not affect the mass balance time series. However, the divergence of the kernel around the boundaries poses a larger problem for more inland regions with large mass change signals, such as High Mountain Asia (HMA).
Here, we propose a new approach to non-uniform mascon configurations that are regionally optimized and retain global coverage on a spherical grid. This allows the user to have complete control over the processing of the data, and focus on smaller basins, such as those of HMA, which require a regionally optimized mascon configuration, while avoiding issues of far-field and boundary leakage. In addition, a non-uniform configuration with large mascons in the far-field minimizes the degrees of freedom of the inversion, reducing noise in the final solution. This automated geometric optimization approach is agnostic to the nature of the data (spherical harmonics or range-rate data), but we focus on spherical harmonics here for simplicity and comparison to similar methods. We present our results for glacierized regions of HMA and coastal Alaska, and compare our results with the existing uniform global range-rate solutions. Open source software and documentation of the full workflow for the presented results are publicly available for use in the community [28].
Methods
We implement an iterative spherical Voronoi tessellation scheme to create the global non-uniform grid on the surface of the Earth, approximated as a sphere. The goal is to gradually shift the concentration of the Voronoi regions towards the region of interest, indicated by a set of fixed points provided by the user, such that we get a more compact representation of the mascons in the region of interest, and gradually larger mascons in the far-field. A Voronoi diagram is defined on a plane as a series of regions that each contain the set of points that are closest to a given point, called a generator. More precisely, given an open set Ω ⊆ R^n and a set of k generators z_1, ..., z_k, the Voronoi region for generator z_i is defined by all points x such that V_i = {x ∈ Ω : ||x − z_i|| < ||x − z_j|| for all j ≠ i}, where ||a − b|| denotes the Euclidean distance between points a and b [29].
Voronoi tessellations provide a powerful tool for mesh creation in climate models [30]. Here, we create Voronoi tessellations on the surface of a sphere [31], and implement an iterative scheme to dynamically adjust the mesh to reflect our desired mascon configuration, as described below.
We start with one or more fixed points for the region of interest chosen by the user. These generators are kept constant through all iterations. A uniform grid of generator points is then built around the globe. These generators are used to create the initial Voronoi diagram, as represented in Figure 1A, with a single fixed point in the Karakoram region of High Mountain Asia, shown in red. The surfaces of the Voronoi regions are presented as random colors on the surface of the sphere. To ensure azimuthal symmetry around the fixed reference point, we rotate the coordinate system such that the reference point is located at the pole, as seen in Figure 1A. At each iteration step, the centroid of each Voronoi region V_i is calculated with respect to its boundary. The centroid c_i is then used as the new generator z_i^new in the iteration step. Recall that the aim of the iterative algorithm is to gradually shift the concentration of the Voronoi regions towards the set of user-defined fixed points. As such, we define a central point X_0 that is given by the mean of the fixed points (X_0 = (1/m) Σ_{i=1}^{m} x_i for m user-defined fixed points). At each iteration, the newly calculated centroid for each region i is shifted by a fixed ratio towards X_0:
z_i^new(t) = c_i(t) + r (X_0 − c_i(t)) for iteration t, where r is the distance multiplier coefficient towards X_0 by which the centroid is shifted to create the new generator, empirically set to 0.02 in this study. As such, the mesh concentrates further around the fixed points at each iteration, while maintaining the same number of total generator points (Figure 1B). After a set number of iterations, the polygons at greater distances from the user-defined fixed point grow larger, while there is a concentration of polygons around the region of interest. The importance of this setup will be discussed in the following sections. The total number of iterations is set as a hyperparameter by the user. Too few iterations will result in a distribution that is more uniform, and too many iterations will lead to large differences in size between the near-field and far-field mascons. The total number of iterations ranges between 50 and 60 in all of our test cases. The resulting polygons in the final iteration are used as the mascons by assuming each polygon has a uniform mass distribution equivalent to 1 cm water equivalent (cm w.e.). By using the mascons, the observed mass change is represented as the sum of a set of weighted uniform regions. To do this, the resulting mascon distribution is converted to spherical harmonics and truncated at degree and order 60, equivalent to the resolution of the GRACE harmonics. We perform a least-squares fit of the Level-2 RL06 spherical harmonic coefficients provided by JPL to the resulting mascons following the methodology of [15,21]. The C_2,0 and C_3,0 GRACE/GRACE-FO coefficients are replaced by the TN-14 supplemental solution provided by the Goddard Space Flight Center (GSFC) [32,33]. The degree 1 (geocenter) coefficients are obtained from the TN-13 supplemental solution using the methodology of [34,35]. Finally, we account for the mass change due to glacial isostatic adjustment (GIA) using the ICE6G-D model [36].
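For illustration, the following minimal sketch implements the iterative concentration scheme described above using SciPy's spherical Voronoi routine. It is a simplified, assumption-laden version of the pipeline: the region centroid is approximated by the mean of the sorted boundary vertices, the rotation of the reference point to the pole, the conversion to spherical harmonics and the least-squares fit to the GRACE coefficients are omitted, and all parameter values and the Karakoram coordinates are merely placeholders.

import numpy as np
from scipy.spatial import SphericalVoronoi

def dynamic_voronoi(fixed_pts, n_free=150, n_iter=55, r=0.02, seed=0):
    """Concentrate a global spherical Voronoi mesh around user-defined fixed points.

    fixed_pts : (m, 3) unit vectors of the fixed generators (kept constant)
    Returns an (m + n_free, 3) array of generators; the Voronoi regions of the
    final iteration would serve as the mascon polygons.
    """
    rng = np.random.default_rng(seed)
    free = rng.normal(size=(n_free, 3))
    free /= np.linalg.norm(free, axis=1, keepdims=True)   # quasi-uniform start
    x0 = fixed_pts.mean(axis=0)
    x0 /= np.linalg.norm(x0)                              # mean of the fixed points

    for _ in range(n_iter):
        gen = np.vstack([fixed_pts, free])
        sv = SphericalVoronoi(gen, radius=1.0)
        sv.sort_vertices_of_regions()
        new_free = []
        for region in sv.regions[len(fixed_pts):]:        # fixed generators stay put
            c = sv.vertices[region].mean(axis=0)          # approximate region centroid
            c = c + r * (x0 - c)                          # shift towards X0 by ratio r
            new_free.append(c / np.linalg.norm(c))        # re-project onto the sphere
        free = np.asarray(new_free)
    return np.vstack([fixed_pts, free])

# Hypothetical single fixed point near the Karakoram (~36 N, 76 E)
lat, lon = np.radians(36.0), np.radians(76.0)
karakoram = np.array([[np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)]])
generators = dynamic_voronoi(karakoram, n_free=150, n_iter=55, r=0.02)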
We design global Voronoi configurations for three regions: the Karakoram range, Nyainqentangla in the High Mountain Asia domain, and the glacierized region of coastal southeast Alaska. The Karakoram configuration is designed to sample the northwest (NW) and southeast (SE) regions separately. The Karakoram and Nyainqentangla regions display near-balance and negative geodetic glacier mass change in recent decades, respectively, with large uncertainty in Nyainqentangla [37]. In addition, Berthier and Brun [38] spatially show a variable mass change in the NW and SE Karakoram, which is challenging to resolve in the available GRACE/GRACE-FO data. We extend the methodology to the SE coast of Alaska and adjoining Canada to test the generalizability of our approach by sampling a smaller glacierized region near the coast.
In each case, we find that the optimal mascon configuration is achieved by defining one fixed point for the region of interest, which allows for more flexibility in optimizing the mesh. For the Karakoram, Nyainqentangla, and Alaska, we used a total of 153, 144, and 113 mascons, respectively. These hyperparameters were determined through trial-and-error. The number and size of mascons are important in order to obtain the localized mass change of the region of interest, while keeping the degrees of freedom (i.e., the total number of mascons to be fitted) to a minimum elsewhere. In addition, larger mascons in the far-field minimize the effect of sharp edges and corners due to the truncation of the harmonics. The harmonic representations of the chosen mascons in the region of interest and the far-field for each configuration are shown in Figure 2.
To understand the area being sampled by the optimized mascon configuration, it is necessary to examine the corresponding sensitivity kernel, given by Equation (A6) of [20]: M_i(t) = R_earth^2 ∬ σ(θ, φ, t) A(θ, φ) sin θ dθ dφ, where M_i(t) is the mass of mascon i at time t, σ(θ, φ, t) is the actual surface mass density at time t, A(θ, φ) is the value of the sensitivity kernel at point (θ, φ), and R_earth, φ, and θ are the mean radius of the Earth, longitude, and co-latitude, respectively. Due to the non-uniform distribution of mass within mascons and the truncated nature of the harmonics, there is inevitably some leakage of the mass balance signal between mascons, and the kernels may not correspond to the expected harmonic representation of the corresponding mascons. This leakage should be included in the uncertainty estimates of the final solution. To do this, we compare the kernels of the mascons of interest with the harmonic representations of the corresponding mascons, such as those shown in Figure 2.
In an ideal situation where all mascons are truly orthogonal to each other, the kernels would be exactly equal to 1 inside the corresponding mascon and 0 outside, i.e., ∬_{S_j} A_i(θ, φ) dS = δ_ij S_j, where A_i(θ, φ) is the kernel of mascon i at point (θ, φ), S_j is the surface area of mascon j, and δ_ij is the Kronecker delta function [20]. Any leakage between mascons, which violates the orthogonality assumption, leads to a deviation from Equation (4). Therefore, as a first-order approximation, we quantify this violation as the percentage of mass represented by the kernel outside of the mascons, compared to the mass represented inside the mascons (Equation (5)). Figure 3 shows the sensitivity kernels as described by Equation (3). The kernels highlight the fundamental trade-off between the size of the sampled area and the amount of ringing. In order to separate the Karakoram region into physically meaningful regimes, we sample different mascons for the northwestern (NW) and southeastern (SE) regions. The spatial variability of geophysical mass change signals in this area dictates the need for smaller mascons, which inevitably results in a higher amount of ringing due to the truncated nature of GRACE harmonics. With the dynamic mascon tool, the user can make this choice consciously and quantify the amount of leakage. More importantly, the same kernel can be applied to various non-GRACE data, such as hydrological model output, for comparison. Developing custom mascons from Level-2 harmonics provides researchers with the ability to examine the sensitivity kernel of mascon aggregates to quantify any leakage and rigorously sample gridded non-GRACE data within the kernel where the gravimetric data are sampled. Furthermore, non-global mascon configurations such as those of [18] are prone to ringing in the sensitivity kernel at the boundaries of the mascon grid. While this is manageable on the Antarctic Ice Sheet, which is surrounded by the Southern Ocean and has distant sources of far-field contribution to the kernel, it becomes a larger issue in smaller inland areas such as High Mountain Asia. Our automatic global configuration minimizes this problem.
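As a rough illustration of how the leakage measure described above can be evaluated in practice, the sketch below discretizes a sensitivity kernel on a regular latitude-longitude grid and compares the area-weighted kernel sum outside the mascon(s) of interest with the sum inside. The grid spacing, mask, kernel values and the use of signed (rather than absolute) outside contributions are assumptions made for the example, not choices taken from this paper.

import numpy as np

def kernel_leakage_percent(kernel, inside_mask, colat_deg):
    """Leakage of a gridded sensitivity kernel as a percentage of the inside mass.

    kernel      : (nlat, nlon) kernel values A(theta, phi)
    inside_mask : boolean array, True where a cell belongs to the mascon(s) of interest
    colat_deg   : (nlat,) co-latitudes of the grid rows, in degrees
    """
    w = np.sin(np.radians(colat_deg))[:, None] * np.ones_like(kernel)  # cell area weight
    inside = np.sum((kernel * w)[inside_mask])
    outside = np.sum((kernel * w)[~inside_mask])
    return 100.0 * outside / inside

# Hypothetical 1-degree grid: a unit kernel with small side lobes outside the mascon
nlat, nlon = 180, 360
colat = np.arange(0.5, 180.5, 1.0)
kernel = np.zeros((nlat, nlon))
inside = np.zeros((nlat, nlon), dtype=bool)
inside[50:60, 70:85] = True
kernel[inside] = 1.0
kernel[48:50, 70:85] = -0.15      # negative ringing from harmonic truncation
kernel[60:62, 70:85] = 0.10
print(f"leakage: {kernel_leakage_percent(kernel, inside, colat):.1f} %")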
Results
Given the fundamental trade-off between the sampled area and the amount of ringing and leakage, it is crucial to fully quantify the leakage error in the error estimate. This is another advantage of directly working with mascon kernels, where the user can fully account for any leakage from the kernel depending on the area being sampled. We calculate this leakage error following Equation (5). The results for each of the regions represented in Figure 3 are shown in Table 1.
Discussion
We compare the regional case studies presented in the previous section to the closest corresponding estimates from the JPL and GSFC range-rate mascon solutions. The JPL mascon solutions are provided on a global set of 4551 3° spherical cap mascons [11]. The GSFC solutions are provided on a global set of 41,168 1 × 1 arc-degree mascons [13,14], where regularization constraints are applied over collections of mascons and the subresolution mascons are meant to be aggregated for regional estimates. Given that these mascon solutions are fixed and global in nature, they are not tailored towards the regions presented in this study. As a result, we sample the closest overlapping mascons for each region, as presented in Figure 4. Given the Voronoi mascons designed for a particular region, we select the corresponding JPL and GSFC mascons with the condition that at least 50% of the area of a given mascon overlaps with the Voronoi mascons. This further demonstrates the need to develop regionally tailored global solutions, as geophysically distinct regions are unlikely to be sampled in a physically meaningful way at all locations with a fixed uniform global grid. For example, the geophysically distinct regions in the Northwest and Southeast Karakoram are challenging to separate from the larger JPL mascon configuration. While the smaller GSFC mascons allow for the easier separation of regions, analysis of individual mascons can be problematic without examining the associated covariance matrix or sensitivity kernel to minimize leakage. Furthermore, to minimize any oceanic contributions to the mascons in coastal Southeastern Alaska, we apply land masks to both the JPL and GSFC mascons. The Voronoi mascons are derived from GSM GRACE harmonics, which already have the oceanic signal removed. Note that fitting the GSM coefficients to the JPL and GSFC mascon grids in the harmonic domain is not feasible for a few reasons. On a practical level, while fitting 100 Voronoi mascons to the GRACE data can be completed in minutes on a personal computer, fitting thousands to tens of thousands of JPL or GSFC mascons is computationally impractical for most users. Furthermore, the regularization constraints of the JPL and GSFC mascon solutions are not readily available and are outside of the scope of the present study. More fundamentally, however, fitting 3721 harmonics (maximum degree and order 60) to 4551 JPL mascons or 41,168 GSFC mascons is an ill-posed problem without additional information. This illustrates another advantage of our dynamic Voronoi approach, which allows the user to perform a well-posed global regression. We evaluate the time series from each of the three mascon solutions for the four case-study sites. For consistency, we apply the same JPL and GSFC corrections to the Voronoi mascons, as outlined in Section 2, including the same GIA correction using the ICE6G-D model [36]. The resulting time series are shown in Figure 5. The errors for the Voronoi mascons are derived from the leakage error as shown by Equation (5), added in quadrature with the 1-sigma GRACE measurement error as in [40]. For the JPL and GSFC mascons, we include the uncertainty estimates provided with the solutions. Note that while the JPL solutions provide a 1-sigma uncertainty estimate, the uncertainty estimates provided by the GSFC are 2-sigma. Despite the more conservative error estimates, we observe a few differences between the solutions, although there is excellent overall agreement, as described below.
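Before turning to the resulting time series, the 50% overlap rule used above to pick comparison mascons can be sketched as follows. The example uses shapely with planar longitude-latitude polygons, which ignores spherical geometry; the polygon coordinates, cell names and the helper function are hypothetical and only illustrate the selection criterion.

from shapely.geometry import Polygon

def select_overlapping(candidate_mascons, voronoi_region, threshold=0.5):
    """Return the ids of candidate mascons whose area overlaps the region of
    interest by at least `threshold` (50 % by default)."""
    selected = []
    for mascon_id, poly in candidate_mascons.items():
        overlap_fraction = poly.intersection(voronoi_region).area / poly.area
        if overlap_fraction >= threshold:
            selected.append(mascon_id)
    return selected

# Hypothetical 3-degree cells and a custom region, in planar lon/lat coordinates
cells = {
    "cell_A": Polygon([(73, 34), (76, 34), (76, 37), (73, 37)]),
    "cell_B": Polygon([(76, 34), (79, 34), (79, 37), (76, 37)]),
}
region = Polygon([(74, 33), (77.2, 33), (77.2, 37), (74, 37)])
print(select_overlapping(cells, region))   # only "cell_A" exceeds the 50 % threshold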
We find that, overall, there is good agreement between the three mascon solutions in the four sampled areas. This increases confidence that our custom Voronoi mascon methodology correctly samples the mass change time series of the region of interest, while providing more flexibility to the user to directly work with the design of mascons and the associated kernels. However, we also observe notable differences between the solutions. Namely, the JPL and GSFC mascons show larger mass losses in SE Karakoram compared to the custom Voronoi mascons. Specifically, the JPL and GSFC solutions show trends of −9.47 ± 0.40 Gt/yr and −5.70 ± 0.31 Gt/yr, respectively, for the period April 2002 to September 2020. On the other hand, the Voronoi solution exhibits a trend of −2.30 ± 0.37 Gt/yr. The uncertainty estimates are given at the 95% confidence level using the t-distribution. Given the mass loss regions adjacent to SE Karakoram, particularly to the south, as shown by [37], the larger mass loss of the JPL and GSFC solutions may be explained by the less targeted sampling of these solutions. In particular, the larger JPL mascons, which have the lowest trend in our analysis, sample larger areas to the south (Figure 4).
Figure 5. All solutions use the ICE6G-D GIA correction [36]. The gray vertical bar shows the gap period between the GRACE and GRACE Follow-On missions. Note the difference in the vertical scale for Alaska.
In NW Karakoram, we again find that the JPL solutions have the most negative trends, followed by the GSFC and Voronoi solutions. The JPL mascon solution has a trend of −1.04 ± 0.38 Gt/yr for the period from April 2002 to September 2020. The GSFC solution shows a trend of −0.93 ± 0.23 Gt/yr, while the targeted Voronoi solution has a trend of 0.06 ± 0.34 Gt/yr for the same period. While the negative trends of the GSFC and JPL solutions are in agreement within uncertainty, they are not in agreement with the Voronoi solution. The results of [37] suggest that while NW Karakoram showed a slightly positive mass balance in this period, surrounding areas exhibited a negative mass balance, which likely explains the more negative trends shown by the larger JPL mascons. The highly customizable and yet global nature of our Voronoi mascons allows the user to further isolate these geophysically distinct areas. It is important to emphasize that using these solutions to assess the assumptions and methodologies of each product is beyond the scope of this paper. However, the Voronoi mascon approach allows one to more carefully assess the region being sampled on a case-by-case basis, which is not possible with static global solutions.
It is also interesting to examine the differences in the seasonal amplitude between the three solutions. The JPL and GSFC solutions use a priori information to regularize the mascons, minimizing the covariance between groups of mascons. We see that in SE Karakoram, the Voronoi solutions show larger seasonal amplitude and more noise. We find a seasonal regression coefficient (the sum in quadrature of cosine and sine components of the seasonal variability) of 22.5 ± 3.9 Gt for the Voronoi mascons, compared to seasonal amplitudes of 8.2 ± 4.2 Gt and 12.1 ± 3.2 Gt for the JPL and GSFC solutions, respectively, for the period April 2002 to September 2020. Note that the unregularized nature of the Voronoi mascons leads to more noise, whereas regularized solutions may have dampened variability. A careful assessment of seasonal variability in these regions based on auxiliary environmental variables is outside of the scope of this study. In the Nyainqentangla region, we do not find larger seasonal amplitudes for the Voronoi solutions. We find seasonal amplitudes of 45.2 ± 5.2 Gt, 42.0 ± 4.8 Gt, and 58.1 ± 5.1 Gt for the Voronoi, JPL, and GSFC solutions. In coastal Alaska, we find the seasonal amplitudes to be in agreement, with coefficients of 94.0 ± 13.4 Gt, 98.8 ± 17.2 Gt, and 93.5 ± 15.0 Gt for the Voronoi, JPL, and GSFC solutions, respectively. It is important to note that the constraints applied to the GSFC solution occur across the entire Gulf of Alaska [13]. In general, Alaska glaciers along the coast exhibit large seasonal variability due to proximity to maritime conditions, so the constraints may be much more physically realistic than those in the Karakoram, where we see greater spatial variability in seasonal mass balances. It may be less important to have the level of control over mascons provided by the Voronoi methodology in regions with more homogeneous geophysical signals, but it becomes increasingly important where there is more spatial variability.
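The trend and seasonal numbers quoted above can, in principle, be reproduced with an ordinary least-squares fit of a constant, a linear trend, and annual cosine and sine terms, with the seasonal amplitude taken as the quadrature sum of the cosine and sine coefficients and the trend uncertainty taken from the t-distribution at the 95% level. The sketch below uses synthetic data and assumes this simple four-parameter model; the regression actually used for the published numbers may include additional terms.

import numpy as np
from scipy import stats

def trend_and_seasonal(t_yr, mass_gt):
    """Fit mass(t) = a + b*t + c*cos(2*pi*t) + d*sin(2*pi*t) by least squares.

    Returns the trend b (Gt/yr), its 95 % confidence interval, and the
    seasonal amplitude sqrt(c^2 + d^2) (Gt).
    """
    G = np.column_stack([np.ones_like(t_yr), t_yr,
                         np.cos(2 * np.pi * t_yr), np.sin(2 * np.pi * t_yr)])
    coef, *_ = np.linalg.lstsq(G, mass_gt, rcond=None)
    resid = mass_gt - G @ coef
    dof = len(t_yr) - G.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(G.T @ G)
    t95 = stats.t.ppf(0.975, dof)
    return coef[1], t95 * np.sqrt(cov[1, 1]), np.hypot(coef[2], coef[3])

# Synthetic monthly series: -2.3 Gt/yr trend, 22 Gt annual cycle, 5 Gt noise
t = np.arange(0, 18.5, 1 / 12)
rng = np.random.default_rng(1)
mass = -2.3 * t + 22.0 * np.sin(2 * np.pi * t) + rng.normal(0, 5, t.size)
print(trend_and_seasonal(t, mass))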
Previous GRACE studies tend to group larger areas together, or focus on regional configurations that require ad-hoc processing to remove far-field effects (e.g., [41]). The automated global variable-size mascon generation solution described here allows users to further isolate physically meaningful distinct regions, while having direct access to the kernels to minimize and quantify leakage for specific configurations. The L1B Regression Mascons using the "Resolution Operator" by [42] are calculated at an even higher resolution of 1 arc-degree. However, these solutions use the inherent trade-off between the temporal and spatial resolution of GRACE data in order to obtain a high spatial resolution for trends across long timespans. While our dynamic mascon solution is still restricted by the spatial resolution of monthly GRACE data at degree and order 60 or about 330 km, we can provide dense time series data with the same monthly sampling as the standard JPL and GSFC mascon solutions.
Conclusions
Gridded mascon products from range-rate data such as those from the JPL [11] or GSFC [13,14] do not give users direct control over the placement of the mascons or the regularization assumptions used during the creation of the solutions. An alternative approach involving manually designed mascons based on Level-2 harmonics for regions of interest, such as those of [17,21], can be arduous and time-consuming to build and can suffer from far-field leakage for regions such as High Mountain Asia.
We present an open-source pipeline to produce global mascon solutions based on an iterative spherical Voronoi tessellation scheme centered on user-defined point(s) or polygon(s) of interest. The mascons gradually increase in size in the far-field, minimizing the effect of noisy higher degree harmonics and the computational cost where high spatial resolution is not needed.
We find that our solutions are in overall agreement with the JPL and GSFC solutions for the four case-study sites of NW and SE Karakoram, Nyainqentangla, and SE coastal Alaska. The improved sampling of the Voronoi mascons, however, leads to differences in observed mass change trends. In addition, the different regularization and a priori assumptions may lead to differences in the seasonal variability of the solutions in some locations. While the JPL and GSFC solutions show dampened seasonal variability over SE Karakoram compared to the Voronoi mascon solution, they show a similar or larger seasonal variability in the Nyainqentangla and Alaskan regions.
Previous GRACE analyses in heterogeneous regions such as High Mountain Asia tend to aggregate large regions, rely on ad-hoc processing to remove far-field effects from regional configurations [41,43], and/or sacrifice temporal resolution for higher spatial resolution [42]. Our automated approach allows users to easily and quickly experiment with global configurations of variable-area mascons to isolate the mass change for relatively small, physically distinct regions (subject to the resolution of GRACE at degree and order 60) with a monthly resolution. Our approach also offers direct access to the underlying kernel, which can also be used to sample non-GRACE data for comparison and analysis and to quantify leakage. In addition to the configuration and placement of the mascons, the user also has control of the assumptions and corrections that are used in the processing, which can be tailored to meet the goals of the study. While we tested our methodology for four different regions, future studies can deploy the same pipeline for various hydrological or glaciological basins across the globe. Data Availability Statement: All associated code and data are publicly accessible on the accompanying Github Repository [28]. The accompanying harmonic processing code is available at [44].
"year": 2021,
"sha1": "b3d9cb3ffb25ddc6e1cadb4721466893d08d34d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/16/3134/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "49dfd2cc5ccfe4f6a1071dfd42a4ef8a56df8848",
"s2fieldsofstudy": [
"Environmental Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Use of Hugo™ RAS in General Surgery: The First 70 Cases at a German Centre and a Systematic Review of the Literature
Introduction: The versatile open modular design of the newly introduced robotic platform Hugo™ RAS is expected to allow its rapid spread in general surgery. However, the system is not yet approved for use in oesophageal and HPB-surgery and is not licensed worldwide. The aim of this work was to review the current spectrum of general surgical procedures that may be feasibly and safely performed with Hugo. Methods: We retrospectively reviewed our own series and performed a systematic review of all the published reports of general surgical procedures performed with this system in the literature. Results: Seventy patients underwent general surgery with Hugo at our institution, and another 99 patients were reported in the literature. The most common procedures were colorectal (n = 55); cholecystectomy (n = 44); repair of groin, ventral and hiatal hernias (n = 34); upper GI (n = 28); adrenalectomy (n = 6); and spleen cyst deroofing (n = 2). No device-related complications were reported. Arm collisions and technical problems were rare. The docking and console times improved in all series. The port positions and robotic arm configurations varied among authors and depended on the surgical indication, patient characteristics and surgeon's preference. Conclusions: A wide spectrum of general surgical procedures has been safely and effectively performed with the Hugo RAS, even by robotically inexperienced teams with a limited choice of instruments. Technical improvements to the system and the introduction of robotic energy devices may help Hugo evolve into a viable alternative to established robotic systems.
Introduction
The Hugo™ robotic-assisted surgery (RAS) (Medtronic, Dublin, Ireland) platform is one of several much-awaited alternatives to the DaVinci robot. It consists of a system tower; an open console, including a widescreen HD-3D display with dedicated glasses, two pistol-like handgrips as arm-controllers and a footswitch panel; and four individual arm carts with a wide manoeuvre range. The platform was first introduced in South America and the Asia-Pacific region and later became commercially available in other parts of the world. In the USA, the platform is still an investigational device that is not for sale and awaits FDA approval. Reports on the clinical use of Hugo first began appearing after the device had received its CE mark in the EU for use in gynaecologic and urologic surgery in October 2021 and for general surgery a year later. So far, about 60 Hugo™ RAS devices have been installed in Europe alone, showing promising results for prostatectomy, nephrectomy and hysterectomy [1][2][3]. It is believed that the open modular design of the platform would also allow its rapid adoption in general surgery. However, due to its novelty, the limited choice of instruments, setup guides that are under development and its missing certification in some parts of the world, published experience with Hugo-assisted general surgical procedures has been very limited.
The aim of this study was to summarize the current experience with Hugo RAS in general surgery, with a focus on the spectrum of indications, the feasibility and safety of the procedures, and the technical features of the system, based on a systematic review of the existing literature together with our own early insights from the first 70 cases with this platform.
Own Patients
A retrospective chart review was performed of all patients who underwent robotic-assisted general surgical procedures at our hospital after beginning work with the Hugo RAS in February 2023 until 31 May 2024. Due to regulatory issues in Germany, no procedures were performed between April and October 2023, so that the study period of active use of Hugo was 9 months. No specific exclusion criteria were applied for patient selection. All patients gave written informed consent for robotic surgery and were personally informed by the operating surgeon about the procedure in all its aspects, including its novelty and the limited experience with that system. They signed a special informed consent form including additional remarks about possible device failures and device-induced complications. This observational study was approved by the Ethics Committee of Ruhr University Bochum (No. 23-7872-BR). It was conducted in accordance with the Declaration of Helsinki. In addition to clinical perioperative parameters, exact data on trocar positioning, docking and console times and the technical performance of the device were gathered. All surgeries were recorded by the DS1 system of the platform and were reviewed via the Touch Surgery™ 7.36 (Digital Surgery Ltd., London, UK) application. The docking time was defined as the time needed to attach all the manipulator arms to the instruments and to test them after trocar placement. The console time was the time between docking and final undocking. The total operative time was defined as the time from skin incision until skin closure.
Our surgical team had previous extensive experience with open and laparoscopic surgeries but no clinical experience with robotic surgery. Extensive theoretical and practical training offered by the vendor was completed as already mentioned in a previous report by us [4]. The system components of the Hugo RAS platform have been repeatedly described in detail elsewhere [5][6][7].
Systematic Review of the Literature
As no MeSH terms for Hugo RAS have been established, two of the authors (OB and TF) performed an independent initial search in PubMed, Web of Science and Google Scholar with the exact text phrases "Hugo RAS", "Hugo Medtronic" and "Hugo robotic surgery" to identify relevant records. All reports on Hugo RAS published until 31 May 2024 were screened, including an additional manual check of their reference lists, and those reports describing the clinical use of the platform in general surgery were retrieved in full-text and included in the summary. Publications in the fields of gynaecologic and urologic surgery, as well as those reporting duplicate or no patient results were excluded. Final records to be included were checked by all four authors of the review; disagreements and missing data were resolved by personally contacting the authors of the original studies and analysing the raw data when necessary. As all included studies were of the lowest evidence level, being retrospective case reports and low-volume series of strongly selected cases, no formal bias risk assessment was performed. Selection, attrition, performance, detection and reporting bias were characteristic for all selected records. The PRISMA flow diagram of the systematic review is presented in Figure 1. This systematic review is registered in PROSPERO (No. CRD42024558581).
Own Patients and Procedures
The first two robotic-assisted general surgical procedures with the Hugo™ RAS at our centre included a cholecystectomy and a sigmoidectomy, performed on 13 February 2023. They, at the same time, represented the first-in-human procedures with the Hugo™ RAS in Germany. Since then, a variety of general surgical procedures have been performed in 70 patients. A summary of our initial experience with the Hugo RAS is presented in Table 1. Along with the listed 31 colorectal surgeries, a wide variety of concomitant procedures were performed, such as appendectomy, adhesiolysis, adnexectomy, fundoplication and rectopexy. There was no mortality in the group. All non-colorectal procedures were uneventful. Complications occurred only after major colorectal surgery: two presacral haematomas after rectosigmoid resections were treated via CT-guided drainage, and an anastomotic tear after an ultralow rectal resection was treated with endoluminal negative pressure. A high output stoma after another low anterior resection resulted in transient renal failure and had to be closed. An elderly patient received antibiotic treatment for a clostridial infection after a right hemicolectomy. Another one suffered a postoperative paralytic ileus, resulting in a prolonged hospital stay.
Lower GI surgery
Following the first original case series of colectomies by Bianchi et al., several other authors reported the feasibility and safety of colorectal procedures with Hugo [6]. Romero-Marcos et al. published their experience including six rectal resections, three sigmoidectomies and a rectopexy [17]. Gangemi et al. reported a mixed series of patients who underwent various procedures, including five ileocecal resections, a right hemicolectomy and a sigmoidectomy [7]. Caputo reported recently on three rectal resections using different setups [16]. Our experience with Hugo includes 31 colorectal procedures, most of them being sigmoid resections (n = 11), rectal resections (n = 8) and right hemicolectomies (n = 8). A left hemicolectomy, an APR and two Hartmann's reversals were also performed. A wide variety of setup configurations were used by different authors; however, similar operating times and clinical results were reported among all series. The butterfly 2 × 2 and the 3 × 1 configurations were the most commonly used, and the compact setup with the camera arm between the legs was preferred for low rectal resections (Figure 2).
Upper GI surgery
The most common upper GI procedure was bariatric Roux-Y gastric bypass (n = 15), followed by a case series of 10 Heller myotomies for achalasia [12,15]. After publishing their initial results with RYGB in four patients, Raffaelli et al. reported later on a larger series of 15 morbidly obese patients treated with the use of Hugo [12]. Only the second report was included in this systematic review. Additionally, a sleeve resection and a subtotal gastrectomy were reported [7]. We performed a wedge gastric resection for a large benign tumour of the stomach. The setup for procedures in the upper abdomen is well established and widely accepted, making them feasible and safe (Figure 3).
Cholecystectomy
Most publications on Hugo-cholecystectomy came from centres in Italy and Spain, reporting relatively small case series of up to seven patients. The largest reported single-centre study provided technical details and setup modifications for the procedure in 14 consecutive cases; it was published by our team in 2023 [4]. Meanwhile, we have performed 32 CCEs with excellent clinical results and no technical problems. A variety of trocar and robotic arm positionings for CCE were proposed by other authors [7,9,14].
Hernia Repair
There are five reports solely focused on the repair of abdominal wall hernia with Hugo. Mintz et al. reported on the successful rTAPP of 13 groin hernia repairs in 10 patients [10]. Quezada et al. performed transversus abdominis release in 10 patients with large ventral hernias using redocking for each side [20]. The short shaft of Hugo's instruments was pointed out by the authors as the main limitation. Jebakumar et al. described in detail the use of Hugo for two cases of rTAPP and five cases of ventral hernia repair in the IPOM technique [13]. Three robotic arms were used by all authors in cases of abdominal wall repair. Hiatal hernia repair, as a separate procedure or as part of a fundoplication, was described by Gangemi and Quijano in two cases [7,8]. We also performed three repairs of hiatal hernia: two of them with mesh reinforcement, and the third one combined with a Toupet fundoplication. In all cases of hiatal hernioplasty, four robotic arms in an upper abdominal setup were used. We do not perform the robotic repair of abdominal wall hernia because in Germany this type of surgery is increasingly pushed into the strongly underfinanced outpatient sector.
Adrenals and Spleen
A case series of three left and two right adrenalectomies was published earlier by Raffaelli et al., using a three-arm setup and an assistant port in the periumbilical region [5]. We confirmed that a left adrenalectomy using a similar subcostal triangular positioning of the robotic trocars is safe and feasible; however, we placed the assistant trocar between the camera and right-hand trocars in the left lower abdomen. The Italian colleagues positioned all three arms at the same ipsilateral side of the patient, whereas we distributed two arms on the left side and one arm on the other side of the patient to avoid collisions and provide more freedom for the assistant. All cases were performed in a lateral decubitus position. The same setup but with four robotic arms and an assistant trocar was successfully applied by our team to perform the deroofing of large symptomatic spleen cysts in two very obese patients.
Technical Performance of Device
We experienced one technical problem with the device in the beginning of this series. In patient No. 3, one of the arms repeatedly did not recognize the instruments. The system had to be restarted twice until function was regained. The arm was successfully repaired before the next procedure. Also, a software update was necessary after the fifth procedure to eliminate some bugs. These technical problems led to a time delay but no adverse clinical events. Subtle arm collisions were repeatedly detected but tolerated by the device and did not interrupt the proper functioning of the system. Neither the instruments nor the other hardware parts broke or showed defects. There were no problems with the image or video transfer. All procedures are saved in the cloud and are easy to review at any time using the Touch Surgery™ application on any device, allowing notifications, workflow analysis and video editing. There were no device-related patient injuries, neither in our collective nor in the literature.
Discussion
This paper offers a summary of all the available reported cases of general surgical procedures performed with the help of the Hugo RAS robotic platform so far.
As expected, since its introduction, the platform has mostly been used in the fields of urology and gynaecology. The noninferiority of Hugo RAS compared to DaVinci was already demonstrated in some large series of prostatectomies and nephrectomies [22,23].
The expectations of its successful use in general surgery are based on its open modular design allowing improved communication within the surgical team and the flexibility in the configuration of the four individual arms. The ergonomic position of the console for the surgeon, the pistol-like grips resembling laparoscopic ones and the lower price have been discussed as further possible advantages of Hugo [4][5][6][7]. The novelty of the device is inherently associated with some temporary disadvantages, such as the limited choice of instruments, lack of worldwide certification and approval for some major surgical procedures, immature training process, shortage of experienced proctors and difficulties in product supply.
The summary of our own initial experience and the review of the literature revealed that despite the abovementioned limitations, a wide spectrum of procedures in general surgery are feasible and safe to perform using the Hugo RAS. Interestingly, the clinical results and operating times of surgical teams without previous robotic experience, such as ours or that of Romero-Marcos, did not differ from those of robotically experienced authors. This paradox indicates that either Hugo RAS and DaVinci are quite different in terms of handling or that the learning curve with Hugo is steep and its interface extremely user-friendly. Moreover, our data provide evidence that general surgery with this system may be safe and feasible even in non-selected patient populations.
Indications for Hugo use showed geographical differences. Thus, reports on hernia repair came predominantly from countries outside Europe, in contrast to the series of colorectal resections and cholecystectomies. Such contrasts may reflect differences in health care reimbursement policies around the world. In Germany, current governmental attempts focus on redirecting all hernia repair into the underfinanced outpatient sector, which makes the use of robotic platforms for this indication unattractive. Another point is the strict hygiene regulation in the EU, which has rated the resterilisation of Hugo instruments insufficient and prohibited their multiple use so far.
On an international level, Hugo was most often used for colorectal surgery. The specific challenges of these procedures include a wide multi-quadrant surgical field, extending from the mid/upper abdomen to the lower abdomen and pelvic region, and the use of multiple different instruments for the resection and reconstruction steps of the operation. The modular design of Hugo with four independent arms may offer an advantage over the single pivot point design of the DaVinci system; however, it lacks automatic targeting and procedural setup memory. Thus, the manual setting of the docking and tilt angles is needed in every single patient, which may be challenging and requires profound device-specific experience of the surgical team. According to our experience and the available literature, the modular design of Hugo cannot fully prevent redocking in complex procedures such as low anterior rectal resections. The major limitations of the Hugo platform with respect to colorectal surgery include the lack of a wristed robotic advanced energy device and some other instruments, such as a clip applicator and a linear stapler. ICG fluorescence is still not available for the Storz camera of Hugo. The shaft of Hugo's instruments is significantly shorter than those of DaVinci, which is a problem in tall patients and those with a larger abdominal cavity, especially in multi-quadrant surgical procedures such as colorectal ones or the repair of large ventral hernias. A comparison of the features between the Hugo RAS and DaVinci platforms is summarized in Table 3.
The limited portfolio of robotic instruments currently available for Hugo requires more active participation from the bed-side assistant, who must apply clips, staplers and vessel sealing through an additional laparoscopic port. Increased interaction between the console and bed-side surgeon is therefore of utmost importance, and this is enabled by the open design of the platform. As the four robotic arms are relatively bulky and possess a large volume of movement outside the body, the assistant sometimes suffers limited manoeuvrability and must be careful not to be squeezed between the arms. Some minor but annoying shortcomings of the Hugo system include the current time limit of 45 min for the use of the monopolar shears and the need to restart the whole system after every single procedure. Attention should be given to positioning the surgical table as high as possible to allow an adequate volume of movement for the arms with a positive tilt; this may sometimes lead to a higher risk of contact between the long sterile robotic arms and the non-sterile parts attached to the ceiling, such as the lamps or anaesthesia devices.
The initial lack of clinically proven setup guides has led to a variety of setup configurations being used by different authors for the same procedure. It was especially obvious in colorectal surgery, where the number and positioning of ports, the positioning of the arms and the number of redockings differed substantially among the authors. However, these differences appeared to not affect the docking times, duration of surgery or clinical outcomes, providing evidence that the modular design of Hugo may effectively allow a personalized setup considering both the individual patient's characteristics and the surgeon's preferences.
Despite its comprehensive character and detailed presentation of the current Hugo standards in general surgery, this systematic review suffers some major limitations. The most important flaw comes from the low evidence level of the included studies and their heterogeneity: 18 studies reported on less than 100 patients treated for a variety of diseases and retrospectively described the outcomes of a relatively broad spectrum of procedures. Furthermore, the lack of established setup guides, the preferences and individual robotic experience of surgeons and the selection bias made the reported results hardly comparable. Another drawback is the lack of a direct comparison of Hugo to DaVinci as the current gold standard in robotic surgery and to other newly introduced robotic platforms, especially in terms of cost effectiveness.
Adding our own experience with Hugo in 70 patients to the results of this systematic review aimed to show the feasibility and safety of this platform in general surgery and present a first-hand point of view of its advantages and limitations. However, these additional data cannot overcome the already mentioned drawbacks of the review regarding heterogeneity and the small patient numbers.
In conclusion, the Hugo RAS platform allowed a wide spectrum of abdominal surgical procedures to be safely performed. Ongoing technical development and software updates of the system are expected to improve its performance. To ensure a more rapid dissemination and the increasing use of the system in general surgery, joint international efforts, including the creation of a procedural register and the establishment of standard clinically proven setup guides, have to be endorsed.
Figure 1 .
Figure 1. PRISMA diagram of the systematic review of the literature, March 2022-May 2024.
Figure 2 .
Figure 2. (A) Lower GI setup for vascular control and colon mobilisation. (B) Lower GI setup for pelvic mobilisation and anastomosis.
Figure 3 .
Figure 3. Setup for upper GI surgery.
Table 1 .
The first 70 general surgical procedures with Hugo TM RAS at our centre over a period of 9 months.
For single cases, actual values are shown; for multiple cases, median values with (minimum-maximum) range are shown.F-female; M-male.
Table 2 .
A summary of published data on general surgery with Hugo TM RAS in chronological order until 31 May 2024.
Table 3 .
Comparison of the major features of the Hugo and DaVinci robotic platforms. | 2024-06-26T15:11:48.180Z | 2024-06-24T00:00:00.000 | {
"year": 2024,
"sha1": "82602977bab20dd07dc28d34575b8e3ef2443b37",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/13/3678/pdf?version=1719239555",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c0f712e9a178ca17216a0e078644416c4111ec14",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
112131160 | pes2o/s2orc | v3-fos-license | A CASE OF THERMOECONOMIC OPTIMIZATION OF AN ENERGY HUB
Sustainable district development requires innovative energy use solutions. The aim of this paper is to illustrate the operation of a real energy hub that can satisfy both thermal and electrical demands of a generic user. In particular, a specific case study developed around the smart grid of the University Campus of Savona (Italy), which was just completed in 2014, is analysed. The grid includes different cogenerative prime movers and a storage system to manage the thermal load demand. Through a time-dependent thermo-economic hierarchical approach developed by the Authors, the work aims at optimizing the management strategy of the different prime movers to satisfy the energy demand, taking into proper account both the energetic and economic aspects. The analysis was carried out considering two different layouts, with and without a conventional stratified thermal storage, to evaluate the impact of this component in the management of the district.
Sustainable district development oriented to the set targets requires innovative solutions in the energy sector. We believe that in this context a primary role is played by Distributed Generation (DG), which refers to electrical and thermal generation located near the place of use, exploiting available renewable sources. One of the best ways to exploit the emerging potential of DG is to take a system approach which views generation and associated loads as a whole concept called a "microgrid". The major benefits can be divided into two categories: economic and operational (El-khattam 2004). From an economic point of view, distributed generation provides power support when load increases during peak demand periods, thus reducing interruptions that may lead to system outages. It also reduces the risk of investment, due to the flexibility of its capacity and installation placement. Distributed generation cuts operational costs when installed close to the customer load because it avoids upgrading or setting up a new transmission and distribution network, thereby providing a cost saving. From the operational point of view, distributed generation guarantees the reliability and stability of supply and reduces power losses.
These main aspects increase the interest of researchers in distributed polygeneration grids at both industrial and academic levels. Specifically, the Thermochemical Power Group (TPG) of the University of Genoa is involved in a four-year European Collaborative project called E-HUB (Energy-HUB for residential and commercial districts and transport). An E-hub is similar to an energy station in which different forms of energy are used in order to satisfy the energy demand of the district. Both consumers and suppliers of energy should be connected to this E-hub by means of bi-directional energy grids (low and/or high temperature heat grid, cold grid for cooling, electrical grid, gas grid). The main aim of the hub is to distribute the energy in a smart way among the consumers. The "smartness" also lies in the management system, where control strategies aiming at the optimization of technical, economic and environmental issues are typically implemented.
Under such conditions storing energy will become beneficial, because those who can store energy can generate flexibility and make use of market opportunities. An important difference between electrical and thermal energy is that heat can be stored more easily and efficiently. So, thermal storage will be one of the first candidates to support smart grids. Heat pumps, CHPs and other devices convert electrical power into heat or vice versa and can do this when the market conditions are best (Ferrari et al. 2014).
The aim of this paper is to study the best management of the energy hub installed in the Savona Campus, and in particular to identify the optimal operational strategy of the e-hub when a thermal storage is installed. The same simulation was therefore carried out both with and without this component.
E-hub description
The facilities installed are based on different technologies with the aim of producing both electrical and thermal energy: the poly-generation smart grid analyzed here is based on the one installed at the TPG laboratory of the University of Genoa (Ferrari et al. 2012). The test rig considered in this work (Fig. 1) is based on the following technology (both electrical and thermal energy):
• a 100 kWel recuperated micro gas turbine (T100 PHS Series): nominal electrical efficiency of 30% and thermal efficiency of 47%;
• a 20 kWel internal combustion engine (TANDEM T20-A): nominal electrical efficiency of 29% and thermal efficiency of 68%;
• a 5000 l storage tank.
Since the test rig is actually installed in a University Campus and from the next winter it will contribute to satisfying the load demand of the Campus, the aim of this paper is to find the best operational strategy of the prime movers taking into account only the variable costs and not the investment-related economic parameters (such as NPV or PBP), because there is no new investment for the machines.
ECoMP description
ECoMP (Economic Cogeneration Modular Program) is an original software developed by the Thermochemical Power Group (TPG) at the University of Genoa, aiming at thermo-economic time-dependent analyses and optimization of energy systems, including off-design conditions (Rivarolo et al. 2013). Recently, a standard component interface (NeWECoMP) has been added to the software, allowing for the implementation of even the most complex plant layouts with a user-friendly interface.
ECoMP is characterized by a modular approach and a standard component interface. It maintains the flexibility and extendibility of the library components (46 modules are available at the moment), allowing users to add new components without modifying the core of the software (Yokoyama and Oseb 2012). Each component is described by three subroutines, which define mass and energy flows, off-design performance curves, and variable and capital costs. Thanks to its modular approach, ECoMP makes it possible to analyze various plant solutions, searching for the optimal dimensioning and/or for the best management strategy from the thermo-economic point of view.
Figure 2 shows how, given the economic environment and the electrical and thermal loads as inputs, and taking into account the connection to the power grid, the desired plant optimisation is obtained.
The choice of the design conditions and of the optimal management is carried out by pursuing a very clear goal: minimizing the total cost calculated over the considered period. For the definition of the total cost, two items must be considered: the fixed (capital) cost and the variable cost. The first considers the capital cost of each component and is a function of size; the second takes into account the consumption of energy and fuel and depends on the chosen operational strategy. First of all, the software loads the input data, which are stored in specific matrices and report information about ambient temperature, user electrical demand, user thermal demand, etc. Secondly, it calculates the component fixed costs, using internal cost functions. Finally, it calculates the variable costs based on the chosen operational strategy and the prime movers' off-design performance curves. The revenues, obtained from the sale of electrical or thermal energy to the network, are considered as negative costs and thus lower the objective function to be minimized. The input file of requested energy must be compiled on a time-resolved basis, and it contains information about the operation/non-operation days. The operating days are divided into a number of periods: one of the most important features of ECoMP is the possibility of performing the analysis for any number of periods, depending on the plant under analysis; moreover, it is possible to choose the number of seconds that a single period is made of.
ECoMP software uses built-in cost equations, which evaluate the capital cost of the single components of the plant based on installed power (gas turbines, internal combustion engines, boilers, fuel cells, solar panels, etc.) or volume (thermal storage), or other relevant parameters. The cost functions for different modules were developed and updated thanks to the contribution of industrial partners over the last few years, from literature data and from commercial offers collected during the Energy-Hub construction (Turbec SpA 2012; asJagen SpA 2013).
In order to improve the reliability of the simulation results, the off-design curves of the prime movers installed in the plant have been implemented in the software. These curves have been extracted from experimental tests. The curves refer to the internal combustion engine TANDEM T20, as well as to the micro gas turbine Turbec T100. The curves are plotted as a function of the electrical power, taking into account three different indicators: thermal power produced by the mover (black line), electrical efficiency (blue line) and fuel consumption (red line). All the values are compared to the nominal ones, as shown in Figure 3 (Ferrari 2014). Two different optimization levels can be investigated by ECoMP: a low and a high level. At the low level, the size of the components is considered fixed (therefore, capital costs are fixed) and the software employs a genetic algorithm in order to determine the best operational strategy. The choice of a genetic algorithm to solve this kind of problem, and its advantages/disadvantages with respect to other techniques/programmes, is well described in (Carroll 1996). In this case, the software aims to minimize the objective function (Eq. 1), which represents the hourly (or sub-hourly) variable costs. Variable costs are made up of the following terms: (i) fuel consumption costs, (ii) electrical energy costs, and (iii) "virtual costs". The electrical energy costs term represents the product of the electrical energy purchased from the external grid and the specific cost of electricity: when the electricity produced by the plant is not sufficient to satisfy the electrical load, which is one of the problem constraints, electricity is purchased from the external grid. It is important to underline that "virtual flows" represent energy exchanges between the plant and the external environment, necessary to satisfy the optimization constraints (i.e. load demands). Since these amounts of energy cannot be produced by the plant, penalty costs are associated with virtual flows. Since the term c_virt assumes a high value (two orders of magnitude higher than the other specific cost terms), the optimization process is forced to find an operational strategy which minimizes virtual flows.
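The structure of the variable-cost objective described above can be illustrated with a short sketch. This is not the actual ECoMP code or Eq. 1: the function name, the specific-cost values and the penalty magnitude are assumptions introduced only to show how the three terms combine, with the virtual-flow penalty set roughly two orders of magnitude above the other specific costs, as stated in the text.

```r
# Minimal sketch of an hourly variable-cost objective of the kind described
# for ECoMP (illustrative only; names and values are assumptions).
variable_cost <- function(fuel_kWh, el_bought_kWh, el_sold_kWh, virtual_kWh,
                          c_fuel = 0.04,   # EUR per kWh of fuel (assumed)
                          c_buy  = 0.15,   # EUR per kWh bought from the grid (assumed)
                          p_sell = 0.08,   # EUR per kWh sold to the grid (assumed)
                          c_virt = 15) {   # virtual-flow penalty, ~2 orders of magnitude higher
  fuel_cost    <- c_fuel * fuel_kWh        # fuel consumption cost
  grid_cost    <- c_buy  * el_bought_kWh   # electricity purchased from the grid
  revenue      <- p_sell * el_sold_kWh     # revenues enter as negative costs
  virtual_cost <- c_virt * virtual_kWh     # penalty forcing virtual flows towards zero
  fuel_cost + grid_cost - revenue + virtual_cost
}

# Example: one period with 250 kWh of fuel, 10 kWh bought, 30 kWh sold, no virtual flows
variable_cost(fuel_kWh = 250, el_bought_kWh = 10, el_sold_kWh = 30, virtual_kWh = 0)
```

Because the penalty coefficient dominates the other specific costs, any candidate strategy that relies on virtual flows scores poorly, which is exactly the behaviour the optimiser exploits to satisfy the load constraints.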
The results of the optimisation process and some additional input data are passed to the economic subroutine for the investment analysis, which is carried out considering a variety of economic scenario parameters (e.g.: construction time, inflation, escalation rates, plant life, financial interests, etc.).
Main inputs for the thermo-economic analysis
As mentioned before, a large number of inputs, most of them related to the site where the plant is installed, must be considered in the thermo-economic optimization approach. This section details the main plant data considered for the analysis. The simulation was carried out for three hours considering five minutes as the time step. a) Electricity and thermal load curves: they represent the main optimization problem constraint. The software receives the time-dependent electrical/thermal load demands as input, which must be satisfied in each period of the year using electricity produced by the generators or by purchasing electricity from the National grid. For this study, Figs. 4 and 5 represent the thermal and electrical demand considered. b) Energy costs: the values considered are reported in Table 1. It has to be said that the distinction between electricity cost and electricity price is due to the different monetary values associated with energy bought from the grid and energy sold back to it. The value of thermal energy has been evaluated by considering a 90% efficiency boiler.
WITHOUT STORAGE
The first test simulated with ECoMP was without a thermal storage; in this way the system is forced to satisfy the heat demand in a timely manner, without exception. Figure 7 shows the results for electricity production/consumption. It can be immediately seen that the request (black curve) and the production (orange and blue lines) have different trends and are not related to each other. This behaviour is due to the constraints regarding the heat commodity imposed during the test (i.e. no thermal demand flexibility). The plant is, in fact, in the position of having to meet a given heat load without having the possibility to store thermal energy; electricity production, on the other hand, has the ability to interact with the grid network. Analysing the power levels of the CHP (Combined Heat and Power) generators, it can be observed that the production of the microturbine is always higher than that of the internal combustion engine; this is due both to the choice made by the control system (i.e. the ECoMP optimiser) and to the different sizes of the two machines. Secondly, the microturbine is maintained almost constantly close to the nominal working point, while most of the adjustments are assigned to the internal combustion engine. Table 2 summarizes the energetic parameters of the plant.
WITH STORAGE
The second analysis was performed including the use of thermal storage. In this case, the system can exploit one additional degree of freedom, thanks to the flexibility on the thermal demand/load; the model, in fact, is no longer required to meet the heat demand in a timely manner, without exception, but it can depart from that request within the limits of the storage.
Analyzing the curves of the request and the production, it can be seen that the two curves have nearly the same trends for almost all the time; this behaviour does not differ from that recorded in the previous simulation without storage.
During the peak of the thermal demand, when the electrical load is very low, the storage helps to satisfy the request, keeping the internal combustion engine off and the micro gas turbine at maximum power.
• At the beginning, when production clearly exceeds the demand, this configuration is characterized by a large sale of electricity (it reaches a maximum power of almost 100 kW).
• Subsequently, since the storage is almost full, the thermal demand is covered exactly by the mGT, switching off the ICE. This configuration is clearly biased in favour of an underproduction compensated by a large purchase of electricity (indicated by the purple curve). Also in this case the network balances the mismatch, providing a maximum power of about 39 kW.
It is possible to notice that the use of a thermal storage in the management of a smart grid brings an improvement in the energetic performance of the plant. In fact, with respect to the solution without a thermal storage, the electricity produced by the prime movers is higher; this caused a reduction of the electricity bought from the grid and an increase of the revenues from the electricity sold to the grid (Figure 10). Based on the operating costs in Table 4, it is possible to evaluate the variable cost for both configurations, quantifying the economic improvement brought by the thermal storage. Considering how the machines work in both cases (Figure 11), the use of the thermal storage has an important impact on the management of the mGT, because it works for a longer time in nominal conditions, with an important reduction of the time spent in off-design operation. This is very significant since, when the mGT operates outside of its nominal conditions, the average efficiency decreases and the impact of maintenance costs on the energy produced increases.
On the other hand, the ICE, in the configuration with thermal storage, never works in nominal conditions and is switched off for a longer time.
Conclusions
In this paper the Energy Hub installed in the Savona Campus of the University of Genoa, Italy, to satisfy the thermal and electrical demands was analysed via a thermo-economic approach, employing the software ECoMP.
In particular, the impact of thermal storage was investigated. From the results, it can be inferred that the thermal storage has a considerable impact on the system behaviour. The use of a thermal storage to better manage the thermal demand brings an improvement not only in the energetic results but also in the economic ones, with a reduction of about 4% in the variable costs. So, it can be concluded that a relatively simple device, such as a conventional stratified water thermal storage, can have a significant positive impact on system performance, provided that proper control algorithms are employed. Implementation of the suggested innovative solution contributes to the sustainable development of the district and facilitates meeting the goals set by the EU.
Fig. 11 .
Fig. 11. Comparison of time percentage of the prime movers with and without thermal storage.
Table 1 .
Energy cost
Table 3
summarizes the energetic parameters of the plant.
Table 4 .
Economic results | 2019-01-02T14:52:18.928Z | 2014-12-29T00:00:00.000 | {
"year": 2014,
"sha1": "a7a31dc4fbf7f093c0a7e74d637d1d4397ffe243",
"oa_license": "CCBY",
"oa_url": "http://jssidoi.org/jesi/article/download/38",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a7a31dc4fbf7f093c0a7e74d637d1d4397ffe243",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
254363999 | pes2o/s2orc | v3-fos-license | Effect of Combination Miding Fern (Stenochlaena palustris) and Iron Tablet to Pregnant Women at Sintang Regency
ABSTRACT
INTRODUCTION
Anemia can be found in all developing and developed countries in the world. According to data from World Health Organization (2015), prevalence of pregnant women with anemia was 38.2%. The highest prevalence of anemia was in the Southeast Asia region (48.7%), the Africa region was 46.3%, the Eastern Mediterranean region was 38.9%, the European region was 25.8%, the American region was 24.9% and the lowest in the western pacific region (24.3%).
Hemoglobin is a component of red blood cells or erythrocytes. Hemoglobin binds oxygen and then delivers oxygen to all body tissue cells. Deficiency of hemoglobin will cause anemia (Ministry of Health RI, 2018). Complications of anemia in pregnancy are miscarriage, preterm delivery, bleeding after delivery, shock and infection during labor or after delivery. Severe anemia can cause cardiac decompensation (decompensatio cordis). Complications in the fetus are fetal death, early neonatal/perinatal death, premature babies, iron deficiency and birth defects (Mahyuni A, et al. 2015).
Based on data from the 2018 Basic Health Research Report, there were 48.9% of pregnant women in Indonesia with anemia (Ministry of Health RI, 2018). The prevalence of pregnant women with anemia based on Basic Health Research data in 2013 was 37.1%. This data showed that there was an increase in cases of anemia in pregnant women by 11.8% in 2018 (Health Research and Development Agency, 2013). Based on data from the West Kalimantan Provincial Health Office in 2020, the number of first trimester pregnant visits that were checked for hemoglobin was 70.358 pregnant women and 27.3% with anemia. Meanwhile, the total number of pregnant women in West Kalimantan in 2020 is 109.314. This data showed that there were 38.956 pregnant women who have not had their hemoglobin checked.
Based on the Sintang Regency Health Office Report in 2020, the number of first visits to pregnant women was 86.02%, the number of fourth visits was 76.86%, the number of blood supplement tablets given to pregnant women was 80.03% and pregnant women with anemia was 18.16%. Based on the data, there were 3.978 pregnant women at the time of the first visit who had not had a hemoglobin check. In 2021, the first visit was 81.52%, the fourth visit was 73.09% and anemia was 4.90%. The data showed a decrease in the number of visits and the number of pregnant women who were checked for hemoglobin levels compared to 2020. This was due to the COVID-19 pandemic. The government has made efforts to resolve anemia in pregnant women through Regulation of the Minister of Health of the Republic of Indonesia Number 88 of 2014 concerning blood booster tablet standards for pregnant women and fertile women. Based on this regulation, pregnant women are given iron tablets during their pregnancy, or at least 90 blood supplement tablets. Miding fern (Stenochlaena palustris) is a type of fern. This plant is very easy to find, grows wild in peatlands and is not seasonal, which means it is available throughout the year. For the population of West Kalimantan, especially in the Ketungau Hilir area of Sintang Regency who live in rural areas, this plant is very easy to obtain without having to buy it. In everyday life, many people use the young miding fern leaves and stems as vegetables for stir-frying or clear vegetable soup. Information obtained from residents in the area shows that the majority of Ketungau Hilir residents do not know about the benefits of consuming the miding fern.
Siharina Febby (2020), in her research entitled Laboratory Test for Iron Levels in Kelakai Biscuits, states that the plant can be processed into various kinds of food and can meet the iron needs of pregnant women. Mahyuni et al. (2015) stated in their research that there was an increase in hemoglobin levels of 0.82 g/dl in second trimester pregnant women who consumed miding fern. Gracea Petricka (2018) also stated in her research that the miding fern has been proven to increase hemoglobin levels. Therefore, it is recommended as a food supplement to prevent anemia. Christine Aden (2019) said that miding fern, which is a locally based additional food, is effective in increasing hemoglobin levels in pregnant women.
Miding fern is also believed to be able to treat anemia and facilitate milk production (Fahruni et al., 2018). In the Dayak Kenyah community, miding fern is used as an anemia medicine, antipyretic, skin disease medicine and is used as a medicine for diarrhea (Mawaddah, 2019). The iron contained in the miding fern can be used as a basis for the treatment of anemia. The abundant availability of miding fern can be an easy alternative for pregnant women to prevent and treat anemia due to iron deficiency. The enormous impact of anemia on pregnant women made researchers interested in conducting further research on the effect of the combination of miding fern (Stenochlaena palustris) and iron tablets on hemoglobin levels in pregnant women at the Nanga Ketungau Health Center Working Area, Sintang Regency.
RESEARCH METHOD
This research was a quantitative study using a quasi-experimental design with a pre-test and post-test without a control group. The population in this study were all pregnant women in the 1st to 3rd trimester in the work area of the Nanga Ketungau Public Health Center, Sintang Regency, totaling 298 pregnant women. Sampling in this study used non-probability sampling with a consecutive sampling technique. The total sample was 33 respondents. This research was conducted on May 9-31, 2022 in the Nanga Ketungau Public Health Center Working Area, Sintang Regency. The study began by measuring the hemoglobin level of the respondents before the intervention. Then an intervention was carried out in the form of giving a combination of miding fern and iron tablets for 10 days. On the 11th day, hemoglobin was measured after the intervention. The results of each respondent's hemoglobin measurement before and after the intervention were then compared. Based on Table 1, the respondents in this study totaled 33 persons. The majority were aged 20-35 years (24 persons, 72.7%). Based on education, most of the respondents had basic education (20 persons, 60.6%). Based on occupation, most were housewives (30 persons, 90.9%). Based on parity, most respondents were multigravida (21 persons, 63.6%). The majority of respondents were in the third trimester (14 persons, 42.4%).
Bivariate Analysis
Bivariate analysis was carried out to analyze the relationship between two variables, which may be independent of each other, influence each other, or where one variable affects the other. Bivariate analysis in this study was used to analyze the effect of the combination of miding fern and iron tablets given to pregnant women in the Nanga Ketungau Health Center Working Area, Sintang Regency.
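The paired pre/post comparison described above can be sketched as follows. The paper does not name the statistical test used; a Wilcoxon signed-rank test is assumed here because medians rather than means are reported, and the hemoglobin values below are invented for illustration only.

```r
# Illustrative sketch of a paired pre/post comparison of hemoglobin levels.
# The specific test is an assumption (Wilcoxon signed-rank); data are hypothetical.
hb_pre  <- c(11.2, 10.8, 12.0, 11.5, 10.9, 11.8)   # g/dl before the intervention
hb_post <- c(13.1, 12.4, 13.0, 12.9, 12.2, 13.4)   # g/dl after 10 days of intervention

median(hb_pre)    # median before, the summary statistic reported in the paper
median(hb_post)   # median after

# Paired non-parametric comparison of pre- and post-intervention levels
wilcox.test(hb_pre, hb_post, paired = TRUE)
```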
Discussion
Based on the results of the data analysis, the hemoglobin levels before and after being given the combination of miding fern (Stenochlaena palustris) and iron tablets can be compared. The median value before being given the combination of miding fern and iron tablets was 11.20 g/dl and after being given the combination of miding fern and iron tablets it increased to 13.10 g/dl, with a p-value = 0.000. These results indicated that p < 0.05, which means that there was a significant effect of giving the combination of miding fern and iron tablets on increasing hemoglobin levels. This study showed that there were 31 respondents who experienced an increase in hemoglobin levels after being given the combination of miding fern and iron tablets, and 2 respondents who experienced a decrease in hemoglobin levels after being given the combination of miding fern and iron tablets. In this research, there were 9 respondents (27.3%) with gestational age in the 1st trimester. After being given miding fern and iron tablets, 8 respondents experienced an increase in hemoglobin levels, while 1 respondent experienced a decrease, from 14 g/dl to 13.7 g/dl. This is because the respondent experienced nausea and vomiting, so her appetite decreased. The median hemoglobin level of respondents before being given miding fern and iron tablets was 12 g/dl. After being given miding fern and iron tablets it became 13.10 g/dl. The average increase in hemoglobin level among respondents in the first trimester was 1.1 g/dl.
In the second trimester, 10 respondents after being given miding fern and iron tablets, there were 9 respondents who experienced an increase in hemoglobin levels, while 1 respondent experienced a decrease from 12.2 g/dl to 9.7 g/dl. Even though they are in the 2nd trimester, there were still respondents who experienced nausea and vomiting. The median hemoglobin levels of respondents before being given miding fern and iron tablets was 11 g/dl, after being given miding fern and iron tablets was 12.1 g/dl. Respondents in the second trimester experienced an increase in hemoglobin levels of 1.2 g/dl.
In the third trimester, after being given miding fern and iron tablets, all respondents experienced an increase in hemoglobin levels. The median hemoglobin level of respondents before being given miding fern and iron tablets was 11.4 g/dl; after being given miding fern and iron tablets it was 13.1 g/dl. The increase in hemoglobin in the third trimester was 1.7 g/dl. It can be concluded that, based on gestational age, the average increase in hemoglobin was highest in the third trimester, namely 1.7 g/dl.
The difference in the increase in hemoglobin levels in the 1st, 2nd and 3rd trimester was due to an increase in iron requirements during pregnancy. Iron is useful for the growth of the fetus and placenta. In addition, during pregnancy there is an increase in maternal blood volume. During pregnancy a woman experiences an increase in blood plasma. In the 1st trimester the increase reached 10-15%, in the 2nd trimester it increased to 30%, in the 3rd trimester there was a gradual decrease. Nausea and vomiting also caused the increase in the respondent's hemoglobin levels to be varied.
Based on the results of this research, there were 8 primigravida respondents. The median hemoglobin level of primigravida respondents before being given miding fern and iron tablets was 11.8 g/dl, after being given miding fern and iron tablets was 12.9 g/dl. The increase in the hemoglobin level of the primigravida respondents was 1.1 g/dl. Multigravida respondents as many as 21 persons. The median hemoglobin level of respondents before being given miding fern and iron tablets was 11 g/dl, after being given a miding fern and iron tablets it became 13.1 g/dl. The average increase in hemoglobin level in multigravida respondents was 2.1 g/dl.
Grande multigravida respondents numbered 4 persons. The median hemoglobin level of respondents before being given miding fern and iron tablets was 11.8 g/dl; after being given the miding fern and iron tablets it became 13.8 g/dl. The increase in hemoglobin in grande multigravida respondents was 2 g/dl. It can be concluded that, based on parity, the largest average increase in hemoglobin level occurred in multigravida respondents, at 2.1 g/dl. Parity is the number of times a mother has given birth to a live or dead child, not counting abortions. The more often a woman experiences pregnancy and childbirth, the greater the iron loss. In this research, respondents with multigravida characteristics experienced the highest increase in hemoglobin.
Research conducted by Febby (2020) stated that the miding fern (Stenochlaena palustris) can be processed into various kinds of food, one of which is biscuits which contain 245 mg of iron. These biscuits consumed 3-4 times a day can help the iron needs of pregnant women. In a study conducted by Aden (2019) stated that there was an increase in hemoglobin levels of pregnant women after consuming miding fern (Stenochlaena palustris) stew. Pregnant women who consumed boiled miding fern (Stenochlaena palustris) experienced an increase in hemoglobin levels of 0.711 g/dl.
Regulation of the Minister of Health Number 88 of 2014 explained that the program for administering blood supplement tablets is given to fertile women and pregnant women. For fertile women it is given 1 (one) time a week and 1 (one) time a day during menstruation and for pregnant women every day or at least 90 (ninety) tablets, where each tablet contains iron equivalent to 60 mg elemental iron (in dosage forms Ferro Sulfate, Ferro Fumarate or Ferro Gluconate) and Folic Acid 0.400 mg.
Research conducted by Mawaddah (2018) on students who experienced anemia showed that the average hemoglobin level of students with anemia before being given miding fern syrup was 10.9 g/dl, while the average hemoglobin level after being given miding fern syrup was 12.7 g/dl. This showed that there was an increase in hemoglobin levels in female adolescents of 1.8 g/dl after being given miding fern syrup. Research conducted by Petricka (2018) on late adolescents (17-19 years old) who experienced anemia showed that there was an increase in hemoglobin levels of 3.24 g/dl after consuming miding fern (Stenochlaena palustris) for a week. Mahyuni (2015), in her research, stated that in the group of 2nd trimester pregnant women given the miding fern (Stenochlaena palustris), there was an increase of 0.14 g/dl, while in the group of 2nd trimester pregnant women given only iron tablets, the increase was 0.49 g/dl.
Based on the research that has been done, increasing hemoglobin in the blood is not only overcome by giving iron tablets, but it is necessary to provide iron-containing foods. Miding fern as much as 100 grams which has been processed into vegetable stew within 1-5 minutes has an iron content of 0.38 mg. The iron will help the formation of red blood cells and increase hemoglobin in the blood. Iron has an important role in the formation of red blood cells and keeping the body's organs functioning properly. Iron metabolism is very useful for the formation of hemoglobin. Iron in the body's cells has a role as an electron carrier or cytochrome which is useful for activating bound oxygen in hemoglobin and myogoblin.
Iron is related to the process of absorption, transportation, utilization, storage and disposal. Iron bound in the content of plant foods is called ferric. Iron bound to the content of animal foods is called ferrous. In the stomach, iron in the ferric form is converted into the ferrous form with the help of vitamin C so that it is more easily absorbed by the intestinal mucosa. Iron in the ferrous form combines with apoferritinin to form an iron-containing protein called ferritin. Iron that will enter into the blood circulation is released in ferrous form which will later undergo oxidation and change form to transferrin after combining with specific proteins. The source of iron from food absorption will be stored as a reserve and transferred to the spinal cord as a material for the formation of hemoglobin. Every day about 20-30 mg of iron is needed by the body for the formation of hemoglobin and iron needs. The combination of miding fern (Stenochlaena palustris) and iron tablets has sufficient iron content so that it can increase hemoglobin levels. The increase in hemoglobin levels in pregnant women after being given a combination of miding fern and iron tablets was 1.90 g/dl.
CONCLUSION
Based on the results of the research and discussion described previously, it can be concluded that there is an effect of the combination of miding fern (Stenochlaena palustris) and iron tablets on increasing hemoglobin levels in pregnant women at Nanga Ketungau Health Center Working Area, Sintang Regency. | 2022-12-07T19:50:28.587Z | 2022-11-29T00:00:00.000 | {
"year": 2022,
"sha1": "379307793fc62d2679377ae46e7a958569b53350",
"oa_license": "CCBYNC",
"oa_url": "https://midwifery.iocspublisher.org/index.php/midwifery/article/download/943/837",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "68f2216856c0bda4e79dea8c1f92a19bc1bc4b4d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
240527045 | pes2o/s2orc | v3-fos-license | Fine-scale hydrological niche segregation in coastal dune slacks
Questions: Hydrological niche segregation is widespread and has been found across a range of different habitats. Different plant species can occupy distinct hydrological niches, and as a result fine-scale variability in hydrology can structure plant communities. However, these patterns may not be as clear in habitats where differences in hydrology are more short-lived, such as coastal dune slacks. We explored the extent that the hydrological regime structures dune slack plant communities. Location: Ainsdale Coastal Sand Dune National Nature Reserve, UK. Methods: Six hundred quadrats were surveyed, 100 in each of six coastal dune slacks. Water table levels are recorded
spond along local hydrological gradients has important implications for conservation management, as hydrologically driven biodiversityrich habitats may be sensitive to climate change.
In complicated systems there is clear evidence that many species occupy distinct hydrological niches, and as a result, fine-scale variability in hydrology drives community composition and patterns of biodiversity (Silvertown et al., 2015). These differences in species preferences, and therefore their segregation within the same habitat, can be studied along hydrological gradients. However, these patterns may not be as strong in habitats where differences in hydrology are more short-lived. Habitats where hydrological regimes vary significantly temporally and spatially may respond differently to hydrological pressures, due to complex biotic and abiotic factors. Therefore, exploring plant-hydrology patterns in habitats with short-lived hydrological regimes, such as coastal dune slacks, can help us to understand to what extent hydrology is driving the distribution of vegetation patterns in these biodiversity-rich habitats.
Dune slacks are seasonal wetlands found between coastal sand dune ridges (Grootjans et al., 1998). They are typically nutrient-poor and support species of conservation importance including the natterjack toad (Epidalea calamita), fen orchid (Liparis loeselii) and petalwort (Petalophyllum ralfsii) (Houston, 2008). Dune slacks lie close to the water table, which varies seasonally and annually, causing periods of flooding-stress and drought-stress (Ranwell, 1959). Dune slack hydrology is complicated as they can be fed by different pathways of water movement such as groundwater, surface water and precipitation (Grootjans et al., 2008; Rhymes et al., 2015). Depth to the water table, seasonal fluctuations and duration of drought or flood can alter species interactions, and therefore community patterns (Willis et al., 1959; Lammerts et al., 2001). These interactions are altered by climate-driven changes in groundwater, with species that are already at risk from climate change more likely to decline (Bartholomeus et al., 2011). Therefore, to predict how dune slack plant communities will respond to climate-driven changes in groundwater, it is essential to understand and explore the fine-scale hydrological drivers in these systems.
Vegetation composition in dune slacks is not only driven by hydrological conditions. For instance, successional processes can shape plant communities. In the pioneer phase, small species establish on limited soil. Over time a moss layer develops, and the depth of the soil organic matter and nitrogen mineralisation increases. Plant-plant interactions shift from competition for nutrients to competition for light, and bare ground cover decreases (Bossuyt et al., 2003; Sýkora et al., 2004; Grootjans et al., 2008). The depth of the soil organic matter indicates dune slack age (reflecting accumulation over time), but also impacts water-holding capacity (Minasny et al., 2015; Bordoloi et al., 2019). The presence of leaf litter (dead organic matter) increases water-holding capacity by reducing water loss from the soil surface (Li et al., 2014). Composition can also be altered by disturbance from grazing animals, which can be indicated by high bare ground cover and the presence of dung (Plassmann et al., 2010; Millett & Edmondson, 2015).
Dispersal limitations (Bossuyt et al., 2003) and priority effects (arrival and establishment of a certain species impacts later arrivals by taking key resources) may also be important drivers of variation in vegetation composition (Eriksson & Eriksson, 1998; Fukami, 2015; Vaughn & Young, 2015). To predict how plant communities will respond to climate change, we therefore need to understand how these complicated interactions drive community structure.
In this study we investigated the extent to which hydrological regimes structure plant community composition in coastal dune slacks.
Specifically, we addressed the following four questions: (1) is hydrology the dominant driver of community composition differences across dune slacks; (2) does hydrology control fine-scale spatial variability in community composition within dune slacks; (3) does the presence of a community-hydrology link relate to topographic variability; and (4) do species have consistent responses to hydrological gradients?
| Study site
The Sefton Coast is located in the northwest of England and is a designated Site of Special Scientific Interest (SSSI). It is the largest dune system in England, approximately 25 km long and 3 km wide (Dynamic Dunescapes, 2021). Within the Sefton Coast dunes lies Ainsdale Sand Dunes National Nature Reserve (NNR), which is also a Natura 2000 protected site. The site contains areas of plantation woodland, fixed dunes, and dune slacks. The site has a large rabbit population and some areas are grazed with livestock (Millett & Edmondson, 2013).
| Vegetation sampling
To measure fine-scale variability, plant communities in six dune slacks were surveyed between 09.2018 and 07.2019 (Figure 1; Appendix S1). Plant communities, which were co-located with a single dipwell (a well used to measure groundwater levels) in each dune slack, were surveyed using a stratified random sampling approach (Figure 2).
Plant communities were surveyed within a 7-m radius of the dipwell, providing a robust measure of hydrological conditions for each quadrat (Curreli et al., 2013). Within the 7 m, the dune slack was sectioned into four quadrants (north, east, south and west). Within each quadrant, 25 random quadrats of 15 cm × 15 cm were sampled, totalling 100 quadrats per dune slack. Species occurrences (vascular and non-vascular), bare ground, dead organic matter and dung were recorded as percentage cover values. Nomenclature for vascular plants follows Stace (2019) and non-vascular plants follows Atherton et al. (2010). Vascular plants identified to species level were included in the analysis (six vascular species removed). From the vegetation data we calculated species richness, the Shannon diversity index and the Simpson diversity index for each dune slack in R (v.4.0.3) using the package BiodiversityR (Kindt, 2018; R Core Team, 2020).
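A minimal sketch of the diversity calculations described above is given below. The study used the BiodiversityR package; the equivalent calls from vegan (on which BiodiversityR builds) are shown here, applied to a small hypothetical community matrix rather than the study data.

```r
# Sketch of quadrat-level richness and diversity calculations (illustrative data).
library(vegan)

comm <- matrix(c(20,  5,  0, 10,
                  0, 15, 30,  5,
                 10, 10, 10, 10),
               nrow = 3, byrow = TRUE,
               dimnames = list(paste0("q", 1:3), paste0("sp", 1:4)))

richness <- specnumber(comm)                    # species richness per quadrat
shannon  <- diversity(comm, index = "shannon")  # Shannon diversity index
simpson  <- diversity(comm, index = "simpson")  # Simpson diversity index

data.frame(richness, shannon, simpson)
```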
| Hydrological data
Ainsdale NNR has an extensive hydrological monitoring network, where the water table depth (WTD) has been measured monthly since 1972 (Clarke & Ayutthaya 2010; Abesser et al., 2017). The names (numbers) used for dune slacks in this study follow the naming convention of the ongoing water table data collection. Differences in the frequency (sampling interval) of the hydrological monitoring make comparisons across slacks challenging (Bartholomeus et al., 2008).
Therefore, daily hydrological time series models were created for each dune slack using the software Menyanthes (von Asmuth et al., 2002). Menyanthes uses the PIRFICT method (Predefined Impulse Response Functions In Continuous Time), which can handle different frequencies and time intervals (von Asmuth et al., 2012). The model requires WTD and climatic data as inputs. Climate data from Blackpool Squires Gates weather station were used for the models (Met Office, 2020). Data required included mean temperature (°C), precipitation (mm), and potential evapotranspiration (PET; mm). PET was calculated using the Penman-Monteith formula via the function "ET.PenmanMonteith" in the package Evapotranspiration (Allen et al., 1994; Guo et al., 2016). Data required for this calculation included maximum, minimum and average temperature (°C), relative humidity (%), precipitation (mm), daily sunshine hours (hours) and wind speed (m/s).
| Calculating quadrat-level hydrological metrics
For use in the statistical analysis, the average of water table depth (AWTD) over a four-year hydrological period (01.06.2015-30.05.2019) was calculated based on modelled daily values for each slack. Previous hydrological studies in UK dune slacks have been based on one to three years of hydrological data (Ranwell, 1959; Jones et al., 2006; Rhymes et al., 2014). A four-year average extends current research, and is similar to another study in UK dune slacks (Curreli et al., 2013). Additionally, in UK wet meadows, hydrological variables over the preceding three to seven years explained shifts in plant communities (Gowing et al., 2002, 2005).
The location and elevation of the six dipwells and the 600 quadrats were recorded using a differential GPS (dGPS), with vertical accuracy <10 mm (Trimble R6; Trimble, 2013). Quadrat-level hydrological metrics were calculated based on the difference in elevation of each quadrat relative to the AWTD of the dipwell.
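Translating the dipwell AWTD to each quadrat is simple arithmetic on the dGPS elevations. The sketch below illustrates one plausible implementation; the variable names, example values and sign convention (positive values meaning the water table lies below the ground surface) are assumptions for illustration, not the authors' code.

```r
# Sketch of the quadrat-level AWTD calculation: the average water table measured
# at the dipwell is translated to each quadrat using the dGPS elevation offset.
awtd_dipwell <- 0.45                  # m below ground at the dipwell (hypothetical)
elev_dipwell <- 5.20                  # m above datum, from dGPS (hypothetical)
elev_quadrat <- c(5.05, 5.20, 5.43)   # m above datum for three quadrats (hypothetical)

water_table_elev <- elev_dipwell - awtd_dipwell       # elevation of the water table surface
awtd_quadrat     <- elev_quadrat - water_table_elev   # depth to water under each quadrat

awtd_quadrat   # lower-lying quadrats sit closer to (or below) the water table
```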
| Topographic variability
The shape and slack floor elevations of the dune slacks were often heterogeneous, and dipwells were not always located in the centre of a dune slack. The dGPS measurements also provide a measure of the variation in topography of the dune slack (topographic roughness).
The standard deviation for each dune slack was calculated based on the quadrat-level elevations. The higher the standard deviation, the more topographically variable the dune slack. Standard deviation was calculated using the function "sd" in the base package stats (R Core Team, 2020).
| Data analysis
Our analysis focussed on the four hypotheses exploring large- and fine-scale community responses and species responses. All data analysis was performed using R (v.4.0.3) (R Core Team, 2020).
| Is hydrology the dominant driver of community composition differences across dune slacks?
The choice of multivariate analysis technique was informed by first running a Detrended Correspondence Analysis (DCA) using the "decorana" function in the package vegan (Oksanen et al., 2015; R Core Team, 2020). Four outlier quadrats were removed due to the presence of a species with high percentage cover (i.e., 90%) not recorded elsewhere, and two quadrats were removed due to missing dGPS data. This resulted in a matrix of 594 quadrats by 69 species. The length of the first DCA axis was >4 (4.634), which indicates strong unimodality (Smilauer & Leps, 2014). Therefore canonical correspondence analysis (CCA) in the package vegan (Oksanen et al., 2020) was used for subsequent analyses to test the potential effects of the environmental parameters on the plant communities. CCA is a direct gradient analysis, where the ordination is constrained by the environmental variables. Variables such as dune slack identity can also be partialled out (Gardener, 2014).
We first included all environmental variables in a global CCA model with dune slack identity as a dummy variable (model one). We used the function "cca" in the package vegan (Oksanen et al., 2020). To remove redundant variables and create a parsimonious model (model two), we used the "ordistep" function to select the minimum number of environmental factors that best explain community structure, with forward selection and R2scope = FALSE (Legendre & Gallagher, 2001; Oksanen et al., 2020). After this process, to determine the presence of multicollinearity between environmental factors, the variance inflation factor (VIF) was calculated using the "vif.cca" function. Variables with VIF values greater than 10 were subsequently dropped from the model. We then repeated this process but with slack identity included as an explanatory variable using the function "Condition". We produced both a full model (model three) and a parsimonious model (model four). This enabled the environmental impacts on communities to be examined with dune slack-caused variability partialled out (the effects of the explanatory variable removed; Borcard et al., 2011).
The significance of the models, for the entire model, each axis and each term, was assessed using an ANOVA with the "anova.cca" function (permutations = 1000).
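The ordination workflow described above can be sketched as follows; `veg` (594 quadrats by 69 species) and `env` (AWTD, NV, BG, DOM, DU, slack) are assumed objects, and the exact formulas, selection settings and permutation counts used by the authors may differ:

```r
library(vegan)

decorana(veg)                                   # DCA: axis 1 length > 4 -> unimodal responses, so use CCA

cca_global <- cca(veg ~ AWTD + NV + BG + DOM + DU + slack, data = env)        # model one
cca_pars   <- ordistep(cca(veg ~ 1, data = env),
                       scope = ~ AWTD + NV + BG + DOM + DU + slack,
                       direction = "forward")                                 # model two
vif.cca(cca_pars)                               # drop any term with VIF > 10

cca_cond   <- cca(veg ~ AWTD + NV + BG + DOM + Condition(slack), data = env)  # models three/four

anova(cca_cond, permutations = 1000)            # whole-model significance
anova(cca_cond, by = "axis",  permutations = 1000)
anova(cca_cond, by = "terms", permutations = 1000)
```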
| Does hydrology control fine-scale spatial variability in community composition within dune slacks?
We used non-metric multidimensional scaling (NMDS) to explore the relationship between hydrology and community composition for each dune slack. Matrix sizes (quadrats by species) were 99 × 26 (slack 3), 100 × 18 (slack 5), 99 × 44 (slack 6), 99 × 28 (slack 7), 98 × 22 (slack 11), and 99 × 24 (slack 49). NMDS is an indirect gradient analysis and was performed using Bray-Curtis dissimilarity (Borcard et al., 2011). The function "metaMDS" in the package vegan was used as it uses repeated random starts to find a stable solution (Oksanen et al., 2020). A maximum of 1,000 iterations for the distance matrix was run, and the NMDS was projected in K = 2 dimensions to minimise stress. The ordination was rotated using the function "MDSrotate" to align the first dimension of the NMDS parallel with AWTD. The ordering of the relationships was assessed using a Shepard diagram, using the function "stressplot" (Borcard et al., 2011). The function "ordisurf" was then used to fit a quadratic trend surface corresponding to the AWTD for each slack (the results from the rotation), which was overlaid onto the NMDS ordination.
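A hedged sketch of the per-slack NMDS steps; `veg_s` is the cover matrix for one slack and `awtd_s` the quadrat-level AWTD values for the same quadrats (both assumed objects):

```r
library(vegan)

nmds <- metaMDS(veg_s, distance = "bray", k = 2, trymax = 1000)   # repeated random starts
nmds <- MDSrotate(nmds, awtd_s)                                   # align axis 1 with the AWTD gradient
stressplot(nmds)                                                  # Shepard diagram of the fit

plot(nmds, display = "sites")
surf <- ordisurf(nmds, awtd_s, add = TRUE)                        # overlay the AWTD trend surface
summary(surf)$r.sq                                                # R2 used as the strength of the link
```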
2.6.3 | Does the presence of a community–hydrology link relate to topographic variability?
We used linear models to determine the relationship between topographic variability within a slack and plant community diversity, and the composition-hydrology relationship. We tested relationships between topographic variability (roughness) and the dependent variables (1) species richness, (2) Shannon diversity, (3) Simpson diversity and (4) the R2-value from the NMDS trend surface. We used the function "lm" in the base package stats (R Core Team, 2020).
Richness, Shannon and Simpson provided measures of alpha diversity for each slack. The R2-value from the NMDS provided a measure of the strength of the hydrology–plant community relationship. For each model, we fitted and compared linear and logarithmic regression lines. Logarithmic regression lines were the best fit (highest R2) for all four models.
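A minimal sketch of one of these comparisons (here for species richness; `slacks` is an assumed data frame with one row per dune slack):

```r
fit_lin <- lm(richness ~ roughness,      data = slacks)
fit_log <- lm(richness ~ log(roughness), data = slacks)
c(linear      = summary(fit_lin)$adj.r.squared,
  logarithmic = summary(fit_log)$adj.r.squared)   # the logarithmic fit gave the higher R2 here
```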
| Do species have consistent responses to hydrological gradients?
We modelled species-specific responses to the hydrology gradient (AWTD) using a generalised additive mixed model (GAMM) approach (Zuur et al., 2009). This approach allows for non-linear relationships between the response (species) and the explanatory variable (AWTD) and can control for spatial autocorrelation. Models were created for species with more than 5% frequency across all slacks (present in at least 30 of the 596 quadrats). This reduced the number of species modelled from 69 to 20. Percentage cover data were converted to presence/absence data. A GAMM was fitted with presence of each species as the dependent variable and AWTD as the independent variable, using a cubic spline and a binomial family. A random effect of dune slack was included, and spatial autocorrelation was modelled using latitude and longitude. The function "gamm" from the package mgcv was used (Wood, 2019).
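A sketch of one species-level model; the data frame, the shrinkage cubic spline basis and the exponential spatial correlation structure are illustrative assumptions rather than details taken from the paper:

```r
library(mgcv)   # gamm() also loads nlme, which supplies the correlation structures

# dat: hypothetical data frame with presence (0/1), awtd, slack (factor), lon, lat
m <- gamm(presence ~ s(awtd, bs = "cs"),
          random      = list(slack = ~ 1),            # random intercept for dune slack
          correlation = corExp(form = ~ lon + lat),   # spatial autocorrelation from coordinates
          family      = binomial,
          data        = dat)

summary(m$gam)   # significance of the smooth AWTD term
plot(m$gam)      # fitted probability of presence along the AWTD gradient
```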
| RESULTS
A total of 69 species were recorded across the six dune slacks.
| Is hydrology the dominant driver of community composition differences across dune slacks?
In the global CCA model (model one), the constrained variables (AWTD, NV, BG and DOM, and slack identity) explained 13.7% of the total variation. The first five axes were statistically significant, with axes 1 and 2 explaining respectively 30% and 23.5% of the explained variation. The constrained variables were all statistically significant except for DU. In the parsimonious model (model two), the constrained variables (AWTD, slack 3, 5, 6, 7 and 49, and NV) explained 13% of the total variation. Constrained variables DU, BG, DOM and slack 11 were dropped. The first seven axes were statistically significant, axes 1 and 2 explaining respectively 31.5% and 24.8% of the explained variance. The constrained variables were statistically significant.
In the full model, where slack identity was included as an explanatory variable (model three), the constrained variables (AWTD, NV, BG and DOM) explained 3.7% of the variance and slack identity explained 10%. Only the first axis was statistically significant, explaining 72% of the variation. The constrained variables AWTD and NV were statistically significant, but BG and DOM were not. In the parsimonious model (model four), the constrained variables (AWTD, NV, BG and DOM) explained 3.6% of the variance and slack identity explained 10% (Figure 3). The constrained variable DU was dropped. Axes 1 and 3 explained respectively 74.7% and 9.7% of the explained variation. The constrained variables were statistically significant except BG (p = 0.078), which was nevertheless included in the final model as it gave the best model fit.
| Is hydrology controlling community composition within each slack?
For three dune slacks (6, 7 and 49), plant community composition was strongly associated with the hydrological gradient (Figure 4).
However, this was not the case for slacks 3, 5 and 11, where there was no clear structuring of the community by the hydrological gradient.
| How does the presence of a community–hydrology link relate to topographic variability?
There was good evidence that more topographically variable dune slacks contained greater plant species diversity (linear model: Richness: F1,4 = 8, Adj R2 = 0.58, p = 0.047; Shannon: p = 0.004; Simpson: F1,4 = 10.18, Adj R2 = 0.65, p = 0.033). There was also evidence that more topographically variable dune slacks had significant water table depth trends (linear model: NMDS R2 trend surface: F1,4 = 42.46, Adj R2 = 0.89, p = 0.003). Slack 6 was the most topographically variable, contained the most species and had the highest Shannon and Simpson diversity. Slack 11 was the least topographically variable, had the second-lowest species richness, but the lowest Shannon and Simpson diversity (Table 2; Figure 5).
| Do species have consistent responses to hydrological gradients?
Out of the 20 species investigated, for 12 the relationship between the depth to water table and probability of presence was statistically significant (Figure 6). Responses of species presence along a water table gradient from −171 cm (drier) to −4 cm (wetter) included hump-shaped (e.g., Lotus pedunculatus), curvilinear increasing (e.g., Carex nigra) and curvilinear decreasing forms.
| DISCUSSION
We found that hydrology is a key driver of plant composition at a landscape scale because it explained variation in plant community composition across all dune slacks. However, for variation in plant communities within a slack, hydrology was only an important control for half of the wetlands. We demonstrate that these differences in within-slack community-hydrology relationships are explained by the extent of within-slack variability in topography. In topographically variable slacks community composition is tightly coupled to hydrology; this is not the case in less variable slacks. Slack topographic variability was also a key driver of slack plant community diversity.
We also demonstrate species-specific responses along hydrological gradients, providing evidence of hydrological niche segregation for some species at least. These patterns are important in demonstrating that, while the short-lived hydrological regimes can be very important drivers of plant community structure, they may not be the only factor responsible. We also demonstrate the key role that topography plays in structuring these biodiversity-rich coastal wetlands. This will provide important information for coastal-site managers when considering dune slack restoration and emphasise the necessity of including topographical variability to support plant species diversity.
How species co-exist in spatially heterogeneous environments depends on several criteria. One of the most important explanations for co-existence is that individual species have different responses to the environment (Chesson, 2000; Silvertown, 2004). We provide indirect evidence of hydrological niche segregation (due to being observational data) by modelling species-specific responses along a hydrological gradient (Silvertown, 2004). Evidence of species segregation along hydrological gradients has been noted in several habitats (Silvertown et al., 1999, 2015; Araya et al., 2011; Letten et al., 2015; García-Baquero et al., 2016).
Of the 20 species, 12 species were responding along the hydrological gradient; a lack of response for the remaining species may follow from priority effects. For instance, in a grassland system Damgaard et al. (2017) demonstrated that colonisation has a larger impact on species survival and success compared to survival of flooding.
Successional processes may also result in differences in plant communities that are not evidenced in current hydrological conditions (Pye & Saye, 2005; Davy et al., 2006). Successional processes may be altered by priority effects in young dune slacks, changing the trajectory of the plant community (Bossuyt et al., 2003). Extremely dry conditions can also delay the development towards older successional vegetation (Grootjans et al., 1991). Species co-existence and community structure may be affected by species dispersal and competition (Bossuyt et al., 2003). In addition, underlying mechanisms that were not measured, such as groundwater nitrogen, may be driving species-specific responses (Rhymes et al., 2016). Site management must, therefore, include recognition of local factors other than water table depth in controlling vegetation composition.
In our study, within-slack topographic variability was the driver of differences in hydrology and plant community diversity. Less topographically varied dune slacks had the least variation in hydrology and typically lower species diversity. The internal structuring of vegetation composition within flatter slacks is not determined by hydrology. Topographically controlled soil moisture plays an important role in structuring plant communities (Moeslund et al., 2013c). Vertical elevation is recognised as an important control of soil moisture in habitats that lie within a few metres of the water table (Moeslund et al., 2011, 2013a). For instance, as Moeslund et al. (2013b) demonstrated at different spatial scales (2-250 m) across a range of lowland habitats in Denmark, elevation had the strongest impact on plant communities and topographically controlled soil moisture was the main determinant of habitat type.
Across all dune slacks hydrology was a dominant driver of plant community composition. These findings are consistent with general trends in dune slacks (Willis et al., 1959; Grootjans et al., 1991, 2008; Lammerts et al., 2001). Our results also show that slack identity explained more variation than environmental variables. This could relate to differences in community composition within the slack, which could be driven by the difference in hydrological regime (Bossuyt et al., 2003, 2005). For instance, fluctuations in the hydrological regime can alter intraspecific and interspecific interactions, and thus community composition. Changes in the water table levels can also affect the removal of nutrients, with high water levels maintaining a low nutrient status of nitrogen and phosphorus (Lammerts & Grootjans, 1997). Microbial communities and belowground interspecific competition can also be altered by the hydrological regime (Wolfe et al., 2006; Bartelheimer et al., 2010; Araya et al., 2013). These unmeasured effects driven by the hydrological regime may be influencing the plant community structure differently within each slack. Plant species and community structure may also be affecting soil moisture dynamics (Leitinger et al., 2015; Zheng et al., 2015), and soil properties (Ehrenfeld et al., 2005; Fischer et al., 2014, 2015). For instance, Fischer et al. (2019) demonstrated in a long-term grassland experiment that soil water content was affected by species richness and certain functional groups.
FIGURE 6 Fitted generalised additive mixed models (GAMM) for 12 species across all six dune slacks, accounting for spatial autocorrelation and a random effect (of dune slack). The smoothers (-) and confidence bands (---) represent the mean probability of presence against average water table depth (AWTD in cm; n = 594).
Water table depth provides a measure of the hydrological conditions of the slack. This metric is valuable for site managers, as it provides essential information on the seasonal and long-term hydrological changes, and can explain differences in community composition (Sterck et al., 2011; Curreli et al., 2013; Rhymes et al., 2014; Schietti et al., 2014; Earl, 2015). However, our choice of metric (four-year average) will not encompass all of the variability in hydrological regime. One of the challenges in interpreting plant responses along a hydrological gradient is the role of water as both a limiting resource and a stress when over-abundant. Silvertown et al. (1999) demonstrated that there is a trade-off of species tolerance between time spent in drought conditions and time spent waterlogged, which is measured as sum exceedance values (SEV). This is further supported by Araya et al. (2011), whereas Bartholomeus et al. (2012) suggest that direct measures such as availability of water and oxygen in the root zone predict vegetation better than indirect measures such as mean spring groundwater or SEV. Consequently, our choice of metric may not have been sensitive enough to identify all the plant communities responding along hydrological gradients. Nonetheless, we found strong responses to the metric we used, indicating that we did characterise a component of slack hydrology which is important for plant communities.
Our results demonstrate the complexity of dune slacks, and the importance of exploring hydrological drivers in short-lived habitats.
Across all dune slacks, hydrology was a key feature in structuring plant communities. However, communities in only half of the studied coastal wetlands were structured along a hydrological gradient. This relates to topographic variability within the slack. Even within the same sand dune system, plant communities may be controlled by different sets of environmental drivers. This has implications for understanding how the drivers of plant community change might impact communities and may result in idiosyncratic responses, where specific habitat-level controls are not fully understood. This research assists coastal dune management by demonstrating that to maximise habitat space for many species, topographically varied dune slacks need to be created, and that successional processes and species interactions will play a role in determining final slack composition.
Climate change will have a significant impact on dune slack hydrology and understanding how plants respond to hydrology is essential to protect these biodiversity-rich habitats.
FIGURE 3 Canonical correspondence analysis (CCA) of the constrained parsimonious model (model four). (a) The plant species and the environmental factors. (b) The quadrats, the environmental factors and dune slack identity. Abbreviations for environmental variables are: NV, non-vascular plants; DOM, dead organic matter; BG, bare ground; and AWTD, average water table depth.
While only based on six dune slacks, our study demonstrates the potential for small-scale local topographic variability as a control over local plant diversity patterns. This is explained by the interaction between niche differentiation along the hydrological gradient, coupled with differences in niche diversity between habitats. This further demonstrates the importance of creating topographically variable dune slacks in restoration projects.
FIGURE 5 The relationship between topographic variability in a slack and measures of alpha diversity and the hydrology-plant community relationship. Presented are the surface roughness (standard deviation of between-quadrat variation in elevation) and dune slack (1) species richness, (2) Shannon diversity, (3) Simpson diversity and (4) non-metric multidimensional scaling (NMDS) R2 trend surface. The fitted line is based on a logarithmic regression.
Stratified random sampling in a dune slack: a single dipwell is located within each dune slack. Within a 7 m radius from the dipwell, the dune slack is sectioned into four quadrants (north, east, south and west). Within each quadrant, 25 random quadrats of 15 cm × 15 cm are undertaken, totalling 100 quadrats per slack.
Environmental variables and codes: Non-vascular (%), NV; Bare ground (%), BG; Dead organic matter (deadwood and litter) (%), DOM; Dung (%), DU; Average water table depth over four years (cm), AWTD.
TABLE 2 Summary of the topography, water table depths, NMDS trend surface and alpha diversity for each slack. AWTD, average of water table depth; NMDS, non-metric multidimensional scaling.
This research would not have been possible without Ainsdale NNR's Senior Reserve Manager, David Mercer, and the assistance of Reserve Managers Barry Smith, Peter Gahan and Tony Meadow. We would like to thank the landowners, Natural England. Many thanks to the three anonymous referees. This research was possible due to PhD funding to CD from UK Research and Innovation via the Central England NERC Training Alliance (CENTA).
Complementarity between survival and mortality
Accurate demographic functions help scientists define and understand longevity. We summarize a new demographic model, the Weon model, and show the application to the demographic data for Switzerland (1876-2002). Particularly, the Weon model simply defines the maximum longevity, which is induced in nature by the mortality dynamics. In this study, we reconsider the definition of the maximum longevity and the effectiveness for longevity by the combined effect of the survival and mortality functions. The results suggest that the mortality function should be zero at the maximum longevity, since the density function is zero but the survival function is not zero. Furthermore, the effectiveness for longevity can be maximized at the characteristic life by the complementarity between the survival and mortality functions, which suggests that there may be two parts of rectangularization for longevity. The historical trends for Switzerland (1876-2002) imply that there may be a fundamental limiting force to restrict the increase of the effectiveness. As a result, it seems that the density function is essential to define and understand the mortality dynamics, the maximum longevity, the effectiveness for longevity, the paradigm of rectangularization and the historical trends of the effectiveness by the complementarity between the survival and mortality functions.
Introduction
The study of aging has traditionally been independently approached at two levels of biological organization: at the individual and sub-individual level by gerontologists interested in the physiology of human aging, and at the population level by demographers primarily interested in patterns of survival and mortality in human populations (Gohil and Joshi, 1998). Fundamental studies of the aging process have lately attracted the interest of researchers in a variety of disciplines, linking ideas and theories from such diverse fields as biochemistry to mathematics (Weitz and Fraser, 2001). The way to characterize aging is to plot the increase in mortality rate with chronological age. The mortality rate is the probability that an individual who is alive at a particular age will die during the following age interval. The mortality rate can also be represented as the fraction of the population surviving to a particular age (or the survival rate) (Weon, 2004c).
The fundamental law of population dynamics is the Gompertz law (Gompertz, 1825), in which the human mortality rate increases roughly exponentially with increasing age at senescence. The Gompertz model is most commonly employed to compare mortality rates between different populations (Penna and Stauffer, 1996).
However, no mathematical model so far, including the Gompertz model, has been suggested that can perfectly approximate the development of the mortality rate over the total life span (Kowald, 1999). Particularly in modern research findings, it seems to be obvious that the mortality rate does not increase according to the Gompertz model at the highest ages (Vaupel, 1997; Robine and Vaupel, 2002), and this deviation from the Gompertz model is a great puzzle to demographers, biologists and gerontologists. There is strong evidence from many developed countries that the rate of increase in mortality decelerates at high ages. Many of the traditional mathematical models (for instance, the Gompertz, Weibull, Heligman & Pollard, Kannisto, Quadratic and Logistic models) for the mortality rate provide poor fits to empirical population data at the highest ages (Thatcher, Kannisto and Vaupel, 1998; Yi and Vaupel, 2003).
We have recently found a useful model, the Weon model, derived from the Weibull model with an age-dependent shape parameter to describe the human survival and mortality curves (Weon, 2004a, 2004b, 2004c). In this paper, we summarize the new demographic model, the Weon model, and show the application to the demographic data for Switzerland from 1876 up to 2002. Especially, we reconsider the definition of the maximum longevity, the effectiveness for longevity, the new paradigm of rectangularization for longevity and the historical trends of the effectiveness by the complementarity or the combined effect of the survival and mortality functions.
Demographic functions
Mortality distributions can be effectively summarized by any one of several complementary functions (Wilmoth, 1997). Three functions are particularly useful: the density function, the survival function and the mortality function, each as a function of age. Let f be the (probability) density function describing the distribution of life spans in a population. The cumulative density function, F, gives the probability that an individual dies before surpassing age t (here the age t is treated as a continuous variable). The survival function, s, gives the complementary probability (s = 1 − F) that an individual is still alive at age t. The mortality function, m, is defined as the ratio of the density and survival functions (m = f/s). Thus, the mortality function gives the probability density at age t conditional on survival to that age. The mathematical, complementary relationships among the demographic functions are therefore
s(t) = 1 − F(t), f(t) = −ds(t)/dt, (1)
m(t) = f(t)/s(t) = −d ln s(t)/dt. (2)
Weon model
Recently, a new concept, model, methodology and principle for studying human longevity on a demographic basis have been introduced, developed and established by Weon (2004a, 2004b, 2004c). We call the new model the "Weon model" (Weon, 2004c), which is modified from the Weibull model with an age-dependent shape parameter. This section summarizes the concept, model, methodology, generality, definition of the mathematical limit of longevity (the maximum longevity) and complementarity principle on longevity by the Weon model and shows the application to the demographic data for Switzerland.
Concept
The original concept was obtained as follows: typical human survival curves show i) a rapid decrease in survival in the first few years of life and ii) a relatively steady decrease and then an abrupt decrease near death thereafter. Interestingly, the former behaviour resembles the Weibull survival function with β < 1 and the latter behaviour seems to follow the case of β >> 1. With this in mind, it could be assumed that the shape parameter is a function of age (Weon, 2004b). The new model is completely different from the Weibull model in terms of the 'age dependence of the shape parameter'. It is especially noted that the shape parameter can indicate a 'rectangularity' of the survival curve. The reason for this is that as the value of the shape parameter becomes high, the shape of the survival curve approaches a more rectangular shape (Weon, 2004a).
Model
The Weon model is derived from the Weibull survival function and is simply described by two parameters, the age-dependent shape parameter and the characteristic life. The age-dependent shape parameter enables us to model the survival and mortality functions, expressed as equations (3) and (4) below, where α denotes the characteristic life (or the scale parameter; α = t when s = exp(−1)) and β denotes the shape parameter as a "function of age". The Weon model is completely different from the Weibull model in the age dependence of the shape parameter. The fact that the shape parameter for humans is a function of age is valid with a certain degree of universality in many countries (Weon, 2004a, 2004b, 2004c). The density function by the Weon model can be expressed as the product of equations (3) and (4) through the mathematical relationship f = m·s.
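The equations referenced here as (3) and (4) are not legible in this extraction; the following is a reconstruction consistent with the definitions above (a Weibull survival function with an age-dependent shape parameter, m = −d ln s/dt and f = m·s), not a verbatim copy of the original typesetting:

```latex
s(t) = \exp\!\left[-\left(\frac{t}{\alpha}\right)^{\beta(t)}\right] \quad (3)

m(t) = -\frac{d\ln s(t)}{dt}
     = \left(\frac{t}{\alpha}\right)^{\beta(t)}
       \left[\frac{\beta(t)}{t} + \frac{d\beta(t)}{dt}\,\ln\frac{t}{\alpha}\right] \quad (4)

f(t) = m(t)\, s(t) \quad (5)
```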
Methodology
We could evaluate the age dependence of the shape parameter to determine an adequate mathematical expression for it, after determining the characteristic life graphically from the survival curve. Conveniently, the characteristic life is always found at the age at which survival equals exp(−1). This feature gives the advantage of finding the value of α simply by graphical analysis of the survival curve. In turn, with the observed value of α, we can plot the shape parameter as a function of age through β(t) = ln[−ln s(t)]/ln(t/α), which follows from equation (3); this obviously implies that 'β is a function of age'. For example, we can see the age dependence of the shape parameter for Switzerland (2002). In empirical practice, we could successfully use a polynomial expression for modeling the shape parameter as a function of age, β(t) = β0 + β1·t + β2·t² + ..., where the associated coefficients could be determined by a regression analysis of the shape parameter curve. And thus, the derivative of β is obtained as dβ/dt = β1 + 2β2·t + ..., which indicates again that the shape parameter for humans is a function of age. Roughly, a linear expression is useful for ages 0-80. But for the best fits to the demographic data over the total life span, a cubic or a quartic expression is appropriate for ages 0-20, a linear or a quadratic expression is appropriate for ages 20-80 and a quadratic expression is appropriate for ages 80+ (Weon, 2004c).
On the other hand, β mathematically approaches infinity as the age t approaches the value of α, because the denominator ln(t/α) in the expression for β approaches zero. Only two parameters, α and β, determine the survival and mortality functions. In the case of Switzerland (2002), where the characteristic life is evaluated to be 86.98 years, the trace of α can be observed at age 87, near the characteristic life, in Fig. 1.
Generality
The Gompertz model (Gompertz, 1825) and the Weibull model (Weibull, 1951) are the most generally used models at present (Gavrilov and Gavrilova, 2001).
Interestingly, the Gompertz model is more commonly used to describe biological systems, whereas the Weibull model is more commonly applicable to technical devices (Gavrilov and Gavrilova, 2001). In the previous paper (Weon, 2004c), we could see that the traditional models, the Gompertz and Weibull models, may be generalized by the Weon model on the basis of the fact that the shape parameter is a function of age, through the approximate relationship ln m ∝ β after adulthood (for ages ~20+) (Weon, 2004c).
The Weon model approximates the Gompertz model when β ∝ t and the Weibull model when β = constant. We could see that the Gompertz model is a special case of a linear expression for β and the Weibull model is a special case of a constant shape parameter. Particularly, the mortality rate would deviate from the Gompertz model when β shows non-linear behavior (before age ~20 or after age 80). Thus, β is a measure of the deviation from the Gompertz model (Weon, 2004c).
Particularly for aging patterns, it is the age dependence of the shape parameter that distinguishes humans from technical devices. It seems to show the difference between humans and technical devices in terms of 'robustness'. The fundamental difference in robustness between biological systems and technical devices is obvious (Gavrilov and Gavrilova, 2001). In the previous papers (Weon, 2004a, 2004b, 2004c), the age-dependent shape parameter changes from approximately 0.5 to 10 with age for the typical demographic curves. This feature is in great contrast to technical devices, which typically have a constant shape parameter (Nelson, 1990). We attribute the age dependence of the shape parameter to the resistance to aging, or the nature (the homeostasis and the adaptation) of biological systems to maintain stability and to survive (Weon, 2004a). A fundamental principle related to the age-dependent shape parameter for humans was suggested in the recent paper (Weon, 2004c), which will be explained in the section on the complementarity principle below.
Mathematical limit of longevity (maximum longevity)
In general, the mortality rate should be mathematically positive (m > 0).
Therefore, the criterion for the mathematical limit of longevity, implying the maximum longevity that can be determined by the mortality dynamics in nature, can be given by m(t) > 0, that is, β(t)/t + (dβ/dt)·ln(t/α) > 0, with the maximum longevity ω reached at the age where this expression first falls to zero (Weon, 2004c). We successfully used a quadratic expression for the description of the shape parameter after age 80. For example, see the case of Switzerland (2002) from age 80 to 109 in Fig. 2. Interestingly, the quadratic coefficient (β2) is important for evaluating the mathematical limit of longevity, since it determines the slope with age of the derivative (β1 + 2β2·t) of the quadratic expression for the shape parameter (Weon, 2004c).
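Written out for the quadratic shape parameter used at ages 80+, the criterion above becomes the following (a restatement in the notation of the reconstructed equations (3)-(4), not an additional result):

```latex
\beta(t) = \beta_0 + \beta_1 t + \beta_2 t^2, \qquad
\frac{d\beta}{dt} = \beta_1 + 2\beta_2 t,

m(t) > 0 \;\Longleftrightarrow\; \frac{\beta(t)}{t} + (\beta_1 + 2\beta_2 t)\,\ln\frac{t}{\alpha} > 0,

\text{maximum longevity } \omega:\quad
\frac{\beta(\omega)}{\omega} + (\beta_1 + 2\beta_2 \omega)\,\ln\frac{\omega}{\alpha} = 0 .
```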
Specifically, the mortality curves for higher ages (110+) are important for understanding human longevity. According to the Weon model, the quadratic expression for ages 80-109 is valid with a certain degree of universality in many modern developed countries, which enables us to predict that the mortality rate decreases after a plateau around ages 110-115 and the mathematical limit of longevity emerges around ages 120-130 (Weon, 2004c). If the quadratic expression is valid for ages 110+, we are able to predict the mortality rate at the highest ages. The pattern of the mortality dynamics (deceleration, plateau and decrease) at the highest ages by the Weon model is consistent with other assertions (for instance, Vaupel et al., 1998; Robine and Vaupel, 2002; Helfand and Inouye, 2002).
Complementarity principle on longevity
Recently we suggested a fundamental principle on longevity (Weon, 2004c). We wish to explain it briefly. The essence of the Weon model is the age dependence of the shape parameter. What is the origin of the age-dependent shape parameter? According to the Weon model, in principle, for the highest value of s (that is, for longevity) at all ages, the shape parameter should vary with respect to the characteristic life: "for longevity, β increases at t < α but decreases at t > α." See an example of this longevity tendency in Fig. 3. According to the Weon model, the quadratic expression is clearly related to the mortality dynamics at t > α, which induces the mathematical limit (the mortality rate becoming mathematically zero, implying the maximum longevity).
Interestingly, the mathematical limit tends to decrease with increasing quadratic coefficient (β2). It seems that the mathematical limit decreases as the longevity increases, which shows "complementary" aspects. It is very interesting that the reason for longevity, especially in terms of the decrease of the shape parameter for ages after the characteristic life, may be the reason for the limit of longevity in nature (Weon, 2004c).
Results and Discussion
In this study, it seems obvious that the mortality function (m) approaches zero at the maximum longevity, as shown in Fig. 4. According to the Weon model, the quadratic expression for the shape parameter at ages 80-109 results in the mortality function being zero at the maximum longevity, through the mortality dynamics in nature (Weon, 2004c). It seems that the density function is essential to understand the mortality dynamics. Especially in this study, we reconsider the definition of the maximum longevity and the demographic meaning of the density function through the complementarity between the survival and mortality functions. The demographic analysis for Switzerland supports the findings of the Weon model. In the following sections, we consider the definition of maximum longevity by the Weon model, the effectiveness for longevity by the complementarity between the survival and mortality functions, the new paradigm of rectangularization for longevity by the effectiveness, and the historical trends of the effectiveness for Switzerland.
Definition of maximum longevity
In general, the term "longevity" means the "duration of life". In a sense, the "maximum longevity" can be used to mean the "maximum duration of life" of a given population. However, what we know is the "maximum age at death", which means the oldest age at death observed in a given population during a given time period (Vallin and Meslé, 2001). Perhaps the most common notion of a limit in the study of human longevity is the limited-life-span hypothesis, which states that there exists some age ω beyond which there can be no survivors; this hypothesis can be expressed by any one of three equivalent formulas (Wilmoth, 1997). By contrast, according to the Weon model, the survival function is not zero (although it has extremely low values at the highest ages), whereas the mortality function can be zero at the maximum longevity. See an example for Switzerland in Fig. 5. The survival curves for 1876 and 2002 are not zero when they are extrapolated using the shape parameters estimated for ages 80-109. Therefore, the Weon model suggests that the maximum longevity can be defined as the age ω at which the mortality function (and hence the density function) becomes zero while the survival function remains positive. If that is the case, why does the mortality rate approach zero at the maximum longevity in the Weon model? We consider what happens to the survival rate at the maximum longevity: in fact the survival rate approaches zero, but it is not zero. Equations (5) and (6) are mathematically identical.
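The argument of this paragraph can be stated compactly using the relation m = f/s introduced in the section on demographic functions:

```latex
f(\omega) = 0 \quad\text{and}\quad s(\omega) > 0
\;\;\Longrightarrow\;\;
m(\omega) = \frac{f(\omega)}{s(\omega)} = 0,
\qquad\text{with } s(t) > 0 \text{ for all } t \le \omega .
```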
We confirm the fact that the mortality function should be zero at the maximum longevity, since the density function is zero but the survival function is not zero at that age. The functions beyond the maximum longevity have no reality. See an example of modeling the mortality function for Switzerland (2002) in Fig. 6. The mortality function extrapolated by the Weon model approaches zero at the maximum longevity, which is due to the nature of the density function: that is, the decrease rate of the survival function with age (−ds/dt), or the density function, should be zero at the maximum longevity.
Effectiveness for longevity
It seems that the density function, f = m·s, indicates the effectiveness for longevity of the combination of the survival and mortality functions, since the mortality rate tends to increase as the survival rate decreases, which is due to the complementarity between the survival and mortality functions: "for longevity, individuals tend to reduce the mortality rate but strive to improve the survival rate." It is expected that there may exist a 'maximum' effectiveness between the survival and mortality rates at the 'characteristic life'. See an example for Switzerland (2002) in Fig. 7. The effectiveness for longevity is maximized at the characteristic life (α ≈ 87 years) and is zero at the maximum longevity (ω ≈ 124 years) for Switzerland (2002).
Two-part rectangularization
If that is the case, it can be suggested that, since the most effective combination of the survival and mortality functions for longevity occurs at the characteristic life, there may be two parts of rectangularization for longevity, as shown in Fig. 8: a first part before the characteristic life and a second part between the characteristic life and the maximum longevity. The first and second parts can overlap into a single rectangularization as the characteristic life approaches the maximum longevity (α → ω). This paradigm of rectangularization for longevity makes sense and comprehends the conventional paradigm of rectangularization (Fried, 1980; Eakin and Witten, 1995).
Historical trends
The historical trends of the density function for Switzerland from 1876 up to 2002 are seen in Fig. 9. They show the trends of the effectiveness for longevity produced by the increasing survival rate and decreasing mortality rate over time. The density function shifts to the right and upward, which indicates that the most effective combination of the survival and mortality functions for longevity increases as the characteristic life increases. However, it seems that the exponential decrease rate of the density function after the characteristic life does not significantly increase over time.
This implies that a fundamental limiting force may restrict the effectiveness for longevity over time. It is attributed to the complementarity principle on longevity: "for longevity, β increases at t < α but decreases at t > α, with the result that the mathematical limit (the maximum longevity) decreases as the longevity increases in nature (Weon, 2004c)".
Conclusions
In this paper, we summarize the new demographic model, the Weon model, and show the application to the demographic data for Switzerland (1876-2002). In this study, it seems to be obvious that the density function is essential to understand the mortality dynamics. We reconsider the definition of maximum longevity, the effectiveness for longevity by the complementarity between the survival and mortality functions, the new paradigm of rectangularization for longevity by the effectiveness and the historical trends of the effectiveness for Switzerland (1876-2002). The results suggest that the mortality function should be zero at the maximum longevity, since the density function is zero but the survival function is not zero. Furthermore, the effectiveness for longevity can be maximized at the characteristic life by the complementarity between the survival and mortality functions. Since the most effective combination between the survival and mortality functions for longevity occurs at the characteristic life, there may be two parts of rectangularization for longevity according to the characteristic life. The historical trends for Switzerland show that the most effective combination between the survival and mortality functions for longevity increases as the characteristic life increases over time; however, the exponential decrease rate of the density function after the characteristic life does not significantly increase, which implies that a fundamental limiting force may restrict the effectiveness for longevity. As a result, it seems that the density function is essential to define and understand the mortality dynamics, the maximum longevity, the effectiveness for longevity, the paradigm of rectangularization and the historical trends of the effectiveness by the complementarity between the survival and mortality functions.
Design, synthesis and antitumor activity of new naproxen based 1,2,4-triazole-Schiff base derivatives
In the present work, new Naproxen based 1,2,4-triazole-Schiff base derivatives have been synthesized and screened for in vitro antitumor activity and in silico pharmacokinetic studies. The structure of the newly synthesized compounds (5–12) was elucidated by IR, NMR and mass spectrometry. All the compounds were tested against breast MCF-7, hepatocellular Huh-7 and lung A-549 cancer cell lines using the MTT assay. Compound 7 was better in killing A549 cells, with IC50 3.71 µM (1.48 fold), compared with Doxorubicin (IC50 5.50 µM). Also, compound 7 was found to be non-toxic on MRC-5 normal cells, as it depicts an IC50 of more than 500 μM. Besides, compound 12 also revealed promising activity, with IC50 6.94 and 3.33 µM against MCF-7 and Huh-7 respectively. The in silico studies displayed that the synthesized compounds favor the desired pharmacokinetic profile and drug likeness properties. It can be concluded that this new Naproxen based 1,2,4-triazole-Schiff base derivative (7) has the potential to be further investigated as a lead molecule in the development of new chemotherapeutic agents.
Introduction
Cancer is the rapid proliferation of abnormal cells that has the tendency to invade other body parts, resulting in high mortality rates [1,2]. It is the leading cause of death globally, accounting for nearly 10 million deaths in 2020 [3], and is expected to rise to 16.4 million by 2040 [4]. The most common cancers are breast, lung and colon, and 30-50% of cancer mortality can be prevented through early detection, appropriate treatments and care of patients [5]. There has been great advancement in cancer treatment, which includes surgery, radiotherapy, chemotherapy and hormonal treatments [6,7]. However, due to dose toxicity, drug resistance of malignant tumours and the selectivity of the current anticancer drugs [8], there is a need to develop effective chemotherapeutics to overcome these obstacles. Naproxen is a COX inhibitor and a potent NSAID. It has now been extensively studied for its anticancer potential, as many of its derivatives inhibited proliferation in various cancers [9,10]: urea and propanamide derivatives of Naproxen inhibited cancer proliferation in colon cancer [11], Naproxen-1,3,4-oxadiazoles acted as EGFR inhibitors [12], Naproxen hydrazide-hydrazones as potent VEGFR-2 inhibitors [13], and Naproxen-triazole hybrids as HDAC inhibitors [14] and provided protection in bladder cancer [15].
Chemistry
The intermediates 2-4 were prepared using the reported method [12] with slight changes. Naproxen 1 was reacted with methanol in the presence of a catalytic amount of concentrated sulphuric acid to yield compound 2 in pure form (91% yield). Then compound 2 was refluxed with hydrazine monohydrate in methanol for 4 h to yield compound 3 (86% yield). The hydrazide 3 was dissolved in ethanolic KOH and carbon disulfide was added dropwise at 0-5 °C, followed by stirring at room temperature for 14 h to yield the potassium thiocarbamate salt, which was filtered. To this crude solid, hydrazine hydrate was added, and the mixture was refluxed for 12 h; acidification with HCl solution then yielded the main intermediate 4 (76% yield). Compound 4 was reacted with different aromatic aldehydes to yield the new compounds 5-10 (Scheme 1), and reaction of compound 8 with different alkyl halides afforded compounds 11-12 (Scheme 2). The formation of all the newly synthesized compounds was confirmed by different analytical techniques such as FTIR, 1H NMR, 13C NMR and mass spectrometry. The 1H NMR of compounds 5-7 and 9-10 displayed signals for azomethine protons merged with aromatic protons in the range 6.57-8.55 ppm, an S-H proton signal in the range 9.93-10.48 ppm and, in the aliphatic region, peaks at 1.79-1.86 ppm, 4.43-4.65 ppm and 3.90-3.91 ppm ascribed to CH3CH-, CH3CH-(Ar)2 and Ar-O-CH3, respectively. These peaks were further supported by 13C NMR, which showed OCH3, CH and CH3 signals in upfield regions, with the OCH3 carbon at 55.30 ppm. The 1,2,4-triazole-hydrazone derivative (8) displayed one additional signal at 6.08-6.10 ppm for Ar-O-CH2 in the 1H NMR and at 102.13 ppm in the 13C NMR, in addition to signals similar to those observed for compound 7. Compound 10 showed a downfield signal in the 13C NMR spectrum at 176.02 ppm for C=O, confirming the presence of the chromonyl group. Compounds 11 and 12 showed signals similar to compound 8, but both showed disappearance of the S-H proton. Compound 11 exhibited an additional S-CH2 signal and five aromatic protons at 4.46 ppm and 7.21-7.38 ppm, arising from S-benzylation, whereas compound 12 displayed a triplet at 1.42 ppm and a broad singlet at 3.35 ppm integrating for three and two protons, respectively, together with signals at 14.62 ppm and 27.85 ppm corresponding to the -S-CH2-CH3 carbons, in the 1H and 13C NMR spectra, confirming alkylation at sulphur. Lastly, all the final compounds were confirmed by mass spectrometry, displaying molecular ion peaks in positive/negative mode.
Physicochemical properties
Besides effectiveness, many molecules cannot enter drug development due to poor bioavailability, absorption, water solubility or lipophilicity [6]. Therefore, molecules must have a good pharmacokinetic profile along with excellent pharmacological potential. There are certain criteria, such as the Lipinski rule of five, which should be satisfied by molecules in order to enter drug development [36]. The candidate must have a molecular weight (M.W.) of less than 500, lipophilicity (log P) below 5, and hydrogen bond acceptors/donors below 10 and 5, respectively, for easy transportation, excretion, diffusion and absorption. The synthesized final molecules were screened by in silico ADME predictions to examine their physicochemical and pharmacokinetic properties, and the results are shown in Table 1. All the newly prepared compounds (5-12) were screened for in silico absorption, distribution, metabolism, excretion and toxicity (ADMET) using the SwissADME and pkCSM database tools [37]. From the results, it was observed that most of the compounds showed promising pharmacokinetics, with molecular weight less than 500 except compound 11, % absorption in the range 61.76-79.04, lipophilicity (log P) less than 5, in the range 3.06-4.90, and water solubility (log S) in the good range −5.27 to −6.99. Also, these compounds could not cross the blood-brain barrier and displayed the desired skin permeability (log Kp) in the range −5.05 to −5.82. These data suggest that these Naproxen based 1,2,4-triazole-hydrazone derivatives follow the Lipinski rule and therefore possess the desired pharmacokinetic and drug-likeness properties, as shown in Table 1.
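The rule-of-five screen itself is trivial to reproduce; the snippet below is purely illustrative (the study used the SwissADME and pkCSM web tools rather than code, and the descriptor values shown are invented):

```r
# Simple Lipinski rule-of-five check over hypothetical descriptor values
lipinski_ok <- function(mw, logp, hba, hbd) {
  mw <= 500 & logp <= 5 & hba <= 10 & hbd <= 5
}
lipinski_ok(mw = 472, logp = 4.1, hba = 6, hbd = 1)   # TRUE for a compliant compound
```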
In vitro antitumor activity
All the target molecules were evaluated for their antitumor activity against the three cancer cell lines MCF-7 (breast), Huh-7 (liver) and A-549 (lung) by the MTT method of Mosmann [38]. Doxorubicin was used as the standard drug, showing IC50 values of 1.85 µM, 1.40 µM and 5.50 µM against MCF-7, Huh-7 and A-549, respectively. From the results shown in Figs. 2, 3 and 4, it was observed that the tested compounds showed variations in their activity from excellent to moderate, in the ranges IC50 4.72-46.80 µM, 1.91-28.10 µM and 3.71-56.63 µM against MCF-7, Huh-7 and A-549, respectively. Among the tested derivatives, compound 7 was the most promising, with IC50 4.72, 1.91 and 3.71 µM against MCF-7, Huh-7 and A-549 respectively, compared to the standard drug Doxorubicin. It can be seen that compound 7 was better at killing A549 cells, with IC50 3.71 µM (1.48-fold), compared with Doxorubicin (IC50 5.50 µM). Also, compound 7 was found to be non-toxic to MRC-5 normal cells, with an IC50 of more than 500 μM. Besides, compound 12 also showed promising activity, with IC50 6.94 and 3.33 µM against MCF-7 and Huh-7 respectively. Compounds which were moderately toxic (IC50 < 20 µM) to the cancer cells were found to be 10 and 14 towards MCF-7; 9 and 10 against Huh-7; and compound 12 towards A-549 cells. Other compounds showed mild cytotoxicity, with IC50 < 50 µM towards the tested cell lines, except compound 9, which was less active against A-549 cells. The results of the antitumor activity are shown in Table 2.
Chemistry
All the chemicals and reagents used for the synthesis of the target molecules were procured from Sigma Aldrich, Loba Chem and Acros. The proposed structures of all the synthesized compounds were confirmed by different analytical techniques such as FT-IR (Thermo Scientific iS50), 1H and 13C NMR (Bruker 850 MHz and 213 MHz, respectively), mass spectrometry (Thermo Scientific LCQ FLEET LCF10605) and elemental analysis (LEECO Elementar Analyzer), while melting points were recorded on a Stuart SMP40 instrument and were uncorrected. The intermediate compounds 2-4 were prepared according to our previous method [12]. Dimethyl sulfoxide (DMSO), MTT and trypan blue dye were purchased from Sigma (St. Louis, Mo., USA). Fetal bovine serum, RPMI-1640, HEPES buffer solution, L-glutamine, gentamycin and 0.25% Trypsin-EDTA were purchased from Lonza (Belgium).
General procedure for synthesis of compounds 5-10
Compound 4 (0.001 mol) was taken in a 100 mL round-bottom flask, and 30 mL of absolute ethanol and the appropriate aromatic aldehyde (0.001 mol) were added, followed by 3-5 drops of glacial acetic acid. The reaction mixture was stirred at 50-60 °C for 6-14 h. After completion of the reaction, the mixture was concentrated to around 10 mL and poured onto crushed ice, then stirred to give a solid precipitate. The products were filtered, washed with water and dried. The crude products were recrystallized from ethanol to give the pure compounds in 68-86% yield.
Antitumor activity
The antitumor activity of the newly synthesized compounds was tested against breast MCF-7, hepatocellular Huh-7 and lung A549 carcinomas using the MTT protocol. The cell lines were obtained from the American Type Culture Collection (ATCC, Rockville, MD) and the assays were performed at the Regional Center of Microbiology and Biotechnology, Al-Azhar University, Egypt. The positive reference drug used was Doxorubicin, and the assay was performed according to published work [39]. The optical density was measured at 590 nm with a microplate reader (SunRise, TECAN, Inc., USA) to determine the number of viable cells, and the percentage viability was calculated as (ODt/ODc) × 100%, where ODt is the mean optical density of wells treated with the tested sample and ODc is the mean optical density of untreated cells. The graph of surviving cells against drug concentration was plotted to obtain the survival curve of each tumor cell line after treatment with the specified compound. The IC50 for each compound was calculated from the dose-response curve for each concentration using GraphPad Prism software (San Diego, CA, USA) [38].
Conclusion
In the present work, new Naproxen based 1,2,4-triazole-Schiff base derivatives have been synthesized and screened for in vitro antitumor activity and in silico pharmacokinetic studies. Compound 7 was found to be the most potent, with IC50 4.72, 1.91 and 3.71 µM against MCF-7, Huh-7 and A-549, respectively, and was better at killing A549 cells than doxorubicin, with 1.48-fold activity. The in silico studies displayed that the synthesized compounds favor the desired pharmacokinetic profile and drug likeness properties. It can be concluded that this new Naproxen based 1,2,4-triazole-Schiff base derivative (7) could be further investigated as a lead molecule in the development of new chemotherapeutic agents.
Scheme 1 Synthesis of naproxen-based 1,2,4-triazole-bearing Schiff bases (5-10). Scheme 2 Synthesis of the S-alkylated derivatives 11-12 from compound 8.
Fig. 4 Cell viability (%) of final compounds (5-12) against A-549 cells.
Table 1 ADMET studies of the target molecules 5-12. Abbreviations: log P, octanol-water partition coefficient; HBA, hydrogen bond acceptor; HBD, hydrogen bond donor; log S, water solubility; log Kp, skin permeability; %abs, absorption; BBB, blood-brain barrier.
The human urothelial tight junction: claudin 3 and the ZO-1α+ switch
Objective Tight junctions are multicomponent structures, with claudin proteins defining paracellular permeability. Claudin 3 is a candidate for the exceptional “tightness” of human urothelium, being localised to the terminal tight junction (TJ) of superficial cells. Our aim was to determine whether claudin 3 plays an instigating and/or a functional role in the urothelial TJ. Materials and Methods Normal human urothelial (NHU) cells maintained as non-immortalised cell lines were retrovirally-transduced to over-express or silence claudin 3 expression. Stable sublines induced to stratify or differentiate were assessed for TJ formation by immunocytochemistry and transepithelial electrical resistance (TER). Expression of claudin 3, ZO-1 and ZO-1α+ was examined in native urothelium by immunohistochemistry. Results Claudin 3 expression was associated with differentiation and development of a tight barrier and along with ZO-1 and ZO-1α+ was localised to the apical tight junction in native urothelium. Knockdown of claudin 3 inhibited formation of a tight barrier in three independent cell lines, however, overexpression of claudin 3 was not sufficient to induce tight barrier development in the absence of differentiation. A differentiation-dependent induction of the ZO-1α+ isoform was found to coincide with barrier formation. Whereas claudin 3 overexpression did not induce the switch to co-expression of ZO-1α−/ZO-1α+, claudin 3 knockdown decreased localisation of ZO-1 to the TJ and resulted in compromised barrier function. Conclusions Urothelial cytodifferentiation is accompanied by induction of claudin 3 which is essential for the development of a terminal TJ. A coordinated switch to the ZO-1α+ isotype was also observed and for the first time may indicate that ZO-1α+ is involved in the structural assembly and function of the urothelial terminal TJ.
Introduction
A common feature of all epithelia is the ability to form a selective barrier to limit the passage of solutes from the apical to basal aspect. Epithelial "tightness" is commonly assessed by measurement of transepithelial electrical resistance (TER), with epithelia displaying a TER >500 Ω.cm 2 classified as "tight" [1]. The paracellular barrier is defined by tight junctions (TJs) found in the terminal junction complex, which also serve to establish cell polarity by segregating apical from baso-lateral membrane components. TJs are multicomponent complexes incorporating members of the claudin family, of which there are 24 in man, along with occludin and the PDZ-containing zonula occludens (ZO). Whereas occludin is dispensible [2], ZO-1 forms an essential link between the TJ and the perijunctional cytoskeleton, but does not itself limit solute permeation [3,4]. The claudins show expression profiles that reflect epithelial barrier properties and are considered to govern permselectivity [5]. Although the relationship between composition and properties of the TJ is well-studied in some experimental systems [6], it is less clear how individual claudins contribute to the barrier properties of differentiated epithelial tissues.
The least permeable epithelium is the urothelium -the transitional epithelium that lines the bladder and associated urinary tract (reviewed in [7]). Urothelium provides a urinary barrier that is maintained during the filling and voiding cycles of bladder accommodation and has features that limit permeability via both trans-and para-cellular routes. The Asymmetric Unit Membrane (AUM) is unique to urothelium and covers the surface of the apical cells in hexagonal plaques [8]. The AUM prevents transcellular absorption of water and solutes from the urine, but does not itself contribute to the paracellular barrier [9]. In the rat, the TJs of superficial urothelial cells undergo structural and functional reorganisation to preserve barrier integrity during filling and voiding cycles [10]. In the mouse, germline deletion of CLDN4 resulted in a diffuse urothelial hyperplasia leading to urinary tract obstruction and progressive hydronephrosis, indicating a possible role for claudin 4 in maintaining the homeostatic integrity of the normal murine urothelium. However, the absence of claudin 4 did not perturb barrier function, which was considered to be due to molecular compensation by claudin 7 [11]. In man, disruption of urothelial barrier integrity and TJ structure has been linked to several bladder pathologies, including interstitial cystitis [12][13][14][15][16] and urinary tract infections [17,18].
We have previously investigated the TJ constituents of normal human urothelium in situ, where we showed differentiation-associated expression of claudins 3, 4, 5 and 7 [19]. The expression of claudin 3 was restricted with ZO-1 to the terminally-differentiated superficial cells, where it localised specifically to the terminal "kissing points" between cells. Using PPARγ activation as a means to induce the urothelial differentiation-associated gene expression programme in cultured normal human urothelial (NHU) cells, we demonstrated that the claudin 3 gene was transcriptionally-activated during late/terminal urothelial cytodifferentiation in vitro [19], suggesting that claudin 3 may play a crucial role in urothelial barrier function in man. To test the hypothesis that claudin 3 is an essential component of the urothelial TJ required for barrier function, we have here investigated the effect of claudin 3 knockdown and over-expression on functional barrier development in NHU cell cultures, which has revealed an unexpected change in expression of ZO-1 isoforms from ZO-1α − to its alternatively-spliced ZO-1α + variant.
Materials and Methods
Tissues
National Health Service Research Ethics Committee Approvals were obtained for the research use of surgical specimens of normal human urinary tract, which were collected from patients and donors with no history of urothelial cancer. Full informed consent was obtained as required and the study was approved locally by the Department of Biology Ethics Committee under the auspices of the University of York Ethics Committee.
Immunohistochemistry
Sections (5 μm) of human ureter were dewaxed and rehydrated. For immunolabelling with ZO-1 and claudin 3 antibodies, endogenous avidin and biotin were blocked and antigen retrieval was performed by boiling sections for 10 min in 10 mM citric acid buffer, pH 6.0. After a 16 h incubation of primary antibody at 4°C, slides were washed, incubated in biotinylated secondary antibody and visualised by addition of streptavidin-biotin horseradish peroxidase complex (DAKO) and 3,3′-diaminobenzidine (Sigma Aldrich). For ZO-1α + labelling, antigen retrieval was performed by boiling of slides in 1 mM EDTA in 10 mM Tris-HCl buffer (pH 9.0) before incubating with primary antibody for 16 h at 4°C. Antibody binding was visualised using the ImPRESS™ Excel Polymer system (Vector labs), according to the manufacturer's instructions. All slides were counterstained in Mayer's haematoxylin and mounted in DPX (Sigma).
Cell culture
Normal human urothelial (NHU) cells were isolated from human ureter and bladder biopsies and maintained as finite cell lines in vitro [20,21]. Cultures were propagated on Primaria™ plasticware (BD Biosciences) in low calcium [0.09 mM] keratinocyte serum-free medium containing recombinant epidermal growth factor and bovine pituitary extract (Life Technologies), supplemented with 30 ng/ml cholera toxin (KSFMc) and used for experiments between passages 3-5. In these conditions, NHU cells proliferate as a monolayer that becomes contact-inhibited at confluence and can be propagated by serial, but finite sub-culture. Supplementing the medium to 2 mM Ca 2+ (near physiological) results in stratification accompanied by the formation of adherens and tight junctions, but without urothelial cytodifferentiation [20,21]. Urothelial cytodifferentiation with tight barrier formation was induced by subculturing the cells in KSFMc supplemented with 5% adult bovine serum and 2 mM Ca 2+ , as described [22]. In all cases, cultures were grown on 24 mm Transwell™ membranes (Corning) for mRNA and protein extractions and on 12 mm Snapwell™ membranes (Corning) for electrophysiological studies [23].
Quantitative real-time polymerase chain reaction
RNA was extracted from non-differentiated, stratified and differentiated NHU cell cultures and cDNA synthesis performed as previously described [15]. Gene transcript quantification assays were performed using an ABI Prism 7300 Real-Time PCR System (Applied Biosystems) following the TaqMan™ assay protocol. Primers, probes and PCR conditions for UPK2 and GAPDH transcripts were as described previously [15]. A pre-validated TaqMan™ gene expression assay for claudin 3 (Hs00265816_s1) was used (Applied Biosystems). Genes of interest were normalised to endogenous GAPDH in the same sample and the comparative CT (ΔΔCT) method was used for relative quantification [24].
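The comparative CT calculation referenced above is straightforward to reproduce. The sketch below is a generic illustration of the 2^-ΔΔCT method, not the authors' analysis script, and the example CT values are invented for demonstration.

```python
def relative_quantity(ct_target_sample, ct_ref_sample,
                      ct_target_calibrator, ct_ref_calibrator):
    """Comparative CT (2^-ddCT) relative quantification.

    dCT  = CT(target) - CT(reference gene), computed per sample
    ddCT = dCT(sample) - dCT(calibrator)
    RQ   = 2 ** -ddCT  (fold change relative to the calibrator)
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** -dd_ct

# hypothetical CT values: claudin 3 vs. GAPDH in a differentiated culture,
# with an undifferentiated culture as the calibrator
print(relative_quantity(24.0, 18.0, 33.0, 18.5))  # ~360-fold induction (illustrative values only)
```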
Generation of claudin 3 shRNA
For RNA interference experiments, siRNA oligonucleotides were designed to target the CLDN3 coding sequence (ENSG00000165215), with further addition of a hairpin loop, restriction overhangs for directional cloning and a Mlu1 restriction site to verify cloned inserts, thus generating the following CLDN3 sense shRNA sequences: gatccAAGGGCATCTTTTGGGTACCTTTCAAGAGAAGGTACCCAAAAGATGCCCTT TTTTTTACGCGTg shRNA sequences were ligated into pSIREN-RetroQ vector and a firefly luciferase negative control shRNA was included (Clontech). After bacterial transformation, successful ligation was confirmed by Mlu1 restriction digest. Plasmids were designated shRNA CLDN3-(1), shRNA CLDN3-(2), shRNA CLDN3-(3) and shRNA control.
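As a rough illustration of how such a hairpin insert is assembled, the sketch below builds a sense-loop-antisense oligonucleotide from a target sense sequence. The loop, terminator and overhang sequences used here are generic placeholders rather than a definitive reconstruction of the construct described above, and the 21-nt target is a hypothetical example.

```python
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def shrna_sense_oligo(target_sense: str,
                      loop: str = "TTCAAGAGA",     # placeholder hairpin loop
                      terminator: str = "TTTTTT",  # pol III termination signal
                      mlu1_site: str = "ACGCGT",   # diagnostic MluI site for screening inserts
                      five_prime: str = "gatcc",   # placeholder overhang for directional cloning
                      three_prime: str = "g") -> str:
    """Assemble the sense strand of an shRNA insert:
    overhang + sense + loop + antisense (reverse complement of sense) +
    terminator + MluI site + overhang."""
    return (five_prime + target_sense + loop +
            reverse_complement(target_sense) + terminator + mlu1_site + three_prime)

# hypothetical 21-nt target within the CLDN3 coding sequence (illustration only)
print(shrna_sense_oligo("AAGGGCATCTTTTGGGTACCT"))
```

With these placeholder choices the output resembles the CLDN3 sense oligonucleotide quoted above, but the exact elements should be taken from the original cloning protocol.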
Generation of claudin 3 overexpression construct
The amplified product was purified using the QIAquick Gel Extraction Kit (Qiagen) and directionally-cloned into the pLXSN vector (Clontech) using EcoRI and BamHI restriction sites. An empty vector control reaction was included from which claudin 3 cDNA was absent. After bacterial transformation, purified plasmid DNA was sequenced to verify successful cloning of full length claudin 3.
Generation of claudin 3 knockdown and overexpressing cell lines
After transfection of pLXSN and pSIREN-RetroQ vectors into the PT67 packaging cell line (Clontech), retroviral particles were harvested from the growth medium, filtered through 0.45 μm low-binding Tuffryn® membranes (Pall Corporation) and applied to proliferating NHU cells for retroviral transduction, as previously described [25]. Transduced NHU cells were subjected to antibiotic selection and thereafter subcultured in KSFMc prior to experimental study.
Following initial shRNA studies, construct CLDN3-(1) was selected for further study as it gave the most effective knockdown of claudin 3 protein. Stable sub-lines were generated from four independent donor NHU cell lines following transduction with shRNA control and shRNA CLDN3-(1) retroviral particles.
Immunocytochemistry
Slides were fixed in methanol:acetone (v/v) and air dried before applying primary antibodies at 4°C for 16 h. Primary antibodies are detailed in Table 1. Goat anti-rabbit IgG (Alexa 488, 1/400) or goat anti-mouse IgG (Alexa 594, 1/700) secondary antibodies (Molecular Probes) were applied for 1 h at ambient temperature before washing. Hoechst 33258 (0.1 mg/ml; Sigma-Aldrich) was used to counterstain nuclei. Samples were mounted using Fluorescent Mounting Medium (Dako) and visualised on an Olympus BX60 microscope under epifluorescence illumination.
Electrophysiological studies
NHU or transduced NHU cells were seeded at 5 × 10 5 cells per Snapwell™ membrane (3-6 replicates) in either undifferentiated, stratified or differentiated culture conditions for 7 days before measuring the TER using an EVOM™ Voltohmmeter (World Precision Instruments) [23]. Blank membrane (no cell) readings were subtracted from each TER value.
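Because TER is reported in Ω.cm 2, the raw ohmmeter reading has to be blank-corrected and scaled by the growth area of the insert. A minimal sketch of that arithmetic follows; the 1.12 cm² area is an assumed value for a 12 mm Snapwell™ insert, and the 500 Ω.cm 2 threshold is the "tight" epithelium definition used earlier in the paper.

```python
def ter_ohm_cm2(resistance_ohm: float, blank_ohm: float, area_cm2: float = 1.12) -> float:
    """Blank-corrected, area-normalised transepithelial electrical resistance (Ohm.cm^2)."""
    return (resistance_ohm - blank_ohm) * area_cm2

def is_tight(ter: float, threshold: float = 500.0) -> bool:
    """Classify an epithelium as 'tight' if TER exceeds the threshold (Ohm.cm^2)."""
    return ter > threshold

# illustrative readings (Ohm): differentiated culture vs. cell-free membrane
ter = ter_ohm_cm2(resistance_ohm=2100.0, blank_ohm=120.0)
print(f"TER = {ter:.0f} Ohm.cm^2, tight barrier: {is_tight(ter)}")
```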
Statistics
Unless otherwise stated, descriptive statistics are mean ± SD. Statistical analysis was by means of analysis of variance (ANOVA) with Tukey post-test correction.
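For readers wanting to reproduce this style of analysis, the snippet below runs a one-way ANOVA followed by Tukey's post test on made-up TER replicates; it is a generic illustration, not the authors' statistics code, and the values are invented.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# invented TER replicates (Ohm.cm^2) for three culture conditions
undiff = [95, 110, 102]
strat = [140, 155, 148]
diff_ = [2250, 2100, 2380]

# one-way ANOVA across the three groups
f_stat, p_value = f_oneway(undiff, strat, diff_)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2g}")

# Tukey honestly-significant-difference post test on all pairwise comparisons
values = np.concatenate([undiff, strat, diff_])
groups = ["U"] * 3 + ["S"] * 3 + ["D"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```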
Results
Induction of barrier function
TER was used to assess barrier function of cultures grown on permeable membranes. Neither cultures maintained as undifferentiated monolayers in 0.09 mM [Ca 2+ ] nor as stratified cultures in 2 mM [Ca 2+ ] formed a tight epithelial barrier (>500 Ω.cm 2 ). By contrast and as previously recorded [22], a tight barrier was obtained following differentiation of cultures in 5% ABS and 2 mM [Ca 2+ ] (Fig. 1A).
Constitution of the urothelial tight junction
Claudin expression was examined by immunoblotting in undifferentiated, stratified and differentiated NHU cell cultures from two independent NHU cell lines grown on permeable membrane supports. The expression of claudin 3, 4, 5 and 7 proteins was enhanced in stratified and differentiated cultures compared to undifferentiated cultures, whereas there was little change in the expression of claudin 1 (Fig. 1B). Comparison of stratified and differentiated cultures showed equivalent expression of claudins 4, 5 and 7, but a marked increase in expression of claudin 3 was associated with the differentiated, barrier-forming phenotype (Fig. 1C).
The TJ anchoring protein ZO-1α − was expressed by NHU cell cultures in all states, but differentiation was accompanied by a partial switch to the alternatively-spliced ZO-1α + isoform (Fig. 1D).
Quantitative transcript analysis revealed that claudin 3 expression was upregulated in NHU cells following differentiation in vitro, with relative expression of claudin 3 enhanced > 300 fold. The expression of claudin 3 transcript by in vitro-differentiated NHU cells mirrored that of the archetypal urothelial differentiation-restricted UPK2 transcript and was of similar magnitude to the expression found in situ (Fig. 1E).
By immunohistochemistry, antibodies against claudin 3, ZO-1 (total) and ZO-1α + all showed localisation to the terminal junction between superficial cells of human urothelium, with some basolateral membrane localisation apparent with all three (Fig. 1F).
Claudin 3 knock-down studies
Due to the association between claudin 3 expression and NHU cell differentiation, we examined the role of claudin 3 in barrier development by generating stable shRNA knockdowns in four independent donor NHU cell lines. Following selection, the sublines expressing shRNA were induced to differentiate before analysis by immunoblotting, with the untreated parental control included for comparison purposes. Transduction with claudin 3 shRNA resulted in failure to upregulate claudin 3 protein expression following differentiation in vitro, as illustrated in one representative cell line ( Fig. 2A) and collectively in all four (Fig. 2B). Knockdown of claudin 3 had no significant effect on the differentiation-induced expression of other tight junction-associated moieties, including claudin 5, which was included as a shRNA specificity control since it shares 50.3% sequence identity with claudin 3 (Fig. 2B). Qualitatively, the failure to upregulate claudin 3 led to some reduction in the extent of membrane-localised total ZO-1 and ZO-1α + observed by immunocytochemistry (Fig. 2C).
Barrier function following claudin 3 shRNA knock-down was determined by measurement of the TER in three independent donor cell lines (Fig. 2D). TER measurements were low in undifferentiated cultures (<150 Ω.cm 2 ). Following differentiation in 5% ABS and 2mM calcium, non-transduced and control shRNA cultures generated tight barriers with all TER readings exceeding 2000 Ω.cm 2 , however claudin 3 shRNA cells were unable to generate a tight barrier and in all cases showed a significant reduction in TER (Fig. 2D).
Overexpression of claudin 3
Claudin 3 is upregulated by differentiated NHU cells, coinciding with the development of a tight barrier, which is abrogated in differentiated cells that lack induction of claudin 3 expression. We reasoned that if development of a tight barrier was dependent on the expression of claudin 3, then the overexpression of claudin 3 by TJ-forming stratified cultures would facilitate barrier formation and result in a high TER. Claudin 3 overexpressing cells were generated by stable transduction with the full length claudin 3 coding sequence.
Overexpression was confirmed by quantitative transcript analysis of claudin 3. Claudin 3 expression was highly upregulated (>100,000-fold) in undifferentiated and stratified pLXSN-claudin 3 cultures, compared to wild-type (non-transduced) and empty vector transduced NHU cells (Fig. 3A). However, following differentiation, the amount of claudin 3 transcript found in overexpressing cells was comparable to the endogenous expression observed in differentiated wild-type and empty vector cells. These results were confirmed by immunoblotting, where overexpression of claudin 3 resulted in a 64 and a 136-fold increase in claudin 3 protein expression in undifferentiated and stratified cells, respectively (Fig. 3B). By contrast, following differentiation, the amount of claudin 3 protein detected in overexpressing cells was equivalent to that detected in differentiated wild type and empty vector sublines, again indicating that claudin 3 is regulated endogenously in differentiated cells (Fig. 3B). Overexpression of claudin 3 did not appear to influence the overall expression of either ZO-1α + or ZO-1α − proteins in undifferentiated, stratified or differentiated NHU cultures, as observed by immunoblotting (Fig. 3C).
Overexpression of claudin 3 was evident by immunolabelling of cultures induced to stratify in 2 mM [Ca 2+ ], where much of the overex-pressed protein was cytoplasmic, with small patches of membrane-localised protein indicative of association with TJs (Fig. 4A). Claudin 3 overexpression resulted in an increase in the intensity and extent of intercellular-localised ZO-1 in stratified cultures, but did not affect either the extent or localisation of ZO-1α + , which was detected at tight junctions in differentiated cultures, with minimal intercellular localisation in stratified cell cultures (Fig. 4A). Finally, the overexpression of claudin 3 had no effect on the expression or immunolocalisation of claudins 1, 4, 5, and 7 (data not shown).
The effect of claudin 3 overexpression on barrier development was determined by measurement of TER in control (wild-type and pLXSN-empty vector) and claudin 3 overexpressing cultures (Fig. 4B and 4C). No significant changes were observed in barrier function resulting from claudin 3 overexpression in proliferative, stratified or differentiated NHU cells.
Discussion
This is the first study to directly link the specific protein composition of the urothelial TJ to the development and maintenance of barrier function. Here we provide evidence that claudin 3 is indispensable to urothelial tight barrier function in human urothelium, although it is not alone sufficient to instigate tight barrier formation in vitro, even under conditions of physiological calcium that are permissive for TJ formation.
Urothelial cells afford an ideal model for the study of barrier function: not only is urothelium the "tightest" of all epithelia [26,27], but urothelial cell phenotype can be manipulated in vitro by modification of the culture environment. Optimised growth conditions involve a low calcium serum-free medium originally developed for keratinocyte growth and in these conditions NHU cells adopt a basal squamous cell phenotype. The raising of extracellular [Ca 2+ ] to near-physiological [2 mM] has been shown to induce stratification and support the development of adherens and tight junctions. However, such cultures maintain a CK14-positive squamous phenotype and do not express late/terminal urothelial markers nor form a functional tight barrier [20,22]. The capacity of NHU cells to undergo differentiation along a transitional cytokeratin 13-expressing pathway, with expression of urothelial differentiation markers and tight barrier formation can be achieved by culturing NHU cells in serum and physiological [Ca 2+ ] [22]. Although the differentiation-inducing bioactive factor(s) in serum have yet to be identified, it is known that activation of the nuclear receptor PPARγ induces a programme of gene expression changes associated with late/terminal urothelial differentiation, including TJ components [19]. Despite there being close correspondence in the gene expression programmes entrained by the two differentiation-inducing methods, only the serum-induced method results in formation of an organised stratified epithelium with barrier function [28]. One study has reported TJ formation and expression of various claudins in the HPV16E6E7-immortalised TEU-2 human urothelial cell line, although neither barrier formation nor expression of claudin 3 was described [29]. In this context, it should be noted that immortalisation of human urothelial cells has been shown to compromise their capacity to differentiate [30,31], including their ability to form a functional barrier [31].
In situ, the superficial urothelial cell layer holds ultimate responsibility for containment of urine. In human urothelium, the superficial cell is distinguished from the other layers by the expression of claudins 3, 4 and 5 [19]. The superficial urothelial TJs in mouse comprise claudins 4, 8 and 12, with claudin 3 transcript reported as not detected in whole mouse bladder RNA [32], although this may reflect an issue of sensitivity, as sparse claudin 3 was immunolocalised to the lateral borders of superficial cells in another study [11]. We have previously proposed claudin 3 as a prime candidate for urothelial tight barrier function due to its specific localisation to the terminal TJ in human urothelium in situ. In addition, claudin 3 gene expression is induced de novo upon PPARγ-induced differentiation, unlike claudin 4, which is regulated at the protein level through stabilisation with claudin 5 [19]. It is known that claudin 4 can contribute to barrier function: forced expression in MDCK II and LLC-PK1 cells resulted in increased TER and decreased paracellular permeability [27]. However, deletion of CLDN4 did not perturb barrier function in mouse urothelium [11], suggesting it may not be the principal barrier-forming component.
Here, we have confirmed claudin 3 as a urothelial differentiation-regulated gene and provided functional evidence of abrogated barrier function following claudin 3 knockdown.
Although the possibility of off-target effects from the claudin 3 shRNA sequence cannot be completely disregarded, we did show that there was no effect on the expression of claudin 5, which we included as a specificity control due to the fact that it shares 50% sequence identity. Claudin 3 is expressed by a wide variety of epithelia and is an important component of the blood-brain barrier [33]. Forced expression of human claudin 3 in MDCK II cells elevated the transepithelial electrical resistance and sealed the paracellular pathway against both charged and uncharged solutes, acting as a general barrier-forming protein [34].
One unexpected aspect of the differentiated phenotype was the lowering of claudin 3 transcript (and protein) to baseline amounts in overexpressing cells, indicating that a physiological feedback mechanism operating at the transcriptional level is acquired during differentiation to regulate the absolute amount of claudin 3. Although claudin 3 was successfully overexpressed in undifferentiated and stratified cultures, the finding that it did not result in the development of a high TER, even in conditions conducive to TJ formation, demonstrated that expression of claudin 3 alone is not sufficient for full barrier development. TJs function as large multi-protein complexes and whilst claudins are thought to be the barrier-defining components, molecular association between the cytoplasmic domain of claudins with the PDZ-domains of the ZO is recognised as critical to TJ development, with both ZO-1 and ZO-2 implicated in the timing and positioning of claudin polymerisation at TJ strands [35]. Thus, the association between claudin 3 with the scaffold structure of the ZO is likely to be essential to the correct assembly and functioning of the urothelial terminal TJ. Although experimental over-expression of claudin 3 did not affect overall ZO-1 expression, it did increase the amount of ZO-1 localising to the TJ in stratified cultures, suggesting some recruitment. This is in keeping with the hypothesis that ZO-1 is stabilised at the TJ under the influence of claudin 3. However, the increased presence of ZO-1 did not result in any significant increase in barrier function, as assessed by TER.
What did coincide with barrier development was a differentiation-dependent induction of the ZO-1α + splice variant, which includes an 80 amino acid sequence not present in the ZO-1α − isoform, resulting in co-expression of both isoforms. This is the first report of ZO-1α + splice variant expression by human urothelium and our results implicate ZO-1α + in the structural assembly and function of the urothelial terminal TJ. Whereas claudin 3 overexpression did not induce the ZO-1 isoform switch, claudin 3 knockdown decreased localisation of ZO-1/ ZO-1α + to the TJ and resulted in compromised barrier function. Together, this indicates coordination between claudin 3 and ZO1α + in the development of tight barrier structure and function.
In conclusion, urothelial differentiation is accompanied by induction of claudin 3, which may act in conjunction with the variant ZO-1α + isoform to develop the terminal TJ that represents one of the tightest barriers in the human body.
Figure 1.
cultures differentiated in 5% ABS and 2 mM calcium (D). The dashed line at 500 Ω.cm 2 represents the threshold for a "tight" epithelium. Bars are means (± SD) from a representative experiment, which has previously been performed on numerous independent NHU cell lines [22]. *** P < 0.001 (ANOVA with Tukey post correction) relative to undifferentiated and stratified cultures. B. Expression of claudins 1, 3, 4, 5 and 7 was assessed by immunoblotting in undifferentiated (U), stratified (S) and differentiated (D) cultures from two independent NHU cell lines. β-actin was included as a loading control. C. Densitometry analysis of Western blot bands was performed to compare expression of claudin proteins in stratified and differentiated cultures (normalised to β-actin) and plotted as mean fold change relative to undifferentiated cultures (dashed line). Data represents mean values from experiments performed with two independent cell lines. D. Expression of ZO-1 was assessed by immunoblotting in undifferentiated (U), stratified (S) and differentiated (D) cultures. The ZO-1α + and ZO-1α − variants are arrowed. E. Transcript expression of claudin 3 and uroplakin 2 (UPK2) was determined by quantitative real-time PCR in undifferentiated (U), stratified (S) and differentiated (D) cultures (each point represents three experimental replicates). For comparison, native (N) freshly-isolated urothelium was examined from bladder and ureteric urothelium. Expression relative to undifferentiated cells is plotted after normalisation to the housekeeper gene GAPDH. F. Immunohistological labelling of claudin 3, ZO-1 (total) and ZO-1α + in human ureteric urothelium. Labelling at the terminal junctions is indicated by arrows. Scale bar = 25 μm.
Figure 2. Barrier function and tight junction analysis of NHU cultures following claudin 3 shRNA knock-down
A. Expression of claudin 3, total ZO-1 (both isoforms) and ZO-1α + proteins was assessed in undifferentiated cultures (U) and in cultures differentiated with 5% ABS and 2 mM calcium (D) by Western blotting, as well as in differentiated NHU cells transduced with control shRNA and CLDN3 shRNA (lanes 3 and 4, respectively). The densitometry analysis of Western blot bands was normalised to the loading control β-actin and the percentage expression relative to control shRNA cells is indicated. B. Compilation of results from shRNA transduction performed on four independent donor NHU cell lines. Expression of CLDN3, CLDN5, ZO-1α − and ZO-1α + was analysed by western blotting following differentiation of NHU cells transduced with control and CLDN3 shRNA alongside wildtype undifferentiated (U) and differentiated (D) NHU cultures. Expression is plotted as relative expression (± SEM) after normalisation to β-actin. C. Immunofluorescence localisation of claudin 3, ZO-1 (total) and ZO-1α + in control and CLDN3 shRNA transduced NHU cells following induction of differentiation. Scale bar = 50 μm. D. Barrier function of control shRNA and CLDN3 shRNA transduced cells, alongside undifferentiated (U) and differentiated (D) non-transduced NHU cells, was determined by measurement of transepithelial electrical resistance (TER) after seven days. Results compiled from three
Figure 4.
Claudin 3 overexpressing NHU cell transductants were generated as in Figure 3. A. Indirect immunofluorescent localisation of claudin 3, ZO-1 and ZO-1α + in pLXSN control (empty vector) and pLXSN-claudin 3 overexpressing NHU cells, following stratification or differentiation. Scale bar = 50 μm. B and C. Barrier function of wild-type, empty-vector and claudin 3-pLXSN NHU cells was determined by measurement of transepithelial electrical resistance (TER) in undifferentiated (U), stratified (S) and differentiated (D) cultures. B shows a representative cell line, with n=3 replicates (mean ± SD), C shows averaged data from four independent cell lines (mean ± SEM). There were no statistically significant differences between wild-type or transduced cells and claudin 3 overexpressing cells of any status.
Table 1. List of antibodies used | 2018-04-03T03:21:13.680Z | 2015-01-19T00:00:00.000 | {
"year": 2015,
"sha1": "db4abf78b20c39e1c9425b56af788083c515ad3c",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.bladderj.org/bladder/article/download/33/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "db4abf78b20c39e1c9425b56af788083c515ad3c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14693378 | pes2o/s2orc | v3-fos-license | The analysis of Polyakov loop and spin correlators in finite volumes
We derive an analytic expression for point to point correlation functions of the Polyakov loop based on the transfer matrix formalism. The contributions from the eigenvalues of the transfer matrix including and beyond the mass gap are investigated both for the $2d$ Ising model and in finite temperature $SU(2)$ gauge theory. We find that the leading matrix element shows similar scaling properties in both models. Just above the critical point we obtain for $SU(2)$ a Debye screening mass $~\mu_D/T\approx4~$, independent of the volume.
Introduction
The determination of the correlation length ξ and the screening mass µ D from point-to-point correlation functions of the Polyakov loop is a non-trivial task, especially close to the critical point of lattice gauge theories. The difficulties result, on the one hand, from finite volume effects due to the nearby transition and, on the other hand, from the unknown parametrisation of the heavy quark potential in the non-perturbative regime.
In the transfer matrix (TM) formalism the levels of the transfer matrix provide access to both ξ and µ D without the introduction of an ansatz for the quark potential. The levels and matrix elements may be obtained easily from fits to measured plane-plane (or zero momentum) correlation functions since their TM form is known and simply exponential. In this paper we intend to derive the corresponding expression for point-to-point correlation functions. In the 2d Ising model we test the validity of our TM formula by comparison to the results obtained from plane-plane correlators. Simultaneously we are
* Talk given at the Lattice '93 International Symposium on Lattice Field Theory.
Correlation functions in the transfer matrix formalism
Let us consider d−dimensional spatial lattices of size N d−1 L, where N denotes the number of points in each transverse direction and L that in one selected direction (the z-direction). The lattice spacing a is set to unity in the following. The partition function is then and V is the transfer matrix in z-direction. Its eigenstates | n and eigenvalues λ n (n = 0, 1, 2, ...) are chosen to be orthonormal. In addition we normalize our partition function such that we have for the vacuum state Next we define zero momentum operators by where O( x ⊥ , z) is the Polyakov loop P( x ⊥ , z) for the 3 + 1 dimensional SU (2) gauge theory and the spin σ x,z for the 2d Ising model. The corresponding correlation functions are resulting in where are the level difference and the transition matrix element. Due to the symmetry properties of the eigenstates under transformations, which change the sign of O, c nn = 0. Below the critical point β < β c the lowest nonzero energy level µ 1 (the mass gap) defines the large distance behaviour of the correlator. We therefore define the correlation length at β < ∼ β c by At β > β c the mass gap µ 1 ≈ 0 and the large distance behaviour is given by the next level difference ∆µ, so that the Debye mass is where m D is the perturbative screening mass. A formula similar to eq.7 may now be found for the point-to-point correlator in the following way. The Fourier transforms of Γ(z) and Γ( x) are related by This leads us to the ansatz where G( p; µ) = 2 1 − e −µL sinh µ and G( p ⊥ = 0, p z ; µ) is just the Fourier transform of e −µmnz + e −µmn(L−z) , i.e. we have added corresponding contributions for the missing momenta components in the denominator of eq.14. It is now straightforward to obtain Γ( x) by another Fourier transformation of eq.13.
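Several displayed equations in the passage above were lost in extraction. For orientation, the block below sketches the standard transfer-matrix relations that the text appears to rely on, with µ_n used as shorthand for µ_{0n}; the normalisation conventions and symbols are assumptions based on the surrounding prose, and the equation numbering of the original paper (eqs. 7, 13, 14) is not reproduced.

```latex
% Sketch of the standard transfer-matrix relations assumed by the text
% (not the paper's numbered equations).
\begin{align*}
  Z &= \operatorname{Tr} V^{L}, \qquad V\,|n\rangle = \lambda_n |n\rangle,\quad n = 0,1,2,\dots\\
  \tilde O(z) &= \frac{1}{N^{\,d-1}} \sum_{\vec x_\perp} O(\vec x_\perp, z),\\
  \tilde\Gamma(z) &= \langle \tilde O(0)\,\tilde O(z)\rangle - \langle \tilde O\rangle^{2}
      \;\simeq\; \sum_{n>0} c_{0n}^{2}\,\bigl(e^{-\mu_n z} + e^{-\mu_n (L-z)}\bigr),\\
  \mu_{mn} &= \ln\lambda_m - \ln\lambda_n, \qquad c_{mn} = \langle m|\tilde O|n\rangle,\\
  \xi &= 1/\mu_1 \ \ (\beta \lesssim \beta_c), \qquad
  \mu_D \simeq \Delta\mu = \mu_2 - \mu_1 \ \ (\beta > \beta_c).
\end{align*}
```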
Results
We have used the 2d Ising model to test our ansatz, eq.13. To this end we have measured plane-plane and point-to-point correlators on N = L = 30, 40, 50, 60 lattices. At each point 500000 cluster updates were performed and measurements taken every 10th update. In the two-dimensional model the levels µ n are explicitly known [1,2]. We have carried out fits to both correlators with varying numbers of levels to obtain the matrix elements. Both formulae lead to the same results when the maximal number of levels that leads to non-negative c 2 mn is taken into account; i.e. our ansatz is definitely confirmed. The final result is shown in Fig.1 for N = L = 30. We find that for β < β c only one term with µ 10 = µ 1 , the mass gap, contributes; near β c up to three terms are essential and well above β c , where µ 1 ≈ 0 only one more term is present.
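A fit of this kind — a sum of symmetrised exponentials with the matrix elements and level differences as free parameters — can be sketched as below. The lattice extent, level count and data are invented for illustration, and the script is not the analysis code used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

L = 30  # lattice extent in the z-direction (assumed example value)

def correlator_model(z, *params):
    """Sum of symmetrised exponentials: sum_n c_n^2 (e^{-mu_n z} + e^{-mu_n (L-z)})."""
    c = np.asarray(params[0::2])
    mu = np.asarray(params[1::2])
    z = np.atleast_1d(z)[:, None]
    return np.sum(c**2 * (np.exp(-mu * z) + np.exp(-mu * (L - z))), axis=1)

# toy data: two levels plus small multiplicative noise (values are illustrative only)
z = np.arange(1, L)
true = correlator_model(z, 0.8, 0.25, 0.3, 1.1)
rng = np.random.default_rng(0)
data = true * (1 + 0.01 * rng.standard_normal(z.size))

p0 = [0.5, 0.3, 0.2, 1.0]  # initial guesses: (c_1, mu_1, c_2, mu_2)
popt, pcov = curve_fit(correlator_model, z, data, p0=p0)
print("fitted (c_n, mu_n):", popt.reshape(-1, 2))
```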
Here the level differences are unknown and have to be determined through the fit. In general we find a very similar behaviour as in the 2d Ising model. Fits with more than two levels are only possible on the largest lattice very close to the transition. Otherwise one either obtains negative squares of matrix elements or there is no minimum of χ 2 . Taking into account more terms in eq.13 for the fits tends to decrease the result for the mass gap level. This is shown in Fig.2.
It is interesting to look at the behaviour of the next to leading level (or level difference) µ 2 . As can be seen from Fig.3, µ 2 drops from a higher value below β c at the transition to a value near to one (in lattice units) and then stays relatively constant and moreover independent of the lattice sizes used here. This second level fixes the large distance behaviour above β c of the correlation functions, since µ 1 , as is evident from Fig.2, is essentially zero there and a third level does not contribute. Therefore we identify it with µ D . Because we have N τ = 4 we are led to a ratio µ D /T ≈ 4, slightly higher than found with conventional methods [4]. It can be shown [5] that for $N \to \infty$, $c^2_{mn} \sim N^0$ for $\beta < \beta_c$, while from finite size scaling theory [3], for $\beta \approx \beta_c$, $c^2_{10} = N^{\gamma/\nu - 1} f(xN^{1/\nu})$.
These scaling properties are all well confirmed by both the 2d Ising model and the SU (2) gauge theory results. | 2014-10-01T00:00:00.000Z | 1993-11-29T00:00:00.000 | {
"year": 1993,
"sha1": "5dddbd414bfed42975121686cee642b8a223fe23",
"oa_license": null,
"oa_url": "https://pub.uni-bielefeld.de/download/1776246/2312123/Engels_006.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5dddbd414bfed42975121686cee642b8a223fe23",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258064545 | pes2o/s2orc | v3-fos-license | Author Correction: Exceptional preservation of internal organs in a new fossil species of freshwater shrimp (Caridea: Palaemonoidea) from the Eocene of Messel (Germany)
Paleontology’
. Although the species is fully characterized and figured in the original description, it was published in an online-only journal issue and the article does not include evidence of registration in ZooBank within the work itself, which is a requirement by Article 8.5.3 of the International Code of Zoological Nomenclature 1 . Therefore, the newly proposed species-group name Bechleja brevirostris is not available.
The present publication has been registered in ZooBank with the LSID: urn:lsid:zoobank.org:act:D9D7741A-AD8A-48B1-B6E6-B720C376307B. The following 'Systematic Paleontology' section, modified from the original article 2, appears below. In addition, the collection data of the holotype is corrected in the text and in Fig. 1, since the information provided in the original article proved to be erroneous. The correct Fig. 1 and accompanying legend appear below.
Stratigraphic information: Holotype SF-MeI 5933: no data; SF-MeI 13611: grid square E8/9; 2.5m above to 3.5m above local stratigraphic marker level alpha; SF-MeI 14640: grid square i14; 0.95m above to 1.75m above local stratigraphic marker level M; SF-MeI 16018: grid square F9; 2.5m above to 3.5m above local stratigraphic marker level alpha; HLMD-Me-10684: grid square H/I7; 1.86m below stratigraphic marker gamma; HLMD-Me-13919: grid square H/I7; 1.57m below to 2.09m below stratigraphic marker gamma; HLMD-Me-13920: grid square H/I7; 1.63m below stratigraphic marker gamma (marked in Fig. 1 with red dots).
Derivation of epithet: From the Latin words "brevis" (short) and "rostrum" (beak) referring to the distinctively short rostrum of this species in comparison to its congeners.
Diagnosis: Small shrimp with a short dorsally serrate rostrum and long second pereiopods with strong chela.
Rostrum short, about one fifth of carapace length, straight, laterally compressed, with an acute distal end, bearing 6-8 spines of equal size on dorsal margin all placed distally to the post-orbital margin and one tooth on ventral margin. Eyes developed, with a globular cornea, broader than eyestalk. Antennules seemingly biflagellate, antennular peduncle about half as long as carapace length. Antennae long, basal segments shorter than the antennular peduncle, with a well-developed scaphocerite about 4 times as long as broad. Left mandible preserved in the holotype, incisor process well developed, with three strong teeth, reduced molar process, no evidence of a palp being present. Pereiopods long and slender, first two pairs chelate. Chela of first pereiopod rounded, about three times as long as high, with sharp dactylus twice as long as its maximum height, about the same length as the palmar portion. Second pereiopod much longer and bigger than first, chela about four times as long as high, shorter than carpus, dactylus slightly shorter than palmar portion. Possible sexual dimorphism, with males having longer second pereiopods than females (see remarks below). Last three pairs of pereiopods similar in length and shape. Pleopods poorly preserved. Abdomen smooth, six-segmented, somites with a convex dorsal margin, pleura well developed, first somite reduced, second pleura overlapping both first and third, fourth and fifth somites smallest, similar in shape and size, sixth somite longest. Long telson, about half of carapace length, slightly shorter than uropods. Uropods flabellate, exopod about the same length as endopod, with no diaeresis discernible. | 2023-04-12T14:09:10.009Z | 2023-04-12T00:00:00.000 | {
"year": 2023,
"sha1": "c7b30c08d3019d11aaf2963f940665f8c326e367",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "c7b30c08d3019d11aaf2963f940665f8c326e367",
"s2fieldsofstudy": [
"Biology",
"Geology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227244171 | pes2o/s2orc | v3-fos-license | Perioperative Chemotherapy for Liver Metastasis of Colorectal Cancer
Simple Summary Survival outcomes for resectable metastatic colorectal cancer have improved over the past decade. This is due in part to improvements made in imaging technology, locoregional treatment, and systemic treatment. The focus of this review is to summarize and analyze the existing information available on systemic therapy in the setting of colorectal cancer with liver metastases. It is written with structured viewpoints including patient selection, sequencing of treatment, choice of systemic therapy for upfront resectable disease and conversion chemotherapy for unresectable colorectal cancer. Post-treatment surveillance and future research directions are also discussed. This review will help treating physicians make decisions for the treatment of liver metastases of colorectal cancer. Abstract The liver is the dominant site of metastasis for patients with colorectal cancer. For those with isolated liver metastases, surgical resection with systemic therapy has led to long-term remission in as high as 80% of patients in well-selected cohorts. This review will focus on how systemic therapy should be integrated with resection of liver metastases; in particular, the use of clinical risk scores based on clinicopathological features that help with patient selection, various approaches to the treatment of micro-metastatic disease (peri-operative versus post-operative chemotherapy), as well as conversion chemotherapy for those with initially upfront unresectable disease will be discussed.
Introduction
According to the World Health Organization (WHO) Global Cancer Observatory (GLOBOCAN) 2018 data, colorectal cancer (CRC) is the third most common cancer diagnosed (10.2%) and the second-leading cause of mortality amongst all cancers (9.2%) [1]. Over 1.8 million people worldwide were diagnosed with CRC in 2018, 15 to 25% of them would have synchronous liver metastases, and another 20% would develop liver metastases within three years, from the time of the first diagnosis [2][3][4]. The five-year overall survival (OS) in all stage IV CRC is 14.3% [5]. This is in stark contrast to what is observed for patients with resectable liver metastases, whose OS has improved remarkably over the last two decades. In fact, five-year OS from databases around the turn of the millennium was reported to be between 30 and 40% [6][7][8], compared to figures higher than 80% in trials published in 2020 [9].
Patient Selection
As there is a clear survival impact, surgical resection of R0 resectable liver metastases is the treatment of choice when possible. The criteria to select patients for potentially curative resection is constantly being redefined, and this varies between institutions but is largely dependent on the experience of the multi-disciplinary medical team. Considerations that need to be accounted for include patient factors, disease factors, and anatomic factors.
There are several well-established clinical risk scores that are used to stratify patients based on their likelihood of recurrence (Table 1) [7,[10][11][12]. To spare patients from a futile operation, those with high-risk features should be considered for initial chemotherapy to assess tumor biology before surgery. Even though these scores have been widely used to stratify patients based on their likelihood of recurrence, they were developed based on patients who underwent surgery before 2000, when prognostic information of tumor mutations were not known. Multiple groups have shown that somatic mutations in RAS are associated with inferior progression-free and overall survival in colorectal cancer [13,14]. Retrospective data based on 1460 patients who underwent colorectal liver metastasis resection and multigene testing revealed that RAS mutation status alone is not sufficient for predicting prognosis after resection [15]. These patients should not be uniformly excluded from surgery as long as a R0 resection can be achieved, as it remains the only option for cure [16][17][18][19]. Tumors with BRAF V600E mutations are associated with aggressive tumor biology and are less likely to be candidates for metastatectomy due to widespread disease at the time of presentation. Median OS even after resection of liver metastases for this group of patients is between 22 to 28 months [20,21]. Therefore, resection of liver metastases in this group of patients should only be considered if there has been a durable response to systemic therapy and the liver was the only site of disease.
More recently validated risk scores have included factors like KRAS mutation status and presence of extrahepatic disease, as in the Genetic and Morphological Evaluation (GAME) score [22], and KRAS/NRAS/BRAF mutation status and presence of bi-lobar liver metastasis in the comprehensive evaluation of relapse risk (CERR) score, both of which have outperformed the initial Fong Criteria [23]. Anatomic criteria for selecting candidates for resection of colorectal liver metastases have broadened so much over the years that the modern consensus on what defines resectable liver metastases is tumors that can be resected completely while leaving an adequate liver remnant [24]. All in all, no set of guidelines can replace case-by-case evaluation by a multi-disciplinary team, taking into consideration the complex web of interactions between "resectability", tumor biology, and patient factors; this remains the standard of care.
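As an illustration of how such clinicopathological scores are applied, the sketch below encodes the five criteria commonly attributed to the Fong clinical risk score, one point each; the exact criteria and cut-offs should be verified against Table 1 and the original publication, and the low/high-risk split used here is an assumption.

```python
def fong_clinical_risk_score(node_positive_primary: bool,
                             disease_free_interval_months: float,
                             number_of_liver_tumors: int,
                             largest_tumor_cm: float,
                             cea_ng_ml: float) -> int:
    """Fong clinical risk score as commonly summarised: one point per adverse feature."""
    score = 0
    score += node_positive_primary                   # node-positive primary tumor
    score += disease_free_interval_months < 12       # interval from primary to metastases < 12 months
    score += number_of_liver_tumors > 1              # more than one hepatic tumor
    score += largest_tumor_cm > 5                    # largest hepatic tumor > 5 cm
    score += cea_ng_ml > 200                         # pre-operative CEA > 200 ng/mL
    return score

# hypothetical patient: node-positive primary, 8-month interval, 2 metastases, 3 cm, CEA 40
score = fong_clinical_risk_score(True, 8, 2, 3.0, 40.0)
print(score, "high risk" if score >= 3 else "low risk")  # grouping of 3-5 as high risk is an assumption
```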
Timing of Systemic and Surgical Therapy
The optimal timing and sequencing of chemotherapy, chemoradiotherapy (for patients with a rectal primary), and surgery should be individualized for each patient. This decision is undertaken by a multi-disciplinary tumor board and should account for the following: synchronous versus metachronous liver metastases, presence of a symptomatic primary tumor, resectability of liver metastasis, and response to systemic therapy. There is no evidence that adding a biological agent to a cytotoxic doublet improves the outcome in the presence of resectable metastases compared with a cytotoxic doublet alone in combination with resection of the metastases. In general, patients with a symptomatic primary tumor (e.g., bleeding, obstruction, perforation) should have treatment of the primary tumor prioritized. If liver metastases are upfront resectable, concurrent resection of liver metastases can be undertaken if the peri-operative risks are acceptable. If not, pre-operative chemotherapy may be given, followed by reassessment with close-interval imaging. It is important to proceed to liver metastatectomy once resection is deemed feasible and not to continue with prolonged chemotherapy to avoid veno-occlusive disease or chemotherapy-associated steatohepatitis. Those with upfront resectable metachronous liver metastases (i.e., metastatic recurrence in the liver following resection of the primary tumor) and low clinical risk of recurrence can be considered for upfront resection. Following resection, the benefit of adjuvant chemotherapy or surveillance alone will need to be discussed with the patient. Figure 1 summarizes the approach to colorectal liver metastases.
Peri-Operative Chemotherapy for Upfront Resectable Disease
In those with upfront resectable disease, studies have looked at delivering systemic chemotherapy either for six months post-operatively or in a sandwich fashion, i.e., three months before and three months after resection of hepatic metastases. In those with resectable disease, the goal of delivering chemotherapy pre-operatively is to "test tumor biology", that is, to allow an observational period to determine whether new lesions will appear soon after systemic treatment, rendering surgery futile. Conversely, we know that the pathologic response to pre-operative chemotherapy is predictive of prognosis after resection of colorectal liver metastases; the five-year OS in those attaining pathological complete response is 76%, compared to 45% in those who do not [25].
To date (Table 2), there are two Phase III clinical trials that randomized patients in the sandwich regimen-the EORTC 40983 and the New EPOC study. In the first study, 364 patients with CRC and resectable liver metastases (up to four) were randomized to surgery alone or six cycles of FOLFOX4 chemotherapy (Fluorouracil, Leucovorin, Oxaliplatin), administered before and after surgery [26]. Three-year progression-free survival (PFS) did not increase significantly with the addition of chemotherapy. However, the improvement of 8% in all eligible patients and 9% in patients who had resection was significant, even though not pre-specified. Pre-operative chemotherapy achieved a response rate of 43%, 7% of the patients had progressive disease, and eight subjects (5%) were inoperable due to disease progression. Tumor progression on chemotherapy predicts poor outcomes after resection [27], and pre-operative chemotherapy helps to select patients who would otherwise go through unnecessary surgery. Opponents of pre-operative chemotherapy would argue that it delays otherwise curative surgery and increases the risk of post-operative complications [28]. Although surgery is deemed as curative in this setting, it has been shown that in those patients who achieve complete radiological response (3%) with chemotherapy, viable tumors are still observed in the resected liver specimen in more than 80% of these cases, which could have precluded them from resection in the first place [29]. Furthermore, there was no observed OS difference for patients treated with peri-operative FOLFOX4 during long-term follow-up. The five-year OS was 51.2% for patients in the chemotherapy arm versus 47.8% for the surgery-only group [30].
Although the addition of anti-epidermal growth factor receptor (EGFR) antibodies such as cetuximab and panitumumab has been shown to produce a survival benefit for patients with RAS/RAF wild-type tumors in the palliative setting [31,32], this is not the case for patients with operable liver metastases. The New EPOC trial randomized patients with KRAS exon 2 wild-type tumors with resectable or borderline resectable liver metastases to receive chemotherapy (oxaliplatin plus fluorouracil, oxaliplatin plus capecitabine, or irinotecan plus fluorouracil) with or without cetuximab before and after resection of liver metastases. Despite a higher number of patients showing response to treatment with cetuximab, the trial was terminated due to the detrimental effect on PFS that was observed during interim analysis, which eventually led to a shortening of OS of 26 months in the group receiving cetuximab [33,34]. Factors that modify the effect of anti-EGFR therapy in the setting of resectable metastases, such as tumor sidedness, RAF mutation, HER2 amplification, and microsatellite instability status, were untested at the time of trial enrolment, as knowledge of these mechanisms of resistance to anti-EGFR antibodies post-dates the design of this trial [35]. However, mutation status alone would not explain the result of this trial, as post-hoc analysis showed no significant difference in the distribution of extended RAS and RAF mutations in both groups. Interestingly, cetuximab was more detrimental in subgroups that were associated with good prognostic features such as well- or moderately differentiated primary tumors, a lower number of liver metastases, lack of N2 disease and metachronous disease [33]. Post-recurrence survival was much worse in the group that received cetuximab, possibly suggesting the development of an aggressive disease phenotype at recurrence or imbalances in post-recurrence treatment approaches [36]. Only 10% of patients who received peri-operative cetuximab received it again in the recurrent setting, compared to 30% in the chemotherapy-only arm. The findings of the New EPOC trial echo the results of N0147 and PETACC-8 in the adjuvant setting that saw no benefit with the addition of cetuximab to oxaliplatin-based chemotherapy in patients with stage III disease [37,38]. Overall, the consistent trend of lack of efficacy (and perhaps even the presence of a detrimental effect) of anti-EGFR therapy in patients with micro-metastatic disease makes this treatment unsuitable for use in the curative setting.
Adjuvant Chemotherapy
The use of oxaliplatin-based adjuvant chemotherapy for the treatment of micro-metastatic disease has long been accepted as the standard of care for patients with stage 3 colon cancer due to the consistent, significant disease-free and overall survival benefits [39]. However, this benefit cannot be directly extrapolated to post-operative chemotherapy after resection of liver metastases. Several trials designed to answer this question have been fraught with issues of poor recruitment, early termination, and usage of non-modern chemotherapy by today's standards [40][41][42]. Table 3 summarizes the randomized trials for adjuvant fluorouracil-based chemotherapy after resection of liver metastases. In three studies, fluorouracil-based chemotherapy was compared to observation [40,41,43], showing a consistent improvement in PFS of 6-9% with adjuvant chemotherapy. However, this improvement did not consistently translate to a significant improvement in OS at longer term follow-up. The preliminary data from JCOG0603, which was presented at the American Society of Medical Oncology (ASCO) meeting in 2020, represent the most recent results we have on chemotherapy after resection of liver metastases. The trial compared six months of post-operative mFOLFOX6 with observation alone and found a significant improvement in three-year disease-free survival (DFS) of 10.6% with chemotherapy. However, this trial was terminated early because of the futility in OS; the five-year OS was 83.0% in the control arm compared to 69.5% for patients receiving chemotherapy. Only 44% of those receiving adjuvant chemotherapy completed the six-month trial. The reasons suggested for the detrimental effect of mFOLFOX6 on OS was the restricted use of oxaliplatin for recurrent disease and the emergence of more aggressive chemotherapy-refractory tumors at relapse in those patients who received post-operative chemotherapy [9]. The results of this study also suggest that a five-year OS of 83% is possible after resection of small (<5 cm) and a limited number (≤3) of liver metastases followed by surveillance alone.
Conversion Chemotherapy for Unresectable Disease
The purported clinical benefits are seen only in R0 resection (i.e., no gross or microscopic tumor remains in the primary tumor bed); there is no role for partial palliative resection of metastases. As such, the consideration of upfront chemotherapy in the presence of initially unresectable liver metastases with the hope of downstaging to resectable disease seems to be a reasonable approach, but in reality it only occurs in 12% of cases [45]. However, for those who achieve a good response and successfully undergo curative resection treatment, the five-year DFS rate is 22% [45]. In choosing this approach, consideration must be given to the timing of chemotherapy, as prolonged administration of chemotherapy may lead to higher risk of liver toxicity and post-operative morbidity [46].
Multiple regimens for conversion chemotherapy in the presence of unresectable CRC with liver metastases have been studied, but the optimal chemotherapy regimen has not yet been established (Table 4). Broadly, the treatment should involve doublet or triplet chemotherapy with or without addition of targeted therapy. Factors considered include need for response, sidedness of primary tumor, mutation status, and previous chemotherapy-related toxicities.
Oxaliplatin-based (FOLFOX/XELOX) and irinotecan-based doublets (FOLFIRI/XELIRI) result in similar response rates, between 34% and 59% [47][48][49][50][51][52][53][54]. A careful selection of patients for conversion chemotherapy in Phase II clinical trials showed that the approach with doublet chemotherapy results in an R0 resection rate of 24% to 40% [48,52,53]. Further intensification to a triplet regimen with FOLFOXIRI resulted in higher responses (60% to 70%) and in a 15-26% rate of R0 resection [49,55]. In deciding the duration of chemotherapy pre-operatively, one should bear in mind that the purpose of conversion chemotherapy is not to treat until maximal response but rather to provide a limited course of chemotherapy until response has occurred to enable liver resection [56]. The incremental risk of post-operative morbidity is directly related to the number of chemotherapy cycles administered before surgery [28]. Cytotoxic chemotherapy used for metastatic CRC is associated with sinusoidal dilation in the case of oxaliplatin and steatohepatitis in the case of irinotecan, which increased the odds of post-operative mortality by ten-fold [46].
Bevacizumab is a humanized monoclonal antibody against vascular endothelial growth factor (VEGF) which has shown survival benefit in the palliative setting [57][58][59]. The addition of bevacizumab to either doublet or triplet chemotherapy has been investigated in several trials [50,[60][61][62][63][64]. Results from a pooled analysis of 11 trials including 889 patients showed that a FOLFOXIRI-bevacizumab combination resulted in response rates of 69% and R0 resection in 28% of those with initially had unresectable metastases [65]. Yet, the addition of bevacizumab to chemotherapy does not appear to significantly increase the response rates or rate of R0 resection in the NO16966 study [50]. Bevacizumab belongs to a class of anti-angiogenic agents with a peculiar toxicity profile including hypertension, proteinuria, hemorrhage, thromboembolism, gastrointestinal perforation, and impaired wound healing [66]. The half-life of bevacizumab is around 20 days, and the current data suggest that an interval of 5 to 8 weeks from the last dose of bevacizumab to surgery is likely to be safe [67,68]. However, given that bevacizumab has not shown to consistently improve response rates [50], does not provide benefit in the adjuvant setting [69,70], and has well-established side effects, it may not need to be routinely administered for the goal of conversion.
Addition of anti-EGFR agents (cetuximab or panitumumab) to doublet chemotherapy in the setting of unresectable liver metastases results in response rates of 41-70%, translating to R0 resection rates of 27-38% [31,[71][72][73][74]. In the CELIM trial, cetuximab added to either FOLFIRI or FOLFOX significantly increased the rate of R0 resection (25% vs. 7%) compared to chemotherapy alone [75]. The combination of EGFR inhibitors to triplet chemotherapy has consistently shown better responses, compared to FOLFOXIRI in patients with wild-type RAS tumors [63,76]. Patients with metastatic wild-type RAS CRC were randomized to receive panitumumab plus mFOLFOXIRI vs. mFOLFOXIRI only in the VOLFI trial [76]. The arm receiving quadruplet therapy achieved an overall response rate (ORR) of 86%, while those administered triplet therapy had an ORR of 54%. Among those receiving panitumumab, ORR was significantly higher in patients with left-sided tumors compared to those with right-sided tumors (90% vs. 60%). For those patients where an eventual resection was planned, more patients receiving panitumumab achieved resection (60% vs. 36%). Apart from the RAS/RAF molecular status, this trial and several others indicate that primary tumor sidedness is predictive for response to EGFR inhibitors [77,78]. In deciding between the use of anti-VEGF or an anti-EGFR, the sidedness of the primary tumor needs to be considered. In a combined analysis of six clinical trials (CRYSTAL [31], FIRE-3 [79], CALGB 80405 [80], PRIME [32], PEAK [81], and 20050181 [82]), significant improvements in PFS and OS with the addition of cetuximab or panitumumab to chemotherapy were only seen in patients with left-sided (distal to the splenic flexure), wild-type RAS primary colorectal tumors [78]. For the response rates, there was a trend towards a greater benefit from anti-EGFR therapy in those with left-sided tumors compared to those with right-sided tumors [78]. The National Comprehensive Cancer Network (NCCN) and the European Society of Medical Oncology (ESMO) recommend that the incorporation of EGFR inhibitors in the first-line setting should be indicated only for those patients with wild-type RAS/RAF and left-sided tumors [83,84]. Contrastingly, for patients with more aggressive right-sided primary tumors, addition of bevacizumab to chemotherapy may be more beneficial.
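The sidedness- and genotype-driven choice of biologic agent described above can be summarised as a simple decision rule. The sketch below is a deliberate simplification of the NCCN/ESMO logic cited in the text, intended only to illustrate the reasoning; it is not a substitute for guideline review or multi-disciplinary discussion.

```python
def first_line_biologic(ras_wild_type: bool, braf_wild_type: bool, left_sided_primary: bool) -> str:
    """Simplified first-line biologic selection for conversion chemotherapy (illustrative only)."""
    if ras_wild_type and braf_wild_type and left_sided_primary:
        # anti-EGFR antibodies (cetuximab or panitumumab) favoured with doublet/triplet chemotherapy
        return "anti-EGFR antibody"
    # right-sided primaries or RAS/BRAF-mutant tumours: bevacizumab-based combinations favoured
    return "bevacizumab"

# hypothetical patient: RAS/RAF wild-type but right-sided primary
print(first_line_biologic(ras_wild_type=True, braf_wild_type=True, left_sided_primary=False))
```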
Surveillance
Post-treatment surveillance is indicated for patients who would be considered for a second potentially curative surgical procedure. Approximately 60% of cancers recur within a year after complete resection of colorectal liver metastases, and about one-quarter of patients who undergo surveillance after initial treatment of liver metastases can undergo curative-intent treatment again [85]. The five-year DFS and OS for those who undergo repeat hepatectomy have been reported as 22-26% and 41-73%, respectively [86,87]. The NCCN recommends CEA testing and contrast-enhanced CT scans of the thorax, abdomen, and pelvis every 3-6 months in the first two years after adjuvant chemotherapy and then every 6-12 months for a total of five years [83].
Future Directions
This year, the Food and Drug Administration (FDA) approved the anti-PD1 antibody pembrolizumab for first-line treatment of patients with metastatic CRC with microsatellite instability-high (MSI-H) or mismatch repair-deficient (dMMR) tumors based on the findings of the KEYNOTE-177 study [88]. Compared to chemotherapy (FOLFOX or FOLFIRI with bevacizumab or cetuximab), first-line pembrolizumab resulted in a doubling of PFS (16 vs. 8 months) and an improved ORR (44% vs. 33%) [89]. In the updated analysis of CheckMate 142, the combination of nivolumab and low-dose ipilimumab achieved an ORR of 64%, a complete response rate of 9%, and a disease control rate of 84% [90]. Although dMMR status is uncommon (<5% of metastatic CRC patients), the high and durable response seen with checkpoint inhibitors in this group of patients raises the question of liver metastatectomy in cases which are converted to resectable disease. Interestingly, a small retrospective study observed a pathologic complete response in the majority of resected specimens treated with checkpoint inhibitors, despite the presence of residual tumors on pre-operative imaging [91]. This suggests that residual radiographic tumors may not require resection following response to anti-PD1-based therapy. Hence, the role of liver metastatectomy in dMMR patients remains uncertain and needs to be prospectively validated.
The lack of OS benefit with modern post-operative chemotherapy after liver resection indicates that patient selection needs improvement. One emerging area is the potential use of circulating tumor DNA (ctDNA), as it defines minimal residual disease, reflecting the existence of micro-metastases after surgical resection [92]. Following hepatectomy for liver metastases in CRC, detection of ctDNA was associated with a high risk of recurrence [93]. In future post-operative studies, ctDNA could be used to select patients with a high risk of recurrence who could be considered for escalation of chemotherapy regimens. ctDNA has also been suggested to help select patients for resection of colorectal liver metastases, in that the absence of ctDNA after four weeks of systemic chemotherapy correlated with an 85% R0/R1 resection rate of liver metastases [94].
Conclusions
In conclusion, the management of CRC liver metastasis requires risk stratification and multidisciplinary input. Patients with upfront resectable disease and low clinical risk should undergo resection, followed by a risk-benefit discussion regarding post-operative adjuvant chemotherapy or surveillance. In patients who fulfill the criteria of the EORTC 40983 study, peri-operative chemotherapy may be considered. The peri-operative chemotherapy approach may also be considered for resectable but high-risk cases to test tumor biology in order to avoid futile surgery. For unresectable liver metastases, a good response to systemic therapy may provide an opportunity for liver metastatectomy. It is important to note that the goal of conversion chemotherapy is surgical resection rather than maximal response; hence, regular imaging at close intervals is required to determine the optimal time for resection in order to reduce the risk of chemotherapy-induced liver injury. The choice of treatment regimen for conversion chemotherapy depends on tumor burden, RAS/RAF mutation status, primary tumor sidedness, exposure to previous adjuvant chemotherapy, and pre-existing toxicities. Regardless of the regimen chosen prior to resection, a total of six months of peri-operative systemic therapy is recommended. Chemotherapy based on 5FU with or without oxaliplatin is recommended in the post-operative setting, as there is no proven benefit for irinotecan, bevacizumab, or EGFR inhibitors when there is no evaluable disease. More reliable biomarkers are required in this setting to better select patients for treatment in order to optimize the current standards of care.
Conflicts of Interest:
The authors declare no conflicts of interests. | 2020-12-02T14:11:19.537Z | 2020-11-26T00:00:00.000 | {
"year": 2020,
"sha1": "a3869f3a5871063a30d5d1c677cce57e3ec1a77d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/12/12/3535/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2374a591bd2cbecf4684441fc14d4ae41971e15",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6561528 | pes2o/s2orc | v3-fos-license | Alcohol metabolism and cancer risk.
Chronic alcohol consumption increases the risk for cancer of the organs and tissues of the respiratory tract and the upper digestive tract (i.e., upper aerodigestive tract), liver, colon, rectum, and breast. Various factors may contribute to the development (i.e., pathogenesis) of alcohol-associated cancer, including the actions of acetaldehyde, the first and most toxic metabolite of alcohol metabolism. The main enzymes involved in alcohol and acetaldehyde metabolism are alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH), which are encoded by multiple genes. Because some of these genes exist in several variants (i.e., are polymorphic), and the enzymes encoded by certain variants may result in elevated acetaldehyde levels, the presence of these variants may predispose to certain cancers. Several mechanisms may contribute to alcohol-related cancer development. Acetaldehyde itself is a cancer-causing substance in experimental animals and reacts with DNA to form cancer-promoting compounds. In addition, highly reactive, oxygen-containing molecules that are generated during certain pathways of alcohol metabolism can damage the DNA, thus also inducing tumor development. Together with other factors related to chronic alcohol consumption, these metabolism-related factors may increase tumor risk in chronic heavy drinkers.
Epidemiologic studies of the last decades have unequivocally identified chronic alcohol consumption as an important risk factor for the development (i.e., pathogenesis) of various types of cancers, including cancers of the organs and tissues of the respiratory tract and the upper digestive tract (i.e., upper aerodigestive tract), liver, colon or rectum (i.e., colorectum), and breast (for a review, see Bagnardi et al. 2001). For these types of cancer, the following associations with alcohol consumption have been found (a conversion between grams of alcohol and standard drinks is sketched after this list): • The highest cancer risk associated with alcohol consumption is seen for the upper aerodigestive tract, that is, the oral cavity, throat (i.e., pharynx), voice box (i.e., larynx), and esophagus. Heavy drinking (i.e., consumption of more than 80 g alcohol, or more than five to six drinks, per day 1), especially combined with smoking, increases the risk of developing these cancers by a factor of 50 or more, depending on the population studied (Pöschl and Seitz 2004).
• Alcohol-related liver cancer (i.e., hepatocellular carcinoma) primarily develops in people with liver cirrhosis resulting from chronic excessive alcohol use.
• The risk for alcohol-related colorectal and breast cancer is smaller than that for upper aerodigestive tract cancer. However, because these types of cancer have a high prevalence in the Western world, alcohol likely is an important risk factor. One study (Longnecker 1994) calculated that 4 percent of all newly diagnosed breast cancer cases in the United States primarily result from alcohol consumption.
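The consumption levels above are quoted both in grams of pure alcohol and in numbers of drinks; a minimal sketch of the conversion follows. The 14 g per standard drink and the serving sizes come from footnote 1 (reproduced after the enzyme list below); the beverage ABV values and the ethanol density of roughly 0.789 g/mL are assumptions added here for illustration.

```python
# Rough conversion between beverage servings and grams of pure ethanol.
ETHANOL_DENSITY_G_PER_ML = 0.789   # assumed typical value, not from the article
US_STANDARD_DRINK_G = 14.0         # from footnote 1

def grams_of_ethanol(volume_ml: float, abv_percent: float) -> float:
    """Grams of pure ethanol in a serving of given volume and % alcohol by volume."""
    return volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML

servings = {
    "12 fl oz beer (assumed 5% ABV)": grams_of_ethanol(355, 5),
    "5 fl oz wine (assumed 12% ABV)": grams_of_ethanol(148, 12),
    "1.5 fl oz 80-proof spirits (40% ABV)": grams_of_ethanol(44, 40),
}
for name, grams in servings.items():
    print(f"{name}: {grams:.1f} g ethanol = {grams / US_STANDARD_DRINK_G:.2f} standard drinks")

# The 'heavy drinking' threshold of >80 g/day quoted above is therefore roughly:
print(f">80 g/day is about {80 / US_STANDARD_DRINK_G:.1f} standard drinks per day")
```

Each serving works out to roughly 14 g of ethanol, and 80 g/day corresponds to the five to six drinks per day cited above.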
Overall, however, only a small percentage of chronic heavy drinkers develop certain types of cancer; moreover, some people develop cancer even at relatively moderate daily alcohol consumption. These observations suggest that a genetic predisposition may influence cancer risk. At least part of this genetic predisposition may be related to alcohol metabolism because the rate of alcohol metabolism is genetically determined. Alcohol metabolism primarily involves three groups of enzymes (see Figure) (for more information on the pathways of alcohol metabolism, see Alcohol Research & Health Vol. 29, No. 4, "Alcohol Metabolism: Mechanisms of Action"): • Alcohol dehydrogenase (ADH) enzymes that oxidize beverage alcohol (i.e., ethanol) to acetaldehyde.
• Cytochrome P450 2E1 (CYP2E1), a protein that is part of the microsomal ethanol oxidizing system (MEOS) and is involved in alcohol metabolism primarily after chronic alcohol consumption.
• Aldehyde dehydrogenase (ALDH) enzymes that oxidize acetaldehyde to acetate.
1 In the United States, a standard drink is frequently defined as the amount of beverage containing 0.5 ounces, or 14 grams, of pure alcohol. This amount is found in 12 fluid ounces (fl oz) of beer, 5 fl oz of wine, or 1.5 fl oz of 80-proof distilled spirits.
For several of these enzymes, more than one genetic variant exists, as follows (for more information, see the article by Edenberg, p. 5): • Two of seven genes encoding ADH enzymes (i.e., the ADH1B and ADH1C genes) show polymorphism, that is, they exist in variants (i.e., alleles) that differ in their activities, resulting in the generation of different quantities of acetaldehyde.
• For the ALDH2 enzyme, the most important enzyme in the metabolism of acetaldehyde to acetate, two alleles exist, one of which has a very low activity, resulting in acetaldehyde accumulation after alcohol consumption; this genetic variant is present in a large proportion of Japanese and other East Asian people.
• The degree to which CYP2E1 is inducible by chronic alcohol consumption varies among people, and the induction may be genetically determined.
Figure: Ethanol is oxidized to acetaldehyde through the actions of various alcohol dehydrogenase (ADH) enzymes (e.g., enzymes encoded by the ADH1B and ADH1C genes), through the microsomal enzyme cytochrome P450 2E1 (CYP2E1), and by microbes living in the human gastrointestinal tract (e.g., mouth and colon). The relative contributions of these pathways and the differences in activity between enzymes encoded by different ADH1B and ADH1C alleles are represented by the thickness of the arrows. Acetaldehyde is oxidized to acetate primarily by the enzyme aldehyde dehydrogenase 2 (ALDH2). Again, the thickness of the arrows indicates the rate of acetaldehyde oxidation in people carrying two active ALDH2*1 alleles, one active ALDH2*1 and one inactive ALDH2*2 allele, or two inactive ALDH2*2 alleles, respectively. Cancer-inducing substances (i.e., carcinogens) generated during the various pathways of alcohol metabolism are highlighted. These include acetaldehyde; highly reactive, oxygen-containing compounds (reactive oxygen species [ROS]) generated by CYP2E1; and adducts formed by the interactions of acetaldehyde or ROS with DNA.
This review discusses the role of alcohol metabolism in alcohol-associated cancer development (i.e., carcinogenesis 2), focusing mainly on the contribution of acetaldehyde and on genetic risk factors leading to increased acetaldehyde levels, such as certain alleles of the genes encoding ADH1C and ALDH2. This article also briefly describes the role in carcinogenesis of CYP2E1 and of compounds generated during CYP2E1-mediated alcohol metabolism. For a discussion of other mechanisms involved in alcohol-associated carcinogenesis (such as malnutrition with vitamin deficiency, concomitant smoking, the presence of certain bacteria in the gastrointestinal tract resulting from poor oral hygiene and diet, and underlying alcohol-related diseases), see the recent review article by Pöschl and Seitz (2004).
Acetaldehyde-A Carcinogen
According to the International Agency for Research on Cancer (IARC) (1999), overwhelming evidence indicates that acetaldehyde should be classified as a carcinogen in experimental animals. For example, acetaldehyde inhalation in rats and hamsters results in cancer of the nasal mucosa and the larynx. Similarly, long-term administration of acetaldehyde in drinking water results in changes characterized by excessive cell growth of the mucosa cells of the upper digestive tract. These mucosal alterations are similar to those observed following chronic alcohol ingestion. Finally, acetaldehyde induces inflammation and transformation of the cells lining the windpipe (i.e., trachea), interferes with the normal reproduction of cells, and enhances cell injury of the gastrointestinal mucosa associated with excessive cell growth.
2 For a definition of this and other technical terms used in this article, see the glossary, p. 32.
One of the pathways through which acetaldehyde promotes cancer formation is by interfering, through several mechanisms, with the copying (i.e., replication) of DNA that occurs when cells divide. For example, acetaldehyde has been shown to cause alterations ranging from the exchange of single DNA building blocks (i.e., point mutations) in certain genes to gross chromosomal alterations (Obe et al. 1986). Moreover, acetaldehyde impairs the process through which naturally occurring damage to the DNA is repaired by inhibiting an enzyme that is important for the repair of a certain type of DNA damage.
In addition to these mechanisms, acetaldehyde can interact with DNA building blocks to form new molecules (i.e., DNA adducts). These adducts may trigger replication errors and/or mutations in cancer-causing genes (i.e., oncogenes) or in genes that normally prevent cancer development (i.e., tumor suppressor genes). For example, a major stable DNA adduct called N2-ethyl-2'-deoxyguanosine (N2-Et-dG) can be incorporated efficiently into new DNA molecules during DNA replication. However, although this DNA adduct has been detected in human white blood cells and in rat liver after alcohol administration, there is relatively little evidence that it actually induces DNA mutations.
Most DNA adducts are formed only at relatively high acetaldehyde concentrations that are not normally found in the body. However, a class of compounds known as polyamines can facilitate the formation of one mutagenic DNA adduct at acetaldehyde concentrations found in the gastrointestinal tract (50 to 100 µM). Moreover, the polyamine spermidine (which is found in tissues with rapidly dividing cells, such as the gastrointestinal mucosa) may react directly with acetaldehyde to form a molecule called crotonaldehyde, which can bind to the DNA and cause mutations (Theravathu et al. 2005). This conversion of acetaldehyde to crotonaldehyde in the presence of spermidine and other polyamines also can occur in the mouth and throat (i.e., oropharynx), an area that is lined by a mucosa that undergoes rapid cell division.
Acetaldehyde is found in the saliva, which can lead to an elevated risk of oropharyngeal cancer. Cancer risk increases with the amount of acetaldehyde generated in the saliva, and patients with oropharyngeal cancer have elevated acetaldehyde concentrations in their saliva (Jokelainen et al. 1996). Because acetaldehyde in saliva is derived primarily from alcohol metabolism, it is clear in this case that the alcohol-associated cancer risk increases with the amount of alcohol consumed. Furthermore, the activity of the enzymes that regulate acetaldehyde formation and degradation, that is, ADH and ALDH, influences the incidence of alcohol-related gastrointestinal tract cancer among regular or heavy alcohol consumers.
Sources of Acetaldehyde
Most acetaldehyde in the body is generated during ethanol metabolism, when the ethanol is oxidized to acetaldehyde by ADH or CYP2E1. Another source of acetaldehyde is bacteria living in the gastrointestinal tract (see Figure). For example, in the absence of oxygen, ADH-containing bacteria in the mouth and stomach can convert carbohydrates to acetaldehyde and ethanol. Although the stomach itself is usually free of bacteria 3 because it is highly acidic, some people suffer from an inflammation of the stomach that is characterized by insufficient stomach acid production and which can lead to the development of stomach cancer. In these patients, bacteria can grow in the stomach because not enough stomach acid is produced. If the patients consume sugar (i.e., glucose), these bacteria can produce small amounts of ethanol and acetaldehyde. More importantly, if these patients consume alcohol, acetaldehyde concentrations in the stomach increase 6.5-fold (Väkeväinen et al. 2002; Salaspuro et al. 2006).
In addition to the acetaldehyde generated by cellular enzymes or gastrointestinal bacteria, considerable amounts of acetaldehyde are present in certain alcoholic beverages (e.g., calvados [an apple brandy]) and in cigarette smoke.
Role of ADH in Alcohol- or Acetaldehyde-Associated Carcinogenesis
Genetic linkage studies conducted in alcoholics have provided striking evidence that acetaldehyde plays a central role in alcohol-associated carcinogenesis. These studies found that people who accumulate acetaldehyde because they carry certain alleles of the genes encoding ADH or ALDH have an increased cancer risk (Yokoyama et al. 1998). There are at least seven types (i.e., isozymes) of human ADH that are encoded by seven genes. These isozymes are categorized into five different classes based on structural characteristics. Class I isozymes account for most of the alcohol metabolism. These are three isozymes known as alpha (α), beta (β), and gamma (γ), which are encoded by the ADH1A, ADH1B, and ADH1C genes, respectively. (For more information on the classification of ADH isozymes, see the article by Edenberg, p. 5.) For both the ADH1B and the ADH1C genes, several alleles exist that result in differences in the activity of the ADH molecules they encode (e.g., the rate with which the ethanol is oxidized to acetaldehyde). For example, the ADH1B*2 allele encodes an enzyme that is approximately 40 times more active than the enzyme encoded by the ADH1B*1 allele.
3 The exception to this statement are the Helicobacter pylori bacteria, which cause a large proportion of stomach ulcers.
Alcohol Consumption and Colorectal Cancer
The epidemiologic data concerning the association between alcohol consumption and colorectal cancer are not as clear as those concerning cancers of the upper aerodigestive tract. Most studies, however, detected a positive correlation between chronic alcohol consumption and colorectal cancer. In 5 of 10 case-control studies and all prospective cohort studies that considered alcohol consumption, researchers found a positive trend with respect to dose response (see Cho et al. 2004). Thus, the analysis of eight pooled cohort studies showed a significant trend between increasing alcohol intake and the risk of colorectal cancer, with consumption of more than 45 g (or about three drinks) per day increasing the risk by 45 percent (Cho et al. 2004). Other studies investigated the association between alcohol consumption and the development of growths in the colon that precede, and may develop into, colon cancer (i.e., adenomatous polyps). In five of six studies in which the effect of alcohol on adenomatous polyps was investigated, such a correlation was observed. Alcohol also may influence the progression from an adenoma to a carcinoma and may favor the development of high-risk polyps or cancer among patients with adenomas. In 1999, a consensus conference of the World Health Organization on Nutrition and Colorectal Cancer concluded that chronic alcohol ingestion, even at low daily intake (one to three drinks or 10 to 40 g per day), results in a 1.5- to 3.5-fold increase in risk of rectal cancer and a lesser increase in risk of colonic cancer in both sexes (Scheppach et al. 1999). This conclusion was confirmed at an International Agency for Research on Cancer meeting on alcohol and cancer (Baan et al. 2007).
Similarly, the enzyme encoded by the ADH1C*1 allele is 2.5 times more active than the enzyme encoded by the ADH1C*2 allele (Bosron et al. 1993). People who carry the highly active ADH1B*2 allele rapidly convert ethanol to acetaldehyde. This leads to acetaldehyde accumulation following alcohol consumption and results in toxic side effects, such as a flushing syndrome with sweating, accelerated heart rate, nausea, and vomiting. These adverse symptoms exert a protective effect against acute and chronic alcohol consumption (i.e., people with this allele typically drink little or no alcohol) and also appear to protect against alcohol-associated cancer development. The ADH1B*2 allele rarely is found in Caucasians but occurs more frequently in Asian populations.
The effects of the different ADH1C alleles on alcohol metabolism and, consequently, on drinking levels and alcohol-related carcinogenesis, are more subtle. They can best be studied in Caucasian populations in which the highly active ADH1B*2 allele is rare. (Similarly, Caucasians rarely carry a certain variant of an ALDH-encoding gene that also results in high levels of acetaldehyde accumulation and which will be discussed later in this article.) Studies on the relationship between ADH1C alleles and cancer occurrence in Caucasians have led to contradictory results (Brennan et al. 2004; Harty et al. 1997). Harty and colleagues (1997) compared the risk of oral cancer associated with various alcohol consumption levels in people who carried two copies of the more active ADH1C*1 allele (i.e., who were homozygous for that allele 4) with the risk in people who carried only one copy of this allele (i.e., who were heterozygous) or were homozygous for the less active ADH1C*2 allele. The study found that people who consumed eight or more drinks per day and were homozygous for the more active ADH1C*1 allele had a 40-fold increased risk for oral cancer compared with nondrinkers. In contrast, people who consumed the same amount of alcohol but who were heterozygous or homozygous for the less active ADH1C*2 allele had only a four- to seven-fold increased risk compared with nondrinkers. At lower levels of alcohol consumption, the difference in cancer risk between the various gene carriers was less striking. This is not surprising, however, because higher levels of alcohol consumption also result in production of more acetaldehyde, which then can exert its carcinogenic effect.
Additional studies have confirmed an increased risk of oropharyngeal and laryngeal cancer in alcohol consumers with the ADH1C*1 allele (Coutelle et al. 1997). Other case-control studies, however, have not been able to confirm this association (Olshan et al. 2001; Brennan et al. 2004). The negative results of these studies may, at least in part, result from the fact that the alcohol intake of the participants was low and may not have led to sufficiently high acetaldehyde levels.
More recently, two studies determined ADH1C polymorphisms in more than 400 heavy drinkers (i.e., people who consumed more than 60 g alcohol, or more than four drinks, per day) with various cancers of the upper aerodigestive tract, liver, and breast. The data were compared with carefully matched control patients with alcohol-related diseases (e.g., cirrhosis of the liver, pancreatitis, and alcohol dependence) but without cancer (Homann et al. 2006; Coutelle et al. 2004). Cancer patients and control subjects were of similar age and had similar histories of alcohol consumption (i.e., amount and duration of drinking) and cigarette smoking. The studies found that significantly more patients with alcohol-related cancers had at least one ADH1C*1 allele, or were homozygous for ADH1C*1, than did patients with other alcohol-related diseases.
4 Every person carries two copies of each gene, one inherited from the father and one inherited from the mother. These copies can be of the same allele or of different alleles.
Many epidemiological studies have demonstrated a positive association between alcohol consumption and an increased risk for breast cancer (for a recent review, see Hamajima et al. 2002). These studies have shown that even moderate alcohol consumption leads to a significantly increased risk for breast cancer. Moreover, Longnecker (1994) calculated that approximately 4 percent of newly diagnosed cases of breast cancer in the United States result from chronic alcohol intake.
Although the exact mechanisms through which alcohol exerts its cocarcinogenic effect on the breast remain unknown, a genetic predisposition may play an important role. This predisposition could involve the enzyme alcohol dehydrogenase (ADH), which breaks down alcohol to acetaldehyde. Recent studies have shown that people who carry genes encoding highly active ADH enzymes leading to elevated acetaldehyde levels are at particularly high risk for a variety of cancers, including breast cancer.
ADH Variants and Breast Cancer Risk
As described in more detail in the accompanying article "Alcohol Metabolism and Cancer Risk," there are seven types of ADH that are encoded by different genes. Moreover, two of the seven ADH genes, called ADH1B and ADH1C, are polymorphic, that is, they exist in more than one variant (i.e., allele). The enzymes encoded by these alleles differ in their activity and therefore result in the accumulation of different quantities of acetaldehyde. In Caucasians, polymorphism of the ADH1C gene is particularly relevant to cancer risk. This gene has two known alleles: a highly active allele called ADH1C*1 and a less active allele called ADH1C*2. Several case-control studies 1 have assessed the relationship between the active ADH1C*1 allele (and, thus, elevated acetaldehyde levels) and the risk of breast cancer. Of these, three studies concluded that the ADH1C*1 allele plays a role in breast cancer development, particularly in women who have not yet entered menopause (i.e., are premenopausal); conversely, one study did not find such a positive correlation.
Freudenheim and colleagues (1999) compared 315 breast cancer patients and 356 age-matched control subjects. Among premenopausal (but not postmenopausal) women, the researchers found that breast cancer risk was higher in women carrying two copies of the ADH1C*1 allele (i.e., homozygous for ADH1C*1) compared with women carrying only one or no copy of the ADH1C*1 allele (i.e., heterozygous or homozygous for the ADH1C*2 allele). Moreover, premenopausal women who were homozygous for ADH1C*1 and had a higher level of alcohol intake 2 were at greater risk of breast cancer than were comparable women with moderate alcohol intake (odds ratio 3.6, 95% CI 1.5-8.8).
1 For a definition of this and other technical terms, see the glossary, p. 32.
2 The group of women with higher alcohol intake was defined as including those women for whom the median of the reported average consumption over the past 20 years was 6.5 drinks per month or more for premenopausal women and 4.4 drinks per month or more for postmenopausal women.
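Odds ratios with 95% confidence intervals, such as the 3.6 (1.5-8.8) figure above, are the standard effect measure in these case-control studies. A minimal sketch of how such a value is obtained from a 2x2 exposure-by-outcome table, using the common Woolf log-normal interval; the counts below are made up purely for illustration and are not the actual data from Freudenheim et al.:

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """
    Odds ratio and Woolf (log-normal) confidence interval for a 2x2 table:
        a = exposed cases,    b = unexposed cases
        c = exposed controls, d = unexposed controls
    """
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# Hypothetical counts chosen only so the point estimate lands near 3.6.
or_value, lower, upper = odds_ratio_with_ci(a=24, b=20, c=30, d=90)
print(f"OR = {or_value:.1f} (95% CI {lower:.1f}-{upper:.1f})")
```

The width of the interval depends mainly on the cell counts, which is why the smaller subgroup analyses quoted in these studies tend to have wide confidence intervals.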
More recently, Terry and colleagues (2006) compared more than 1,000 breast cancer patients with more than 1,100 control subjects. These researchers found that among women homozygous for ADH1C*1, a lifetime consumption of 15 to 30 g alcohol per day (which corresponds to approximately one to two drinks per day) was associated with a two-fold increase in breast cancer risk (95% CI 1.1-3.5). However, this increase in risk was not seen in women with the same alcohol consumption who were heterozygous or homozygous for ADH1C*2. Again, the increase in risk was particularly pronounced among premenopausal women. In a European study, Coutelle and colleagues (2004) reported that the ADH1C*1 allele was significantly more common in moderate alcohol consumers with breast cancer than in age-matched control subjects without cancer. Furthermore, women homozygous for ADH1C*1 had a 1.8-times greater risk of breast cancer than women with other allele combinations (95% CI 1.4-2.3).
In contrast to these three studies, Hines and colleagues (2000), who evaluated 465 breast cancer patients and 621 control subjects participating in the Nurses' Health Study, did not find any effect of ADH1C polymorphism on breast cancer risk. However, this study included women with relatively low weekly alcohol consumption, which may not result in sufficiently high acetaldehyde concentrations.
It is important to note that the relationship between ADH1C polymorphism and cancer can only be studied in populations with significant alcohol consumption resulting in substantial acetaldehyde levels. Moreover, accurately assessing the relationship between alcohol consumption, alcohol metabolism, and breast cancer requires a substantial sample size.
ADH and Estrogen Levels
One of the risk factors for breast cancer is an increased blood level of female sex hormones (i.e., estrogens), the most important of which is called estradiol. In some cases of breast cancer, the cancer cells carry docking molecules (i.e., receptors) for estrogens on their surface; these cancer cells depend on estrogen to grow. In other breast cancer cases, however, no estrogen receptors are found and these cells grow even without estrogen. Alcohol consumption and alcohol metabolism by ADH appear to affect the levels of estrogen and estrogen receptors, which may contribute to the alcohol-breast cancer association. However, this issue still needs to be fully elucidated. What currently is known is the following: • The enzyme encoded by ADH1C not only metabolizes alcohol to acetaldehyde but also is involved in the metabolism of steroid hormones, including estrogens (McEvily et al. 1988). • Some evidence suggests that alcohol enhances the expression of estrogen receptors in breast cells (Fan et al. 2000), which would increase breast cancer risk. • Both in women with a normal menstrual cycle and in women taking oral contraceptives, blood concentrations of acetaldehyde after alcohol consumption were shown to be particularly high when estradiol levels reached their highest during the menstrual cycle (Eriksson et al. 1996). This could impact cancer risk because under these conditions the two risk factors acetaldehyde and estrogen would be combined. • In premenopausal women, alcohol ingestion has been associated with higher estrogen concentrations in the blood, although some studies observed this effect only in women taking oral contraceptives (Singletary and Gapstur 2001). • In another study of healthy premenopausal women, the administration of alcohol doses as low as 0.225 g pure alcohol per kg body weight (corresponding to approximately one drink), resulting in blood alcohol concentrations not exceeding 25 mg per 100 mL (0.025 percent), caused an increase in serum estradiol concentrations of 27 to 38 percent while alcohol was detectable in the blood (Coutelle et al. 2004); a rough check of this dose-to-concentration figure is sketched after this list. This effect was observed during all phases of the menstrual cycle. • For postmenopausal women, an increase of estrogens following alcohol consumption appears to depend on whether they use hormone replacement therapy (HRT) to treat menopausal symptoms. In women without HRT, estrogen increases were variable and depended on multiple factors. In women with HRT, the increase in estrogen levels was consistent and pronounced (Singletary and Gapstur 2001).
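As a rough check on the dose-to-concentration figure in the list above, a Widmark-style back-of-envelope estimate is sketched below. The distribution factor of about 0.55 L/kg for women is a textbook assumption, not a value from the cited study, and the calculation ignores absorption time and ongoing metabolism, so it is only a theoretical upper bound; it is consistent with the reported peaks staying below 25 mg per 100 mL.

```python
# Widmark-style upper bound on blood alcohol concentration (BAC) for a 0.225 g/kg dose.
dose_g_per_kg = 0.225   # dose quoted in the bullet above
r_women = 0.55          # assumed Widmark distribution factor (L/kg) for women

bac_g_per_litre = dose_g_per_kg / r_women        # grams of ethanol per litre of blood
bac_mg_per_100ml = bac_g_per_litre * 100         # 1 g/L = 100 mg/100 mL

print(f"Theoretical peak ~{bac_mg_per_100ml:.0f} mg/100 mL "
      f"(~{bac_mg_per_100ml / 1000:.3f} percent); "
      "reported peaks were below 25 mg/100 mL once absorption and metabolism are accounted for")
```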
Conclusions
Some of the published data suggest that acetaldehyde generated during alcohol metabolism may be involved in alcohol-associated development of cancer (i.e., carcinogenesis) in breast tissue, especially in women with higher alcohol consumption. If this evidence is confirmed, women with high levels of daily alcohol consumption who also are homozygous for the highly active ADH1C*1 allele may be at particularly high risk of developing breast cancer. In addition, there is some evidence that alcohol, even at low doses, increases serum estrogens under certain conditions and that this effect may be another important mechanism involved in alcohol's cocarcinogenic actions.
Statistical analyses determined a significant association between ADH1C*1 allele frequency and rate of homozygosity and an increased risk for alcohol-related cancer (p < 0.001). Finally, people who were homozygous for ADH1C*1 had a relative risk of developing esophageal, liver, and head and neck cancers of 2.9, 3.6, and 2.2, respectively, compared with people homozygous for ADH1C*2.
Other studies found that people who are homozygous for the ADH1C*1 allele had significantly higher acetaldehyde levels in their saliva than did heterozygous people or people who are homozygous for the ADH1C*2 allele (Visapää et al. 2004), similar to people with an inactive ALDH2 allele (which is discussed in the following section). As mentioned earlier, acetaldehyde levels in the saliva may be important for cancer development. Saliva rinses the mucosa of the upper aerodigestive tract, and any acetaldehyde in the saliva may be taken up by mucosal cells. Moreover, mucosal cells contain little of the ALDH2 enzyme and therefore cannot efficiently break down acetaldehyde. As a result, acetaldehyde may bind to proteins and DNA in the mucosal cells and may initiate carcinogenesis. The hypothesis that acetaldehyde in the saliva contributes to tumor development is supported by the observation that acetaldehyde-fed rats with intact salivary glands showed excessive proliferation of the upper gastrointestinal mucosa, similar to the changes observed following chronic alcohol consumption. When the glands were surgically removed (i.e., when the animals no longer produced saliva), however, this excessive cell proliferation disappeared (Pöschl and Seitz 2004).
Because the γ-ADH enzyme, which is encoded by ADH1C, also is found in the mucosa lining the colon, other investigators have studied the relationship between the various ADH1C alleles and the development of alcohol-related colorectal cancer. Several recent studies suggest that the ADH1C*1 allele can play an important role in the development of alcohol-associated colon cancer (Tiemersma et al. 2003).
However, other studies have not come to the same conclusion.
In summary, numerous studies suggest that ADH1C alleles that result in acetaldehyde accumulation in the cells can enhance a drinker's risk of developing alcohol-related cancers in a variety of tissues.
Role of ALDH in Alcohol- or Acetaldehyde-Associated Carcinogenesis
The main enzyme that breaks down acetaldehyde in the body is ALDH2. It is encoded by the ALDH2 gene, for which there are two main alleles, ALDH2*1 and ALDH2*2. The ALDH2*2 allele is caused by a point mutation in the normal ALDH2*1 allele, resulting in an almost inactive ALDH enzyme. This allele does not occur in Caucasians and is only found among Asian people. For example, approximately 10 percent of the Japanese population are homozygous for ALDH2*2. Moreover, approximately 40 percent of the Asian population are heterozygous. People who are homozygous for ALDH2*2 have an extremely low ALDH activity; when these people drink alcohol, acetaldehyde accumulates and the "flushing syndrome" develops. These people do not tolerate alcohol at all and are therefore generally protected against developing alcoholism. People who carry only one copy of the ALDH2*2 allele (i.e., who are heterozygous) also have greatly reduced (i.e., less than 10 percent) ALDH2 activity. Nevertheless, they can consume alcohol and may even become heavy drinkers and alcoholics.
Several epidemiological studies have demonstrated that the risk of alcohol-associated cancer of the aerodigestive tract is significantly elevated in people with low ALDH2 activity, with a relative risk of 11.0 for oropharyngeal and laryngeal cancer and 12.5 for esophageal cancer (Yokoyama et al. 1998; Seitz et al. 2004). In addition, these people have a 50-fold-higher risk than people without the ALDH2*2 allele of simultaneously developing a second tumor at another site of the esophagus. Finally, the risk of colon cancer is increased by a factor of 3.4 in people with an ALDH2*2 allele.
As is the case with people who are homozygous for the highly active ADH1C*1 allele, people who have one ALDH2*2 allele have elevated acetaldehyde levels in their saliva after a moderate dose of alcohol (Väkeväinen et al. 2000). In fact, acetaldehyde levels are nine times higher in the saliva than in the blood in these people, suggesting that it is reduced ALDH activity in the salivary glands rather than in the blood that leads to acetaldehyde accumulation in the saliva. ALDH-deficient heavy drinkers therefore represent an exceptional human "knock-out" model for long-term acetaldehyde exposure. 5 Whenever they drink, people in this group are exposed to extremely high acetaldehyde concentrations in their saliva, which is associated with a strikingly increased cancer risk. The harmful effects of salivary acetaldehyde are exacerbated further by the previously mentioned fact that acetaldehyde can be converted to cancer-causing crotonaldehyde in the presence of polyamines, which are elevated in tissue already injured by the local action of alcohol, such as the mucosa of the upper aerodigestive tract.
Role of Ethanol Metabolism by Gastrointestinal Bacteria in Alcohol-Related Carcinogenesis
Bacteria and other microorganisms (e.g., yeasts) throughout the digestive tract can metabolize alcohol. For example, microorganisms (e.g., bacteria and yeasts) normally found in the mouth oxidize alcohol to acetaldehyde in the saliva. In addition, fecal bacteria can metabolize alcohol in the colon. (Following alcohol ingestion, the alcohol concentration in the colon is comparable with that in the blood.) The capacity of oral and gastrointestinal microbes as well as mucosal enzymes to metabolize acetaldehyde, in contrast, is rather limited. As a result, acetaldehyde concentrations in the saliva and in the colon during and after alcohol consumption are 10 to 100 times higher than in the blood.
5 This statement applies almost exclusively to Asians; because the ALDH2*2 allele does not occur in Caucasians, this mechanism of acetaldehyde accumulation is irrelevant in this population subgroup.
Role of Oral Bacteria
Several lines of evidence support the assumption that oral bacteria play a role in salivary acetaldehyde production: • Salivary acetaldehyde concentrations can be reduced by 30 to 50 percent by rinsing the mouth with antiseptic mouthwash (which kills bacteria in the mouth) after alcohol consump tion ).
• Certain risk factors for the development of oral cancer, such as poor dental and oral hygiene as well as tooth loss, all of which are associated with high levels of bacteria in the mouth, also are associated with increased acetaldehyde concentrations in the saliva following alcohol consumption.
• Increasing alcohol consumption increases salivary acetaldehyde concentrations in a dose-dependent manner. Salivary acetaldehyde concentrations are significantly higher in alcoholic patients with head and neck cancer than in a control population.
Many drinkers also smoke, and smoking affects acetaldehyde generation in the saliva after alcohol consumption. For example, smokers have twice as much acetaldehyde in their saliva as nonsmokers if they consume the same amounts of alcohol. Smoking approximately 20 cigarettes daily increases in-vitro salivary acetaldehyde by about 50 percent following alcohol consumption.
Smoking influences salivary acetaldehyde levels through two mechanisms. First, it increases the capacity of oral yeasts and bacteria to produce acetaldehyde from ethanol. Second, cigarette smoke itself contains considerable amounts of acetaldehyde that dissolve in the saliva during smoking (Salaspuro and Salaspuro 2004). The significant increase in salivary acetaldehyde concentration during drinking and active smoking may explain why the two habits have a synergistic effect on the risk for upper digestive tract cancer.
Role of Fecal Bacteria
Acetaldehyde also can be produced by fecal bacteria. In fact, of all the tissues in the body, the mucosa in the colon contains the greatest amount of acetaldehyde per gram of tissue following alcohol ingestion (Seitz et al. 1990). Animal studies with normal rats and rats that have no bacteria in their intestines (i.e., germ-free animals) have clearly demonstrated that this acetaldehyde is produced by fecal bacteria. These studies found that acetaldehyde production and excessive cell growth are significantly reduced in germ-free animals compared with normal rats (Seitz et al. 1990).
Acetaldehyde has toxic effects on the colon mucosa, resulting in excessive cell growth, including growth at abnormal sites in the colonic mucosa, which is associated with an increased cancer risk. These effects were initially observed in experimental animals but recently have been confirmed in humans. Moreover, the alcohol-related excessive growth of the colonic mucosa is especially pronounced in older animals, possibly because the sensitivity of the colon mucosa to acetaldehyde increases with age. This may have practical implications, as age alone is a risk factor for colorectal cancer.
One of the mechanisms through which acetaldehyde may increase the risk of colon cancer may involve folic acid, a vitamin that is important for cell regeneration and which helps protect the DNA by promoting the production of a compound called S-adenosyl-methionine. However, only in the colon are acetaldehyde concentrations high enough to break down folic acid, thereby destroying its function. Therefore, this mechanism may explain why people who consume small amounts of folic acid and methionine and ingest more than 20 g alcohol per day have a seven-fold increased risk for a certain type of colon cancer compared with people with high folic acid and methionine intake and low alcohol consumption (Giovannucci et al. 1995).
Role of Ethanol Metabolism by CYP2E1 in Alcohol-Related Carcinogenesis
In addition to being oxidized by ADH, ethanol also can be metabolized by the microsomal ethanol oxidizing system (MEOS), whose key component is CYP2E1. This enzyme is found in the liver but also is present in the mucosa of the entire gastrointestinal tract. The MEOS normally accounts for only a small percentage of alcohol metabolism; after chronic alcohol consumption, however, the activity of CYP2E1 can be increased (i.e., induced) 10- to 20-fold. According to recent studies, this induction of CYP2E1 may occur at a daily dose of 40 g ethanol (i.e., approximately three drinks) and after 1 week of consumption (Oneta et al. 2002). Moreover, CYP2E1 activity returns to normal within a few days following withdrawal from alcohol.
Several alleles of the gene encoding CYP2E1 have been identified, and not every person exhibits the same degree of CYP2E1 induction. Thus, some people show substantial induction, whereas others show no induction at all (Oneta et al. 2002). The specific reasons for this difference are still unknown. Moreover, the studies conducted to date in a variety of populations on CYP2E1 polymorphism as a risk factor in alcohol-related carcinogenesis do not yet allow for final conclusions.
During alcohol metabolism by CYP2E1, highly reactive, oxygen-containing molecules known as reactive oxygen species (ROS) are generated that can damage proteins and DNA. Accordingly, induction of CYP2E1 activity is not only associated with increased acetaldehyde production but also with increased ROS production. For example, the CYP2E1 concentration in the liver is correlated with the generation of one type of ROS called hydroxyethyl radicals (Seitz and Stickel 2006).
One way through which ROS produced by CYP2E1 exert their harmful effects is by interacting with fat (i.e., lipid) molecules in the cell membrane in a process called lipid peroxidation. This process results in the generation of additional reactive molecules that are chemically related to acetaldehyde, especially malondialdehyde and 4-hydroxynonenal (4HNE). For example, 4HNE binds to DNA and generates adducts that can cause mutations (i.e., are mutagenic) and are carcinogenic (Seitz and Stickel 2006). The role of ROS in cancer development is underscored by the fact that in animal experiments, the administration of molecules such as vitamin E that can interact with and detoxify ROS (i.e., radical scavengers) inhibits chemically induced carcinogenesis in the esophagus (for a review, see Seitz and Stickel 2006). 4HNE also may contribute to cancer development by causing mutations in a tumor suppressor gene called p53. The protein encoded by this gene helps prevent tumor development by inducing a type of programmed cell death (i.e., apoptosis) in cells that are damaged and which could become cancer cells. When 4HNE binds to p53, the damaged cell becomes more resistant to apoptosis, which gives it a growth advantage because it is not removed adequately.
In addition, CYP2E1 also activates a variety of procarcinogens, compounds that in the body can be converted into cancer-causing substances. Some of these procarcinogens are present in tobacco smoke and the normal diet (e.g., compounds known as polycyclic hydrocarbons, hydrazines, aflatoxins, and nitrosamines). However, the interactions between ethanol metabolism and procarcinogen metabolism are complex and may depend, among other factors, on the degree of CYP2E1 induction, the chemical structure of the procarcinogen, and the presence or absence of ethanol in the body during procarcinogen metabolism. These issues are reviewed elsewhere (Seitz and Osswald 1992).
Summary
Alcohol metabolism is a major contributor to the increased risk of certain cancers that is associated with heavy alcohol consumption. A crucial factor in this process is acetaldehyde, the first metabolite generated during ethanol oxidation. Acetaldehyde is a carcinogen that can promote cancer development through multiple mechanisms, including interference with DNA replication, induction of DNA damage, and formation of DNA adducts. The relative contributions of these mechanisms, however, remain controversial.
Because of the potentially harmful effects of acetaldehyde, any condition that leads to elevated acetaldehyde levels in the body increases cancer risk. Accordingly, people who carry ADH alleles that encode ADH enzymes with high activity or ALDH alleles that encode ALDH enzymes with particularly low activity are at increased risk of developing alcohol-related cancer. Bacterial production of acetaldehyde, particularly in the mouth and colon, also contributes to elevated acetaldehyde levels after alcohol consumption and further exacerbates its detrimental effects.
Other pathways of alcohol metabolism, such as the one mediated by CYP2E1, also play a role in alcohol-related carcinogenesis, particularly after chronic heavy alcohol consumption, when CYP2E1 activity is induced. In this case, it is not only the acetaldehyde that causes the damage but also the ROS that are generated during the CYP2E1-mediated reaction. But as with ADH- and ALDH-mediated alcohol metabolism, not all people are equally susceptible to this pathway of alcohol-related cancer development because different CYP2E1 alleles result in different levels of CYP2E1 induction following chronic alcohol consumption. ■ | 2017-03-31T00:54:59.785Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "439a9e22aac33fb9503ff1e76086b0b3991028bc",
"oa_license": "CC0",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "439a9e22aac33fb9503ff1e76086b0b3991028bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210876956 | pes2o/s2orc | v3-fos-license | Fabrication and Characterization of Antibacterial Titanium Dioxide Nanorods Incorporating Gellan Gum Films
The aim of this work was to develop bionanocomposite films based on the biopolymer GG and TiO2-NRs for active food packaging. The TiO2-NRs were incorporated as an antimicrobial agent into GG via a solvent casting method. The films were transparent, with about 94% transmittance. SEM micrographs indicate that the TiO2-NRs were successfully incorporated and attached to the surface of the developed GG films. FTIR results revealed the interaction between the TiO2-NRs and the hydroxyl groups of the GG polymer. XRD results showed the crystalline peaks of the TiO2-NRs and the amorphous peaks of the GG+TiO2-NRs films. The thermal stability of the GG films was increased by the incorporation of the TiO2-NRs. The GG+TiO2-NRs films showed good antibacterial activity against Gram-positive (Staphylococcus aureus (S. aureus) and Streptococcus (Strep.)) and Gram-negative (Escherichia coli (E. coli) and Pseudomonas aeruginosa (P. aeruginosa)) bacteria and can potentially be used as antibacterial packaging films.
INTRODUCTION
One of the most favourable biopolymers currently used commercially and studied extensively is gellan gum (GG). GG is an anionic straight-chain biopolymer containing β-D-glucose, α-L-rhamnose and β-D-glucuronate residues 1 and is industrially useful due to its unique structure and physical properties. Previously, GG has been explored in wound healing studies 2,3 as well as a candidate material for tissue engineering applications 4, and it holds a vital role in pharmaceutics. Since 1992, this material has been approved for use in the food industry by the US FDA (United States Food and Drug Administration) and is classified under EU (European Union) regulation as E 415 5. GG polymers have been used in varied applications, including as an agar replacement, due to their excellent gelling properties as a highly purified polysaccharide. GG has high potential to be applied industrially as a high-viscosity biogum, a suspending agent, and a gelling agent 6. Beyond that, GG has been utilized in personal care, including cosmetics and toiletries products 7. It offers thermal stability, clarity and compatibility with nutrient additives 8; thus, it is stable at varied temperatures and can provide excellent stability for varied applications.
TiO2 nanostructures can be found in four dimensionalities, namely 0-D, 1-D, 2-D, and 3-D. They have created a huge impact on nanodevices, nanocomposite materials, nanoelectronics, etc. 9 due to their unique properties and large surface areas. Apart from that, because the high surface area provides sufficient adsorption sites for the molecules involved in a small space, TiO2 nanostructured materials have become a subject of interest in the research field 10. Nowadays, developments based on nanostructures are ranked highly worldwide in both the nanotechnology and nanobiotechnology disciplines due to their peculiar and fascinating properties. The utilization of TiO2 nanostructures has been explored widely in polymers, metals, and ceramics. Their applications are not limited to acting against human pathogens such as bacteria; they are also used to fight cancer, and they have become a focal subject in research and applications due to their potent bactericidal activity against resistant strains of pathogens. TiO2 is one of the inorganic materials that has received special interest for application in inorganic and organic nanocomposites 11. Among the versatile TiO2 nanostructured family, TiO2-NRs have become a subject of interest; they are typically 1-D and consist of solid elongated structures. Previously, TiO2-NRs have been used in the development of glucose biosensors 12, to facilitate the direct electron transfer (DET) of glucose oxidase (GOx) 13,14, in immunotherapy for cancer therapy applications 15, and also as photoelectrochemical immunosensors 16. In addition, due to their high extraction proficiency, enhanced mechanical strength, and outstanding stability, TiO2-NRs display excellent extraction performance as solid-phase microextraction (SPME) coatings 17. Above all, TiO2 is a popular substance in varied applications owing to its properties: it is chemically and thermally stable, environmentally friendly, biocompatible, cheap, and corrosion resistant.
Therefore, in this study, GG and TiO2 nanostructures became interesting candidate materials to be explored for active food packaging in view of environmental concerns. The development of biomaterials for active food packaging is important in reducing the environmental pollution that results from the accumulation of synthetic packaging. Thus, GG+TiO2-NRs films were prepared and characterized using UV-Vis, SEM, FTIR, XRD, and TGA, and their antibacterial performance was also evaluated.
EXPERIMENTAL
Prior to the fabrication of the films, TiO2 nanorods were prepared using a hydrothermal method 18. To prepare the GG film incorporating 1 wt% TiO2-NRs (GG+TiO2-NRs), 0.01 gram of synthesized TiO2-NRs was added into 100 ml of GG solution. The GG solution was prepared earlier by dissolving 1 gram of GG in 100 ml distilled water. The solution was stirred continuously for 2 hours at 70 °C. Then the homogeneous solution was poured into a casting plate and dried at 50 °C for 24 h. The films were preconditioned for ≥48 h in a humidity chamber at controlled conditions (relative humidity 50% at room temperature). The pristine GG films were fabricated using a similar technique, excluding the addition of TiO2-NRs.
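A minimal sketch of the formulation arithmetic implied by the quantities above; interpreting the "1 wt%" loading as TiO2 mass relative to GG mass is our reading of the recipe, not an explicit statement in the text.

```python
# Formulation arithmetic for the GG+TiO2-NRs casting solution (values from the text).
gg_mass_g = 1.0          # gellan gum dissolved in 100 mL distilled water
water_volume_ml = 100.0
tio2_mass_g = 0.01       # TiO2 nanorods added to the GG solution

gg_solution_concentration = gg_mass_g / water_volume_ml * 100      # ~1 % (w/v) GG solution
loading_vs_gg = tio2_mass_g / gg_mass_g * 100                      # TiO2 relative to GG solids
loading_vs_total_solids = tio2_mass_g / (tio2_mass_g + gg_mass_g) * 100

print(f"GG solution concentration: {gg_solution_concentration:.1f} % (w/v)")
print(f"TiO2-NR loading: {loading_vs_gg:.2f} wt% of GG "
      f"(~{loading_vs_total_solids:.2f} wt% of total solids in the dried film)")
```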
Ultraviolet-visible (UV-Vis) transmission was measured using a Varian Cary 50 spectrophotometer over a wavelength range of 200-800 nm. A Tecnai Biotwin FEI transmission electron microscope (TEM) was used to observe the appearance of the TiO2 nanorods. The morphological images of the samples were observed using a JEOL JSM 6360 LA electron microscope. Fourier transform infrared (FTIR) spectra were recorded using a Perkin Elmer Spectrum 100 FT-IR spectrophotometer with a PIKE Miracle ATR accessory from 4000 to 600 cm-1. XRD analysis was recorded using a Rigaku MiniFlex (II) X-ray diffractometer from 10° to 80° 2θ. Thermogravimetric analysis was performed using a Perkin-Elmer Pyris 6 TGA with a heating rate of 10 °C/min from room temperature to 1000 °C under N2 flow at 50 ml/min.
The antibacterial activity of the nanostructured TiO2 materials was established by the inhibition ring method under UV light irradiation using four types of bacteria: S. aureus, Strep., E. coli, and P. aeruginosa 19. At 0.5 McFarland standard, inoculants of each bacterium were evenly spread on sterile petri plates containing MH agar. Antibacterial properties were evaluated by measuring the diameter of the transparent inhibition circle surrounding the material after 24 hours of bacterial growth.
RESULTS AND DISCUSSION
Fig. 1a shows photo images of the gellan gum (GG) and gellan gum with added titanium dioxide nanorods (GG+TiO2-NRs) films. The diameter and thickness of both films were almost similar, at ~9 cm and ~60 μm, respectively. The optical transparency of the films was evaluated by UV-visible analysis. The GG and GG+TiO2-NRs films were transparent, with transmittance values of 98% and 94%, respectively (Fig. 1b). Fig. 2a displays the TEM images of the TiO2-NRs samples. They exhibit elongated nanostructures (rod-like particles), suggesting that the nanorods were successfully obtained. The diameter and length of the rod-like particles were found to be 25-50 nm and 2.25-3.25 µm, respectively. Fig. 2b shows the smooth and homogeneous surface of the GG films. In contrast, a less homogeneous surface was observed for the GG+TiO2-NRs films, attributed to the agglomeration of TiO2-NRs in and on the GG film; this contributed to the rough film surfaces that may be beneficial for antibacterial activity.
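Two quick back-of-envelope quantities can be derived from the measurements reported above: the apparent absorbance corresponding to the quoted transmittance values (assuming Beer-Lambert behaviour and neglecting reflection and scattering losses) and the aspect ratio of the nanorods from the TEM dimensions. This is an illustrative sketch, not an analysis performed in the paper.

```python
import math

# Apparent absorbance from the reported film transmittance values.
for film, transmittance in {"GG": 0.98, "GG+TiO2-NRs": 0.94}.items():
    absorbance = -math.log10(transmittance)   # A = -log10(T), losses other than absorption ignored
    print(f"{film}: T = {transmittance:.0%}, apparent absorbance ~ {absorbance:.3f}")

# Aspect ratio of the TiO2 nanorods from the TEM dimensions (25-50 nm diameter,
# 2.25-3.25 um length); the range simply combines the reported extremes.
lengths_nm = (2250, 3250)
diameters_nm = (25, 50)
print(f"Nanorod aspect ratio roughly {lengths_nm[0] / diameters_nm[1]:.0f}"
      f" to {lengths_nm[1] / diameters_nm[0]:.0f}")
```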
The FTIR spectra of the TiO2-NRs, GG, and GG+TiO2-NRs films are shown in Fig. 3. A broad peak below 1000 cm-1 was detected in the TiO2-NRs and GG+TiO2-NRs spectra, attributed to the metal-oxygen (Ti-O) stretching mode. Apart from that, broad peaks were displayed by the GG spectrum, assigned to OH stretching vibration modes at 3408 cm-1, the carbonyl group at 1666 cm-1 and C-C aromatic stretching at 1424 cm-1 20. The FTIR spectra of the GG films also show prominent peaks at 2936 cm-1 and 1043 cm-1 due to the C-H stretching modes and the glycosidic bonds linking to the galacturonic sugar unit in the GG chain 21. Another peak appeared at 692 cm-1, attributed to the C-H bending mode. Similar peaks were observed in the GG+TiO2-NRs films as in GG, with slight differences in intensity and peak positions due to the interactions between the TiO2 nanostructures and the GG biopolymer chain. Fig. 4 shows the XRD diffractograms of the TiO2-NRs, GG, and GG+TiO2-NRs films. The diffraction peaks of the TiO2-NRs correspond to the (004), (020), (015), (121), (024), (116), (220) and (125) planes of anatase TiO2 22. The sharp peaks of anatase TiO2 suggest that crystalline TiO2-NRs were produced in this study. In contrast, broad peaks assigned to amorphous behavior were found at 18° and 21° for both the GG and GG+TiO2-NRs films. These peaks correspond to the characteristics of GG chains aligned through intermolecular interactions. Additionally, for the GG+TiO2-NRs films, one peak with low intensity appeared at 25°, suggesting that TiO2-NRs were successfully incorporated into the GG films. This crystalline peak appeared at low intensity due to the small amount of TiO2 nanostructures incorporated into the GG films; it would appear more clearly at higher concentrations of TiO2 nanostructures.
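As a consistency check on the assignment of the ~25° peak to anatase TiO2, Bragg's law can be used to estimate the corresponding interplanar spacing. The Cu Kα wavelength is assumed here (typical for a benchtop Rigaku MiniFlex), since the radiation source is not stated in the text.

```python
import math

# Bragg's-law estimate of the d-spacing for the ~25 deg 2-theta peak in the
# GG+TiO2-NRs pattern, assuming Cu K-alpha radiation (lambda ~ 1.5406 Angstrom).
wavelength_angstrom = 1.5406
two_theta_deg = 25.0

theta_rad = math.radians(two_theta_deg / 2)
d_spacing = wavelength_angstrom / (2 * math.sin(theta_rad))   # n*lambda = 2*d*sin(theta), n = 1
print(f"d ~ {d_spacing:.2f} Angstrom at 2-theta = {two_theta_deg} deg")
# ~3.56 Angstrom, close to the ~3.52 Angstrom (101) interplanar spacing of anatase,
# which supports attributing this peak to the incorporated TiO2-NRs.
```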
The broad decomposition range observed for the nanocomposite might be due to hydrogen-bond formation as well as coordinate interactions between TiO2 and GG. The addition of TiO2-NRs to GG raised the temperature at which the degradation step is completed: the final degradation temperature of the GG+TiO2-NRs film was 390 °C, indicating higher heat resistance. The improved thermal stability upon adding TiO2-NRs reflects a certain degree of interaction between GG and TiO2-NRs. Fig. 6 shows that no antibacterial activity was found after 24 h of incubation of the pure GG film against any of the bacteria. However, inhibition zones of 10±0.35 mm, 12±0.06 mm, 11±0.06 mm, and 10±0.38 mm were recorded for the GG+TiO2-NRs film against S. aureus, Strep., E. coli, and P. aeruginosa, respectively, which is comparable with the penicillin control sample (Table 1). The good antibacterial activity of the GG+TiO2-NRs film is due to the presence of TiO2 nanostructures, which promote the generation of reactive oxygen species (ROS), the release of metal ions, malfunction of the membrane system, and internalization of nanoparticles [23]. Under UV light, the TiO2-NRs incorporated into the GG film produce ROS (e.g., HOO•, HO•, and •O2−) from O2 and H2O. When the TiO2-NRs are irradiated by UV light, an electron is excited into the TiO2 conduction band, leaving a positive hole in the valence band. The photo-generated electron then reacts with molecular oxygen to produce superoxide radical anions (•O2−), while the hole reacts with water to produce hydroxyl (•OH) radicals. These reactive radicals kill the bacteria by decomposing the organic compounds in the bacterial cells [24][25][26][27][28]. These results suggest that TiO2-NRs are a promising antibacterial agent, as the GG+TiO2-NRs film exhibited excellent antimicrobial properties.
CONCLUSION
A GG+TiO2-NRs film was successfully developed by the solvent casting technique. The transparency of the film decreased from 98% to 94% when TiO2-NRs were added to the GG film. FTIR, XRD, and TGA results demonstrated interaction of the TiO2 nanorods with the GG polymer chains. SEM micrographs show the presence of TiO2 nanorods on the film surface, which enhanced the antibacterial activity.
Fig. 2.
Fig. 2. (a) TEM micrographs of TiO2-NRs; (b) SEM micrographs of GG and GG+TiO2-NRs films.
Fig. 4.
Fig. 4. XRD diffractograms of TiO2-NRs, GG, and GG+TiO2-NRs films. Fig. 5 shows the thermograms of the GG and GG+TiO2-NRs films. In the first stage, a weight loss of 18% is observed, due to removal of moisture from the GG and GG+TiO2-NRs surfaces. The weight losses of 58% (GG) and 54% (GG+TiO2-NRs) in the second stage are assigned to depolymerisation and degradation of the GG biopolymer chains. A broad decomposition range was observed, suggesting a homogeneous distribution of the TiO2-NRs in the nanocomposite film.
Table 1 .
Inhibition zone of films against Staphylococcus aureus, Streptococcus, Escherichia coli, and Pseudomonas aeruginosa (Data presented are mean±SD, n=3) | 2020-01-02T21:47:37.100Z | 2019-12-30T00:00:00.000 | {
"year": 2019,
"sha1": "7b8eb8d079d4888aa9bfc72678be15bbd5e0ab8a",
"oa_license": "CCBY",
"oa_url": "https://microbiologyjournal.org/download/29563/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "83f2ed2cc5664b883eaae0985ca1a9f5294da6dc",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
246259936 | pes2o/s2orc | v3-fos-license | Effects of geomorphological characteristics on urban expansion of Jeddah city-Western Saudi Arabia: a GIS and Remote Sensing Data-Based Study (1965–2020)
This study aims to highlight the natural risks induced by the spatiotemporal interaction between geomorphological features and the urban growth of Jeddah, the largest city in Makkah Province, Saudi Arabia. The methodology is based on remote sensing and Geographic Information System (GIS) techniques. A spatial analysis of the GIS layers extracted from multisource remotely sensed data from 1965 to 2020 was performed. The results show that the urban area has increased more than 30-fold, from around 36 sq. km in 1965 to more than 1,130 sq. km in 2020. Consequently, the city has expanded over all morphological units, resulting in multihazard susceptibility and exposure. In order to identify and prioritize the various risks associated with geomorphic processes, a map has been designed that classifies the natural risk areas according to their intensity and their driving factors. Thus, geomorphological constraints have a significant impact on urban planning.
Introduction
In the wake of rapid urbanization throughout the world since the early 1970s, Saudi cities have experienced high rates of urbanization and accelerating urban expansion, mostly in the province of Makkah where the study area is located [1][2][3][4]. Cities are considered to be indicators of a country's economic growth and development [1,[4][5][6][7]. The level of urbanization in Saudi Arabia rose from 21% to 83% between 1950 and 2015 and is expected to reach 90% in 2050 [7]. This represents a higher expected rate than in most urbanized regions of the world [1]. However, the utilization of urban land in the form of sprawl is a common problem for Saudi cities [7]. Urban sprawl refers to an uncontrolled, unplanned, and excessive development of urban areas on the fringes of urban agglomerations [8][9][10][11][12]. Urban expansion must interact with the geomorphological features and geomorphological processes that are the basis of the natural system's balance and a major consideration for urbanizable zone selection [13][14][15][16]. The selection of sites for urban settlement, as well as the determination of areas for future expansion and of the natural constraints on its growth, are some of the fundamental concerns of management and planning for the urban environment [16]. The initial urban site location is chosen during the founding of a city; it is designated according to one or more priority objectives [17,18]. It is extended by the surrounding spaces permanently included in these areas and potentially faces multiple risks. Geomorphic features and processes have played a very decisive role in the establishment and development of settlements [5,19,20]. Geomorphological units and processes can constitute a constraint on urban growth, determine and control the direction of urban expansion, or exacerbate the natural hazards of built-up areas [21]. However, the location and planning of major cities are generally based on socioeconomic and strategic factors [22][23][24].
Applied research in geomorphology is considered to be one of the most important areas of study for many development projects because it is used to assess natural potential, identify the determinants of and geomorphological barriers to development, and develop appropriate methods for sound urban expansion. Urban geomorphology examines geomorphic constraints on urban development [25], the suitability of different landforms for specific urban uses, and the effects of urbanization on natural landforms and urban morphology [14,21,26]. New urban areas are often based on existing infrastructure and do not always take into account various natural events, geomorphic hazards, geomorphology, or prior scientific studies to assess their suitability as urbanizable land [10,11]. Thus, geomorphological characteristics should be used as a baseline for the planning and management stages of cities' sustainable urban development [16,20]. Coastal cities are a natural and cultural heritage of environmental and strategic value, offering space for economic projects and vital activities while also being exposed to several risks [27][28][29][30][31][32][33].
Remote sensing (RS) has become one of the most valuable tools for the diachronic analysis of urban spatial expansion. It provides spatiotemporal data and can be combined with GIS into integrated geospatial data to map and monitor the dynamics of urban growth [9,10]. RS coupled with GIS can be used to efficiently assess the impacts of geomorphic features on urban sprawl [9]. This study shows the importance of CORONA satellite images for studies of spatial and temporal change, as these images are an important source of historical RS data with very high resolution and wide spatial coverage [33]. Population growth and road construction have been identified as the major driving forces of urban expansion [9,18,34]. In Jeddah, urban growth was initially characterized by its randomness. In recent years, there has been a tendency toward tighter urban planning in response to increasing population needs driven by rapid demographic growth, as the city's population increased from one million in 1970 to more than three million in 2010 [35][36][37][38][39][40][41]. Several geomorphological units, such as the littoral plain, alluvial plain, mountain hills, isolated hills, alluvial fans, sabkhas, wadis and volcanic fields, have been affected by the rapid urban expansion, which has in turn affected the surface and its fragility, resulting in the formation and expansion of some hazardous areas [27,[42][43][44][45]. Among the most important risks that threaten the study area are flooding, soil swelling, landslides and the cracking of buildings and roads.
This research aims to:
• Investigate the geomorphological setting of the study area to understand geomorphic constraints on future urban expansion.
• Highlight the influence of geomorphological features and the extent of their control on the expansion and urban planning of Jeddah.
• Determine the type of expansion based on the geomorphological units of the surface or on roads from the sea to other areas.
• Highlight the current geomorphological hazards and risks and the role of the nature of landforms in the future of urban planning.
• Identify and map the various geomorphic units and their components during the different stages of urban expansion.
• Produce a natural hazard classification map according to geomorphological determinants and processes, and consider the possibility of generalizing the results of the research to other areas that have the same variables as the studied sector, thereby protecting the environment and serving the community.
The various studies conducted on the city of Jeddah in the field of urban expansion have mainly focused on the vectors of urban development [36], urban planning and management, spatial trends and directions of urban sprawl [6,35,39,41] or the environmental consequences of urbanization [27,46]. Studies concerning risks have mostly highlighted those related to sabkhas [42,44] and floods [45,47]. To the best of the authors' knowledge, no consistent study has been conducted on the urban geomorphology or interaction between geomorphological units and urban expansion in Jeddah. This research addresses the theme of urban geomorphology for Jeddah. It investigates the multitemporal evolution of the agglomeration of Jeddah and the effects of geomorphological units on the spatial expansion of the city, the direction of the sprawl of the urban fabric, as well as natural risks and urban planning problems induced by geomorphological processes.
Study area
The city of Jeddah is located on the Tihamah plain (which has a maximum width of 40 km) at the foot of the Hejaz mountains, which reach heights of hundreds of metres and consist of granite formations dating back to the Precambrian period. The research area is situated in the central-western part of Saudi Arabia on the Red Sea coast and is represented by the city of Jeddah and its surroundings, as it is the main urban centre of the western region (Figure 1). The importance of the city is evident for many historical, demographic, economic and urban reasons. For hundreds of years, Jeddah has been regarded as the main entrance and great gate for pilgrims to Makkah Al-Mukarramah. It is also the second largest city in the Kingdom of Saudi Arabia in terms of population after Riyadh. The city has a population of about 4.6 million (13.2% of the total population of Saudi Arabia) according to 2020 estimates [7]. Economically, the Islamic port of Jeddah is the oldest and largest port in Saudi Arabia in terms of the volume of commercial goods and passenger traffic. The port has contributed to the transformation of Jeddah into one of the most important commercial cities in Saudi Arabia. In terms of the urban aspect, the city extends more than 100 km along the Red Sea coast with an average width of about 25 km, making it the largest urban agglomeration after the capital, Riyadh.
Geological and geomorphological characteristics
The topography of the study area consists of a group of remote hills separated by flat terrain and penetrated by a temporary flow that is mostly directed toward the Red Sea. The geomorphological phenomena have mainly resulted from states of stability and instability in the regional geology, coinciding with the rise and fall of the Red Sea level during the Pleistocene period [48]. On the basis of the texture of surface features, three structural units can be distinguished from the oldest to the most recent, respectively: the Precambrian rock unit, the lava flow unit and the recent sediment unit ( Figure 2). The coastal region is surmounted by a transition zone between the Tihamah plain and Hejaz mountains. The Hejaz mountains are considered to be a source of volcanic lava (harrat), where the conical tongues, fracture lines and ancient valley streams coincide. The paths of the hydrographic network have been identified, and most of the lines are fractures, ruptures and fissures that take directions parallel or perpendicular to the stratigraphy of rocks and the Red Sea ( Figure 2). They generally represent the appearance and exposure of layers and places of difference in the hardness of the rocks [49][50][51][52].
The presence of volcanic tongues with Quaternary sediments has greatly contributed to the settlement of the population and the extension of transport lines of various ranks, mainly linked to alluvial fans. This situation requires determination of the degree of risk according to proximity to areas likely to be at risk of flooding and disasters. Hydrologically, the region contains 24 watersheds, 16 of which face west towards the city of Jeddah, while the other eight face southwest towards the great Wadi Fatima [47] (Figure 2). The surface of the study area consists of the following main units: mountain hills, isolated hills, alluvial fans, sabkhas, harrats (lava deposits), streams, the alluvial plain and the coastal plain (see Figure 4).
Material and methods
RS techniques and GIS are among the most important tools used in the continuous monitoring of the spatial distribution of terrestrial phenomena within a broad framework [53][54][55][56]. High spatial resolution satellite images are essential documents for studying the historical evolution of rapidly changing phenomena, providing enormous amounts of information from consecutive periods [9,10,53]. This study highlights the importance of multisource and multitemporal satellite imagery with a spatial resolution of up to two metres from 1965 and 1972, together with further images from 1986, 2009 and 2020. The methodology for this paper includes an analysis of the geomorphological setting and a historical study of urban growth over more than half a century, based on data from multiple sources between 1965 and 2020. This is supplemented by cartographic documents, a Digital Elevation Model (DEM) and field work. Prior to a series of digital image processing steps, the data were brought into a unified map projection system. It therefore seems important to determine, through the use of GIS, the initial morphology of the surface forms before urban expansion and the extent to which these forms govern the orientation of urban planning. The flowchart of the methodology is illustrated in Figure 3.
Data used
The multitemporal and multisource data used in this research include satellite images, aerial photography, a DEM and a digital geologic map. The characteristics of the data are presented below:
• CORONA satellite photographs acquired in 1965 and 1972, obtained from https://earthexplorer.usgs.gov. The spatial resolution of the photographs used in this study varies between 2 and 3 metres, allowing us to observe the relationship between the morphological and urban characteristics of Jeddah in the 1960s and 1970s, when the urban space was concentrated in the coastal and alluvial plains.
• SPOT-CIB satellite images with a spatial resolution of 10 m. This is a set of free panchromatic images that have been geometrically and vertically corrected. The data were obtained between 1986 and 1993 by the French space agency, the National Centre for Space Studies (CNES).
The technical specifications of these various data sets are presented in Table 1. The data were obtained from the USGS website https://earthexplorer.usgs.gov, with the exception of the colour digital aerial photography of 2009, which was downloaded from https://geoportal.sa. Based on these different data sources, it was possible to monitor the effects of landforms on urban expansion in 1965, 1972, 1986, 2009 and 2020. Note that the choice of these uneven time periods is linked to the availability of high-resolution remotely sensed data for the study area.
Image processing and geospatial data extraction
Multisource and multitemporal geospatial data were geometrically corrected using control points on an orthorectified reference image, from which all other images were rectified with the ERDAS IMAGINE software, with the aim of assessing, mapping and analyzing the interplay between the geomorphic units and urban growth from 1965 to 2020. The CORONA satellite photography acquired in 1965 and 1972, which was captured as multiple strips, was first mosaicked. Second, the images were geometrically corrected with carefully selected ground control points (GCPs), using the georeferenced Sentinel-2 image projected in UTM (WGS 1984 datum, Zone 37 N) as the reference image.
After rectification, it was found that the root mean square error (RMSE) did not exceed one pixel, revealing a high geometric match between the different images. The very high spatial resolution of the images minimized errors and improved the precision of the extracted data. Note that since most of the data were aerial photographs, it was not necessary to carry out an atmospheric correction, which was only applicable to the Sentinel-2 image from 2020. In this context, we chose visual interpretation for the identification of geomorphic features and urban fabric, all the more so as automatic classification is not always very precise for the delimitation and identification of the different features [53]. All rectified images were exported to ArcGIS to digitize the various features. A GIS geodatabase was created with the different layers included. The high resolution of the CORONA satellite data allowed accurate delineation of urbanized areas and improved the identification and mapping of geomorphological features in the study area by visual interpretation between 1965 and 1972. GIS (polygon) layers of the initial morphology and a geomorphological map of the urban environment that characterized the study area before the rapid urban expansion were produced. The other images from different dates were used to generate GIS layers illustrating the landforms created by urbanization, the urban morphology and the impact of urban activities on geomorphic processes at different periods. The generated layers were compiled and presented in a GIS database describing the evolution of the geomorphic units during the different periods. Spatial analysis techniques with various GIS geoprocessing tools were utilized in order to assess the effects of geomorphological features and their control on the urban expansion of Jeddah. The urban features located within each geomorphic feature were highlighted. This provided a visual and historical perspective of the interplay between the urban expansion and the geomorphological features or processes. Finally, the spatial extent of the urban settlement located within each morphologic unit was measured. In addition, the geological maps and field investigations permitted the mapping of hazard and risk zones.
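For readers who want to reproduce this kind of accuracy check outside ERDAS IMAGINE, the short Python sketch below fits a first-order (affine) transform to a set of ground control points and reports the RMSE of the residuals in pixels. It is a minimal illustration only: the GCP coordinates are hypothetical, and the affine model is our assumption, since the transform order used in the actual rectification is not stated.

```python
import numpy as np

def affine_rmse(gcps_px, gcps_map, pixel_size):
    """Fit an affine transform from image pixels to map coordinates and
    return the RMSE of the GCP residuals expressed in pixels.

    gcps_px    : (n, 2) array of (col, row) image coordinates
    gcps_map   : (n, 2) array of (easting, northing) reference coordinates
    pixel_size : ground sampling distance of the image in metres
    """
    px = np.asarray(gcps_px, dtype=float)
    mp = np.asarray(gcps_map, dtype=float)
    # Design matrix [col, row, 1] for the model x = a*col + b*row + c (same for y)
    A = np.column_stack([px, np.ones(len(px))])
    coeffs, *_ = np.linalg.lstsq(A, mp, rcond=None)   # two least-squares fits at once
    residuals = A @ coeffs - mp                       # residuals in metres
    rmse_m = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return rmse_m / pixel_size                        # RMSE in pixels

# Hypothetical GCPs: 4 image points and their UTM Zone 37N coordinates
px = [(120, 80), (900, 95), (880, 700), (150, 720)]
mp = [(564210.0, 2385120.0), (566550.0, 2385080.0),
      (566500.0, 2383270.0), (564300.0, 2383200.0)]
print(f"GCP RMSE: {affine_rmse(px, mp, pixel_size=3.0):.2f} pixels")
```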
Initial morphology of the study area before urban expansion
The initial geomorphic units of the study area were identified and mapped from the high-resolution CORONA image of 1965. Figure 4 illustrates the mapping of the geomorphological characteristics of the study area in 1965, before the urban expansion of the last few decades. This map includes the major geomorphological units and provides baseline information for urban management and planning. It is the reference for the urban geomorphology configuration and can serve to define suitable areas for future urban expansion planning and development. The major geomorphic units of the study area are mainly mountainous areas consisting of mountain hills and isolated hills, alluvial fans, harrats (volcanic fields), wadis (streams), sabkhas (salt-crusted depressions), and alluvial and littoral plains. The mountainous areas located in the eastern part of the study area are one of the main factors that influence flood development [23,45]. The altitude in these areas ranges between 150 and 800 m. Many geomorphic hazards and processes are associated with mountainous areas, which constitute a significant constraint on urban expansion and planning.
The distribution of the various geomorphic units is illustrated in Figure 4. The mountains occupy an area of 573 sq. km (24% of the study area), while the harrats and alluvial fans cover 94 sq. km (4%) and 46 sq. km (2%), respectively. The alluvial plain represents the largest part of the study area, covering 897 sq. km (37% of the study area). An area of 435 sq. km (18%) is occupied by valleys and the sedimentary plains of wadis, while the remainder of the study area is covered by littoral or coastal plains (330 sq. km, 14%) or urban areas (36.2 sq. km, only 1.5% of the study area). The urban areas are located mainly on the alluvial plain (28.2 sq. km) and the coastal plain (8 sq. km). The sabkhas account for approximately 26% of the coastal plain.
Note that other relevant terrain-related information, such as lithology and geological structures, was derived from the digital geologic map. This initial morphological framework of the study area indicates that geomorphological features are fundamental determinants of the city's expansion and development. The increasing demand for urban space will exacerbate the geomorphic hazards and processes associated with the various geomorphological units.
Stages of the urban expansion of Jeddah during 1965-2020
Spatial monitoring of the extent of built-up areas in Jeddah through the use of satellite RS data highlighted a rapid urban expansion of the city during the period 1965-2020. The city underwent unprecedented changes in its urban growth due to the increase in population and developments in the economy and service sector, which affected the Red Sea coastal strip adjacent to the city through urban expansion and various land uses, both governmental and private. The map presented in Figure 5 was produced based on visually interpreted high spatial resolution images from different sources and years to allow for comparison and evaluation of urban growth patterns in Jeddah since 1965. The GIS baseline layer of the urban fabric was generated from the CORONA satellite photography of 1965, when the city of Jeddah was small and compact following the demolition of the wall that surrounded it in 1947. The urban expansion over the years 1965, 1972, 1986, 2009 and 2020 is shown in Table 2 and Figure 5. The total urban area covered 36.2 sq. km in 1965, which means that the Jeddah of that time occupied only 3.2% of the city's current total area. Urban land growth then accelerated and the area more than doubled in the following seven years; hence, Jeddah came to cover 77.1 sq. km (6.8% of the current area), growing at a rate of 5.9 sq. km per year. The highest rate of urban growth was recorded during the years 1972-1986. This urban explosion corresponds to the oil boom period [33,36,39]; the total built-up area was multiplied by 7.8 and came to cover 605.6 sq. km, with an average urban growth rate of 37.7 sq. km per year. This also means that every year for 14 years, the city of Jeddah gained a piece of urban area greater than the entire urban area it had in 1965, with 53.5% of the current urban patterns built in this period. Jeddah witnessed an economic opening, public and private investments and oil revenues in the public sector. The private sector also benefitted from government facilities and bank loans, which assisted in increasing the size of the commercial and industrial sectors of the city. In the next period, 1986-2009 (the longest period, at 23 years), a deceleration of urban growth was recorded; however, the built-up area still increased, reaching 841.8 sq. km, as the city grew by 236.2 sq. km at an average rate of 10.2 sq. km per year. This slowdown in urban expansion may have been due to the large increase in the previous period and to the advantage that government agencies had in acquiring land without using it; land acquisition was instead a means of proving ownership and benefiting from the real estate circulation of land during this period [33]. At present, the city has grown considerably and reached an area of 1,130.9 sq. km, with an annual increase of 26.2 sq. km between 2009 and 2020. Most of this increase has occurred in the northern and eastern directions, for residential use or for tourist complexes and entertainment projects in northern Jeddah. It is worth noting that urban growth is taking place at the expense of the sea, which has been backfilled over an area of 32 sq. km during the monitoring period for various land use types, namely residential, commercial, investment, governmental and public activities [33].
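The per-period growth rates quoted above follow directly from the built-up areas reported for each observation year. The short Python sketch below recomputes the gains, annual rates and multiples of the 1965 extent from those figures (taken from the text and Table 2), as a quick arithmetic check.

```python
# Built-up areas (sq. km) reported for each observation year (Table 2)
areas = {1965: 36.2, 1972: 77.1, 1986: 605.6, 2009: 841.8, 2020: 1130.9}

years = sorted(areas)
print(f"{'Period':<12}{'Gain (sq.km)':>14}{'Rate (sq.km/yr)':>18}{'x initial':>11}")
for y0, y1 in zip(years, years[1:]):
    gain = areas[y1] - areas[y0]
    rate = gain / (y1 - y0)                     # average annual growth in the period
    print(f"{y0}-{y1:<7}{gain:>14.1f}{rate:>18.1f}{areas[y1] / areas[1965]:>11.1f}")
```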
Impact of urban growth patterns on initial geomorphology
The ArcGIS Spatial Analyst extension was used to perform geospatial analysis on the GIS layers generated from the multitemporal images, in order to monitor and evaluate the geomorphic features converted to built-up areas. The Intersect tool was used to identify the geomorphic features located within the urban extent for each period and to calculate the areas of the landforms that fell within the urban growth during the 55 years observed.
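The same overlay can be reproduced with open-source tools. The sketch below uses GeoPandas to intersect a geomorphic-unit layer with an urban-extent layer and to sum the built-up area per unit; the file names and the 'unit' attribute are hypothetical placeholders for the study's geodatabase layers, and the sketch mirrors (rather than reproduces) the ArcGIS Intersect workflow described above.

```python
import geopandas as gpd

# Hypothetical layer names standing in for the study's geodatabase layers
geomorph = gpd.read_file("geomorphic_units_1965.shp")   # polygons with a 'unit' attribute
urban = gpd.read_file("urban_extent_2020.shp")          # built-up polygons for one period

# Both layers must share a projected CRS so areas come out in square metres
urban = urban.to_crs(geomorph.crs)

# Equivalent of the ArcGIS Intersect: keep only the parts of each geomorphic
# unit that fall inside the urban extent
overlap = gpd.overlay(geomorph, urban, how="intersection")

# Built-up area per geomorphic unit, converted to sq. km
overlap["area_km2"] = overlap.geometry.area / 1e6
print(overlap.groupby("unit")["area_km2"].sum().sort_values(ascending=False))
```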
The results of urban growth at the expense of the initial geomorphology are presented in Figures 6 and 7 and Table 3. The total area affected by the urban expansion over the years 1965, 1972, 1986, 2009 and 2020 exceeds 1,000 sq. km. The urban patterns occupied 36.2 sq. km in 1965, equivalent to 1.5% of the study area, at the expense of the Tihamah alluvial plain (28.2 sq. km) and the coastal plain (8 sq. km). During the second period (1965-1972), which coincided with the beginning of the first oil boom, the area of the original morphology affected by the urban extent doubled to 77 sq. km (3.2% of the study area) after only seven years. The city began to expand into other morphological units in addition to the alluvial and littoral plains (58 and 15.6 sq. km, respectively), particularly the wadis (2.45 sq. km), the sea and the alluvial fans (0.7 and 0.24 sq. km, respectively). The largest area and percentage of geomorphic features impacted by the urban growth were recorded between 1972 and 1986, with more than a quarter of the study area affected. The city extended in all directions and into all morphological landforms, as follows: alluvial plain (346.7 sq. km), littoral plain (192.3 sq. km), wadis (39.8 sq. km), sea and Sharm (a small arm of the sea; 14.8 sq. km), mountain hills (4.5 sq. km), isolated hills (3.5 sq. km), alluvial fans (3.15 sq. km) and harrats (0.85 sq. km). During the longest period, 1986-2009, urban expansion was relatively limited compared to the previous period. The built-up area continued to expand on different sides, while urban fabric was built on the heights and in the valleys, and parts of the isolated hills and mountain hills were removed. The built-up area gained at the expense of the morphological units by the period 2009-2020 represents the cumulation of all the years observed. The urban patterns occupied about 46% of the original geomorphological area, distributed as follows: alluvial plain (23.4%), coastal plain (10.3%), wadis (5.7%), sea and Sharm (1.4%), mountain hills (3.1%), isolated hills (1.9%), alluvial fans (0.9%) and harrats (0.2%). This analysis found that the urban areas first occupied low-lying areas (such as the coastal and alluvial plains) but gradually reached the heights. Furthermore, it was found that the construction of roads has reduced the geomorphological constraints.
Urbanization in mountainous areas and valleys increases the risks related to geomorphology. It should be noted that the granitic rocky hills, basaltic lava blankets and valleys that played a role in preventing the eastward urban expansion of Jeddah in the early periods of the monitored years appear to have become less influential, particularly following the establishment of roads to facilitate the movement of transport and provide service facilities, as can be seen in Figure 8. Among the manifestations of urban sprawl, we can note unplanned urban settlement on alluvial fans. Through urban sprawl, the neighbouring mountainous areas and the beds of streams have been occupied by settlements (Figure 8).
In order to identify and prioritize the various risks associated with geomorphic processes, a map has been designed that classifies the natural risk areas according to their intensity (high, medium, low) and their factors, both geomorphological and structural (flooding, alluvial fans, slopes, cracks, sabkhas). This risk map represents an important component of risk management and can be used as a guide for urban expansion and planning (Figure 9). The map is the result of integrating geospatial data including geological information such as lineaments and sabkhas, hydrological features such as wadi beds, the geomorphological processes inherent to each geomorphic unit, topographical information such as elevation and slope, and field observations of subsidence and flood areas.
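One simple way to assemble such a classification, once the factor layers have been rasterized to a common grid, is a weighted overlay. The sketch below is purely illustrative: the factor list follows the text, but the random stand-in rasters, the weights and the class breaks are our assumptions, not values from the study.

```python
import numpy as np

# Hypothetical factor rasters on a common grid, each scaled to 0 (no hazard)..1 (severe)
shape = (400, 400)
rng = np.random.default_rng(0)
factors = {                       # stand-ins for rasterized GIS layers
    "flood":        rng.random(shape),
    "alluvial_fan": rng.random(shape),
    "slope":        rng.random(shape),
    "cracks":       rng.random(shape),
    "sabkha":       rng.random(shape),
}
# Illustrative weights only; they would be set from expert judgement or AHP-style analysis
weights = {"flood": 0.35, "alluvial_fan": 0.2, "slope": 0.2, "cracks": 0.15, "sabkha": 0.1}

score = sum(weights[name] * layer for name, layer in factors.items())
risk_class = np.digitize(score, bins=[0.4, 0.6])      # 0 = low, 1 = medium, 2 = high
for cls, label in enumerate(["low", "medium", "high"]):
    share = (risk_class == cls).mean() * 100
    print(f"{label:>6}: {share:.1f}% of cells")
```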
Discussion
This study emphasizes the importance of geomorphological features for urban growth and planning. Geomorphology is often the basis of the choice of the initial site and constitutes a constraint on the direction of urban development. However, urbanization in turn modifies the initial geomorphological setting. Thus, the interaction between rapid urban expansion and geomorphological processes can result in increasing exposure to natural hazards. This study, like most similar research, shows the importance of taking geomorphological characteristics into account in urban development plans [5,[15][16][17][18][19][20][21]31,45,56,57]. The majority of these similar studies also indicate that flood risk and tectonic hazards constitute the most important threats linked to geomorphic hazards in urbanized areas.
Geomorphological constraint on urban development
The morphological framework mapped in 1965 represents the baseline information for monitoring and planning future urban trends of the study area. Geomorphological features are fundamental determinants of the city's expansion and development. Geomorphological planning policies are very important in reducing the impact of natural disasters on urban areas [58].
The increasing demand for urban space will exacerbate the geomorphic hazards and processes associated with the various geomorphological units in Jeddah city. The main geomorphological features identified in this study are units of varying origin, including structural, fluvial, erosional and depositional. The mountainous areas are an important constraint that affects urban growth. The elevated area of Jeddah city has been subject to hazardous geomorphological processes (such as mass wasting) and fluvial geomorphological processes during the catastrophic floods of Jeddah in 2009 and 2011. In this sector the flow turns into torrents, which can be defined as streams of water flowing with great rapidity and violence. This part of the city was the most damaged area during these events [7,43,45,47,59,60]. The elevation map derived from the DEM (see Figure 2) shows that the highest terrain and slopes are located near Wadi Fatima, which crosses the south of the city to flow into the Red Sea. It is notable that most of the alluvial fans are situated in the south-eastern part of the study area, on harrat foot slopes in the vicinity of the Wadi Fatima valley. The main hazards affecting alluvial fans include inundation by flood water, debris flows and debris flood deposits. The harrats (volcanic landforms), with altitudes sometimes exceeding 600 metres, also present a significant limitation for urban expansion. The many wadis flowing from the mountains towards the Red Sea and passing through Jeddah play an important role in flood hazards [61]. The alluvial and coastal plains that house the initial site of the city have an almost flat relief covered with marine and fluvial sediments, among sabkhas, aeolian sands and sea terraces [62]. The sabkha hazards include geotechnical problems occurring in buildings and roads [9,42,44]. The sabkhas are developing all across the coastal plain. The slope percentage is considered to be a surface flood susceptibility indicator [8]. The topographic map presented in Figure 1 shows that altitude and slope are significant constraints on urban development.
Table 3. Areas in sq. km and percentages of the urban extent on the geomorphic units during the years of observation.
The interaction between geomorphic processes/hazards and urban sprawl
The interplay between geomorphic units and urban growth from 1965 to 2020 can be analyzed from different perspectives. The interrelations between natural landforms and urbanization are complex [21], and geomorphological processes may increase disaster risk. These interrelationships were noted during the different stages of development of Jeddah between 1965 and 2020. The geomorphological features of the coastal area have been profoundly modified during the observation period [20,62]. Consequently, risks related to geomorphic processes threaten all the recreational infrastructure and roads built along the coastal area [33,57]. The constraint formed by the mountains is marked in certain places by mountainous areas surrounded by unplanned urban settlement in the eastern zone of the study area; this is apparent in places where the mountains could not be removed. Urban sprawl extended into the mountains in recent decades, after first covering the alluvial plains and isolated hills. Thus, urban sprawl has become the main challenge of urban growth management at the local level.
Geomorphic hazards implications of urban expansion
Various geomorphic hazards associated with areas of urban expansion can be observed in the field (Figure 10). The expansion of new urban masses onto the beds of the wadis has led to exposure to frequent catastrophic floods, such as those that occurred in 2009, 2011 and 2017. The topographical and geological characteristics of the mountainous regions mean that these torrents (large streams of water that move very fast) carry with them huge quantities of sediment with gravel, sand, silt and clay textures, which increases the risk of flooding. Furthermore, the existing drainage channels are not sufficient to absorb the quantities of floodwater. To protect the city from the risk of flooding and to find urgent solutions to these dangers, dams have been built across streams in some valleys [45].
In addition, the alluvial fans overlooking the plain of Wadi Fatimah, located in the southeast, upon which urban blocks have been built and which are crossed by some main roads (Figure 8), are very vulnerable to floods and can be more dangerous than other places. This is because they carry huge quantities of sediments of different sizes and erosion products from the top of the catchment area. Urban growth also involves geomorphological effects that lead to units of anthropogenic origin, including drainage and stream channel modifications, destruction of initial geomorphological features and modification of natural geomorphic processes, all of which have contributed to the worsening of flooding problems.
There are also risks related to geology, including numerous long faults visible on the geological map and risks of subsidence related to tectonics, which are observed in certain sections of the city and should be considered in future urban planning. Among the tectonic risks observed in the field, we can note swelling of the ground and the cracking of buildings and roads (Figure 10), including in the Al-Haramain project, asphalt cracks on Abdulaziz bin Al-Baz street, cracking of the embankment areas, broken embankments in the Al-Sanabel and Al-Ajaweed plans, cracking on several streets in the Al-Tayseer plan, and subsidence activity in the Riyadh plan and on Usfan road. Also to be noted are the sabkha areas north and south of Jeddah that are close to the sea, as well as cracks due to the salinity of the soils and the nature of the surface formations [33,44].
Growth direction and future perspective
It is essential to understand and integrate the geomorphological environment when choosing suitable land for future urban expansion, in order to reduce the associated geomorphic risks. The multitemporal analysis of the RS data for Jeddah city indicates that expansion occurred toward the north and south along the coast and coastal plain during the first decades, due to the existence of geomorphological constraints in the east, such as streams and mountainous areas. The axes of urban expansion in mountain areas are linked to the lithology and the degree of slope, which has played an important role in the urban delimitation patterns. Population growth is the main driving force behind the urban expansion of the city, both in terms of providing housing and space for infrastructure, and for public and social services and economic activities. Jeddah has experienced faster and larger population growth than other Saudi Arabian cities, reflecting the economic development of the Kingdom of Saudi Arabia [33,42]. There are currently unplanned urban settlements and land subdivisions outside the current urban footprint, and urban sprawl has become the main challenge of urban growth management at the local level [7]. The rapid urban expansion occurred randomly, without taking geologic and geomorphic conditions into consideration, which has contributed to increasing geomorphic risks, especially in the context of climate change.
Conclusion
This study emphasizes the importance of geomorphological characteristics for urban growth and planning. The geomorphological setting is often the basis of the choice of the initial site and constitutes a constraint on urban development. This study has also demonstrated that RS and GIS techniques are effective tools to monitor and map geomorphological processes and hazards related to urban expansion. They provide spatiotemporal data and can be combined with GIS into integrated geospatial data to better understand the progressive changes in natural hazard risk induced by the dynamics of urban growth. The results indicate significant urban expansion since 1965. The total built-up area has grown rapidly, from around 36 sq. km in 1965 to more than 1,130 sq. km in 2020. During this period, more than 1,000 sq. km of geomorphological units were converted into built-up areas. The expansion occurred first toward the north and south due to the existence of geomorphological constraints in the east, such as streams and mountainous areas. The urban growth towards mountain regions in recent years involves geomorphic hazards, and urban sprawl has contributed to the worsening of flooding problems. Consequently, all future urban planning projects related to the expansion of the city onto the eastern slopes should contain appropriate solutions to the problems of flooding and to the determinants related to the geomorphological characteristics. In addition, various risk zones appear in several places in the city. To reduce disaster risks, urban planning and spatial organization work have become necessary. Thus, the natural hazard classification map, which ranks zones according to their intensity and their driving forces, represents an important component of risk management and can be used as a guide for future urban expansion and planning. | 2022-01-25T16:17:53.639Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "7f7852d11919e04baee9acbd44dff9e969b87eae",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/16583655.2022.2026616?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "cfd764483c66b9c71d91ce2f1afef56321d95b92",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
251359792 | pes2o/s2orc | v3-fos-license | Medley: A Membership Service for IoT Networks
Efficient and correct operation of an IoT network requires the presence of a failure detector and membership protocol amongst the IoT nodes. This paper presents a new failure detector for IoT settings wherein nodes are connected via a wireless ad-hoc network. Our failure detector, named Medley, is fully decentralized, allows IoT nodes to maintain a local membership list of other alive nodes, detects failures quickly (and updates the membership list), and incurs low communication overhead. We adapt a failure detector originally proposed for datacenters (SWIM), for the IoT environment. This adaptation is non-trivial. In Medley each node picks a medley of ping targets in a randomized and skewed manner, preferring nearer nodes. We also provide optimizations to achieve time-bounded detection, as well as to reduce tail detection times. Via analysis, simulation, and Raspberry Pi deployments, we show that Medley can simultaneously optimize detection time and communication traffic.
I. INTRODUCTION
The IoT market is expected to reach 500 Billion dollars in size by 2022 [1]. For instance, during just the second quarter of 2018, Amazon Echo + Dot sold 3.6 million units, while Google Home + Mini sales were 3.1 million units [2]. IoT deployments in smart buildings, smart homes, smart hospitals, smart forests, battlefield scenarios, etc., are proliferating. While today's deployments in smart homes are typically a few tens of devices, tomorrow's vision, in smart buildings and cities, is for hundreds or thousands of devices communicating with each other.
Such large IoT deployments are in essence distributed systems of devices. As such, there is a need to provide familiar abstractions and a similar substrate of distributed group operations as those which exist in Internet-based distributed systems like datacenters, peer-to-peer systems, clouds, etc. In other words, a distributed group communication substrate is required for IoT settings, atop which management functions and distributed programs can then be built. This is critical in order to build large-scale IoT deployments that are truly autonomous, self-healing, and self-sufficient.
One of the first problems that such a substrate needs to solve is detecting failures (we consider only fail-stop failures in this paper). 1 At large scale, failures are the norm rather than the exception. When a device fails, other affected devices need to know about it and take appropriately corrective action, and in some cases inform the human user. This is a very common way of building Internet-based and datacenter-based distributed systems. In the IoT environment, examples of corrective actions after failure include (but are not limited to): backup actions to ensure user needs are met (e.g., maintain sufficient lighting in an area), re-initiating and re-replicating device schedules that were stored on failed devices (e.g., timed schedules), informing the upper management layer, informing the user, etc.
Existing techniques in the IoT literature detect failures in either a centralized or a semi-centralized manner [3], [4], [5], [6]. These typically provide a central clearinghouse where information is maintained about currently-alive nodes. Yet they require access to a cloud or a cloudlet, which is not always feasible. For instance, IoT deployments may span remote scenarios (e.g., battlefields, forests, etc.), and in some cases sending data to the cloud may be prohibited by laws (e.g., GDPR [7] or HIPAA [8] laws for data from smart hospitals). Additionally, if the centralized service becomes inaccessible (e.g., due to failures or message losses), the IoT devices no longer have access to the failure detection and membership service.
In this paper, we present Medley, which is the first fully-decentralized membership service for IoT distributed systems running over a wireless ad-hoc network. The Medley membership service maintains at each IoT node a dynamic membership list containing the currently alive nodes in the system. The membership service's critical goal is to detect device failures (crashes) and update the membership lists at non-faulty nodes; this is the responsibility of the failure detector component, which is the focus of this paper. Like other practical membership systems [9], Medley is also a weakly-consistent membership service: membership changes (failures, joins, leaves) propagate eventually. We measure how quickly they propagate, and how much bandwidth they consume.
Maintaining full membership lists at devices does not use excessive memory, even with up to 5K devices.
1 Classical distributed systems literature builds a wide swath of distributed algorithms over a full membership list (at each node). Examples include multicast, coordination, leader election, mutual exclusion, virtual synchrony, etc. [10]. Essentially, full membership offers maximal flexibility in designing arbitrary distributed algorithms on top of it. It also helps make analysis tractable. For IoT networks, Medley opens the door for similar algorithms to be built on top of it. For instance, to build a multicast tree, one algorithm could choose only nearby nodes, or alternatively a mix of near and far nodes. Both can be built atop a full membership algorithm.
Failure detector protocols for Internet-based distributed systems fall into two categories: heartbeat-based (or lease-based), and ping-based. Heartbeat-based protocols [11], [12], [13] have each node send periodic heartbeats to one or more other monitor nodes; when a node n i dies, its heartbeats stop, the monitors time out, and detect the node n i as failed. Ping-based protocols [9], [14] have each node periodically ping randomly-selected target nodes from the system. Analysis in [14] has shown that compared to heartbeat protocols, ping-based protocols are faster at detecting failures and impose less network traffic, and can completely detect failures.
We thus adopt a ping-based approach for our IoT failure detection protocol Medley. The key challenge for Medley is that existing ping-based protocols [9], [15] select ping targets uniformly at random across the system. Randomized selection is attractive due to its fast detection, congestion avoidance and load balancing. Yet in a wireless ad-hoc IoT network, uniform random selection leads to large volumes of network traffic that span major portions of the IoT network.
Medley solves this by proposing a new spatial ping-target selection strategy which prefers nearer nodes but also has some probability of pinging farther nodes. Compared to fully randomized pinging, always picking nearby nodes as ping targets localizes and reduces network traffic. But this always-local selection leads to high detection times due to lowered randomness of pinging. It also causes non-detection of failures when multiple simultaneous failures occur (e.g., failures caused by a circuit breaker tripping), because all nearby pingers of a failed node have also failed.
Medley attempts to gain the advantages of both approaches by using a hybrid of the uniform-random and the always-local target selection. It utilizes a mix (medley) of nearer and farther ping targets. A key question we answer both analytically and empirically is: What is the best way to mix these targets? We also present two optimizations to reduce tail latency of detection time.
The contributions of this paper are:
1) A new fully-decentralized failure detector protocol, named Medley, for wireless ad-hoc IoT networks.
2) Mathematical analysis of the key parameter (exponent) in spatial pinging, in order to optimize detection time as well as communication traffic.
3) An optimization to provide time-bounded detection of failures in Medley.
4) Two optimizations to reduce tail detection times.
5) Evaluation of Medley via simulations in Java (matching deployment) and NS-3 (for link layer fidelity).
6) Implementation of Medley for Raspberry Pi, and subsequent deployment experiments.
7) Compared to classical techniques (SWIM), Medley provides comparable failure detection times, lowers bandwidth by 37.8% (given a detection time), and has false positive rates of 2% under 20% packet drop rates. We cut tail detection time by up to 47.2%.
II. BACKGROUND
System Model: We consider the fail-stop model: once a node crashes it executes no further instructions or operations. Fail-recovery models can be seen as a special case (with nodes rejoining under a new id or incarnation number). Byzantine failures [16] are beyond our current scope (but represent an interesting future direction).
The network is asynchronous, and messages may be delayed or dropped. Multiple nodes may fail simultaneously. Nodes are allowed to join and voluntarily leave the system. We use N to denote the number of nodes in the system.
Each node maintains a membership list consisting of entries for all other nodes in the system. Our membership protocol's goal is to delete entries for failed/departed nodes soon after their failure or departure, and to add entries for joining nodes soon after they join. Our protocol makes no assumptions about clock synchronization, but our analysis assumes (for tractability) that clock speeds are similar.
Failure Detector Properties: Failure detectors have three desirable properties. The two desirable correctness properties are called [17] Completeness and Accuracy. Completeness requires that every failure is detected by at least one non-faulty node. Accuracy means that no failure detections are about healthy nodes, i.e., there are no false positives. In their seminal paper [17], Chandra and Toueg proved that it is impossible to design a failure detector for asynchronous networks that satisfies both completeness and accuracy. Due to the need to perform corrective recovery actions after a failure, today's failure detectors navigate this impossibility by always guaranteeing completeness, while attempting to maximize accuracy (i.e., minimize the false positive rate).
Besides the above two properties, failure detectors also aim to minimize detection time, i.e., time between failure and first non-faulty node discovering this failure. Finally, scalability and load balancing are often goals of failure detectors.
SWIM Failure Detector: Our Medley system is adapted from the failure detector and dissemination component of the SWIM protocol [9], [14]. SWIM is popular and various versions of it are today widely deployed in datacenters and in open-source software, including at Uber [18], and HashiCorp's Serf [19] and Consul [20].
We next describe the base SWIM protocol to set the context for Medley. The SWIM membership protocol handles failure detection and dissemination separately. The former detects failures, while the latter multicasts to the system information about node joins, leaves, and detected failures. Fig. 1 (from [9], [14]) depicts the SWIM failure detector. Each node M i periodically runs the following protocol every T time units. T is fixed at all nodes but nodes run their periods asynchronously from each other. Each period consists of a direct pinging phase and an optional indirect pinging phase.
At the start of a period, M i picks a member from its membership list, uniformly at random, and sends it a ping message. Any node M j receiving a ping responds immediately with an ack. If M i receives the ack within a small timeout t (based on message RTT), then M i is satisfied and does nothing else in this period. Otherwise, M i picks k other nodes (denoted as indirect pingers), also at random, and sends each of them a ping-req(M j ) message which requests each of them to ping M j . If any of these k nodes hears back an ack from M j , they pass on the ack back to M i . If M i receives at least one such ack before the end of the period, it is satisfied and does nothing else in this period. Otherwise, i.e., if M i hears no acks, then it marks M j as failed at the end of this period. Pings and acks carry unique identifiers to avoid confusion with other rounds and pingers.
Indirect pinging essentially gives a "second chance" to pinged nodes that might have been congested or slow during the initial ping. It also avoids potential network congestion on the direct M i − M j network path. Both of these reduce false positive rates.
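To make the protocol period concrete, the toy Python sketch below simulates the direct and indirect pinging phases over an in-memory set of nodes. It is a minimal illustration of the mechanism described above, not the authors' implementation: message exchange is reduced to a probabilistic success check, and all class and function names are ours.

```python
import random

class Node:
    """Toy in-memory model of one SWIM-style failure-detector node.
    A ping/ack round trip succeeds only if the target is alive and no
    simulated packet loss occurs; the loss model is illustrative only."""
    def __init__(self, node_id, loss_rate=0.1):
        self.id = node_id
        self.alive = True
        self.loss_rate = loss_rate
        self.membership = {}          # node_id -> Node handle (stand-in for the list)

    def _ping(self, target):
        # Both the ping and the ack must survive the lossy channel
        return target.alive and random.random() > self.loss_rate \
                             and random.random() > self.loss_rate

    def protocol_period(self, k=3):
        target = random.choice(list(self.membership.values()))   # uniform selection
        if self._ping(target):
            return None                                           # direct ack received
        helpers = random.sample([n for n in self.membership.values() if n is not target],
                                min(k, len(self.membership) - 1))
        if any(h.alive and h._ping(target) for h in helpers):     # indirect ping-req
            return None
        return target.id                                          # marked as failed

# Usage: 10 nodes, node 7 crashes; run periods at node 0 until it detects the failure
nodes = [Node(i) for i in range(10)]
for n in nodes:
    n.membership = {m.id: m for m in nodes if m is not n}
nodes[7].alive = False
periods = 1
while nodes[0].protocol_period() != 7:
    periods += 1
print(f"node 0 detected node 7 as failed after {periods} periods")
```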
Analysis in [14] shows that even without the indirect pinging, failures are detected within O(1) protocol periods on expectation. In addition, the SWIM protocol guarantees eventual detection of all failures (eventual completeness).
SWIM Dissemination Component: SWIM nodes continuously piggyback information about node joins/leaves/failures atop the messages they send out, namely pings, acks, and indirect ping requests, for quick dissemination. In addition, a receiving node records the new information contained in the message and reacts accordingly.
This "infection-style" dissemination provides a gossip-like behavior for all membership information. Analysis [14] shows that in a system with N nodes, information spreads with high probability to all nodes within O(log(N)) time periods.
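The O(log(N)) spread claimed above can be illustrated with a toy simulation. The sketch below models a simple push-style gossip in which, each period, every node that already knows a membership update passes it to one randomly chosen node; this is a simplification of the actual piggybacking on pings and acks, and the numbers are only meant to show the logarithmic trend.

```python
import math, random

def gossip_rounds(n, trials=200):
    """Average number of rounds until a single update reaches all n nodes,
    when every informed node pushes it to one random node per round."""
    total = 0
    for _ in range(trials):
        informed = {0}                                # node 0 learns the update first
        rounds = 0
        while len(informed) < n:
            for _src in list(informed):
                informed.add(random.randrange(n))     # one random push per informed node
            rounds += 1
        total += rounds
    return total / trials

for n in (50, 200, 800):
    print(f"N={n:>3}: ~{gossip_rounds(n):4.1f} rounds   (log2 N = {math.log2(n):.1f})")
```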
Using a Membership List in IoT: Full membership has long been used as a building block in Internet-wide distributed systems for building reliable and fast protocols [12], from leader election and multicast trees to consensus. Medley enables the same class of protocols to be designed over IoT networks, without the need to rely on any kind of centralization. These protocols are already used in IoT networks for spreading commands and for coordination. Medley allows these protocols to work in scenarios without access to clouds (e.g., battlefield, environmental observation systems), or where low latency operations are needed (e.g., smart farms), or where centralization is infeasible (e.g., with data privacy constraints such as hospitals). The fundamental tradeoff that Medley enables is to impose a low background bandwidth in maintaining these membership lists, so that other protocols built on top of membership can be run quickly and with minimal message exchange. Centralized and "cluster"-based approaches can incur delays from communication or on-demand cluster selection. For instance, a 1 RTT leader election protocol with (N − 1) messages can be designed atop Medley's full membership lists: each node selects the lowest ID (for instance) node in its membership list, and the leader multicasts an "I am the leader" message to all. We believe this is the right direction for IoT deployments: Medley begets simpler probabilistic protocols without corner cases, thus making it easier to deploy, debug, measure, and optimize IoT applications.
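The leader election example just mentioned is simple enough to sketch in a few lines. The code below follows the lowest-ID rule over a node's local membership list; it is illustrative only, the announcement is printed rather than multicast, and the function names are ours rather than any Medley API.

```python
def elect_leader(self_id, membership_list):
    """Deterministic one-shot election over a full membership list:
    every node independently picks the lowest id it knows about."""
    leader = min(membership_list + [self_id])
    if leader == self_id:
        # In a real deployment this would be the (N-1)-message multicast
        # announcing "I am the leader" to every node in the list.
        announce_leadership(self_id, membership_list)
    return leader

def announce_leadership(self_id, membership_list):
    for node in membership_list:
        print(f"node {self_id} -> node {node}: I am the leader")

# Usage: node 12's local view of currently-alive nodes
print("leader:", elect_leader(12, [31, 7, 19, 44]))
```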
A. Spatial Pinging
We target settings where IoT devices are connected via a wireless ad-hoc network. In such scenarios, the SWIM failure detector described in Section II is inefficient because it picks ping targets uniformly at random. This spreads pings and acks across far distances in the ad-hoc network. Far pings and acks require more routing hops, incurring higher communication overhead on intermediate nodes and longer latency, and creating congestion and packet losses.
Thus, we propose in Medley a way to replace the randomized target selection in SWIM with a skewed randomized mechanism that takes distance to the target into account. We call this spatial target selection. Spatial Target Selection: In Medley, a node chooses to ping a given target with probability proportional to 1/r^m, where r is the distance to the target and m is a fixed exponent.
An example is shown in Fig. 2. M i has in its membership list nodes M p , M q , and M r at distances d, 2d, and 4d respectively. In a period of the SWIM protocol at M i , it has the highest probability (∝ 1/d^m) of pinging M p . Similarly, the probabilities of pinging M q and M r are respectively ∝ 1/(2d)^m and ∝ 1/(4d)^m. Using appropriate normalization constants, we depict two points in the space of m. If m = 1, then the respective ping probabilities to M p , M q , M r are 0.57, 0.28, and 0.15. However, increasing the exponent m to 2 localizes pings more: the changed ping probabilities are respectively 0.75, 0.2, and 0.05, so M p will be pinged even more frequently.
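The sketch below computes these normalized spatial ping probabilities; it reproduces the Fig. 2 numbers (the values quoted in the text are rounded: exactly 0.571/0.286/0.143 at m = 1 and 0.762/0.190/0.048 at m = 2). The class and method names are illustrative.

import java.util.LinkedHashMap;
import java.util.Map;

// Spatial target selection: P(ping target j) is proportional to 1 / distance_j^m.
public class SpatialProbabilities {

    /** Maps each member to its normalized ping probability, given its distance and exponent m. */
    public static Map<String, Double> pingProbabilities(Map<String, Double> distances, double m) {
        double z = 0.0; // normalization constant: sum of 1/r^m over all members
        for (double r : distances.values()) {
            z += 1.0 / Math.pow(r, m);
        }
        Map<String, Double> probs = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : distances.entrySet()) {
            probs.put(e.getKey(), (1.0 / Math.pow(e.getValue(), m)) / z);
        }
        return probs;
    }

    public static void main(String[] args) {
        Map<String, Double> d = new LinkedHashMap<>();
        d.put("Mp", 1.0); // distance d
        d.put("Mq", 2.0); // distance 2d
        d.put("Mr", 4.0); // distance 4d
        System.out.println(pingProbabilities(d, 1)); // {Mp=0.571..., Mq=0.285..., Mr=0.142...}
        System.out.println(pingProbabilities(d, 2)); // {Mp=0.762..., Mq=0.190..., Mr=0.047...}
    }
}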
The above calculations indicate that higher values of m localize ping-ack traffic more and incur lower communication overhead. At the same time, more localized pinging reduces the randomness of pinging and thus increases the detection time. We wish to find "good" values for m that optimize both network traffic and detection time. We do so in Section III-B.
We point out that Spatial Pinging (Medley) is a generalization of SWIM. When m = 0, spatial pinging degenerates to SWIM with uniform target selection. When m = ∞, each member pings only its closest neighbour(s), chosen uniformly among them.
Other Components: Just like SWIM, Medley disseminates information by piggybacking atop pings, acks, and indirect pings (Section II, "SWIM Dissemination Component"). This is a gossip style of dissemination and is also used to disseminate node join/leave information. When a new node joins, it: i) borrows the membership list from any of its 1-hop neighbors, and ii) starts piggybacking its information atop the dissemination component, and thus becomes included in other nodes' membership lists.
Medley is able to seamlessly borrow optimizations from SWIM. One such important optimization is suspicion, which gives mistakenly-detected alive nodes a second chance to disprove their false detection. Here a detected node is not marked as failed but instead is suspected, and this suspicion is gossiped to other nodes (via pings and acks). If another node successfully pings the suspected node via normal pinging before the suspicion times out, the suspected node is rejuvenated and is not deleted from membership lists. More details can be found in the SWIM paper [9].
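The following sketch shows how the suspicion state could be tracked at a node; the state names, the table layout, and the timeout-driven expiry are assumptions for illustration (the full sub-protocol, including incarnation numbers, is specified in the SWIM paper).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Suspicion: an unresponsive node is first SUSPECTED (and the suspicion gossiped);
// it is marked FAILED only if no successful ping rejuvenates it before the timeout.
public class SuspicionTable {

    public enum State { ALIVE, SUSPECTED, FAILED }

    private final Map<String, State> states = new ConcurrentHashMap<>();
    private final Map<String, Long> suspectedAt = new ConcurrentHashMap<>();
    private final long suspicionTimeoutMillis;

    public SuspicionTable(long suspicionTimeoutMillis) {
        this.suspicionTimeoutMillis = suspicionTimeoutMillis;
    }

    /** Called instead of marking a node failed outright when its pings time out. */
    public void onPingTimeout(String node, long now) {
        states.put(node, State.SUSPECTED);
        suspectedAt.put(node, now);
    }

    /** Any node's successful ping of the suspect rejuvenates it in the membership list. */
    public void onSuccessfulPing(String node) {
        states.put(node, State.ALIVE);
        suspectedAt.remove(node);
    }

    /** Called once per protocol period: suspicions that time out become failures. */
    public void expireSuspicions(long now) {
        suspectedAt.entrySet().removeIf(e -> {
            if (now - e.getValue() >= suspicionTimeoutMillis) {
                states.put(e.getKey(), State.FAILED); // only now is the node treated as failed
                return true;
            }
            return false;
        });
    }

    public State stateOf(String node) {
        return states.getOrDefault(node, State.ALIVE);
    }
}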
B. Analysis
We analyze Medley's spatial pinging under certain idealized assumptions. For tractability, we assume that: i) the N nodes are uniformly spread with a density of D, and ii) a pinging node picks targets only up to a distance of R away.
First, to minimize detection time we wish to maximize the expected number of pings a given node receives during a pinging period. We denote this expected number as E[Pings received per period] or EP(m), where: In the first line of the equation, the integral term contains the probability of being picked as a ping target( 1 r m ), multiplied by the number of nodes in an annulus at radius r (D(2πr ) · dr ). The term beyond parentheses is a normalizing constant to ensure that when m = 0, which is the uniform default SWIM, Equation (1) comes to an expected 1 received ping.
Second (along with maximizing ping probability), we simultaneously wish to minimize the communication cost C(m) incurred by pings received at a given node. A message transits multiple hops in the underlying ad-hoc network. Assuming a fixed size for messages, C(m) is proportional to the number of hops incurred by the message. Again for tractability, we calculate a message's cost as proportional to the distance between its sender and receiver (as this is correlated with hop count). We obtain C(m) by multiplying the expected number of pings in the annulus of radius r (as in Equation (1)) by the communication cost incurred by the multi-hop network, which is proportional to the target distance r; that is, C(m) is proportional to ∫_d^R (1/r^m) · D · 2πr · r · dr, again suitably normalized.
In order to simultaneously minimize C(m) and maximize EP(m), we define our optimization function, which we wish to maximize, as the ratio Ratio(m) = EP(m)/C(m).
Theorem 1: Medley's spatial failure detector guarantees eventual detection of every failure (completeness), and: 1) if the ratio of the deployment area dimension to the inter-node distance is high, then m = ∞ is optimal; 2) if the ratio of the deployment area dimension to the inter-node distance is low, then m = 3 is optimal.
Proof: First, to prove completeness, consider a failure of node M j . We observe that with at least one non-faulty node M i in the system, M i has a non-zero probability of pinging M j during any protocol period subsequent to M j 's failure. Because of the (biased) randomness of picking ping targets, M i is guaranteed to eventually pick M j as a ping target in a future period. M j will be unresponsive (because it has failed), and thus M i will mark M j as failed.
Second, for the optimal exponent, let d represent the distance to the nearest node (for a 2-dimensional deployment, d ∝ 1/√D) and let x = R/d. Evaluating EP(m) and, similarly, the communication cost C(m), and comparing values of Ratio(m): 1) m = 0 can be ignored, as Ratio(m = 1) is higher than Ratio(m = 0); 2) m = 2 can be ignored, as x/log(x) has a minimum of e (> 1); 3) m = 1 can be ignored, as Ratio(3)/Ratio(1) = x²/log(x), whose minimum is also greater than 1. Therefore, the choice for optimizing Ratio(m) boils down to either m = 3 or m = ∞. Next we observe that:
1) If R/d > e ≈ 2.718, m = ∞ is optimal. In other words, if the dimension of the IoT installation area is much larger than the inter-node distance, local pinging is optimal.
2) If R/d < e ≈ 2.718, m = 3 is optimal. In other words, for small installation areas (e.g., a room or a floor, where R is small), or areas of low node density (where the inter-node distance d is high), Medley with m = 3 is optimal.
Theorem 2: In an area with symmetric pinging (e.g., a large deployment, or a 3-dimensional area), when Medley is configured to have each node send 1 ping per period, failures are detected within O(1) protocol periods in expectation. When m = 0 (the default uniform SWIM), each of the nodes picks the failed node as its ping target uniformly at random; across the system this is equivalent to tossing a coin with heads probability (1 − e^(−1)) per period. Thus: i) the expected detection time is a constant number of protocol periods, which is O(1); and ii) the failure is detected with high probability (w.h.p.) within a bounded number of periods.
IV. TIME-BOUNDED FAILURE DETECTION
Theorem 1 was able to prove that detection is eventual. In practice this could still mean particularly long detection times in IoT scenarios. Consider a node M i that is "far" from most other nodes. Because ping probabilities to M i are low, when M i fails, the biased target selection implies that it may take an arbitrarily (and indeterminately) long time for the first non-faulty node to pick M i as a ping target.
We now present an optimization that preserves the biased randomness of the Medley's spatial pinging from Section III, but is additionally able to specify an absolute time bound on how long a failed node takes to be detected.
A. Design of Time-Bounded Medley
The key idea is to ping via a round-robin mechanism that is weighted by ping probability. Each node M i maintains the set of target probabilities P M i , where p j is the pinging probability of the respective member M j from ML i .
The p j values in P M i are calculated using the spatial ping probabilities of Section III. The pseudocode for our approach is shown in Algorithm 1. We explain below.
Let p min = min{P M i }, the lowest probability among all non-faulty members in ML i . Now, denote Count j = p j / p min . We create an initial bag from these counts: the weighted round-robin pinging at node M i creates a bag B i which consists of Count j instances of node M j for each M j ∈ ML i . This can be thought of as a bag of balls, with Count j balls of color M j .
During each period, M i picks one ball from this bag (without replacement), and uses the corresponding member as the ping target for that period. The bag is created at the start of a super round (which consists of multiple periods), and a super round completes when the bag is empty. Thus, a super round consists of Σ_{j=1..K} Count j protocol periods. Picking these balls (targets) uniformly at random from the bag causes high variance in detection times. To reduce this, we introduce the notion of passes. Algorithm 1 depicts how M i selects targets in a super round. At M i , ping target selection is done randomly but in multiple passes through the bag. Each pass consists of multiple periods. In each pass at M i , every node M j (in M i 's bag) which has at least one leftover instance in the bag is touched (removed, and pinged) only once. These instances are removed in a random order (Lines 8-13).
Suppose a particular pass contains r instances (thus consisting of r protocol periods). Then during these r periods, M i sequentially picks one instance as ping target based on the order. When the final pass is done (and no instances are left in the bag), all instances are put back in the bag, a new super round is started, and the above process is repeated.
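A sketch of this bag-and-passes selection follows. The Count_j rounding, the container types, and the method names are assumptions for illustration; they follow the description above rather than the actual Medley code.

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Weighted round-robin target selection (in the style of Algorithm 1): each member Mj
// receives Count_j = p_j / p_min instances in a bag; a super round empties the bag in
// passes, and each pass touches every member with leftover instances exactly once,
// in a random order.
public class WeightedRoundRobinBag {

    private final Map<String, Integer> remaining = new HashMap<>();

    /** Builds the bag for a new super round from the spatial ping probabilities. */
    public void refill(Map<String, Double> pingProbability) {
        remaining.clear();
        double pMin = Collections.min(pingProbability.values());
        pingProbability.forEach((member, p) ->
                remaining.put(member, (int) Math.round(p / pMin))); // Count_j (rounding is an assumed detail)
    }

    /** Returns one pass: every member with a leftover instance, once each, shuffled. */
    public List<String> nextPass() {
        List<String> pass = new ArrayList<>();
        for (Map.Entry<String, Integer> e : remaining.entrySet()) {
            if (e.getValue() > 0) {
                pass.add(e.getKey());
            }
        }
        Collections.shuffle(pass);                      // random order within the pass
        for (String member : pass) {
            remaining.merge(member, -1, Integer::sum);  // consume one instance per member
        }
        return pass;                                    // one ping target per protocol period
    }

    /** A super round ends (and refill() is called again) once the bag is empty. */
    public boolean isEmpty() {
        return remaining.values().stream().allMatch(c -> c <= 0);
    }
}

Each call to nextPass() yields the targets for the next run of protocol periods; once isEmpty() returns true, the node recomputes probabilities (reflecting any joins or leaves) and refills the bag for the next super round.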
Note that different super rounds may contain different numbers of periods, as the membership list is continuously changing (we discuss node joins and leaves in Section IV-C). Fig. 3 shows an example of Algorithm 1 in action, with four active members in the network aligned topologically in a straight line.
B. Time Bound
The approach above preserves relative ping selection probabilities because the Count i values are derived by normalizing the ping probabilities p i . At the same time, this protocol provides time-bounded completeness, as we prove now. Consider the worst case, in which a node M j fails just after M i has pinged it (consuming M j 's last instance in M i 's current bag) early in a super round. From this point onwards: (i) M i will spend the rest of this super round executing (N − 2) · α periods pinging nodes other than M j . At the start of the next super round, when M i creates a new bag, the first pass will pick every node once, including M j . Thus, the worst case occurs when M j is picked last at the end of this first pass (in this next bag). This means: (ii) M i will take another (N − 2) protocol periods to get around to pinging M j . Finally: (iii) one additional protocol period is needed in which M i actually pings M j .
Adding (i), (ii), and (iii), the worst-case detection time of faulty node M j at M i is (in protocol periods): (N − 2) · α + (N − 2) + 1 = (N − 2) · (α + 1) + 1.
C. Node Joins and Removals
If a new node M j is added to, or removed from, M i 's membership list just as the bag is about to be refilled, then all the members' ping probabilities (and thus Counts) are recalculated and normalized to reflect the changed membership. Additionally, Medley also allows node joins and removals in the midst of passes-the only rule required for correctness (to preserve relative ping probabilities) is to normalize the ping probability (and thus Counts) of the added/removed nodes to match current super round progress, based on the leftover nodes in the bag. When the bag becomes empty next, probabilities (and thus Counts) of all other members are recalculated and re-normalized anew.
V. MEDLEY-F: FEEDBACK-BASED TARGET SELECTION
Medley, as described so far, may have a long tail of detection time for a small subset of nodes. We define an unlucky node n i as one whose neighbors all have their (respective) neighbors much closer to themselves, while n i is relatively far from each of its neighbors. When the exponent m is high and pings stay local, unlucky nodes have fewer pingers. If an unlucky node fails, its detection time will be longer than that of other nodes. To reduce this tail, we explore a variant of Medley, called Medley-F. Medley-F consists of two approaches: an active approach wherein a node actively realizes it is unlucky, and a passive approach wherein other nodes recognize the unlucky node. In both cases, the modified Medley adjusts the rate of pinging to the unlucky node: permanently for the active approach and temporarily for the passive approach.
A. Active Feedback Strategy
In active-feedback, every node actively monitors itself and reports its unluckiness to its 1-hop neighbors. These neighbors adjust their pinging probability to the unlucky node.
1) Member Self Monitoring: Each node estimates, via exponential averaging, the average interval between incoming pings. Given a new measurement M of the pinging interval, Medley-F updates the estimated pinging interval I via exponential averaging: I ← (1 − α) · I + α · M. We use α = 0.125 in our implementation. If the estimate I rises above ACTIVE_TIMEOUT, M i considers itself UNLUCKY and reports this to all its direct neighbors. We recommend setting ACTIVE_TIMEOUT to be less than the suspicion timeout (e.g., half of the suspicion timeout), so that an unlucky node can report its unluckiness and potentially update its aliveness at other nodes in a timely manner.
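A minimal sketch of this self-monitoring logic is shown below; the class name, the return-value convention, and the millisecond units are assumptions for illustration.

// Active feedback, member self-monitoring: exponentially averaged interval between
// incoming pings; once the estimate exceeds ACTIVE_TIMEOUT the node reports itself UNLUCKY.
public class UnluckySelfMonitor {

    private static final double ALPHA = 0.125;   // smoothing factor used in the text
    private final long activeTimeoutMillis;      // recommended to be below the suspicion timeout
    private double estimatedInterval = 0.0;      // I
    private long lastPingAt = -1;

    public UnluckySelfMonitor(long activeTimeoutMillis) {
        this.activeTimeoutMillis = activeTimeoutMillis;
    }

    /** Called whenever a ping arrives; returns true if this node should report itself UNLUCKY. */
    public boolean onPingReceived(long nowMillis) {
        if (lastPingAt >= 0) {
            double measurement = nowMillis - lastPingAt;                               // M
            estimatedInterval = (1 - ALPHA) * estimatedInterval + ALPHA * measurement; // I <- (1-a)I + aM
        }
        lastPingAt = nowMillis;
        return estimatedInterval > activeTimeoutMillis;
    }
}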
2) Unlucky Handling: When a node M j receives an UNLUCKY report from a 1-hop neighbor M i , M j uses Algorithm 2 to boost the pinging probability of M i to reach the average pinging rate for other non-unlucky (or lucky) nodes.
In outline, Algorithm 2 collects P above , the probabilities p k in P M j that are higher than p M i (together with p M i itself); sets p target to the mean of P above ; and then migrates probability mass to M i from the sponsor nodes, i.e., those whose p k exceed p target (their surpluses p k − p target form P sponsor ), finally setting p M i = p target . This approach boosts unlucky nodes and reduces pinging only to nodes with already-high ping rates. It also works with Section IV's bag strategy: instances in the current bag are updated immediately based on the new probabilities.
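The sketch below implements this boosting step in the spirit of Algorithm 2. The structure (P_above, p_target, P_sponsor) follows the outline above, but the specific rule used here for reducing the sponsors, taking from each in proportion to its surplus above p_target so that the probabilities still sum to 1, is an assumption made for illustration.

import java.util.HashMap;
import java.util.Map;

// Active feedback, unlucky handling (Algorithm 2 style): raise the unlucky node's ping
// probability to p_target, the mean of the probabilities at or above its own, and pay
// for it by reducing the sponsor nodes whose probabilities exceed p_target.
public class UnluckyBooster {

    /** Returns an adjusted copy of `probs` in which `unlucky` has been boosted to p_target. */
    public static Map<String, Double> boost(Map<String, Double> probs, String unlucky) {
        double pUnlucky = probs.get(unlucky);

        // P_above: probabilities higher than the unlucky node's, plus its own; p_target is their mean.
        double sum = 0.0;
        int count = 0;
        for (Map.Entry<String, Double> e : probs.entrySet()) {
            if (e.getValue() > pUnlucky || e.getKey().equals(unlucky)) {
                sum += e.getValue();
                count++;
            }
        }
        double pTarget = sum / count;
        double needed = pTarget - pUnlucky;           // probability mass to migrate to the unlucky node

        // P_sponsor: surpluses (p_k - p_target) of nodes above the target.
        double surplusTotal = 0.0;
        for (double p : probs.values()) {
            if (p > pTarget) surplusTotal += (p - pTarget);
        }

        Map<String, Double> adjusted = new HashMap<>(probs);
        for (Map.Entry<String, Double> e : probs.entrySet()) {
            if (e.getValue() > pTarget) {
                double share = (e.getValue() - pTarget) / surplusTotal; // proportional reduction (assumed rule)
                adjusted.put(e.getKey(), e.getValue() - needed * share);
            }
        }
        adjusted.put(unlucky, pTarget);
        return adjusted;                              // total probability mass is preserved
    }
}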
B. Passive Feedback Strategy
The passive-feedback strategy is the reverse of active-feedback, and uses neighbors to detect an unlucky node. In passive-feedback, each node M i actively maintains timestamp information about the last contact from other members. Such contact may be either a direct contact, where a 1-hop neighbor sends or forwards a message, or an indirect contact, where a multi-hop member originates a message. To reduce the message payload, we do not keep information for intermediate routing-path nodes. When selecting the next ping target, M i flips a coin with probability p passive , and if it turns up heads, M i does a passive check. During a passive check, M i looks at its membership list and checks whether any node has not contacted M i in the last PASSIVE_TIMEOUT time units. If so, M i suspects such nodes as unlucky and randomly selects one of them as the next ping target. In our implementation we set p passive to a less aggressive value of 0.1.
Under the bag strategy of Section IV, the above selection does not remove any instance from the bag. We recommend setting PASSIVE_TIMEOUT larger than t period × N , where t period is the pinging interval and N is network size, so that the ping target selection does not regress to uniform random pinging.
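A sketch of the passive check is shown below; the lastContact map, the millisecond units, and the method names are illustrative assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Passive feedback: with probability p_passive, instead of the normal bag-based selection,
// ping a member that has not been heard from (directly or indirectly) for longer than
// PASSIVE_TIMEOUT. This selection does not consume an instance from the bag.
public class PassiveChecker {

    private static final double P_PASSIVE = 0.1;    // value used in the implementation described above
    private final long passiveTimeoutMillis;        // recommended to exceed t_period * N
    private final Random rng = new Random();

    public PassiveChecker(long passiveTimeoutMillis) {
        this.passiveTimeoutMillis = passiveTimeoutMillis;
    }

    /** Returns a stale ("suspected unlucky") member to ping this period, or null to use the bag. */
    public String maybePickStaleTarget(Map<String, Long> lastContact, long nowMillis) {
        if (rng.nextDouble() >= P_PASSIVE) {
            return null;                             // most periods: normal spatial/bag selection
        }
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastContact.entrySet()) {
            if (nowMillis - e.getValue() > passiveTimeoutMillis) {
                stale.add(e.getKey());
            }
        }
        if (stale.isEmpty()) {
            return null;
        }
        return stale.get(rng.nextInt(stale.size())); // randomly pick one suspected-unlucky member
    }
}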
In active-feedback, the unlucky members that a node gets notified about are always 1-hop neighbors, while in passive-feedback the reported unlucky nodes are often "far" members whose information tends to stay local (at high m). While pinging such far nodes involves more hop-to-hop communication, passive-feedback can: i) still save considerable bandwidth compared to basic SWIM, since the majority of ping targets are still local, and ii) avoid the extra messages needed to report unlucky nodes that active-feedback requires.
VI. SYSTEM DESIGN
We now discuss practical considerations that were needed in order to implement Medley in a real IoT network.
Distance Metric: The analysis in Section III-B is based on physical distances. However, exact physical locations are hard to calculate; furthermore, physical distance may not be proportional to end-to-end (multi-hop) routing latency. As a result, our Medley implementation replaces the use of physical distance in the ping-probability equations (Section IV) with the metric of hop-distance. The hop-distance is the actual total distance that a message travels between two nodes, i.e., the sum of the distances of all intermediate hops. This can be measured during bootup via messages between all 1-hop node pairs, e.g., by [21]. In our deployment experiments (Section VII), for comparison, we also implemented two alternative distance metrics: 1) latency metric: actual end-to-end latency (which can vary significantly over time, due to link characteristics), and 2) hop-number metric: count of the number of hops. Over multiple experiments, we found that: a) Medley with the latency metric was comparable to the hop-distance metric, and b) Medley with the hop-number metric behaves similarly to the hop-distance metric under the grid topology. Thus hereafter we only show results using the hop-distance metric, with a few differing results shown using the hop-number metric.
Other Medley Features: We clarify a few other features of Medley. First, the spatial probabilities we just described are used for selecting not only ping targets, but also indirect pingers (Section II). Second, the rejoin of a failed node is treated as a new node. We construct the ID of each node from its IP address and the local timestamp at which it joins the network. Two IDs with the same IP but different join timestamps are considered two incarnations. If M i receives an active update for M j with ID (ip j , ts 1 ) that is different from its local record for M j : (ip j , ts), M i will consider the old incarnation as failed and continue with the latest ID for M j . In practice this scenario occurs rarely, as Medley dissemination times are fast.
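The incarnation rule can be captured in a few lines; the record name and comparison helper below are illustrative, not the actual Medley types.

// Node identity with incarnations: a rejoining node keeps its IP but gets a new join
// timestamp; the older incarnation is considered failed once the newer one is seen.
public record NodeId(String ip, long joinTimestamp) {

    /** True if `other` is a newer incarnation of the same physical node. */
    public boolean isSupersededBy(NodeId other) {
        return ip.equals(other.ip) && other.joinTimestamp > joinTimestamp;
    }
}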
VII. EXPERIMENTS
We perform both simulations and deployments using Raspberry Pi devices. We present simulation results first in Section VII-A, and then deployment results in Section VII-D.
A. Simulation Results
The theoretical analysis of Section III made simplifying assumptions about uniformity and used physical distances. In this section, we explore realistic node layouts and measure the behavior and performance of our real Medley system.
There is a dearth of reliable simulators for IoT networks. We wrote our first simulation using NS-3 (v3.27), to be able to capture link layer effects [22]. However, NS-3 code cannot be deployed directly on Raspberry Pis. Thus for this current paper, we developed a second matching simulator, in Java, that uses the same code as our Raspberry Pi deployment but without NS3's fine-grained link layer modeling. We verified that the Java simulator's results match with both: 1) our deployment at small scales, allowing us to use the simulator to extrapolate deployment results; and 2) NS-3 results at small scales. Hence we present only the Java simulation results.
We evaluated Medley and Medley-F in three topologies: i) Random (nodes are randomly placed), ii) Grid (7x7 grid), and iii) Cluster (there are 5 clusters with 7, 7, 9, 10, 16 nodes respectively where each cluster is bounded by a fixed square area), each with 49 nodes deployed in 15m × 15m area. The communication radius for each node is 4m. The random and cluster topologies are newly generated (new seed) for each trial run. The default number of members chosen as indirect pinger was K = 3, and protocol period was 20 time units. The suspicion timeout, ACTIVE_TIMEOUT and PASSIVE_TIMEOUT were set as 160, 80, and 400 time units in the experiments respectively. Each data point reflects data from 1000 independent runs. In every period, the probability to apply passive-feedback is 10%. Unless otherwise specified, Medley uses the hop-distance metric from Section VI.
1) Failure Detection and Dissemination Latency:
We define first detection time as the time gap between a failure occurring and the first non-faulty node detecting this failure (after the suspicion timeout). Fig. 4 shows how the exponent m affects first detection time (averaged across 1000 runs, with error bars showing the square root of the standard deviation). Across the three topologies using the hop-distance metric, Grid has the lowest detection time, with Random next, and Cluster the worst. In the Random and Cluster topologies, there might be unlucky nodes (Section V). When m is high, pings stay local, and unlucky nodes are pinged less frequently, thus prolonging their detection times. In comparison, Grid is more deterministic in assigning every node at least a small set of neighbors at short distances, producing more stable detection times. This result differs from Section III-B's analysis because: i) the node layout and density assumptions are different, and ii) the hop-distance metric (Section VI) is used to calculate ping probabilities.
For the Cluster and Random topologies, first detection time stays low for m ≤ 3.5 and rises more quickly when m ≥ 3.5. This is because: i) a quick increase (with rising m) in the initial bag size increases the duration of a super round (Section IV), and ii) an unlucky node has a lower probability (fewer instances in the bag) of being picked as a ping target by any of its neighbours. When m is low enough (below 3.5), the bag sizes are manageable and nodes have sufficient pingers for fast failure detection. Beyond m = 3.5, the bag size, and thus the super round length, increases quickly. It then takes much longer for a pinger to pick a failed unlucky node, which could take as long as a super round in the worst case (Theorem 3). We also observe from Fig. 4 that Medley using the hop-number metric in the Random topology behaves similarly to the Grid topology, with relatively stable first detection time. The reason is that each member has at least one one-hop (shortest-distance) neighbour as a pinger, making unlucky nodes rarer than when using the hop-distance metric.
Dissemination Latency and Active & Passive-Feedback Optimizations: We measure dissemination time, the time for all nodes to know about a failure after the first detection. For first detection time (dfdt), active-feedback alone is the most effective, reducing it by up to 31.1%; active and passive combined reduce it by 27.4%, and passive-feedback alone by 11.5%. For dissemination time (dsm), combining active and passive provides a 54.6% reduction, with passive-only at 44.5% and active-only at 31.6%. Intuitively, active-feedback alone offers the shortest and most stable first detection time even under high m, since it helps each unlucky node get frequent pings. For dissemination, intuitively, the combination of active-feedback spreading information locally and passive-feedback spreading it far away is fastest. Overall, at m = 3, we recommend combining active and passive, which reduces P95 dfdt by 31.2% and dsm by 47.2%.
Simultaneous Failures: We simultaneously fail 50% (randomly chosen) nodes (24 out of 49) in the Random topology. Fig. 6 shows the average first detection time. The lower and higher error bars are respectively the earliest and latest time any failure is detected, averaged across runs.
The average (raw) first detection time of Medley and its variants rises gently as m and the number of failures increase. As m rises, a failed unlucky node waits longer to become a ping target because pings stay local. Now, define the detection gap as the percentage by which detection time is prolonged under massive failure (50% of nodes) vs. a single-failure scenario. In base SWIM (at m = 0), the detection gap is only 12.3%: due to the uniform randomness, a failed node has a high probability, 1 − (48/49)^24 ≈ 39.0%, of being pinged in each round after failure. In Medley, the detection gap is 20.6% due to localized pings and unlucky nodes' higher detection times. Applying active and passive feedback reduces the gap to 15.8%. Note that Medley's slightly longer detections come with massive bandwidth savings, which will be shown later in this section. Domain Failure: Next, we explore the effect of massive failures in an area (e.g., nodes connected to a single power breaker). 49 nodes are located in five clusters in the square area of interest, and each run randomly fails a whole cluster.
From Fig. 7 (bars similar to Fig. 6) we observe that the average first detection time stays low when m < 2, and as expected it increases as m rises. The increase in detection time with m is because of the ping localization under higher m, implying that the typical way a detection proceeds at higher m is from the edges of the failed cluster towards the cluster's middle. In comparison, lower values of m would detect nodes near the middle of the failed cluster much quicker due to the higher probability of far-away non-faulty pingers. Under high m, both active and passive reduce detection time, with passive more effective since it provides a higher chance to detect nodes near the "middle" of the failed cluster. Fig. 8 shows the CDF of the hop count of messages. Point(x, y) means y% of messages travel fewer than x hops. As expected, lower m (basic SWIM with uniform pinging) incurs far more hops, while Medley localizes traffic. Active feedback does not affect traffic much, since the ping probability modification occurs only among nodes with already-high pinging probabilities, i.e., already close to pinger. Passive feedback raises traffic as farther nodes are affected.
Since Medley's goal is to minimize both communication cost (messages sent, counting multiple hops) and detection time, we measure the square root of their product in Fig. 9, for the Random topology. Each experimental run was identically long at 300K time units, so trends would remain unchanged if we normalized by run length. With active-feedback, Medley-F's product cost (under m = 3) is 37.8% lower than base SWIM's, as active-feedback lowers communication cost effectively. Applying passive-feedback, and active+passive, do not bring higher benefits, since passive sends faraway pings. However, the product cost is still lowered by up to 30% and 31.2% respectively compared to basic SWIM. At m > 3, in all feedback-based strategies, the communication reduction balances out the detection time increase.
2) False Positive Rate: We measure the rate of false detections, which are non-faulty nodes mistakenly detected as failed (this may occur due to slow nodes, dropped packets, etc.). Because false detections are affected by link layer behaviors, we use the high-fidelity NS3 simulator under 25 nodes. In Table II, we drop a random fraction r loss of packets (on hops). We measure false positive rate as the fraction of time, over the entire run, that a false positive detection persists, i.e., fraction of time that at least one non-faulty node is considered failed by at least one other non-faulty node.
In Table II, higher packet loss rates imply higher false positive rates, as expected. We also observe that false positive rate drops with increasing m (for a given packet loss rate). This is because at lower m, pings and acks have to transit more hops, thus increasing the chances that at least one of the hops will drop the packet, and a non-faulty node will be detected as failed due to a timeout. Further, at higher m, the suspicion (Section III-A) arising from a failure detection has a higher chance of being resolved due to the more repetitive and localized nature of pings.
B. Scalability
We evaluated the scalability of Medley and Medley-F up to N = 2048 nodes under m = 3, Random topologies, and in a square area, with fixed density of 0.22 node/sq meter. Fig. 10 shows: a) dfdt: first detection time (same as Fig. 5), and b) msdm: median dissemination time (stacked atop dfdt). At large N, runs took long (e.g., N=2048 took 40 min per run, and thus 5 days for the full experiment), and so we truncate these experiments. While Medley is complete, for full transparency of results we also show the incomplete dissemination (due to experiment truncation) in the lower part of the plot. We plot the median and also plot standard deviation bars. Each data point in the plot is from 1000 runs, with the exception of 200 simulation runs at N = 2048.
We observe that: a) detection time is constant and insensitive to system size, b) median dissemination time increases logarithmically with system size (note the logarithmic x axis), and c) both active and passive strategies reduce dissemination time, with passive being both faster and able to reduce incomplete dissemination.
C. Performance Under Mobility
While Medley is intended for static topologies, we show that it is tolerant to moderate amounts of mobility. The experiment in Fig. 11 starts with 49 nodes in a random topology over a 15m × 15m area. Each run starts Medley in a new topology and reaches a steady state. Then we instantly move a fraction of the nodes in random directions by distances randomly uniformly chosen among [0, min(X meters, distance to edge of square)] (X is the x axis value on the plots). With this move, we do not update Medley's internal topology-related data at any nodes: distances, ping probabilities, etc., all remain stale from the pre-move topology. However, routing tables are updated post-move, as would be expected in a mobile network so that packets can be routed correctly.
In Fig. 11, the solid lines show the post-move performance of Medley, still operating with stale (pre-move) internal data structures. Dotted lines show Medley with corrected post-move internal data. Essentially, the gap between the solid and dotted lines shows the effect of Medley continuing with stale data structures. We observe that: a) at small mobility up to X = 4 meters, even with up to 40% of nodes moved, stale data does not affect Medley performance; b) when few nodes move (10% plot), larger mobility distances can be tolerated: the metric rises by at most 7.1% at X = 16; c) when more nodes move (40% plot), metric degradation is worse, at 35% at X = 16. Overall, we conclude that: 1) moderate mobility degrades Medley performance only moderately, and 2) Medley continues offering low communication and detection times even if its internal data (ping distances and hence ping probabilities) remain stale.
D. Deployment Evaluation
We implemented a prototype of Medley in the Raspberry Pi (RP) 4 [23] environment. Our Java implementation was around 3000 lines of code, under Raspbian 4.19. We deployed Medley in a network of 16 IoT devices in our lab space. Figs. 12(a) and 12(b) show a photograph and a map of one of our topologies. This random topology was in a 6m × 6m area (grid lines are only for reference purposes). Each device was a Raspberry Pi 4 model B, with 2GB LPDDR4 RAM and a Broadcom BCM2711, 1.5 GHz quad-core Cortex-A72 CPU. While Medley works modularly with any ad-hoc routing protocol, for concreteness we use OLSR routing [24] due to its ease of configurability for Pi4s, and its popularity in discussion forum posts. Since the signal strength of the Pis was too strong to produce multi-hop routing within the limited deployment area, we attenuated each Pi by both: a) consistently wrapping it in aluminum foil, and b) setting its transmit power to 15 dBm, to force more multi-hop transmissions. The red lines in Fig. 12(b) show the resulting routing paths. Prior to these experiments, we performed benchmark experiments to verify that this attenuation was stable and consistent across Pis.
1) Failure Detection and Dissemination Latency: From Fig. 13 we observe that failure detection time and dissemination time both increase as m becomes larger. (The plot used 32 data points per failure, with average and standard deviation shown.) This is because disseminating failure information about unlucky nodes (e.g., nodes 0 and 9 in Fig. 12) takes a while, since spatial pinging (and hence piggybacking of failure information) stays largely local, especially at high m. Similar to the simulation results, both active-feedback and passive-feedback produce benefits for first detection time and dissemination time. From the simulation (Section VII-A), we expected active-feedback to work best for first detection time and passive-feedback to be effective for dissemination latency reduction. In the deployment, active-feedback and applying both strategies do act as expected. However, at high m, passive-feedback performs poorly on dissemination time, because the benefits of multi-hop dissemination do not emerge at our smaller deployment scale. Passive-feedback may be preferable only at larger scales.
2) Bandwidth Cost Over Time: We denote links that lie on more routing paths (of node pairs) as hotter links, and those on fewer paths as colder links. For simplicity we used a smaller topology and a fixed routing table with 7 Raspberry Pis [22]. Fig. 14 plots real-time bandwidth on a hotter link and a colder link. In each run a node (either a hotter node or a colder node) fails at time 60. Compared to m = 0, m = 3 consumes lower bandwidth on average (61.8% less for the hotter link, and 52.9% less for the colder link), but fluctuates within a super round.
Both far pings and local pings tend to go through hotter links. Bandwidth cost is high right after new nodes join (time 5 to 10) and right after failures occur (time 60 to 70)-this is due to increase in indirect pings. Larger exponent values (m) mean that a failure will cause bandwidth to rise more (3× at m = 3 and 1.5× at m = 0). Yet the peak bandwidth consumption in Medley (m = 3) stays lower than base SWIM's (m = 0).
At high m bandwidth usage has a periodic behavior caused by the cyclical nature of the super-round. Fig. 15 depicts the bandwidth and FFT for a 600s run with no failures. We first observe that the bag selection strategy does not affect average bandwidth. Second, the random selection from bag has lower bandwidth fluctuation over time, while pass-based has bigger amplitudes. This is because in the pass-based approach (Algorithm 1), the pings in the second half of each super-round tend to focus on close neighbors (a small group of nodes which have higher counts in the bag), leading to temporally unbalanced communication load on links. In comparison, selecting from the bag targets at random (rather than via passes) has less pronounced periodicity.
Although the random strategy benefits from balanced bandwidth, it has longer detection times: 2 × ((N − 2) · α) + 1 periods, almost twice that of the pass-based approach (for high m). If the application prioritizes reducing detection time over minimizing bandwidth, the pass-based approach is preferable.
VIII. DISCUSSION
Partial Membership Lists: Medley maintains full membership lists, useful for building a swath of distributed algorithms (Section I). Nevertheless, full membership lists can be "pared" down to partial membership lists, without affecting properties, while reducing overhead. Two examples follow. Ex. 1: If a multicast tree (built atop Medley) uses only nearby neighbors, the partial membership list can maintain mostly nearby neighbors. Ex. 2: It is well known that uniformly-randomly-selected partial membership lists give identical properties to a full list, for gossip multicast applications [25]. For this case, Medley's partial membership lists could be built in one of two ways: i) apply a uniform-random selection strategy to pick the partial membership list, and use spatial pinging, or ii) apply the spatial distribution to pick the partial membership list, and use uniform-random pinging. In both cases [25] would extend, meaning that gossip over Medley with partial lists would behave identically to gossip over Medley with full lists.
Topology Optimizations: An open direction is leveraging knowledge of the network topology. For instance, one could avoid intersecting routes for pings, route pings/acks so as to avoid failure domains, and avoid routing via failed nodes.
IX. RELATED WORK
Classical Failure Detection: Failure detection in datacenters is well-studied. The earliest failure detectors send periodic "I am alive" heartbeats [11] to all other nodes or to a subset of nodes. A timeout on the next heartbeat leads to failure detection. Heartbeats may be multicast or gossiped [12] or spread hierarchically [26]. As described earlier, SWIM [9] is the inverse of heartbeating, relying on pinging, and has bandwidth provably within a constant factor of optimal. FUSE [15] disseminates failure information via applications, to reduce network costs.
Failure Detection in IoT Networks: Existing IoT failure detection schemes largely focus on data anomalies and can be used orthogonally with Medley. Sympathy [27] uses flooding and aggregates distributed data at the sink, detecting failure by finding insufficient flow of incoming data. Memento [28] uses a tree for failure monitoring, limiting its scalability under failures. Network-level delays and packet traces can be used for failure detection [29], [30]; yet these are hard to analyze mathematically. DICE [6] uses context (e.g., sensor correlation, state transition probabilities) to identify anomalous readings and their sensor nodes. All the above works can be used orthogonally with Medley. Asim et al. [31] partition the network into cells, detect failures within cells, and multicast this information across cells; this, however, assumes a homogeneous network.
X. CONCLUSION
We have presented the design, analysis, and implementation of Medley, a decentralized membership service for distributed IoT systems running atop wireless ad-hoc networks. Our key idea is a spatial failure detector that prefers pinging nearby nodes with an exponentially higher probability. Compared to classical SWIM, Medley and its variants detect failures just as quickly, while lowering the product of failure detection time and communication cost by 37.8%, and incurring low false positive rates of around 2% even with 20% dropped packets. Active and passive feedback reduce tail detection time by up to 31%, and dissemination time by up to 54%. Code is available at: http://dprg.cs.uiuc.edu/downloads.php. | 2022-08-06T15:15:35.299Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "e6418dff00b7c41f276640876ce5e62726ec00cf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1109/tnsm.2022.3196268",
"oa_status": "HYBRID",
"pdf_src": "IEEE",
"pdf_hash": "b4fe1bc19ae35a562499f18c4823de556e0497f7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
251988433 | pes2o/s2orc | v3-fos-license | FORMULATION AND EVALUATION OF FAST DISINTEGRATING TABLET OF GLIMEPIRIDE
Objective: Glimepiride is a third-generation sulphonylurea agent used for the treatment of type II diabetes mellitus. Glimepiride is given once daily in doses of 1-4 mg. The objective of the present study was to formulate and evaluate fast disintegrating tablets of Glimepiride. Method: The fast dissolving tablets of glimepiride were prepared by the direct compression method using different disintegrating agents. The drug and the excipients were evaluated for angle of repose, bulk density, tapped density, Carr's index and Hausner's ratio to determine the flow properties of the powder. The formulated tablets were evaluated for thickness, hardness, friability, wetting time, water absorption ratio, dissolution and disintegration time. Result: Fast disintegrating tablets were successfully prepared by the direct compression method. The disintegration time was less than 1 minute, which is acceptable. The obtained results clearly indicated that the prepared tablets of glimepiride were fast disintegrating. Conclusion: Fast disintegrating tablets of glimepiride were prepared with a disintegration time of less than 1 minute; this will help in the further improvement of the dosage form and formulation and will help to promptly control the rising glucose level of chronic diabetic patients.
Melting Point
Reduce the substance to a very fine powder, introduce it into a capillary tube, and record the temperature at which it melts (capillary method).
Tapped Density
Tapped density of the powder was measured by gently pouring some amount of sample through a glass funnel into a graduated cylinder. The cylinder was tapped from a height of 2 inches until a constant volume was obtained (100 taps). The volume occupied by the sample after tapping was recorded and the tapped density was calculated. It is expressed in gm/ml.
Angle of Repose
The angle of repose of the powder was determined by the fixed funnel method. The powder was poured through a funnel that can be raised vertically until a maximum cone height (h) was obtained. The radius of the heap (r) was measured. The angle of repose was then calculated from the relation tan θ = h/r, i.e., θ = tan⁻¹(h/r). Friability test: Ten tablets were randomly selected and weighed. Friability was measured using a Roche friabilator. The pre-weighed tablets were placed in the plastic chamber of the friabilator attached to a motor revolving at a speed of 25 rpm for 4 min. The tablets were then de-dusted, accurately reweighed, and the percentage loss in weight (friability) was calculated [14].
Content Uniformity
The tablets were kept in a 100 ml volumetric flask containing phosphate buffer pH 4.4 for 24 hrs. When the tablets were completely dissolved, the solution was centrifuged and the supernatant was collected. Absorbance was measured spectrophotometrically at 228 nm. Dilutions were made using phosphate buffer (pH 7.8) as required [12].
Wetting Time and Water Absorption Ratio
These two parameters are related to each other and were determined in the same experiment. A tissue paper folded five times into a circular form was placed in a petri dish of 10 cm diameter. Ten milliliters of water containing 0.5% nigrosine, a water-soluble dye, was added to the petri dish; the dye solution was used to identify complete wetting of the tablet surface. The tablet was carefully placed on the surface of the tissue paper in the center of the petri dish at 25 °C.
The time required for water to reach the upper surface of the tablet and to completely wet it was noted as the wetting time. The measurement was carried out six times. Wetting time was recorded using a stopwatch and is presented as mean ± standard deviation [11].
Dissolution Test
The in vitro drug release study was performed using a USP dissolution apparatus (paddle apparatus). The dissolution study was carried out for 12 hours.
Phosphate buffer of pH 7.8 (900 ml) was used as the dissolution medium. Samples of 5 ml each were withdrawn every hour for a period of 12 h. The volume in the dissolution vessel was kept constant by equal replacement with fresh medium. The samples were collected in test tubes after filtration through filter paper. The amount of drug in the aliquots was quantified by measuring the absorbance of the sample at 288 nm spectrophotometrically, using phosphate buffer pH 7.8 (dissolution medium) as the blank [7].
Disintegration Test
The disintegration of the tablets was determined using a USP disintegration testing apparatus type II with pH 6.8 phosphate buffer as the disintegrating medium. The medium was maintained at 37 ± 0.5 °C throughout the test. Six tablets were placed into the apparatus and the disintegration time was recorded. Measurements were carried out in replicates of six and the mean ± standard deviation was recorded [7].
RESULTS
Different tests were performed on powder material before compression of tablet.
Pre-compression Parameters
The physical appearance of the powder material was solid (amorphous). The melting point determined by the capillary method was 207 ˚C. Bulk density and tapped density were found to be 0.499 and 0.898 respectively. Carr's index and Hausner's ratio were found to be 16% and 1.16 respectively. The angle of repose was measured as 33˚.
Post compression Parameters
In the weight variation test, the standard deviation was 7.3%, which is within the acceptable range.
The hardness of all prepared tablets was in the range 2.6-3.5 kg/cm², which is acceptable. The thickness of the tablets varied within ±5% of the mean, which is acceptable for fast dissolving tablets. The friability was 0.98%, indicating good mechanical resistance of the tablets.
Water absorption ratio and wetting time, which are important criteria for understanding the capacity of the disintegrants to swell in the presence of a small amount of water, were found to be in the range of 64-84% and 22-94 seconds respectively. Drug content was found to be in the range of 96 to 98%, which is within acceptable limits. Disintegration time was less than 1 minute, which is within acceptable limits for fast disintegrating tablets. In-vitro dissolution studies showed that more than 50% of the drug was released from the formulation within 5 min.
FTIR studies did not indicate any excipient incompatibility, either during mixing or after compression. | 2022-09-02T15:38:14.870Z | 2018-07-31T00:00:00.000 | {
"year": 2018,
"sha1": "7a24d079dd04c577cb7625caf4dea02b642541c9",
"oa_license": "CCBY",
"oa_url": "https://ammanif.com/journal/jcp/index.php/home/article/download/13/5",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "667e6ec0c0b25a012795c8a6880456179831951f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
16777926 | pes2o/s2orc | v3-fos-license | Recent Advances in the Emission and Functions of Plant Vegetative Volatiles
Plants synthesize and emit a large variety of volatile organic compounds, which possess extremely important ecological functions. In most cases, plant volatiles are liquids rather than gases at room temperature. Some volatiles are emitted "on demand" when plants, especially their vegetative parts, are exposed to abiotic or biotic stress. In this review, we summarize some of the highlights of research on plant vegetative volatile emission and functions published during the past few years.
Introduction
Plants synthesize and emit a large variety of volatile organic compounds. In principle, plant volatiles are low molecular weight metabolites (<300 Da) with a relatively low boiling point (<260 °C). These volatiles can be emitted from flowers, leaves, fruits, and roots into the atmosphere or soil, allowing the plant to interact with other organisms [1]. To date, more than 1700 volatiles have been identified from more than 90 plant families, representing approximately 1% of all plant specialized metabolites currently known [2]. According to their chemical structure, plant volatiles can be classified into hydrocarbons, alcohols, aldehydes, ketones, ethers, and esters. In plant science research, plant volatiles are generally divided into volatile terpenes, volatile phenylpropanoids/benzenoids, and volatile fatty acid derivatives according to their different biosynthetic pathways. The synthesis of plant volatiles involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions [3]. Current research on plant volatiles is mostly concentrated on answering the following questions: (1) how are plant volatiles formed? (2) how are plant volatiles emitted? and (3) why do plants emit volatiles, or what are the functions of plant volatiles? There are many reports and reviews [1][2][3][4][5][6][7][8] on the three issues above. In general, plant volatiles are formed from the mevalonate/2-C-methyl-D-erythritol 4-phosphate pathway, the shikimate pathway, and the lipoxygenase pathway, and their functions include attraction of pollinators, direct defense against herbivores, attraction of natural enemies of herbivores, and within-plant or plant-plant signaling. The rapid progress in technologies for metabolomics, proteomics, transcriptomics, and genomics allows the formation of plant volatiles to be rigorously investigated at the biochemical and molecular levels. Researchers are paying more attention to elucidating the emission and functions of plant volatiles, especially vegetative volatiles.
Herbivore Induced Events Relating to Volatile Emission from Vegetative Parts
Plant volatiles emitted from vegetative parts are known to be involved in many types of biotic interactions. When herbivores attack plants, plants are subjected to mechanical damage and herbivore-derived elicitors, which can induce a series of events in plants. Several herbivore-derived elicitors of induced plant volatile emission have been identified, including fatty acid-amino acid conjugates [9,10], β-glucosidase [11], inceptin [12], caeliferins [13], and an unidentified heat-labile constituent [14]. These reported elicitors were identified in chewing herbivores, whereas elicitors of induced plant volatile emission derived from piercing-sucking herbivores have not yet been reported. This may be because it is not easy to obtain sufficient amounts of elicitors from the small bodies of piercing-sucking herbivores. So far, no direct evidence indicates that piercing-sucking herbivores contain elicitors inducing plant volatile emission, but β-glucosidase was tentatively identified in the rice brown planthopper and proposed as a potential elicitor inducing the emission of rice volatiles [15]. In plants, glycosidically-bound volatiles occur in vacuoles, whereas β-glycosidases have been observed to be localized in cell walls and the cavity areas among cells [16]. This compartmentation of substrates and enzymes in plant cells explains why evidence for the hydrolysis of glycosidically-bound volatiles in intact plants is rare [17]. Therefore, an exogenous β-glucosidase from herbivores may come into contact with glycosidically-bound volatiles in plants through the piercing-sucking action of the herbivores.
The events during the period from herbivore attack to volatile emission (Figure 1) generally represent the first occasions at which measurements can be made. Maffei et al. (2007) summarized well the earliest events within the first seconds to minutes, which are responsible for recognition and triggering of signal transduction pathways in the plant-insect interaction [18]. The earliest measurable events are changes in the plasma transmembrane potential, immediately followed by changes in the intracellular cytosolic Ca2+ concentration and the formation of H2O2, which take place within seconds or minutes after attack and are mediated by insect oral secretions. Although Ca2+ influx has been demonstrated in plant-insect interactions, the role of anion and proton channels is not yet clear. Herbivore attack induces activation of the NADPH oxidase complex that generates the superoxide anion, which is quickly converted to H2O2 under the action of superoxide dismutase. H2O2 can accumulate in the extracellular matrix or enter the cell, and afterwards induces a variety of later events, such as kinase activation and signaling by the phytohormones jasmonic acid (JA) and salicylic acid (SA), which are detectable within minutes or hours. Kinases mediate subsequent steps in the interaction between plants and insects and are connected to later steps of phytohormone signaling. In plants, mitogen-activated protein kinases (MAPKs) regulate cellular responses to both external and endogenous stimuli and play an important role in the signaling of biotic stresses, pathogens, and plant hormones. In addition, calcium-dependent protein kinases are regularly involved in signal transduction of a variety of biotic and abiotic stresses. These kinases subsequently enhance transcript levels of genes involved in JA, SA, JA-Ile, and ethylene biosynthesis, which in turn enhance the levels of these phytohormones [19]. The genes involved in the formation of plant volatiles can be activated by these phytohormones, especially JA, leading to the strong emission of herbivore-induced volatiles [20].
Multiple Stresses Induced Volatile Emission
Individual biotic and abiotic stresses, such as high temperature, high light, and herbivore attack, are well known to increase the emission of volatiles from vegetative parts of plants. In nature, plants are exposed to multiple stresses, either simultaneously or sequentially, causing much more complex volatile profiles than have usually been investigated in studies of individual stress-induced volatiles. However, in current research, stresses have usually been considered as single or independent factors. Much less is known about the effect of multiple or co-occurring stress factors. When two or more stresses co-occur, in some cases their effects are additive, while in other cases the influence of one stress may have priority [21]. A study on maize indicated the additive effects of biotic and abiotic stresses: in maize, the combination of high temperature and simulated herbivore infestation resulted in higher volatile emission than either individual stress [22]. Besides the co-effects of biotic and abiotic stresses, different biotic stresses such as different herbivore attackers can affect the emission of herbivore-induced volatiles and, consequently, the attraction of carnivorous natural enemies of the inducing herbivores [23]. For instance, using a Y-tube olfactometer, it was found that predatory mirid bugs and predatory mites prefer the herbivore-induced volatile blends emitted by pepper plants and lima bean plants infested by two herbivore species (aphids and spider mites, caterpillars and spider mites) over the blends emitted by plants infested by either herbivore species separately [23], which may be due to the antagonism or synergy between the JA and SA signaling pathways. Such crosstalk between phytohormone pathways also occurs when plants are subject to simultaneous attack by both herbivores and pathogens [24]. For example, in maize, fungal infection reduced the emission of volatiles induced by herbivores alone by about 50%, possibly suggesting a diversion of plant resources from anti-herbivore to presumptive anti-pathogen defenses. The authors hypothesized that fungal infection could stimulate the SA-based signal transduction pathway, which would reduce signaling through the herbivore-triggered JA pathway because of negative crosstalk [24]. These observations provide some evidence that multiple attackers can enhance, attenuate, or otherwise alter stress-induced volatile responses. However, more studies are needed to understand whether volatile emission induced by multiple stresses is governed by interlinkages among phytohormones such as JA, SA, and ethylene.
Effects of Green Leaf Herbivore Attack on Floral Scent Emission
Emissions from vegetative parts induced by green leaf herbivore attack have attracted much attention from researchers. Some researchers have directed their interest to investigating whether green leaf herbivore attack can affect emissions from floral parts. Effmert et al. (2008) [25] reported that Nicotiana suaveolens plants retained the quality of the volatile organic compound composition as well as the quantity and emission patterns of their floral volatile organic compounds when the green leaf tissues experienced considerable damage by Manduca sexta. Moreover, there was no immediate floral response or delayed interference with floral volatile organic compound production/emission due to green leaf herbivory [25]. The authors proposed that metabolism in flowers at and post-anthesis is an autonomous process and is independent of metabolic changes in green leaves in Nicotiana suaveolens plants. By this sustaining mechanism, Nicotiana suaveolens plants ensure sexual reproduction even under unfavorable conditions [25]. However, not all plants show similar phenomena. Theis et al.
(2009) reported that mechanical simulation of chewing increased volatile terpenoid emission from male flowers [26]. Kessler et al. (2010) found that damage by Manduca spp. caterpillars caused reduced emission of the floral volatile benzyl acetone along with major changes in flower phenology [27]. These reports suggest that vegetative feeding by chewing herbivores can result in floral volatile emissions that are either enhanced, reduced, or unaffected. Furthermore, a recent finding demonstrates that phloem-feeding herbivory can affect floral volatile emission, and that the outcome of the interaction between herbivory and floral chemistry may differ depending on the herbivore's feeding mode and degree of specialization [28].
Vegetative Volatiles Reduce Negative Effects of Stress on Plants
Several positive functions have been described for volatiles in vegetative plant tissues. The first is resistance to high temperatures and oxidative stress. Volatiles are known to be involved in many types of biotic interactions, but they, especially volatile isoprenoids, also play important but relatively unappreciated roles in abiotic stress responses [29]. High temperature is known to reduce the leaf photosynthetic rate in many tree species. In contrast to non-emitting birch leaves, isoprene-emitting aspen leaves are more resistant to heat stress, which may be because isoprene production increases tolerance of high temperatures [30]. Similarly, when exposed to oxidative stress, non-emitting transgenic tobacco plants showed a classical ozone-induced cell death response, whereas isoprene-emitting plants resisted ozone-induced damage better [30]. Based on these observations, Vickers et al. proposed a plausible mechanism, the "single biochemical mechanism for multiple stressors": abiotic stress responses generally involve the production of reactive oxygen species in plant cells, volatile isoprenoids mitigate the effects of oxidative stress by modulating the oxidative status of the plant, and the protective effect against abiotic stress is exerted through direct or indirect improvement in resistance to damage by reactive oxygen species [29]. A recent investigation of the influence of heavy metal stress on plant volatiles also supported the "single biochemical mechanism for multiple stressors" model, suggesting that heavy metal stress is a prime factor for herbivore-induced plant volatile emission, and that Cu stress correlated with increased levels of reactive oxygen species in roots and priming of herbivore-induced JA in leaves [31].
The second function is direct defense against herbivores. Many reports indicate that volatiles emitted from vegetative tissue can act as direct repellents of herbivores. For instance, green leaf volatiles can improve plant resistance against herbivores and fungal pathogens [32], and blends of monoterpenes, sesquiterpenes, or green leaf volatiles were able to deter ovipositing females of several lepidopteran species [21]. Monoterpenoids, especially geraniol, can induce apoptosis-like cell death [33,34], which occurs as a defense reaction against bacterial infection [35].
The third function is attraction of herbivore enemies. Plants can defend themselves against herbivores by attracting natural enemies of the herbivores. This is supported by evidence that transgenic plants engineered to produce specific terpenes and green leaf volatiles are involved in enemy attraction [32,36]. Moreover, in some cases, a single gene can be sufficient to mediate the indirect defense of plants against herbivore attack, namely attraction of herbivore enemies. For instance, transcripts of tps10, a terpene synthase forming herbivory-induced sesquiterpene hydrocarbons, were found to be restricted to herbivore-damaged maize. Overexpression of tps10 in Arabidopsis thaliana resulted in plants emitting high quantities of sesquiterpene products. After experience with the plants, parasitic wasps preferred transgenic Arabidopsis emitting TPS10 sesquiterpenes over the wild type [32].
The fourth function is within-plant or plant-to-plant signaling. Airborne communication among neighboring plants has been a controversial topic for many years, but there are now multiple examples in the literature where plants responded to signals mediated by volatiles emitted from neighboring plants under herbivore attack [37]. The controversy surrounding plant-to-plant interactions also stimulated further research regarding intra-plant signaling among different organs. Since the studies of Narváez-Vásquez and Ryan (2004) showing how wounding triggers an increase in defenses in distant leaves, herbivore-induced within-plant signaling has usually been assumed to be transmitted via vascular connection [38]. Heil and Bueno (2007) demonstrated that herbivore-induced volatiles can serve as an external signal for within-plant (Lima bean) signaling [39]. Also, Frost and coworkers (2007) showed that within-plant (hybrid poplar) signaling mediated by volatiles can overcome vascular constraints to systemic signaling, and suggested that intra-plant signaling may have equal or greater ecological significance than signaling between plants [40]. These reports provide evidence that volatiles involved in signaling can improve the defense abilities of neighboring tissues or plants and, thus, reduce the damage to neighboring tissues or plants. Although it is unclear how volatiles improve the defense abilities of neighboring tissues or plants, there are a few reports investigating the effects of volatiles on neighboring tissues or plants at the gene or metabolite level. Yao et al. (2011) found that UV-C-irradiated plants produce a volatile signal, such as methyl salicylate, methyl jasmonate, or one unidentified volatile compound, which triggers an increase in genome instability in neighboring nonirradiated Arabidopsis thaliana plants. This volatile signal is interspecific, as UV-C-irradiated Arabidopsis plants transmit genome destabilization to naive tobacco (Nicotiana tabacum) plants and vice versa [41]. Dong et al. (2011) performed a principal component analysis of metabolites (m/z 70-1000) in undamaged tea (Camellia sinensis) leaves exposed or not exposed to herbivore-induced volatiles, and found that external signaling via herbivore-induced volatiles may lead to more drastic changes in the metabolite spectrum of tea leaves than internal signaling via vascular connections [42]. Quite recently, Sugimoto et al. (2014) obtained a breakthrough finding on how plants receive volatiles and, consequently, how they fortify their defenses [43]. Undamaged tomato plants absorbed the airborne (Z)-3-hexenol emitted from neighboring conspecific plants exposed to herbivore attack and subsequently converted the alcohol to (Z)-3-hexenylvicianoside. The glycoside negatively affected the performance of common cutworms and suppressed their growth and survival rates [43]. The accumulation of the glycoside in the receiver plants explained the defense acquired via "smelling" their neighbors. In addition, the authors found that (Z)-3-hexenylvicianoside biosynthesis was independent of JA signaling, revealing a previously unidentified mechanism of plant defense [43].
Quality or Quantity of Volatiles and Plant Defense Against Stress
Herbivore-induced plant volatiles have several different metabolic origins, of which the isoprene-derived terpenoids and fatty acid-derived green leaf volatiles are the best-studied classes [2]. Terpenoids are released with a delay from the whole plant, not just the attacked leaves, after a few hours or with the plant's next photosynthetic phase. Because of this delay, the terpenoids likely function in the long-distance attraction of carnivores. Unlike terpenoids, green leaf volatiles are released immediately from wounded leaves. Therefore, green leaf volatiles likely provide rapid information about the exact location of a feeding herbivore [44]. In general, herbivore attack induces an increase in plant volatile emission. Less is known about the effects of the quality of volatiles on plant defense against herbivore attack. Bruce et al. (2010) [45] reported an unusual case in which an African forage grass releases (Z)-3-hexenyl acetate as its major volatile; surprisingly, after stemborer oviposition, the release of (Z)-3-hexenyl acetate was reduced, whereas other minor volatiles did not show significant changes. In tests of the behavioral responses of parasitoids, parasitoids preferred volatiles from grass carrying eggs over those from healthy grass. In addition, in oviposition bioassays, grass with eggs was less preferred for subsequent oviposition by stemborers than grass without eggs. These results suggest that changes in volatile ratios can alter the ability of herbivores to locate their host [45]. Moreover, Allmann et al. (2010) reported that attack by specialist Manduca larvae and the addition of their oral secretions to mechanical wounds induce a rapid (Z)/(E) isomeric change in the green leaf volatile release of Nicotiana plants. This change lowers the (Z)/(E) ratio of the green leaf volatile blend and increases the predation rate of a predator that feeds on eggs of the herbivore [14] (Figure 2). On the other hand, the larvae may benefit from the enhanced antimicrobial properties of a green leaf volatile blend enriched in (E)-2-hexenal [46] (Figure 2). These findings suggest that, in some cases, insect responses in this multitrophic interaction depend on the quality of volatile emission rather than merely the quantity.
Concluding Remarks and Perspectives
This paper summarizes some of the highlights of plant vegetative volatile emission and functions research published during the past few years. The profile of the events from herbivore attack to plant vegetative volatile emission is basically characterized. However, to date, how volatiles are released from plant cells is largely unknown. Before being emitted into the environment, plant volatiles must cross the membrane, the aqueous cell wall, and, sometimes, the cuticle. Volatiles are primarily nonpolar compounds which preferentially partition into membranes, making diffusion into aqueous compartments slow, although it is presumed that volatiles move through each barrier via passive diffusion. However, Widhalm et al. (2015) proposed that volatiles must cross multiple cellular compartments to reach the environment, and biological mechanisms involved in trafficking other hydrophobic compounds must contribute to volatile emission, which may reduce barrier resistances [47]. | 2016-03-01T03:19:46.873Z | 2016-01-22T00:00:00.000 | {
"year": 2016,
"sha1": "383d9892775820a27074dd65b9b177ef6f5f2e39",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/21/2/124/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "383d9892775820a27074dd65b9b177ef6f5f2e39",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17786716 | pes2o/s2orc | v3-fos-license | Identity Matters in Deep Learning
An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as \emph{batch normalization}, but was also key to the immense success of \emph{residual networks}. In this work, we put the principle of \emph{identity parameterization} on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for linear feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.
Introduction
Traditional convolutional neural networks for image classification, such as AlexNet ( [13]), are parameterized in such a way that when all trainable weights are 0, a convolutional layer represents the 0-mapping. Moreover, the weights are initialized symmetrically around 0. This standard parameterization makes it non-trivial for a convolutional layer trained with stochastic gradient methods to preserve features that were already good. Put differently, such convolutional layers cannot easily converge to the identity transformation at training time.
This shortcoming was observed and partially addressed by [9] through batch normalization, i.e., layer-wise whitening of the input with a learned mean and covariance. But the idea remained somewhat implicit until residual networks ( [6]; [7]) explicitly introduced a reparameterization of the convolutional layers such that when all trainable weights are 0, the layer represents the identity function. Formally, for an input x, each residual layer has the form x + h(x), rather than h(x). This simple reparameterization allows for much deeper architectures largely avoiding the problem of vanishing (or exploding) gradients. Residual networks, and subsequent architectures that use the same parameterization, have since then consistently achieved state-of-the-art results on various computer vision benchmarks such as CIFAR10 and ImageNet.
Our contributions
In this work, we consider identity parameterizations from a theoretical perspective, while translating some of our theoretical insight back into experiments. Loosely speaking, our first result underlines how identity parameterizations make optimization easier, while our second result shows the same is true for representation.
Linear residual networks. Since general non-linear neural networks are beyond the reach of current theoretical methods in optimization, we consider the case of deep linear networks as a simplified model. A linear network represents an arbitrary linear map as a sequence of matrices $A_\ell \cdots A_2 A_1$. The objective function is $\mathbb{E}\,\|y - A_\ell \cdots A_1 x\|^2$, where $y = Rx$ for some unknown linear transformation $R$ and $x$ is drawn from a distribution. Such linear networks have been studied actively in recent years as a stepping stone toward the general non-linear case (see Section 1.2). Even though $A_\ell \cdots A_1$ is just a linear map, the optimization problem over the factored variables $(A_\ell, \ldots, A_1)$ is non-convex.
In analogy with residual networks, we will instead parameterize the objective function as $\min_{A_1,\ldots,A_\ell} \mathbb{E}\,\|y - (I + A_\ell) \cdots (I + A_1)x\|^2$ (1.1). To give some intuition, when the depth $\ell$ is large enough, we can hope that the target function $R$ has a factored representation in which each matrix $A_i$ has small norm. Any symmetric positive semidefinite matrix $O$ can, for example, be written as a product $O = O_\ell \cdots O_1$, where each $O_i = O^{1/\ell}$ is very close to the identity for large $\ell$ so that $A_i = O_i - I$ has small spectral norm. We first prove that an analogous claim is true for all linear transformations $R$ with positive determinant. Specifically, we prove that for every linear transformation $R$ with $\det(R) > 0$, there exists a global optimizer $(A_1, \ldots, A_\ell)$ of (1.1) such that for large enough depth $\ell$, $\max_{1 \le i \le \ell} \|A_i\| \le O(1/\ell)$. Here, $\|A\|$ denotes the spectral norm of $A$. The constant factor depends on the conditioning of $R$. We give the formal statement in Theorem 2.1. The theorem has the interesting consequence that as the depth increases, smaller norm solutions exist and hence regularization may offset the increase in parameters.
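The following NumPy sketch illustrates the intuition above for the symmetric PSD special case: taking each factor to be the $\ell$-th root of the target gives an exact factorization whose residual parts $A_i = O^{1/\ell} - I$ shrink as the depth grows. The target matrix, dimension, and depth below are illustrative choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 4, 16

# Build a well-conditioned symmetric PSD target R (illustrative choice).
M = rng.standard_normal((d, d))
R = M @ M.T / d + 0.5 * np.eye(d)

# R^(1/L) via eigendecomposition (valid because R is symmetric PSD).
eigval, eigvec = np.linalg.eigh(R)
R_root = eigvec @ np.diag(eigval ** (1.0 / L)) @ eigvec.T

A = [R_root - np.eye(d) for _ in range(L)]  # residual factors A_i = O^(1/L) - I

# Check that (I + A_L) ... (I + A_1) reconstructs R and that each factor is small.
P = np.eye(d)
for Ai in A:
    P = (np.eye(d) + Ai) @ P
print("reconstruction error     :", np.linalg.norm(P - R))
print("max spectral norm of A_i :", max(np.linalg.norm(Ai, 2) for Ai in A))
```

Increasing L in this sketch shrinks the reported maximum spectral norm roughly like 1/L, matching the scaling discussed above.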
Having established the existence of small norm solutions, our main result on linear residual networks shows that the objective function (1.1) is, in fact, easy to optimize when all matrices have sufficiently small norm. More formally, letting $A = (A_1, \ldots, A_\ell)$ and $f(A)$ denote the objective function in (1.1), we can show that the gradients vanish only when $f(A) = 0$ provided that $\max_i \|A_i\| \le O(1/\ell)$. See Theorem 2.2. This result implies that linear residual networks have no critical points other than the global optimum. In contrast, for standard linear neural networks we only know, by work of [12], that these networks don't have local optima except the global optimum, but this doesn't rule out other critical points. In fact, setting $A_i = 0$ will always lead to a bad critical point in the standard parameterization.
Universal finite sample expressivity. Going back to non-linear residual networks with ReLU activations, we can ask: How expressive are deep neural networks that are solely based on residual layers with ReLU activations? To answer this question, we give a very simple construction showing that such residual networks have perfect finite sample expressivity. In other words, a residual network with ReLU activations can easily express any function of a sample of size $n$, provided that it has sufficiently more than $n$ parameters. Note that this requirement is easily met in practice. On CIFAR10 ($n = 50000$), for example, successful residual networks often have more than $10^6$ parameters. More formally, for a data set of size $n$ with $r$ classes, our construction requires $O(n \log n + r^2)$ parameters. Theorem 3.2 gives the formal statement.
Each residual layer in our construction is of the form x + V ReLU(U x), where U and V are linear transformations. These layers are significantly simpler than standard residual layers, which typically have two ReLU activations as well as two instances of batch normalization.
The power of all-convolutional residual networks. Directly inspired by the simplicity of our expressivity result, we experiment with a very similar architecture on the CIFAR10, CIFAR100, and ImageNet data sets. Our architecture is merely a chain of convolutional residual layers, each with a single ReLU activation, but without batch normalization, dropout, or max pooling as are common in standard architectures. The last layer is a fixed random projection that is not trained. In line with our theory, the convolutional weights are initialized near 0, using Gaussian noise mainly as a symmetry breaker. The only regularizer is standard weight decay ($\ell_2$-regularization) and there is no need for dropout. Despite its simplicity, our architecture reaches 6.38% top-1 classification error on the CIFAR10 benchmark (with standard data augmentation). This is competitive with the best residual network reported in [6], which achieved 6.43%. Moreover, it improves upon the performance of the previous best all-convolutional network, 7.25%, achieved by [15]. Unlike ours, this previous all-convolutional architecture additionally required dropout and a non-standard preprocessing (ZCA) of the entire data set. Our architecture also improves significantly upon [15] on both CIFAR100 and ImageNet.
Related Work
Since the advent of residual networks ( [6]; [7]), most state-of-the-art networks for image classification have adopted a residual parameterization of the convolutional layers. Further impressive improvements were reported by [8] with a variant of residual networks, called dense nets. Rather than adding the original input to the output of a convolutional layer, these networks preserve the original features directly by concatenation. In doing so, dense nets are also able to easily encode an identity embedding in a higher-dimensional space. It would be interesting to see if our theoretical results also apply to this variant of residual networks.
There has been recent progress on understanding the optimization landscape of neural networks, though a comprehensive answer remains elusive. Experiments in [5] and [4] suggest that the training objectives have a limited number of bad local minima with large function values. Work by [3] draws an analogy between the optimization landscape of neural nets and that of the spin glass model in physics ( [1]). [14] showed that 2-layer neural networks have no bad differentiable local minima, but they didn't prove that a good differentiable local minimum does exist. [2] and [12] show that linear neural networks have no bad local minima. In contrast, we show that the optimization landscape of deep linear residual networks has no bad critical point, which is a stronger and more desirable property. Our proof is also notably simpler illustrating the power of re-parametrization for optimization. Our results also indicate that deeper networks may have more desirable optimization landscapes compared with shallower ones.
Optimization landscape of linear residual networks
Consider the problem of learning a linear transformation $R : \mathbb{R}^d \to \mathbb{R}^d$ from noisy measurements $y = Rx + \xi$, where $\xi \sim \mathcal{N}(0, I_d)$ is a $d$-dimensional spherical Gaussian vector. Denoting by $\mathcal{D}$ the distribution of the input data $x$, let $\Sigma = \mathbb{E}_{x \sim \mathcal{D}}[xx^\top]$ be its covariance matrix.
There are, of course, many ways to solve this classical problem, but our goal is to gain insights into the optimization landscape of neural nets, and in particular, residual networks. We therefore parameterize our learned model by a sequence of weight matrices $A_1, \ldots, A_\ell \in \mathbb{R}^{d \times d}$, with $h_0 = x$ and $h_j = h_{j-1} + A_j h_{j-1}$ for $j = 1, \ldots, \ell$. Here $h_1, \ldots, h_{\ell-1}$ are the $\ell - 1$ hidden layers and $\hat{y} = h_\ell$ are the predictions of the learned model on input $x$. More succinctly, we have $\hat{y} = (I + A_\ell) \cdots (I + A_1)\,x$. It is easy to see that this model can express any linear transformation $R$. We will use $A$ as a shorthand for all of the weight matrices, that is, the $\ell \times d \times d$-dimensional tensor that contains $A_1, \ldots, A_\ell$ as slices. Our objective function is the maximum likelihood estimator, $f(A, (x, y)) = \|\hat{y} - y\|^2 = \|(I + A_\ell) \cdots (I + A_1)x - y\|^2$.
We will analyze the landscape of the population risk, defined as $f(A) = \mathbb{E}\big[f(A, (x, y))\big]$. Recall that $\|A_i\|$ is the spectral norm of $A_i$. We define the norm $|||\cdot|||$ for the tensor $A$ as the maximum of the spectral norms of its slices, $|||A||| := \max_{1 \le i \le \ell} \|A_i\|$. The first theorem of this section states that the objective function $f$ has an optimal solution with small $|||\cdot|||$-norm, which is inversely proportional to the number of layers $\ell$. Thus, when the architecture is deep, we can shoot for fairly small norm solutions. We define $\gamma := \max\{|\log \sigma_{\max}(R)|, |\log \sigma_{\min}(R)|\}$. Here $\sigma_{\min}(\cdot), \sigma_{\max}(\cdot)$ denote the least and largest singular values of $R$ respectively. We first note that the condition $\det(R) > 0$ is without loss of generality in the following sense. Given any linear transformation $R$ with negative determinant, we can effectively flip the determinant by augmenting the data and the label with an additional dimension: let $x' = [x, b]$ and $y' = [y, -b]$, where $b$ is an independent random variable (say, from a standard normal distribution), and let $R' = \begin{pmatrix} R & 0 \\ 0 & -1 \end{pmatrix}$. Then, we have that $y' = R'x' + \xi$ and $\det(R') = -\det(R) > 0$. Second, we note that here $\gamma$ should be thought of as a constant, since if $R$ is too large (or too small), we can scale the data properly so that $\sigma_{\min}(R) \le 1 \le \sigma_{\max}(R)$. Concretely, if $\sigma_{\max}(R)/\sigma_{\min}(R) = \kappa$, then we can scale the outputs properly so that $\sigma_{\min}(R) = 1/\sqrt{\kappa}$ and $\sigma_{\max}(R) = \sqrt{\kappa}$. In this case, we have $\gamma = \log\sqrt{\kappa}$, which will remain a small constant for a fairly large condition number $\kappa$. We also point out that we made no attempt to optimize the constant factors here in the analysis. The proof of Theorem 2.1 is rather involved and is deferred to Section A.
Given the observation of Theorem 2.1, we restrict our attention to analyzing the landscape of $f(\cdot)$ in the set of $A$ with $|||\cdot|||$-norm less than $\tau$, $B_\tau = \{A \in \mathbb{R}^{\ell \times d \times d} : |||A||| \le \tau\}$. Here, using Theorem 2.1, the radius $\tau$ should be thought of as on the order of $1/\ell$. Our main theorem in this section claims that there is no bad critical point in the domain $B_\tau$ for any $\tau < 1$. Recall that a critical point has vanishing gradient.
Theorem 2.2. For any τ < 1, we have that any critical point A of the objective function f (·) inside the domain B τ must also be a global minimum.
Theorem 2.2 suggests that it is sufficient for the optimizer to converge to critical points of the population risk, since all the critical points are also global minima.
Moreover, in addition to Theorem 2.2, we also have that any $A$ inside the domain $B_\tau$ satisfies $\|\nabla f(A)\|_F^2 \ge c \cdot (f(A) - C_{\mathrm{opt}})$ for a positive constant $c$ depending on $\tau$, $\ell$, and $\Sigma$ (equation (2.3)). Here $C_{\mathrm{opt}}$ is the global minimal value of $f(\cdot)$ and $\|\nabla f(A)\|_F$ denotes the Euclidean norm of the gradient. Equation (2.3) says that the gradient has fairly large norm compared to the error, which guarantees convergence of gradient descent to a global minimum ([11]) if the iterates stay inside the domain $B_\tau$, which is not guaranteed by Theorem 2.2 by itself.
Towards proving Theorem 2.2, we start off with a simple claim that simplifies the population risk. We also use $\|\cdot\|_F$ to denote the Frobenius norm of a matrix. Claim 2.3. In the setting of this section, we have $f(A) = \big\|\big((I + A_\ell) \cdots (I + A_1) - R\big)\Sigma^{1/2}\big\|_F^2 + C$. Here $C$ is a constant that doesn't depend on $A$, and $\Sigma^{1/2}$ denotes the square root of $\Sigma$, that is, the unique symmetric matrix $B$ that satisfies $B^2 = \Sigma$.
Next we compute the gradients of the objective function f (·) from straightforward matrix calculus. We defer the full proof to Section A.
The gradients of $f(\cdot)$ can be written as $\frac{\partial f}{\partial A_i} = 2\,(I + A_{i+1}^\top) \cdots (I + A_\ell^\top)\,E\,(I + A_1^\top) \cdots (I + A_{i-1}^\top)$, where $E := \big((I + A_\ell) \cdots (I + A_1) - R\big)\Sigma$ denotes the error matrix (equation (2.5)). Now we are ready to prove Theorem 2.2. The key observation is that each matrix $A_j$ has small norm and cannot cancel the identity matrix. Therefore, the gradient in equation (2.5) is a product of non-zero matrices, except for the error matrix $E$. Therefore, if the gradient vanishes, then the only possibility is that the matrix $E$ vanishes, which in turn implies $A$ is an optimal solution.
Proof of Theorem 2.2. Using Lemma 2.4, and the fact that each factor $I + A_j$ has smallest singular value at least $1 - \tau > 0$ whenever $\|A_j\| \le \tau < 1$, the Frobenius norm of each gradient slice can be lower bounded in terms of $\|E\|_F$, which in turn controls $f(A) - C_{\mathrm{opt}}$. Therefore we complete the proof of equation (2.3). Finally, if $A$ is a critical point, namely $\nabla f(A) = 0$, then by equation (2.3) we have that $f(A) = C_{\mathrm{opt}}$. That is, $A$ is a global minimum.
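As a numerical sanity check on Theorem 2.2, the sketch below runs plain gradient descent on the residual-parameterized linear objective with $\Sigma = I$, so the population risk reduces (up to an additive constant) to $\|(I + A_\ell)\cdots(I + A_1) - R\|_F^2$. The target, dimensions, step size, and iteration count are our own illustrative choices rather than settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, L, steps, lr = 4, 8, 500, 0.05

# Target close to the identity, which guarantees det(R) > 0 (illustrative choice).
G = rng.standard_normal((d, d))
R = np.eye(d) + 0.5 * G / np.linalg.norm(G, 2)

def product(mats):
    """Return (I + A_L) ... (I + A_1) for the list mats = [A_1, ..., A_L]."""
    P = np.eye(d)
    for Ai in mats:
        P = (np.eye(d) + Ai) @ P
    return P

A = [0.01 * rng.standard_normal((d, d)) for _ in range(L)]  # near-identity init

for _ in range(steps):
    E = product(A) - R                      # error matrix (Sigma = I here)
    grads = []
    for i in range(L):
        right = product(A[:i])              # (I + A_{i-1}) ... (I + A_1)
        left = product(A[i + 1:])           # (I + A_L) ... (I + A_{i+1})
        grads.append(2.0 * left.T @ E @ right.T)
    A = [Ai - lr * g for Ai, g in zip(A, grads)]

print("final risk :", np.linalg.norm(product(A) - R) ** 2)      # close to 0
print("max ||A_i||:", max(np.linalg.norm(Ai, 2) for Ai in A))   # stays small
```

In line with the theorem, gradient descent from a small-norm initialization drives the risk essentially to zero without encountering a bad critical point.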
Representational Power of the Residual Networks
In this section we characterize the finite-sample expressivity of residual networks. We consider residual layers with a single ReLU activation and no batch normalization. The basic residual building block is a function $T_{U,V,s}(\cdot) : \mathbb{R}^k \to \mathbb{R}^k$ that is parameterized by two weight matrices $U \in \mathbb{R}^{k \times k}$, $V \in \mathbb{R}^{k \times k}$ and a bias vector $s \in \mathbb{R}^k$, defined as $T_{U,V,s}(h) = h + V\,\mathrm{ReLU}(Uh + s)$ (3.1). A residual network is composed of a sequence of such residual blocks. In comparison with the full pre-activation architecture in [7], we remove two batch normalization layers and one ReLU layer in each building block. We assume the data has $r$ labels, encoded as $r$ standard basis vectors in $\mathbb{R}^r$, denoted by $e_1, \ldots, e_r$. We have $n$ training examples $(x^{(1)}, y^{(1)}), \ldots, (x^{(n)}, y^{(n)})$, where $x^{(i)} \in \mathbb{R}^d$ denotes the $i$-th data point and $y^{(i)} \in \{e_1, \ldots, e_r\}$ denotes the $i$-th label. Without loss of generality we assume the data are normalized so that $\|x^{(i)}\| = 1$. We also make the mild assumption that no two data points are very close to each other. Assumption 3.1. We assume that for every $1 \le i < j \le n$, we have $\|x^{(i)} - x^{(j)}\|_2 \ge \rho$ for some absolute constant $\rho > 0$.
Images, for example, can always be imperceptibly perturbed in pixel space so as to satisfy this assumption for a small but constant ρ.
Under this mild assumption, we prove that residual networks have the power to express any possible labeling of the data as long as the number of parameters is a logarithmic factor larger than $n$. It is common in practice that $n > r^2$, as is for example the case for the ImageNet data set where $n > 10^6$ and $r = 1000$.
We construct the following residual net using the building blocks of the form $T_{U,V,s}$ as defined in equation (3.1). The network consists of $\ell + 1$ hidden layers $h_0, \ldots, h_\ell$, and the output is denoted by $\hat{y} \in \mathbb{R}^r$. The first layer of weight matrices $A_0$ maps the $d$-dimensional input to a $k$-dimensional hidden variable $h_0$. Then we apply $\ell$ layers of building block $T$ with weight matrices $A_j, B_j \in \mathbb{R}^{k \times k}$. Finally, we apply another layer to map the hidden variable $h_\ell$ to the label $\hat{y}$ in $\mathbb{R}^r$; we refer to the resulting network as equation (3.2). We note that here $A_{\ell+1} \in \mathbb{R}^{k \times r}$ and $B_{\ell+1} \in \mathbb{R}^{r \times r}$ so that the dimensions are compatible. We assume the number of labels $r$ and the input dimension $d$ are both smaller than $n$, which is safely true in practical applications. The hyperparameter $k$ will be chosen to be $O(\log n)$ and the number of layers is chosen to be $\ell = \lceil n/k \rceil$. Thus, the first layer has $dk$ parameters, each of the middle building blocks contains $2k^2$ parameters, and the final building block has $kr + r^2$ parameters. Hence, the total number of parameters is $O(kd + \ell k^2 + rk + r^2) = O(n \log n + r^2)$.
Towards constructing a network $N$ of the form above that fits the data, we first take a random matrix $A_0 \in \mathbb{R}^{k \times d}$ that maps all the data points $x^{(i)}$ to vectors $h_0^{(i)} = A_0 x^{(i)}$. Then we construct middle layers that map the vectors $h_0^{(i)}$ to vectors $h_\ell^{(i)}$ that are clustered into $r$ groups according to the labels, though they are in $\mathbb{R}^k$ instead of in $\mathbb{R}^r$ as desired. Concretely, we design these cluster centers by picking $r$ random unit vectors $q_1, \ldots, q_r$ in $\mathbb{R}^k$. (In computer vision, typically $r$ is less than $10^3$ and $d$ is less than $10^5$ while $n$ is larger than $10^6$.) We view them as the surrogate label vectors in dimension $k$ (note that $k$ is potentially much smaller than $r$). In high dimensions (technically, if $k > 4\log r$) random unit vectors $q_1, \ldots, q_r$ are pair-wise uncorrelated with inner product less than 0.5. We associate the $i$-th example with the target surrogate label vector $v^{(i)}$ defined by $v^{(i)} := q_j$ if $y^{(i)} = e_j$. Then we will construct the matrices $(A_1, B_1), \ldots, (A_\ell, B_\ell)$ such that the first $\ell$ layers of the network map the vector $h_0^{(i)}$ to the surrogate label vector $v^{(i)}$; that is, $h_\ell^{(i)} = v^{(i)}$ for every $i$ (3.3). Finally, we will construct the last layer $T_{A_{\ell+1}, B_{\ell+1}, b_{\ell+1}}$ so that it maps the vectors $q_1, \ldots, q_r \in \mathbb{R}^k$ to $e_1, \ldots, e_r \in \mathbb{R}^r$ (3.4). Putting these together, we have that by the definition (3.2) and equation (3.3), for every $i$, if the label $y^{(i)}$ is $e_j$, then $h_\ell^{(i)}$ will be $q_j$. Then by equation (3.4), we have that $\hat{y}^{(i)} = e_j$. Hence we obtain that $\hat{y}^{(i)} = y^{(i)}$. The key part of this plan is the construction of the middle layers of weight matrices so that $h_\ell^{(i)} = v^{(i)}$.
We encapsulate this into the following informal lemma (Lemma 3.3); the formal statement and the full proof are deferred to Section B. We briefly sketch the proof of the lemma to provide intuition. The operation that each residual block applies to the hidden variable can be abstractly written as $\hat{h} = h + V\,\mathrm{ReLU}(Uh + s)$ (3.5), where $h$ corresponds to the hidden variable before the block and $\hat{h}$ corresponds to that after. We claim that for an (almost) arbitrary sequence of vectors $h^{(1)}, \ldots, h^{(n)}$, there exists $T_{U,V,s}(\cdot)$ such that operation (3.5) transforms $k$ of the vectors $h^{(i)}$ to an arbitrary set of $k$ other vectors that we can freely choose, and maintains the value of the remaining $n - k$ vectors. Concretely, for any subset $S$ of size $k$ and any desired vectors $v^{(i)}$ ($i \in S$), there exist $U, V, s$ such that $T_{U,V,s}(h^{(i)}) = v^{(i)}$ for $i \in S$ and $T_{U,V,s}(h^{(i)}) = h^{(i)}$ for $i \notin S$. This claim is formalized in Lemma B.1. We can use it repeatedly to construct $\ell$ layers of building blocks, each of which transforms a subset of $k$ vectors among $\{h^{(1)}, \ldots, h^{(n)}\}$ to the corresponding target vectors while keeping the rest unchanged; a small numerical sketch of this one-block construction is given below.
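The sketch referred to above follows the proof of Lemma B.1: the rows of $U$ are the selected vectors and $s = -(1 - 2\rho')\mathbf{1}$, while the rescaling used for the columns of $V$ is our own completion chosen so that the block is exact. It uses randomly drawn, nearly orthogonal unit vectors so that the separation assumption holds; all sizes and the margin value are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, dim, rho_p = 20, 5, 200, 0.1

# Random unit vectors in high dimension are nearly orthogonal, which gives the
# separation needed by the construction (margin rho_p is an illustrative choice).
H = rng.standard_normal((n, dim))
H /= np.linalg.norm(H, axis=1, keepdims=True)

S = np.arange(k)                               # indices of the vectors to move
targets = rng.standard_normal((k, dim))        # arbitrary target vectors

# Rows of U are the selected vectors; s = -(1 - 2*rho_p) * 1; column j of V
# rescales (target_j - h_j) so that the single positive ReLU entry is exact.
U = H[S]
s = -(1.0 - 2.0 * rho_p) * np.ones(k)
V = np.zeros((dim, k))
for j, i in enumerate(S):
    V[:, j] = (targets[j] - H[i]) / (H[i] @ H[i] - (1.0 - 2.0 * rho_p))

def block(h):
    return h + V @ np.maximum(U @ h + s, 0.0)  # h + V ReLU(U h + s)

out = np.array([block(h) for h in H])
print("selected vectors reach their targets:", np.allclose(out[:k], targets))
print("remaining vectors are unchanged     :", np.allclose(out[k:], H[k:]))
```

For each selected vector the ReLU keeps exactly one positive coordinate, which the matching column of V converts into the desired displacement; for all other vectors the pre-activation is entirely negative, so the block acts as the identity.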
Power of all-convolutional residual networks
Inspired by our theory, we experimented with all-convolutional residual networks on standard image classification benchmarks.
CIFAR10 and CIFAR100
Our architectures for CIFAR10 and CIFAR100 are identical except for the final dimension corresponding to the number of classes 10 and 100, respectively. In Table 1, we outline our architecture. Each residual block has the form $x + C_2(\mathrm{ReLU}(C_1 x))$, where $C_1, C_2$ are convolutions of the specified dimension (kernel width, kernel height, number of input channels, number of output channels). The second convolution in each block always has stride 1, while the first may have stride 2 where indicated. In cases where the transformation is not dimensionality-preserving, the original input $x$ is adjusted using average pooling and padding as is standard in residual layers.
We trained our models with the TensorFlow framework, using a momentum optimizer with momentum 0.9 and a batch size of 128. All convolutional weights are trained with weight decay 0.0001. The initial learning rate is 0.05, which drops by a factor of 10 at 30000 and 50000 steps. The model reaches peak performance at around 50k steps, which takes about 24h on a single NVIDIA Tesla K40 GPU. Our code can be easily derived from an open source implementation by removing batch normalization and adjusting the residual components and model architecture. An important departure from that code is that we initialize a residual convolutional layer of kernel size $k \times k$ and $c$ output channels using a random normal initializer of standard deviation $\sigma = 1/(k^2 c)$, rather than the $1/(k\sqrt{c})$ used for standard convolutional layers. This substantially smaller weight initialization helped training, while not affecting representation.
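For concreteness, the snippet below re-implements the residual block $x + C_2(\mathrm{ReLU}(C_1 x))$ with the smaller normal initialization. The paper's experiments used TensorFlow, so this PyTorch version is only an illustrative sketch of ours, restricted to the stride-1, dimensionality-preserving case.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleResidualBlock(nn.Module):
    """Residual block x + C2(ReLU(C1 x)) with near-zero Gaussian initialization."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False)
        sigma = 1.0 / (kernel_size ** 2 * channels)   # sigma = 1/(k^2 c), as in the text
        for conv in (self.conv1, self.conv2):
            nn.init.normal_(conv.weight, mean=0.0, std=sigma)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

x = torch.randn(2, 16, 32, 32)
block = SimpleResidualBlock(16)
# With weights near zero, the block starts out close to the identity map.
print((block(x) - x).abs().max().item())
```

Because the block is an additive perturbation of the identity, initializing its weights near zero means the whole stack starts as (approximately) the identity map, which is exactly the design principle discussed in the introduction.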
A notable difference from standard models is that the last layer is not trained, but simply a fixed random projection. On the one hand, this slightly improved test error (perhaps due to a regularizing effect). On the other hand, it means that the only trainable weights in our model are those of the convolutions, making our architecture "all-convolutional".
An interesting aspect of our model is that despite its massive size of 13.59 million trainable parameters, the model does not seem to overfit too quickly even though the data set size is 50000. In contrast, we found it difficult to train a model with batch normalization of this size without significant overfitting on CIFAR10. Table 2 summarizes the top-1 classification error of our models compared with a non-exhaustive list of previous works, restricted to the best previous all-convolutional result by [15], the first residual results [6], and state-of-the-art results on CIFAR by [8].
All results are with standard data augmentation.
ImageNet
The ImageNet ILSVRC 2012 data set has 1,281,167 data points with 1000 classes. Each image is resized to 224 × 224 pixels with 3 channels. We experimented with an all-convolutional variant of the 34-layer network in [6]. The original model achieved 25.03% classification error. Our derived model has 35.7M trainable parameters. We trained the model with a momentum optimizer (with momentum 0.9) and a learning rate schedule that decays by a factor of 0.94 every two epochs, starting from the initial learning rate 0.1. Training was distributed across 6 machines updating asynchronously. Each machine was equipped with 8 GPUs (NVIDIA Tesla K40) and used batch size 256 split across the 8 GPUs so that each GPU updated with batches of size 32.
In contrast to the situation with CIFAR10 and CIFAR100, on ImageNet our all-convolutional model performed significantly worse than its original counterpart. Specifically, we experienced a significant amount of underfitting, suggesting that a larger model might fare better (our result was obtained after roughly a week of training). While no longer state-of-the-art, this performance is significantly better than the 40.7% reported by [13], as well as the best all-convolutional architecture by [15]. We believe it is quite likely that a better learning rate schedule and hyperparameter settings of our model could substantially improve on the preliminary performance reported here.
Conclusion
Our theory underlines the importance of identity parameterizations when training deep artificial neural networks. An outstanding open problem is to extend our optimization result to the non-linear case where each residual layer has a single ReLU activation, as in our expressivity result. We conjecture that a result analogous to Theorem 2.2 is true for the general non-linear case. Unlike with the standard parameterization, we see no fundamental obstacle for such a result.
We hope our theory and experiments together help simplify the state of deep learning by aiming to explain its success with a few fundamental principles, rather than a multitude of tricks that need to be delicately combined. We believe that much of the advances in image recognition can be achieved with residual convolutional layers and ReLU activations alone. This could lead to extremely simple (albeit deep) architectures that match the state-of-the-art on all image classification benchmarks.
A Missing Proofs in Section 2
In this section, we give the complete proofs for Theorem 2.1 and Lemma 2.4, which are omitted in Section 2.
A.1 Proof of Theorem 2.1
It turns out the proof will be significantly easier if R is assumed to be a symmetric positive semidefinite (PSD) matrix, or if we allow the variables to be complex matrices.
Here we first give a proof sketch for the first special case. The readers can skip it and jump to the full proof below. We will also prove a stronger result, namely $|||A||| \le 3\gamma/\ell$, for the special case. When $R$ is PSD, it can be diagonalized by an orthonormal matrix $U$ in the sense that $R = U\Lambda U^\top$, where $\Lambda$ is diagonal with nonnegative entries. Setting $A_j = U\Lambda^{1/\ell}U^\top - I$ for every $j$, we have $(I + A_\ell)\cdots(I + A_1) = U\Lambda U^\top = R$. We see that the network defined by $A$ reconstructs the transformation $R$, and therefore it is a global minimum of the population risk (formally see Claim 2.3 below). Next, we verify that each of the $A_j$ has small spectral norm: $\|A_j\| = \|\Lambda^{1/\ell} - I\| = \max_i |\lambda_i^{1/\ell} - 1| \le e^{\gamma/\ell} - 1 \le 3\gamma/\ell$ for $\ell \ge \gamma$ (A.1). Then using equation (A.1) and the equation above, we have that $|||A||| \le \max_j \|A_j\| \le 3\gamma/\ell$, which completes the proof for the special case. Towards fully proving the Theorem 2.1, we start with the following Claim: is an orthonormal matrix. Then for any integer q, there exists matrix W 1 , . . . , W q ∈ R 2×2 and a diagonal matrix Λ satisfies that (a) Q = W 1 . . . W q Λ and W j − I ≤ π/q, (b) Λ is an diagonal matrix with ±1 on the diagonal, and (c) If Q is a rotation then Λ = I.
Proof. We first consider the case when $Q$ is a rotation. Each rotation matrix can be written as $T(\theta) := \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. Suppose $Q = T(\theta)$. Then we can take $W_1 = \cdots = W_q = T(\theta/q)$ and $\Lambda = I$. We can verify that $W_1 \cdots W_q \Lambda = T(\theta/q)^q = T(\theta) = Q$ and that $\|W_j - I\| = \|T(\theta/q) - I\| = 2|\sin(\theta/(2q))| \le \pi/q$. Next, we consider the case when $Q$ is a reflection. Then we have that $Q$ can be written as $Q = T(\theta) \cdot \mathrm{diag}(-1, 1)$, where $\mathrm{diag}(-1, 1)$ is the reflection with respect to the $y$-axis. Then we can take $W_1 = \cdots = W_q = T(\theta/q)$ and $\Lambda = \mathrm{diag}(-1, 1)$ and complete the proof.
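A quick NumPy check of the rotation case of Claim A.1, with arbitrary illustrative values of theta and q:

```python
import numpy as np

def T(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, q = 2.4, 10                      # illustrative values with |theta| <= pi
W = T(theta / q)
print("W^q equals T(theta):", np.allclose(np.linalg.matrix_power(W, q), T(theta)))
print("||W - I||_2 =", np.linalg.norm(W - np.eye(2), 2), "<= pi/q =", np.pi / q)
```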
Next we give the formal full proof of Theorem 2.1. The main idea is to reduce to the block diagonal situation and to apply the Claim above.
Proof of Theorem 2.1. Let R = U KV be the singular value decomposition of R, where U ,V are two orthonormal matrices and K is a diagonal matrix with nonnegative entries on the diagonal. Since det(R) = det(U ) det(K) det(V ) > 0 and det(K) > 0, we can flip U, V properly so that det(U ) = det(V ) = 1. Since U is a normal matrix (that is, U satisfies that U U = U U ), by Claim C.1, we have that U can be block-diagnolaized by orthonormal matrix S into U = SDS −1 , where D = diag(D 1 , . . . , D m ) is a real block diagonal matrix with each block D i being of size at most 2 × 2. Using Claim A.1, we have that for any D i , there exists W i,1 , . . . , W i,q , Λ i such that and W i,j − I ≤ π/q. Let Λ = diag(Λ 1 , . . . , Λ m ) and W j = diag(W 1,j , . . . W m,j ). We can rewrite equation (A.2) as Moreover, we have that Λ is a diagonal matrix with ±1 on the diagonal. Since W i,j 's are orthonormal matrix with determinant 1, we have det(Λ) = det(D) = det(U ) = 1. That is, Λ has an even number of −1's on the diagonal. Then we can group the −1's into 2 × 2 blocks. Note that −1 0 0 −1 is the rotation matrix T (π). Thus we can write Λ as a concatenation of +1's on the diagonal and block T (π). Then applying Claim A.1 (on each of the block T (π)), we obtain that there are W 1 , . . . , W q such that where W j − I ≤ π/q. Thus using equation (A.3) and (2.3), we obtain that Moreover, we have that for every j, SW j S −1 − I = S(W j − I)S −1 = W j − I ≤ π/q, because S is an orthonormal matrix. The same can be proved for W j . Thus let B j = SW j S −1 − I for j ≤ q and B j+q = SW j S −1 − I, and we can rewrite, We can deal with V similarly by decomposing V into 2q matrices that are π/q close to identity matrix, Last, we deal with the diagonal matrix K. Let K = diag(k i ). We have min k i = σ min (R), max k i = σ max (R). Then, we can write K = (K ) p where K = diag(k 1/p i ) and p is an integer to be chosen later. We have that K − I ≤ max |k Let B 1 = · · · = B p = K − I and then we have K = (I + B p ) · · · (I + B 1 ). Finally, we choose p = 3γ 4π+3γ and q = π 4π+3γ , 6
B Missing Proofs in Section 3
In this section, we provide the full proof of Theorem 3.2. We start with the following lemma, which constructs a building block $T$ that transforms $k$ vectors of an arbitrary sequence of $n$ vectors to an arbitrary set of target vectors while maintaining the value of the others. For better abstraction we use $\alpha^{(i)}, \beta^{(i)}$ to denote the sequence of vectors.
We can see that the conclusion implies which is a different way of writing equation (3.6).
Proof of Lemma B.1. Without loss of generality, suppose S = {1, . . . , k}. We construct U, V, s as follows. Let the i-th row of U be α (i) for i ∈ [k], and let s = −(1 − 2ρ ) · 1 where 1 denotes the all 1's vector. Let the i-column of V be Next we verify that the correctness of the construction. We first consider 1 ≤ i ≤ k. We have that U α (i) is a a vector with i-th coordinate equal to α (i) 2 ≥ 1 − ρ . The j-th coordinate of U α (i) is equal to α (j) , α (i) , which can be upperbounded using the assumption of the Lemma by Therefore, this means U α (i) − (1 − 2ρ ) · 1contains a single positive entry (with value at least α (i) 2 − (1 − 2ρ ) ≥ ρ ), and all other entries being non-positive. This means that ReLu(U α (i) Finally, consider n ≥ i > k. Then similarly to the computation in equation (B.1), U α (i) is a vector with all coordinates less than 1 − 2ρ . Therefore U α (i) + b is a vector with negative entries. Hence we have ReLu(U α (i) + b) = 0, which implies V ReLu(U α (i) + b) = 0. Now we are ready to state the formal version of Lemma 3.3.
Lemma B.2. Suppose a sequence of n vectors z (1) , . . . , z (n) satisfies a relaxed version of Assumption 3.1: a) for every i, 1 − ρ ≤ z (i) 2 ≤ 1 + ρ b) for every i = j, we have z (i) − z (j) 2 ≥ ρ ;. Let v (1) , . . . , v (n) be defined above. Then there exists weigh matrices (A 1 , B 1 ), . . . , (A , B ), such that given ∀i, h We will use Lemma B.1 repeatedly to construct building blocks T Aj ,B k ,sj (·), and thus prove Lemma B.2. Each building block T Aj ,B k ,sj (·) takes a subset of k vectors among {z (1) , . . . , z (n) } and convert them to v (i) 's, while maintaining all other vectors as fixed. Since they are totally n/k layers, we finally maps all the z (i) 's to the target vectors v (i) 's.
Now we ready to prove Theorem 3.2, following the general plan sketched in Section 3.
C Toolbox
In this section, we state two folklore linear algebra statements. The following Claim should be known, but we can't find it in the literature. We provide the proof here for completeness.
Claim C.1. Let U ∈ R d×d be a real normal matrix (that is, it satisfies U U = U U ). Then, there exists an orthonormal matrix S ∈ R d×d such that where D is a real block diagonal matrix that consists of blocks with size at most 2 × 2.
Proof. Since U is a normal matrix, it is unitarily diagonalizable (see [16] for backgrounds). Therefore, there exists unitary matrix V in C d×d and diagonal matrix in C d×d such that U has eigen-decomposition U = V ΛV * . Since U itself is a real matrix, we have that the eigenvalues (the diagonal entries of Λ) come as conjugate pairs, and so do the eigenvectors (which are the columns of V ). That is, we can group the columns of V into pairs (v 1 ,v 1 ), . . . , (v s ,v s ), v s+1 , . . . , v t , and let the corresponding eigenvalues be λ 1 ,λ 1 , . . . , λ λs ,λ s , λ s+1 , . . . , λ t . Here λ s+1 , . . . , λ t ∈ R. Then we get that , then we have that Q i is a real matrix of rank-2. Let S i ∈ R d×2 be a orthonormal basis of the column span of Q i and then we have that Q i can be written as Q i = S i D i S i where D i is a 2×2 matrix. Finally, let S = [S 1 , . . . , S s , v s+1 , . . . , v t ], and D = diag(D 1 , . . . , D s , λ s+1 , . . . , λ t ) we complete the proof.
The following Claim is used in the proof of Theorem 2.2. We provide a proof here for completeness.
Claim C.2 (folklore). For any two matrices $A, B \in \mathbb{R}^{d \times d}$, we have that $\|AB\|_F \ge \sigma_{\min}(A)\,\|B\|_F$. Proof. Since $\sigma_{\min}(A)^2$ is the smallest eigenvalue of $A^\top A$, we have that $B^\top A^\top A B \succeq B^\top \cdot \sigma_{\min}(A)^2 I \cdot B$.
Therefore, it follows that $\|AB\|_F^2 = \mathrm{tr}(B^\top A^\top A B) \ge \sigma_{\min}(A)^2\,\mathrm{tr}(B^\top B) = \sigma_{\min}(A)^2\,\|B\|_F^2$. Taking the square root of both sides completes the proof. | 2016-12-04T08:38:49.902Z | 2016-11-04T00:00:00.000 | {
"year": 2016,
"sha1": "8fbb115c578e8bfbcc1615bd7af990396abf6776",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8fbb115c578e8bfbcc1615bd7af990396abf6776",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
237952718 | pes2o/s2orc | v3-fos-license | Low intensity pulsed ultrasound information technology intervention in diagnosis and prediction of Muscle Atrophy
Objective: To discuss the effects and function of LIPUS on muscle atrophy (MA) from various aspects through the study of low-intensity pulsed ultrasound (LIPUS) information technology intervention (ITI) in the diagnosis and prediction of muscle atrophy. Method: In this study, conducted in our university from June 2020 to September 2020, 74 healthy female SD rats aged three months and weighing 100-200 g were selected. All rats were placed in sterile cages and divided into three groups. In the LIPUS treatment subgroups of the OVO and OVE groups, the rats were treated with LIPUS. Finally, the changes in body weight, grasping power, biochemical indexes and glycogen content of the gastrocnemius muscle were analyzed and recorded to explore the effect and value of LIPUS ITI combined with intermittent weight-bearing exercise in the treatment of MA. Results: After weight-bearing running, the differences in body weight among the model (OVO) group, exercise (OVE) group and NC group were statistically significant (P<0.01), and the body weight of the OVE group differed markedly from that of the OVO group. There was a significant difference in body weight between the OVO group and the NC group (P<0.05). After LIPUS treatment, it was found that the weight of the OVO group, OVE group, LIPUS group and OVE+LIPUS group increased; compared with the NC group, the difference was statistically significant (P<0.01). Conclusion: Low-intensity pulsed ultrasound ITI has a good effect on improving MA and effectively increases the weight of the gastrocnemius muscle. The combined application of the two methods is better for the improvement of muscular atrophy.
INTRODUCTION
Muscle atrophy (MA) refers to the disorder of protein metabolism in muscle tissue, the enhancement of proteolysis of muscle, the thinning or even disappearance of muscle fibers. It is often accompanied by decrease of protein synthesis in muscle tissue and the increase of protein degradation. 1,2 In the process of aging, the body will also have MA.
Most of the loss of skeletal muscle quality is due to MA caused by a long-term imbalance between the rates of muscle protein synthesis and muscle protein degradation. 3 MA affects people's health and brings great trouble to daily life. 4
In this study, an ovariectomized rat model was established and the rats were divided into different groups. After that, the squared Pearson correlation coefficient, the root mean square error and the R-squared metric were applied to the muscle force prediction model. Finally, the changes in body weight, grasping power, biochemical indexes and glycogen content in the gastrocnemius muscle were analyzed. This study provides a reliable reference and guidance for the clinical treatment of MA caused by menopause.
METHODS
Seventy-four healthy female SD (Sprague Dawley) rats, aged three months and weighing 100~200 g, were selected as the subjects. The study was performed from June 2020 to September 2020. All rats were kept in sterile cages, and the national standards for rodents were strictly followed. All animal experiment plans had been approved by the Ethics Committee of Tianjin University of Commerce on March 10, 2021.
After one week of adaptive feeding, the rats were randomly divided into three groups: a control group, an ovariectomy operation group and an ovariectomy exercise group. The 18 rats in the control group were recorded as the negative control (NC) group. There were 28 rats each in the ovariectomy operation group and the ovariectomy exercise group, which were recorded as the OVO group and the OVE group, respectively.
Rats in OVO group and OVE group were anesthetized. The anesthesia injection used was 30 g/L barbital sodium injection. It was injected into the abdominal cavity of rats to complete anesthesia. Then, rats were placed on the operating table in a prone position for hair removal, skin preparation and disinfection. Then, the subcutaneous tissue and the abdominal muscles were separated. Tweezers were used to remove the surrounding fat and ovaries. Hemostatic forceps were used to ligate the catgut. After that, bilateral ovaries of the rats were removed. Finally, the incision was cleaned and sutured. After anesthetizing the mice in the normal control group, the skin incision was made on both sides of the ventral back, and then the adipose tissue near the ovary was removed. The control group did not need bilateral ovariectomy.
In the OVE group, the weight-bearing parameters followed Bedford TG, and each rat bore 35% of its own body weight. First, the rats were given adaptive exercise for six weeks, 30 minutes a day at 10 meters per minute. The slope of the running platform was then set to 0 and the speed to 20 meters per minute. Exercise was carried out in six bouts every day; the exercise time of each bout was five minutes, with two-minute intervals between bouts. The total training period of the OVE group was 10 weeks. At the end of exercise training, 8 rats in each group were taken out for testing. Then, the rats in the OVO group were randomly divided into two groups of 10 rats each, namely the OVO group and the ultrasound treatment group, which was recorded as the LIPUS group. In the same way, the rats in the OVE group were randomly divided into two groups of 10 rats each, namely the OVE group and the ovariectomy combined with ultrasound treatment group (OVE+LIPUS group). Then, the LIPUS therapy instrument was used to treat the OVE+LIPUS group and the LIPUS group.
Evaluation algorithm of muscle strength prediction:
The squared Pearson correlation coefficient, the root mean square error (RMSE) and the R-squared (coefficient of determination) metric were applied to the muscle force prediction model. [5][6][7] The correlation between the estimated muscle force and the actual force could be effectively evaluated with the Pearson coefficient, whose value ranges between -1 and 1. If the value was positive, there was a positive correlation between the two variables; if the value was negative, there was a negative correlation between the two variables.
$r = \frac{\sum_{i=1}^{n}(F_i - \bar{F})(F_i^{*} - \bar{F}^{*})}{\sqrt{\sum_{i=1}^{n}(F_i - \bar{F})^{2}}\,\sqrt{\sum_{i=1}^{n}(F_i^{*} - \bar{F}^{*})^{2}}}$ (1) In the above equation, the average value of the actual force is expressed by $\bar{F}$. The estimated force and the average of the estimated force are expressed by $F^{*}$ and $\bar{F}^{*}$ respectively, and $n$ is the sample length. If the value was smaller, there was a very low correlation between the actual force and the estimated force, there was a great difference between them, and the model effect was not superior. The main calculation principle of the root mean square error algorithm was as follows: $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(F_i - F_i^{*})^{2}}$ (2) If the calculated value was larger, the estimation effect was worse, and there was a great difference between the estimated force and the actual force. The smaller the calculated value was, the better the estimation effect: the difference between the estimated force and the actual force was very small.
The main principle of the R-squared algorithm was to describe and explain the variation of the dependent variable through the fitted model. A larger value showed that the fitted model had certain advantages and could describe the variation of the dependent variable well, whereas a smaller value indicated a big difference between the predicted muscle force and the actual force. The larger the calculated result, the smaller was the difference between the estimated force and the actual force.
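For reference, the following NumPy sketch computes the three evaluation metrics described above for a pair of force series; the data values below are made up purely to exercise the functions and do not come from the study.

```python
import numpy as np

def pearson_r(F, F_hat):
    F, F_hat = np.asarray(F, float), np.asarray(F_hat, float)
    num = np.sum((F - F.mean()) * (F_hat - F_hat.mean()))
    den = np.sqrt(np.sum((F - F.mean()) ** 2) * np.sum((F_hat - F_hat.mean()) ** 2))
    return num / den

def rmse(F, F_hat):
    F, F_hat = np.asarray(F, float), np.asarray(F_hat, float)
    return np.sqrt(np.mean((F - F_hat) ** 2))

def r_squared(F, F_hat):
    F, F_hat = np.asarray(F, float), np.asarray(F_hat, float)
    ss_res = np.sum((F - F_hat) ** 2)
    ss_tot = np.sum((F - F.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

F     = np.array([1.0, 1.4, 2.1, 2.8, 3.5, 4.1])   # actual muscle force (made up)
F_hat = np.array([1.1, 1.3, 2.0, 3.0, 3.4, 4.3])   # model-estimated force (made up)
print("r^2  :", pearson_r(F, F_hat) ** 2)
print("RMSE :", rmse(F, F_hat))
print("R^2  :", r_squared(F, F_hat))
```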
RESULTS
The body weight, grasping power, gastrocnemius weight and uterus weight of the rats were measured after exercise (as can be seen from Fig.1 and Fig.2). The differences in body weight among the OVO group, OVE group and NC group were statistically significant (P<0.01). The body weight of OVO and OVE rats increased significantly. When the body weight of the OVO group was compared with that of the OVE group, it was found that the OVE group was relatively light, and the difference between the two groups was highly significant (P<0.01). The rats in the OVE group had the largest grasping force, followed by the OVO group (P<0.01). It was found that the gastrocnemius weight in the OVE group was the largest and that in the OVO group was the lightest (P<0.05). The uterus weight in the OVO group and OVE group was lower than that in the NC group, showing a statistically significant difference (P<0.01). Therefore, appropriate intermittent weight-bearing exercise could effectively control body weight and effectively increase the gastrocnemius weight of rats, so as to optimize and improve their skeletal muscle strength, but exercise could not improve the uterus.
After weight-bearing exercise training, the serum calcium content of the OVO group and OVE group was not significantly higher than that of the NC group (P > 0.05). The serum phosphorus content of the OVO group and OVE group was significantly higher than that of the NC group (P<0.01). Lactate dehydrogenase (LDH) activity was compared among the three groups; the activity of LDH in the OVO group and OVE group was significantly higher than that of the NC group (P<0.01). The activity of creatine kinase (CK) in the three groups was also compared: compared with the NC group, the activity of CK was higher in the OVO group but lower in the OVE group (P<0.01). After that, the muscle glycogen content of the gastrocnemius was compared among the three groups. Compared with the NC group, the glycogen content of the OVO group decreased significantly, while that of the OVE group increased (P<0.01). As given in Fig.3, the serum indexes and the glycogen content in gastrocnemius muscle tissue of rats treated with LIPUS after weight-bearing exercise had changed to some extent. Compared with the control group, the calcium concentration of the other four groups was higher; however, there was no significant difference in calcium concentration between the OVE+LIPUS group and the control group (P > 0.05). Among the five groups, the phosphorus content of the other four groups was significantly higher than that of the control group (P<0.01), and the phosphorus content in the OVE group and OVE+LIPUS group was lower than that in the OVO group (P<0.01). After that, the CK activity of the five groups was compared, and it was found that the CK activity of the other four groups was significantly higher than that of the control group; compared with the OVO group, the activity of CK in the OVE group and OVE+LIPUS group was lower (P<0.01). As regards LDH activity, the pattern among the groups was the same as for CK activity. The glycogen content in the gastrocnemius muscle was compared among the five groups. Compared with the control group, the glycogen content of the other four groups decreased significantly (P<0.05). Moreover, compared with the OVO group, the muscle glycogen content was higher in the OVE group and OVE+LIPUS group (P<0.05).
DISCUSSION
With increasing age, ovarian function deteriorates in postmenopausal women and the level of endogenous estrogen secretion decreases, leading to a series of metabolic diseases. 8 Studies have found that postmenopausal women have low skeletal muscle mass and/or function, manifested mainly as a decrease in skeletal muscle mass, a decline in skeletal muscle function, and a decline in muscle strength (especially explosive power). 9-11 Low-intensity pulsed ultrasound (LIPUS) is popular as a noninvasive, safe and convenient treatment method. 12 Studies have shown that LIPUS has good therapeutic effects on muscle, tendon and soft tissue injury, and can promote muscle cell proliferation and protein synthesis. 13,14 In this study, LIPUS combined with intermittent weight-bearing exercise was used to treat muscle-wasting symptoms in ovariectomized rats, which were divided into groups and treated with the different methods. After ovariectomy, estrogen secretion in the rats decreased, which would lead to obesity. However, after weight-bearing exercise training and LIPUS treatment, body weight was greatly reduced, indicating that weight-bearing running effectively promoted an increase in skeletal muscle fiber content and thus played a major role in controlling body weight. Some researchers have found that the effect of ovariectomy on the weight and energy metabolism of rats may be similar to that of a high-fat diet, and that the changes in body weight and energy metabolism caused by ovariectomy and a high-fat diet may not directly affect the life span of female rats. [15][16][17] Other researchers have found that weight-bearing resistance exercise could improve skeletal muscle tissue. The results of this study are consistent with those of previous studies. [18][19][20]
CONCLUSIONS
The intervention of LIPUS combined with intermittent weight-bearing exercise can provide a good adjuvant treatment effect for the improvement of MA in postmenopausal women, and the combination of the two methods has unique advantages. As a new, safe and effective treatment modality, it is worth promoting widely in clinical practice. However, this study only analyzed and discussed the effects of these two methods on the improvement of MA; the underlying molecular biological mechanism was not explored in detail. Future research can investigate the mechanism of this effect at the molecular level. | 2021-09-27T20:35:38.552Z | 2021-08-04T00:00:00.000 | {
"year": 2021,
"sha1": "735993b77d205537548e5af98ca758f622693736",
"oa_license": "CCBY",
"oa_url": "https://pjms.org.pk/index.php/pjms/article/download/4839/1041",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79a6ead5721609a61e5abf8e29be036223238850",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44064825 | pes2o/s2orc | v3-fos-license | Developing a scale for strategies used during the practice and learning of instrumental music
The purpose of this study is to develop a valid and reliable scale to identify the strategies students who study instrumental music use during the practice and learning of instrumental music, based on their own responses. The study group comprised 358 students studying music education at five universities in the 2015 to 2016 academic year. The scale developed within the scope of this study includes five dimensions: attention strategies, rehearsal strategies, elaboration strategies, articulation-organization strategies and comprehension monitoring strategies. The scale has 39 items: seven items in the attention strategies dimension, five in the rehearsal strategies dimension, six in the elaboration strategies dimension, seven in the articulation-organization strategies dimension and 14 in the comprehension monitoring strategies dimension. The factor loading values of the items in the scale ranged from 0.513 to 0.813. Reliability coefficients for the scale's sub-dimensions were found to be 0.89 for the attention strategies dimension, 0.81 for the rehearsal strategies dimension, 0.85 for the elaboration strategies dimension, 0.87 for the articulation-organization strategies dimension and 0.93 for the comprehension monitoring strategies dimension. It is believed that the scale will be a powerful tool for researchers in explaining the levels of strategies students use during the practice and learning of instrumental music.
INTRODUCTION
Just like in every dimension of learning, the use of various learning strategies is crucial for easy and permanent learning of instrumental music and for learners attaining the skill to learn independently. Learning strategies have been defined by many educators in various ways. Mayer (1988) defined learning strategies as behaviors intended to influence how learners process information. Learning strategies are the processes students use in learning on their own (Gagné and Driscoll, 1988). In a broader sense, learning strategies are behaviors and thoughts that learners engage in during learning that are intended to influence their encoding process (Weinstein and Mayer, 1986).
According to Riding and Rayner (1998), individuals develop their learning strategies while they engage with learning materials that are initially inappropriate for their cognitive styles. Strategies can be learned and changed; styles, on the other hand, are individuals' fixed (unchangeable) essential characteristics. A learning strategy is one or more processes an individual acquires to ease performance in a learning task. Strategies change depending on the nature of the task, and different learning strategies are used for different learning tasks. In this regard, educators have divided learning strategies into categories. Accordingly, the learning strategies in each category include methods that influence certain aspects of the encoding process to ease one or more types of learning outcomes and performance.
Learning strategies are divided into eight groups in the most widely recognized classification. According to this classification, learning strategies comprise rehearsal, elaboration and organization strategies for basic learning tasks; rehearsal, elaboration and organization strategies for complex learning tasks; comprehension monitoring strategies; and affective and motivational strategies (Mayer, 1988; Weinstein and Mayer, 1986). According to another classification, learning strategies are divided into five groups (Gagné and Driscoll, 1988): attention strategies, short-term memory development strategies, encoding enhancement strategies, retention enhancement strategies and monitoring-guiding strategies. Some studies in the field of music (Yokuş, 2009; Yokuş, 2010) discuss learning strategies according to this classification.
Furthermore, learning strategies make up one of the two main dimensions of the Motivated Strategies for Learning Questionnaire (MSLQ) developed by Pintrich, Smith, García and McKeachie in 1991. The strategies in the learning strategies dimension of the MSLQ are collected in three groups: cognitive strategies, metacognitive strategies and resource management strategies. Two (McCormick and McPherson, 2003; Nielsen, 2008) or three (Nielsen, 2004) of these strategy groups have been the focus of studies determining the practice and learning strategies used by students in the field of music. Weinstein and Mayer (1986) emphasize that students should be aware of learning strategies in order for them to learn more effectively, to remember the information that has been stored and to motivate themselves. Moreover, frequent use of learning strategies can lead to a desire to be more successful in learning, because these strategies are the most important parts of the learning process, and this process requires determining whether students use learning strategies or not (Güven, 2008). Measuring the levels of practice and learning strategies used in instrumental music is possible with a valid and reliable scale developed for this purpose. In order to understand the practice and learning strategies classification used in this measurement tool, it will be useful to discuss the related studies in the field of music.
Strategies in studies in the field of music
The first practice and learning strategies found in studies in the field of music are rehearsal strategies. Strategies like problem solving, decision-making and participation in performance products come to mind when rehearsal strategies are discussed. These strategies can change according to musical learning areas. For example, rehearsal strategies in choir training include strategies designed to develop the critical thinking skills of the conductor. Rehearsal strategies designed within this structure necessitate performance skills, music literacy and music comprehension skill (Field, 1997).
In another related study, rehearsal strategies include the strategies an expert conductor-teacher uses in improvisational teaching. These strategies are verbal rehearsal strategies used for narration and evaluation during rehearsals and a series of active rehearsal strategies like modelling, creating links, doing vocal exercises and critical listening (Cruse, 2011). In other studies, unlike those of Field (1997) and Cruse (2011), rehearsal strategies are discussed as a sub-dimension of cognitive strategies (McCormick and McPherson, 2003; Nielsen, 2004, 2008).
Task strategies are the other practice and learning strategies encountered in studies in the field of music. Task strategies are the strategies used in organizing the parts of the piece meaningfully, selecting the problem areas and bringing together parts of the piece as a whole (Nielsen, 2001). Practice strategies are also among the strategies encountered in studies in the field of music. These strategies include students getting motivated, setting goals, self-assessing and their strategy use processes (Oare, 2007).
In another study on university students playing string instruments, practice strategies consist of strategies like repetition of the entire piece, repetition of small sections of the piece, doing technical work and analyzing the piece. Hand position transitions, practicing in unfamiliar chords, playing slowly, using a metronome at a slow practice tempo and increasing it gradually, and playing a section many times are also given as examples of practice strategies (Sikes, 2013).
Writing strategies in the field of music include writing or piece writing for learning in the pedagogical approaches used in the field of music theory (Kelley, 1997). Correction strategies are the strategies used to teach the transposition skill, and to correct orchestration errors (Dobroski, 1981).
Some studies on learning strategies in the field of music used the classification of the MSLQ. Strategies in this context are comprised of cognitive, metacognitive and resource management strategies (Pintrich et al., 1991). Cognitive strategies include strategies like rehearsal, elaboration, organization and critical thinking. Sample items taken from the scale in order to determine these strategies are: "I select important technical and musical parts, repeating these over and over again", "I try to develop musical ideas by making connections between alternative interpretations from listening to music and from lecturers" and "When I practice, I go through the music and try to find the most important musical ideas" (McCormick and McPherson, 2003; Nielsen, 2004, 2008).
Although metacognitive strategies in the field of music are also called self-regulation strategies, comprehension monitoring strategies or executive cognitive strategies, these terms are all used in a similar context. Scales related to metacognitive strategies focus on how much students plan and observe and how much they organize their problem-solving during practice time. Sample items are: "When practicing, I set goals for myself in order to direct my practicing" and "I test my performance on the instrument to better manage my strength". Students' use of metacognitive strategies while practicing their instruments was determined in some related studies (McCormick and McPherson, 2003; Nielsen, 2004, 2008).
Similar to metacognitive strategies, another study (Nielsen, 2001), under the name of self-regulation strategies, determined whether or not students exhibited enough self-regulation through setting specific goals, strategic planning, self-monitoring and self-judgment.
Resource management strategies include the managing time and study environment, effort regulation, peer learning and help seeking sub-dimensions. Sample items from the "managing time and study environment" sub-dimension (Nielsen, 2004) are: "I find it hard to stick to a practice schedule" and "I use my practice time wisely". The sample items (Nielsen, 2004) from the "effort regulation" sub-dimension are "I often feel so lazy or bored when practicing that I quit before I finish what I planned to do" and "I continue to practice even the music is boring and uninspired". The sample item in the "peer learning" sub-dimension is "When practicing repertoire, I often try to perform the piece for a classmate or friend".
Finally, the sample items in the "help seeking" sub-dimension are: "Even if I have trouble learning the music, I try to work on my own, without help from anyone" and "I seek help from the teacher when I have trouble learning the music".
Another study on strategies in the field of music was conducted by Nielsen (1999a). Nielsen (1999a) determined a series of primary and support strategies helping the learning process. Primary strategies are strategies of selection, organization, making connections between the learning material and the preexisting knowledge, classification of the learning material, making connections between kinesthetic pictures and performing the material, and making connections between the playing of the piece and auditory and visual elements.
Selection strategies are strategies that can be used to select the problem areas of the piece to be learned. These strategies include strategies like visual examination of the score, playing the sections close to the final tempo from beginning to end, and repetition by dividing the piece into different sections.
Organization strategies include strategies like combining segments of the piece to form a whole, playing by dividing the piece into sections with different lengths, playing segments in different tempi, playing each hand (unilateral play), and playing both hands (bilateral play). At the same time, strategies like playing by systematically altering the rhythmical structure of a segment, and playing segments in different tempi by using a metronome, can be included among the organization strategies.
Strategies to connect the learning material with existing knowledge (integration) are finding different solutions for a problem, keeping to only one fingering of a segment to alter the solution for the problem, and developing exercises based on parts of the piece. Strategies to sort the learning material (categorization) include strategies like dividing the piece into working areas (larger sections), making markings in the score, minimizing patterns of movements to chords and overdoing the segments. Strategies to relate kinesthetic pictures to the performing of the material include mental rehearsal; these strategies are for mentally rehearsing the kinesthetic (physical) movements done while playing music without touching the instrument. Strategies to relate auditory factors to the performing of the material include playing segments along with a vocal expression and listening to others' performance recordings.
Strategies to relate visual factors to the performing of the material are strategies of identifying similarities and dissimilarities in the piece. In Nielsen's (1999a) study, support strategies aim to influence indirectly the acquisition of new knowledge by focusing on the learner's state of mind. This category includes strategies of maintaining concentration on the task at hand, mastering anxiety and securing the efficient use of time. Strategies to direct attention to the task at hand are the strategies of activating and maintaining concentration or activating and maintaining motivation. Strategies to master anxiety include strategies of mentally preparing for a public performance.
Strategies to secure efficient use of time are strategies that utilize the distribution of practice over time (massed vs. distributed practice, short-term vs. long-term). In addition to these strategies, they include strategies like pausing/resting, preparing the body and muscles for the practice activity, constructive self-talk, help from others, mental exercises and relaxation exercises.
A study on learning strategies in the field of music was also conducted by Leon-Guerrero (2008). Under the title of self-regulation strategies, Leon-Guerrero (2008) described 21 types of strategies, including strategies like playing from the beginning, repeating a measure, repeating a segment, practicing a group of notes and playing a group of notes backwards.
Practice and learning strategies classification on which the current study is based
In the examples given from the studies in the field of music, it is seen that the definitions and classifications of practice and learning strategies differ according to the researcher's approach to the subject. For example, while strategies described as self-regulation strategies correspond in one study (Leon-Guerrero, 2008) to the rehearsal (repetition) strategies of Weinstein and Mayer's (1986) classification, in other studies (Hallam, 2001a; Nielsen, 2001) these strategies correspond to the comprehension monitoring (metacognition) strategies of the same classification.
When the classifications related to learning strategies in studies in the field of music are examined, it is seen that learning strategies are sometimes discussed as specific behaviors and sometimes as a combination of a set of behaviors. Despite the different classifications, learning strategies are generally gathered around certain strategies. Weinstein and Mayer's (1986) classification is the most generally recognized and is used in many studies (Aicher, 1998; Aydıner-Uygun and Kılınçer, 2012a, 2012b; Kılınçer and Aydıner-Uygun, 2013a, 2013b; Nielsen, 1999a, 1999b, 2001).
The current study is based on Weinstein and Mayer's (1986) classification of learning strategies. However, while some of the learning strategies (rehearsal, elaboration and organization) in Weinstein and Mayer's (1986) classification are discussed in two separate groups, that is, basic and complex, no such separation has been made in this study. In addition to the classification made in this way, one of the sub-dimensions of the learning strategies classification developed by Gagné and Driscoll (1988), attention strategies, is also included in the study. Attention strategies are among the strategies developed based on information processing theory (O'Malley et al., 1988; Gagné and Driscoll, 1988; Senemoğlu, 2010; Öztürk, 1995; Subaşı, 2004), which is the reason they are included in the study. Furthermore, just as in all learning, it is believed that the learning process in instrumental music starts with attention.
For adaptation to the field of music, all the strategies categorized as cognitive by Weinstein and Mayer (1986) can also be used to express psychomotor strategies. In fact, in order to conform to music's nature, some psychomotor behaviors are expressed in the items of the measurement tools used in related studies based on Weinstein and Mayer's learning strategies approach (Nielsen, 1999a, 1999b). Some of these items are playing parts at different tempos, playing each hand separately or together, systematically playing by changing the rhythmic structure of a part, playing only with one finger in a part, etc. In this study, the concept of learning strategy is used in accordance with the definition given by Weinstein and Mayer (1986): learning strategies are thoughts and behaviors a learner engages in during learning that influence the learner's encoding process. Thus, the purpose of any learning strategy may be the way the learner reaches new information or behavior that he or she chooses, obtains, organizes or integrates. According to this interpretation, a strategy includes both thought and action. A strategy is not only a "pure" cognitive information process; it also consists of different types of action directed at the learning material.
According to the classification on which this study is based, learning strategies are divided into five groups: attention strategies, rehearsal strategies, elaboration strategies, articulation-organization strategies and comprehension monitoring strategies. Below, these strategies are briefly introduced and examples related to the practice and learning of instrumental music are given (Appendix 1 presents the status of these strategies in the scale).
Attention strategies
Learning activities start with the attention process. Attention is, in the broadest sense, the concentration of cognitive activity (Matlin, 1989). Attention is the most important process ensuring that the information necessary for an individual is transmitted to short-term memory. For this reason, the first step in teaching is to attract the student's attention and enhance it (Subaşı, 2004). Attention strategies allow the student to focus on cognitive activity by making markings on the material to be learned. Some examples of attention strategies used during the practice and learning of instrumental music are marking the instrumental music's tone/mode changes, marking the tempo or places of tempo changes, marking the speed and nuance terms, marking the difficult passages, and marking the ornaments and explanations.
Rehearsal strategies
According to Weinstein and Mayer (1986), rehearsal involves the learner actively repeating or reciting the presented items during learning. The purpose of this action is to choose and acquire the units to be transmitted to long-term memory. Rehearsal strategies are effective in acquiring information through cognitive rehearsal after determining what information is important, and in making the information permanent. Consequently, rehearsal strategies are quite effective in learning information that needs to be remembered (for example, poetry, composition, etc.). Some examples of the rehearsal strategies that can be used during the practice and learning of instrumental music are repetition on the instrument of the entire piece, the difficult passages or the parts where mistakes have been made until the hand positions are performed easily, or cognitive rehearsal of the piece without playing an instrument until it can be played easily. However, to make learning permanent, learning strategies involving higher-level cognitive actions are used. These actions may be possible with elaboration and articulation-organization strategies.
Elaboration strategies
Elaboration strategies enable students to understand new information by activating their preexisting knowledge and to create analogies (Wernke et al., 2011). The purpose of elaboration strategies is to create links between preexisting knowledge, or the information stored in long-term memory, and the information believed to be important to remember. Successful learning includes establishing links between preexisting knowledge and new information; the new information should be built on and linked with the previous information (Cornford, 2002). An example of elaboration strategies that can be used during the practice and learning of instrumental music is establishing similarity and difference relationships between the tonal-modal, rhythmic and technical characteristics of newly learned music and those of previously learned music.
Articulation-organization strategies
Articulation-organization strategies are among the ways to elaborate the material to be learned. Articulation strategies are strategies that increase the meaning of the information and the number of associations between the information and its parts (Senemoğlu, 1997). Organization strategies include grouping common characteristics between different examples in the learning material, sequencing the given words in a meaningful way and reorganizing complex information (Weinstein and Mayer, 1986). In other words, organization strategies involve transforming information into different forms and developing a schematic system between the parts (Cornford, 2002). Some examples of articulation-organization strategies that can be used during the practice and learning of instrumental music are concretizing abstract situations in music by using visual markings or developing visual images in the mind, attempting to visualize musical statements in the mind, creating codes that will make the music permanent, and grouping structures in music that show similarities or differences.

Comprehension monitoring strategies

Weinstein and Mayer (1986) point out that comprehension monitoring strategies involve students determining learning objectives in a learning activity, evaluating the extent to which these objectives are achieved and, when necessary, changing the strategies used to achieve these objectives. According to Pintrich (2004), comprehension monitoring strategies include students planning, supervising, regulating and changing their learning process. Some examples of comprehension monitoring strategies that can be used during the practice and learning of instrumental music are: thinking about the contributions of instrumental music to technical and musical development, using practice methods appropriate to the music, identifying the reasons for difficulties that were encountered, and developing methods to overcome these difficulties.
Study purpose
The purpose of this study is to develop a valid and reliable scale identifying the strategies students who study instrumental music use during the practice and learning of instrumental music, based on their own responses. As a result of examining the related studies, it was believed that there was a need for a measurement tool focusing on attention, rehearsal, elaboration, articulation-organization and comprehension monitoring strategies as a whole. Studying the strategies used during the practice and learning of instrumental music requires the development of a valid and reliable scale. This necessity became the starting point of this study.
METHODOLOGY
This study is a scale development study employing the survey model. Descriptive item analysis, exploratory factor analysis and analysis of the scale's performance were adopted. The research design is descriptive, since the study aims to reveal the existing relationships between the items.
After developing the candidate form, its validity and reliability were established. Reliability is the degree to which an assessment tool produces stable and consistent results; types of reliability are test-retest, parallel forms, inter-rater and internal consistency, and internal consistency is used in this study. Validity refers to how well a test measures what it is purported to measure; types of validity are face validity, criterion-related validity, construct validity, formative validity and sampling validity, and construct validity is used in this study.
Expert opinions were sought for the scale's content validity. One of the experts is in the field of measurement and evaluation at Gazi University (Ankara, Turkey); the other two are in the field of music education at Ömer Halisdemir University (Niğde, Turkey). To determine the validity and reliability of the scale, a preliminary application was conducted with the 358 students making up the study group.
For the validity of the scale, the candidate form consisting of 55 items was presented to the experts. In accordance with the expert opinions, one item was excluded from the scale because it was inadequate to measure the related structure, and five items were excluded because they were not clear enough. Thus, content validity was met for the 49-item scale. The 49-item scale was reproduced on paper and administered to the students making up the study group, and the scale's structural validity and reliability analyses were conducted on the 358 observations.
Comrey and Lee (1992) characterized a sample size of 100 as poor, 200 as fair, 300 as good, 500 as very good and 1000 or more as excellent. Guilford (1954) suggested that the sample size should be at least 200. Tavşancıl (2002) expressed that the sample size should be five to ten times the number of items. Based on the aforementioned, it can be said that the sample size in this scale development study is appropriate.
Research and data analysis
Exploratory factor analysis (EFA) was conducted for the structural validity of the scale. Factor analysis is one of the methods used for transforming correlated data structures into fewer, independent data structures, for presenting common factors by grouping variables that are supposed to explain a phenomenon or event, and for grouping variables influencing a phenomenon (Özdamar, 2002). Certain criteria must be met in factor analysis studies. The suitability of the data for factor analysis was examined with the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett's sphericity test. For suitability to be met, the KMO value should be higher than .60 and the chi-square value calculated for Bartlett's sphericity test should be statistically significant (Büyüköztürk, 2008).
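As an illustration of these two suitability checks, a minimal Python sketch is given below; the original analysis software for this step is not named in the paper, so the use of pandas and the factor_analyzer package here is only an assumed implementation, with the item responses assumed to be held in a DataFrame.

import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def check_factorability(items: pd.DataFrame, kmo_threshold: float = 0.60) -> dict:
    # Bartlett's sphericity test: the item correlation matrix should differ
    # significantly from an identity matrix.
    chi_square, p_value = calculate_bartlett_sphericity(items)
    # KMO: overall sampling adequacy should exceed the .60 threshold cited above.
    kmo_per_item, kmo_total = calculate_kmo(items)
    return {
        "kmo_total": kmo_total,        # the paper reports 0.93
        "bartlett_chi2": chi_square,   # the paper reports 9720.93
        "bartlett_p": p_value,         # the paper reports p < .001
        "suitable_for_efa": kmo_total > kmo_threshold and p_value < 0.05,
    }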
RESULTS AND DISCUSSION
This section includes the results obtained from the structural validity and reliability of the Scale for Strategies used during the practice and learning of instrumental music.
Results obtained from the scale's structural validity analysis
In the study, the sampling adequacy coefficient (KMO) was found to be 0.93, and the χ2 value of Bartlett's sphericity test was found to be 9720.93 (p<.001). These results show that the data are suitable for factor analysis. Then, varimax-rotated EFA was applied to the 49-item scale, and the items (i7, i14, i15, i22, i24, i29 and i36) with factor loadings below 0.40 were excluded. After this process, factor analysis was applied again. In the second application, items loading on more than one factor (i23, i33 and i34) were excluded and factor analysis was repeated a third time. As a result of the last application, all factor loadings were above 0.40 and the scale had five sub-dimensions (Table 1 and Figure 1). This five-sub-dimension structure explained 60.3% of the total variance. Table 1 shows the distributions of the scale's factor loadings, Table 2 presents the explained variance and item distributions of the scale's dimensions, and Figure 1 shows the eigenvalues of the scale's sub-dimensions.
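The iterative procedure just described (varimax rotation, exclusion of items with loadings below 0.40, exclusion of cross-loading items, refitting) can be sketched in Python as follows. This is an assumed reconstruction rather than the authors' actual procedure, and the 0.10 cross-loading gap used here is an assumed convention, since the paper does not state how items loading on more than one factor were identified.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def iterative_efa(items: pd.DataFrame, n_factors: int = 5,
                  min_loading: float = 0.40, cross_gap: float = 0.10):
    """Fit a varimax-rotated EFA, drop weak or cross-loading items, and refit."""
    retained = list(items.columns)
    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
        fa.fit(items[retained])
        loadings = pd.DataFrame(np.abs(fa.loadings_), index=retained)
        top = loadings.max(axis=1)                                    # highest loading per item
        second = loadings.apply(lambda r: r.nlargest(2).iloc[-1], axis=1)  # second highest
        to_drop = top[(top < min_loading) | (top - second < cross_gap)].index
        if to_drop.empty:
            return fa, retained
        retained = [c for c in retained if c not in set(to_drop)]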
Results obtained from the scale's reliability and item analysis
The distribution of reliability coefficients according to the scale's sub-dimensions is presented in Table 3. As can be seen in Table 3, the reliability coefficients of the scale and its sub-dimensions were found to be high. Distributions of the scale's item analysis are given in Table 4. In Table 4, it is seen that corrected item-total correlations are above 0.30. Based on the results, it can be concluded that the five-dimensioned scale meets the validity and reliability criteria.
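For readers who wish to reproduce such reliability figures on their own data, a minimal sketch of Cronbach's alpha and the corrected item-total correlation (each item correlated with the sum of the remaining items in its sub-dimension) is given below; the DataFrame structure is an assumption and not part of the original study.

import pandas as pd

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    """Cronbach's alpha for the items of one sub-dimension (rows = respondents)."""
    k = subscale.shape[1]
    item_variances = subscale.var(axis=0, ddof=1).sum()
    total_variance = subscale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(subscale: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    total = subscale.sum(axis=1)
    return pd.Series({col: subscale[col].corr(total - subscale[col])
                      for col in subscale.columns})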
CONCLUSION
This study aimed to develop a scale for strategies used during the practice and learning of instrumental music. The scale can be described as a self-assessment scale: developed to determine the levels of strategies individuals use during the practice and learning of instrumental music, it is a measurement tool based on individuals giving information about themselves. The scale includes attention, rehearsal, elaboration, articulation-organization and comprehension monitoring dimensions, with a total of 39 items. Reliability coefficients for the scale's sub-dimensions were found to be 0.89 for the attention strategies dimension, 0.81 for the rehearsal strategies dimension, 0.85 for the elaboration strategies dimension, 0.87 for the articulation-organization strategies dimension and 0.93 for the comprehension monitoring strategies dimension. There are no reverse-coded items in the scale. Information regarding the strategies an individual uses during the practice and learning of instrumental music can be obtained by dividing the total score from the scale by the number of items (39).
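The scoring rule stated above (total score divided by the 39 items) can be expressed as a short sketch; the column layout of the response data is an assumption.

import pandas as pd

def overall_strategy_level(responses: pd.DataFrame) -> pd.Series:
    """Total score across the 39 items divided by 39, i.e. the mean item score (1-5)."""
    n_items = responses.shape[1]  # expected to be 39
    return responses.sum(axis=1) / n_items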
RECOMMENDATIONS
It is believed that the scale will be a powerful tool for researchers in explaining the levels of strategies students use during the practice and learning of instrumental music. It can also be used as an observation tool by teachers to measure their students' levels of practice and learning strategies. Through this scale, teachers can take measures to increase the diversity of their students' strategy use and can see which strategies are effective under which conditions and for which student learning styles.
This scale can also be considered as a measurement tool for researchers to use in their experimental studies.
For this, it is important to repeat the scale's validity and reliability studies by administering the scale to different sample groups. It is also important to perform confirmatory factor analysis to confirm the factor structure. In future studies, a confirmatory factor analysis of the present scale, together with another related scale developed earlier (Kılınçer and Aydıner-Uygun, 2013a), can be performed in order to confirm the present scale's factor structure.
In addition, the relations between the present scale and valid and reliable scales (Aydıner-Uygun, 2012; Aydıner-Uygun, 2016; Miksza et al., 2016) that measure structures related to practice and learning strategies (learning approach in instruments, achievement goal orientation in instruments) can be examined. Using the scale in future studies, its measurement power can also be tested.
S/N Items (each item rated on a 5-point scale from 1 to 5)
1. I mark instrumental music's tone-mode/tone-mode changes
2. I mark instrumental music's tempo/places of tempo changes
3. I mark the tempo terms found in instrumental music
4. I mark the nuance terms found in instrumental music
5. I mark the places of ornaments like the mordent, trill and gruppetto in instrumental music
6. I underline explanations about the practice of ornaments like the mordent, trill and gruppetto in instrumental music
7. I mark the difficult passages of instrumental music
8. I rehearse the entire instrumental music piece on my instrument until I gain a certain ease in playing
9. I rehearse the instrumental music's difficult passages on my instrument until I gain a certain ease in playing by determining the difficult passages
10. I rehearse the notes of the entire instrumental music piece or a part of it visually in my mind
11. I constantly rehearse the places where I made mistakes in instrumental music on my instrument until I can play them correctly
12. I cement the hand positions found in instrumental music by constantly playing them
13. I establish similarity relationships between the techniques of instrumental music and the techniques of instrumental music I have learned previously
14. I establish difference relationships between the techniques of instrumental music and the techniques of instrumental music I have learned previously
15. I establish similarity relationships between the rhythmic characteristics of instrumental music and the rhythmic characteristics of instrumental music I have learned previously
16. I establish difference relationships between the rhythmic characteristics of instrumental music and the rhythmic characteristics of instrumental music I have learned previously
17. I establish similarity relationships between the tonal/modal characteristics of instrumental music and the tonal/modal characteristics of instrumental music I have learned previously
18. I establish difference relationships between the tonal/modal characteristics in the study/piece and the tonal/modal characteristics in the study/piece I have learned previously
19. I concretize information, concepts and situations that are abstract for me in instrumental music by using visual markings (for example, drawing triangles where there are triplets, using symbols for ligature beginnings and endings)
20. I concretize information, concepts and situations that are abstract for me in instrumental music by developing visual images in my mind (for example, visualizing a turtle's walk for the term lento)
21. I try to create a short story by visualizing the statement in instrumental music in my mind
22. I code certain note groups in instrumental music with a phrase or a sentence to make them permanent in my mind
23. I learn the notes of instrumental music by grouping them according to their movement directions
24. I learn melodic/harmonic structures moving with similar intervals in instrumental music by grouping them
25. I learn the rhythmic patterns of instrumental music by grouping them according to their similarities/differences
26. I can guess how difficult it would be for me to play a new piece of instrumental music when I see it for the first time
27. I think about the gains instrumental music will bring to my technical development
28. I think about the gains instrumental music will bring to my musical development
Table 2. Exploratory and item distributions of the scale's dimensions.
Table 3. Reliability coefficients of the dimensions.
Table 4. Distributions of the scale's item analysis. | 2018-06-03T02:48:00.120Z | 2017-04-23T00:00:00.000 | {
"year": 2017,
"sha1": "e8eb0ab5bbe5ab64eee483fb05c4d8d404b5b7c7",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/ERR/article-full-text-pdf/F81CC0163999.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e8eb0ab5bbe5ab64eee483fb05c4d8d404b5b7c7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
24387708 | pes2o/s2orc | v3-fos-license | Moderating Effects and Maintenance of Lung Cancer Cellular Immune Functions by CIK Cell Therapy
With the in-depth study of the molecular mechanisms of tumor initiation and development and the progress of biotechnology, biological treatment has become a new therapeutic approach alongside surgical treatment, chemotherapy and radiation therapy. Lung cancer is the most common malignant tumor in the world today; its occurrence and development are closely related to the body's immune function status, and cell-mediated immunity is the main route of anti-tumor immunity (Wu et al., 2004). Tumor immunotherapy is a primary means of cancer biotherapy, and adoptive immunotherapy is an effective method within it (Urbanska et al., 2012). Cytokine-induced killer (CIK) cells combine the powerful anti-tumor activity of T lymphocytes with the non-MHC-restricted tumor killing of NK cells. They can directly kill tumor cells and adjust and enhance immune function without damaging the structure and functions of the immune system, and their effects in the treatment of malignant solid tumors have been widely recognized (Blattman and Greenberg, 2004; Kim et al., 2010). However, there are few reports analyzing the regulatory effects of CIK cell therapy on cellular immune function in lung cancer patients and how long these effects are maintained. The authors therefore summarized and analyzed their experience of extracting peripheral blood mononuclear cells (PBMCs) from patients with lung cancer to prepare CIK cells, and the clinical efficacy data of reinfusing the prepared CIK cells, as reported below.
Introduction
With the in-depth study of the molecular mechanisms of tumor initiation and development and the progress of biotechnology, biological treatment has become a new therapeutic approach alongside surgical treatment, chemotherapy and radiation therapy. Lung cancer is the most common malignant tumor in the world today; its occurrence and development are closely related to the body's immune function status, and cell-mediated immunity is the main route of anti-tumor immunity (Wu et al., 2004). Tumor immunotherapy is a primary means of cancer biotherapy, and adoptive immunotherapy is an effective method within it (Urbanska et al., 2012). Cytokine-induced killer (CIK) cells combine the powerful anti-tumor activity of T lymphocytes with the non-MHC-restricted tumor killing of NK cells. They can directly kill tumor cells and adjust and enhance immune function without damaging the structure and functions of the immune system, and their effects in the treatment of malignant solid tumors have been widely recognized (Blattman and Greenberg, 2004; Kim et al., 2010). However, there are few reports analyzing the regulatory effects of CIK cell therapy on cellular immune function in lung cancer patients and how long these effects are maintained. The authors therefore summarized and analyzed their experience of extracting peripheral blood mononuclear cells (PBMCs) from patients with lung cancer to prepare CIK cells, and the clinical efficacy data of reinfusing the prepared CIK cells, as reported below.
Study subjects
A total of 511 lung cancer patients treated as out-patients or in-patients in our hospital from Jan. 2002 to Jan. 2012 were included, aged 24 to 69 years (average age 59.2±15.1 years). All patients were pathologically confirmed. The normal control group comprised 200 randomly selected healthy subjects aged 25 to 68 years (average age 45.2±11.3 years). Subjects were excluded if they had: (1) concurrent immune system disease; (2) chronic wasting disease or infectious disease; (3) other malignancies.
Methods
Conventional-treatment group: All patients were given standardized chemotherapy after surgery.
CIK cell therapy group: All patients were given standardized chemotherapy for 1 month right after surgery, and then in vitro expansion and reinfusion of autologous immune cells were carried out.
CIK cell preparation: Mononuclear cells (PBMCs) were collected from the peripheral blood of patients with a blood cell separator and suspended in serum-free medium, adjusting the cell density to 1×10⁶/ml. Cells were seeded in culture flasks and incubated at 37 °C in a 5% CO2 incubator. γ-IFN (1000 U/ml) was added at 0 h, and IL-2 (300 U/ml) and CD3 McAb (50 ng/ml) were added at 24 h. The culture medium was replaced every 4 d, with the cell density re-adjusted to 1×10⁶/ml and an additional 300 U/ml IL-2 added; every 8 d, 50 ng/ml CD3 McAb was supplemented at the same time as the IL-2. Cells were collected after about 1 week.
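For clarity, the cytokine and medium schedule described above can be summarized as a simple data structure; this is only an illustrative encoding of the stated protocol, not an executable laboratory procedure.

CIK_CULTURE_SCHEDULE = [
    {"time": "0 h",       "step": "seed PBMCs in serum-free medium at 1e6 cells/ml; add 1000 U/ml gamma-IFN"},
    {"time": "24 h",      "step": "add 300 U/ml IL-2 and 50 ng/ml CD3 McAb"},
    {"time": "every 4 d", "step": "replace medium, re-adjust density to 1e6 cells/ml, add 300 U/ml IL-2"},
    {"time": "every 8 d", "step": "supplement 50 ng/ml CD3 McAb together with the IL-2 addition"},
    {"time": "~1 week",   "step": "harvest cells for reinfusion"},
]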
CIK cell reinfusion: At the end of cell culture, cells with negative microbiological test results were collected and centrifuged to remove the culture medium. The cells were washed 3 times with physiological saline, then suspended in a solution of 30 ml of 20% human serum albumin and 100 ml of normal saline and reinfused intravenously once a day for 3 consecutive days (one course of treatment); the number of cells in each reinfusion was 2×10⁹ to 6×10⁹.
Detection of cellular immune function: 100 μl of fresh EDTA-anticoagulated peripheral blood was taken, 20 μl of monoclonal antibody was added, and the sample was mixed and stained in darkness at room temperature for 30 min. Then 1.5 ml of erythrocyte lysis buffer was added, mixed and left in the dark at room temperature for 10 min to lyse erythrocytes. The sample was centrifuged at 2000 g for 5 min and the supernatant discarded; 1 ml of 1×PBS was then added to each tube, mixed, centrifuged again to remove the supernatant, and analyzed on the flow cytometer, acquiring 10000 cells per tube. According to the results of the detection, patients were subdivided into 5 types:
Class A: CD3, CD4, CD8, CD4/CD8, NK and CIK all normal
Class B: CD3, CD4 and CD8 decreased; CD4/CD8 normal; NK increased; CIK normal
Class C: CD3 normal; CD4 significantly decreased; CD8 significantly increased; CD4/CD8 significantly decreased; NK and CIK normal
Class D: CD3 and CD4 normal; CD8 significantly decreased; CD4/CD8 significantly increased; NK and CIK normal
Class E: CD3 increased; CD4 normal; CD8 significantly increased; CD4/CD8 significantly decreased; NK decreased; CIK normal
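The five-class grouping can be expressed as a small lookup sketch, assuming each marker has already been judged against the laboratory reference range and coded as low, normal or high; the coding scheme and function names are assumptions for illustration, not part of the original method.

# Marker order: (CD3, CD4, CD8, CD4/CD8, NK, CIK), each coded "low"/"normal"/"high".
IMMUNE_CLASS_PATTERNS = {
    ("normal", "normal", "normal", "normal", "normal", "normal"): "A",
    ("low",    "low",    "low",    "normal", "high",   "normal"): "B",
    ("normal", "low",    "high",   "low",    "normal", "normal"): "C",
    ("normal", "normal", "low",    "high",   "normal", "normal"): "D",
    ("high",   "normal", "high",   "low",    "low",    "normal"): "E",
}

def classify_immune_profile(cd3, cd4, cd8, cd4_cd8, nk, cik):
    """Return the class label A-E, or None if the profile matches none of the patterns."""
    return IMMUNE_CLASS_PATTERNS.get((cd3, cd4, cd8, cd4_cd8, nk, cik))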
Statistical analysis
SPSS 16.0 was used for the statistical analysis. All data are presented as mean±standard deviation (x±s); the chi-square test was used for count data, analysis of variance for measurement data, and the LSD method for pairwise comparisons between groups. Kaplan-Meier analysis was used to analyze the maintaining time, and the log-rank test was used for comparisons between two groups, with P<0.05 judged to be statistically significant.
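Since the original analyses were performed in SPSS 16.0, the following Python sketch using scipy and lifelines is only an assumed equivalent of the tests named above (one-way ANOVA with LSD-style pairwise comparisons, and Kaplan-Meier estimation with a log-rank comparison); the group layout is an assumption.

import itertools
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_groups(groups: dict):
    """One-way ANOVA across all groups plus uncorrected (LSD-style) pairwise t-tests."""
    f_stat, p_anova = stats.f_oneway(*groups.values())
    pairwise_p = {(a, b): stats.ttest_ind(groups[a], groups[b]).pvalue
                  for a, b in itertools.combinations(groups, 2)}
    return f_stat, p_anova, pairwise_p

def maintenance_comparison(durations_a, events_a, durations_b, events_b):
    """Kaplan-Meier median maintaining time for one group and a log-rank test of two groups."""
    km = KaplanMeierFitter().fit(durations_a, event_observed=events_a)
    test = logrank_test(durations_a, durations_b,
                        event_observed_A=events_a, event_observed_B=events_b)
    return km.median_survival_time_, test.p_value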
Characteristics of the immune function of lung cancer patients
Before treatment, according to the characteristics of cellular immune function, the 511 lung cancer patients were divided into Class A (147 cases, 28.8%), Class B (128 cases, 25.0%), Class C (142 cases, 27.8%), Class D (68 cases, 13.3%) and Class E (26 cases, 5.1%). Compared with normal subjects, the cellular immune indicators of Class A showed no differences (P>0.05). In Class B, CD3, CD4 and CD8 were lower than in normal subjects (P<0.05), the CD4/CD8 ratio did not differ significantly, NK cells were significantly higher than normal, and CIK cells did not differ significantly. In Class C, CD3, CD4 and CD4/CD8 were lower than normal (P<0.05), CD8 and NK cells were significantly higher than normal, and CIK cells did not differ significantly. In Class D, CD3 and CD8 were lower than normal (P<0.05), CD4/CD8 was significantly higher than normal, and CD4, NK and CIK cells did not differ significantly. In Class E, CD4 and CD4/CD8 were lower than normal (P<0.05), CD3 and CD8 were significantly higher than normal, and NK and CIK cells did not differ significantly (Table 1).
CIK treatment effects on the regulation of different types of immune dysfunction in lung cancer patients
After one course of CIK treatment, 31 of 67 patients in Class B improved (46.3%); this improvement rate was significantly higher than in the other classes, and Class D showed the worst rate, only 8.0%, a statistically significant difference (P=0.000). Classes B, C and E improved significantly after the 2nd course, but Class D did not improve significantly until the 3rd course. The improvement rates of the four classes at the end of 5 courses were 79.1%, 70.8%, 76.0% and 70.0%, respectively, all significant, with no statistically significant difference among them (P=0.675). For Class B, the improvement rate was 46.3% after the 1st course and 64.2% after the 2nd course, a statistically significant difference (P=0.001); after the 3rd course the situation stabilized, with no statistically significant difference compared with the 4th course (P=0.131). For Class C, there was no significant improvement after the 1st course (improvement rate 19.4%), while there was a significant improvement after the 2nd course (52.8%, P=0.000); after the 3rd course the situation stabilized, with no statistically significant difference compared with the 4th course (P=0.176). For Class D, the improvements in the 1st and 2nd courses were not obvious; in the 3rd course the improvement was significant, with a rate of 62.0% (P=0.009), and there was no obvious change in the 4th and 5th courses compared with the 3rd course (P=0.288). Class E improved significantly after the 2nd course, with an improvement rate of 45.0%, and the situation stabilized after the 3rd and 4th courses (Table 2).
Maintaining time of normal status of cellular immune function in lung cancer patients
Maintaining time of normal status of cellular immune function in Class B lung cancer patients: For Class B patients whose immune function returned to normal, the median time of maintaining normal immune function was 4.1 months (3.23-4.97) at the end of the 1st course, 5.8 months (4.64-6.96) after the 2nd course, 6.3 months (6.00-6.60) after the 3rd course, 9.8 months (8.59-11.01) after the 4th course and 10.4 months (9.76-11.04) at the end of 5 courses. The maintaining time extended after each additional course; the difference between the 2nd and 3rd courses was not statistically significant (P=0.534), while the comparison between the 4th and 5th courses gave P=0.000 (Table 3 and Figure 1).
Maintaining time of normal status of cellular immune function in Class C lung cancer patients: For Class C patients whose immune function returned to normal, the median time of maintaining normal immune function was 2.2 months (1.83-2.57) at the end of the 1st course, 3.6 months (2.54-4.66) after the 2nd course, and 6.2 months (4.68-7.72) after the 3rd course, an obvious increase with statistically significant differences compared with the 1st and 2nd courses (both P=0.000). The median time was 8.2 months (7.97-8.43) after the 4th course, significantly different from the 3rd course (P=0.018), and 8.4 months (7.86-8.94) at the end of 5 courses, without a statistically significant difference (P=0.626) (Table 3 and Figure 1).
Maintaining time of normal status of cellular immune function in Class D lung cancer patients: For Class D patients whose immune function returned to normal, the median time of maintaining normal immune function was 1.0 month (0.61-1.39) after the end of the 1st course, rising more slowly to 2.6 months (0.73-4.47) after the 2nd course, and increasing markedly to 6.2 months (4.68-7.72) after the 3rd course. The median time was 8.8 months (8.41-9.19) after the 4th course and 9.8 months (9.20-10.4) at the end of 5 courses. The differences between the median times of successive courses were statistically significant (Table 3 and Figure 1).
Maintaining time of normal status of cellular immune function in Class E lung cancer patients: For Class E patients whose immune function returned to normal, the median time of maintaining normal immune function was 1.2 months (0.77-1.63) at the end of the 1st course and 2.6 months (0.73-4.47) after the 2nd course, without a statistically significant difference compared with the 1st course (P=0.272). The median time was 4.5 months (2.88-6.12) after the 3rd course, significantly different from the 2nd course (P=0.012), and 6.6 months (4.02-9.19) after the 4th course, significantly different from the 3rd course (P=0.049). The median time was 7.9 months (6.25-9.55) at the end of 5 courses, without a statistically significant difference compared with the other courses (P=0.170) (Table 3 and Figure 1).
Comparison of the maintaining time of restored immune function after the same treatment course in different classes of lung cancer patients: From Table 3, it can be seen that at the end of the 1st course the maintaining time of Class B was 4.1 months (3.23-4.97), longer than the other classes with statistically significant differences (all P=0.000); the maintaining times of Classes D and E were the worst, without a statistically significant difference between them (P=0.347) but with statistically significant differences compared with Class C (P=0.007 and 0.000, respectively). After the 2nd course, the maintaining time was still longer for Class B than for the other classes, with statistically significant differences (all P=0.000); D and E remained poor, without statistical significance between them (P=0.066), and C differed from D but not from E (P=0.004 and 0.076, respectively). After the 3rd course, the maintaining times of B, C, D and E were all significantly prolonged, but statistically significant differences remained only between B and D or E (P=0.05 and 0.007, respectively); the other comparisons were not statistically significant (all P greater than 0.05). After 5 courses, B and D maintained the longest, without statistical significance between them (P=0.589), while C and E maintained shorter periods, without a statistically significant difference between them (P=0.444) but with statistically significant differences compared with B and D (P<0.01) (Figure 2).
Discussion
The incidence and mortality rates of lung cancer rank first among malignancies worldwide, and more than 80% of cases are NSCLC, characterized by high incidence, high mortality and low survival (Levina et al., 2010). Standardized radiotherapy can kill the majority of tumor cells to a certain extent, but factors such as severe toxicity, resistant cells, the existence of cancer stem cells and radio-resistant cells, and the low immunity of patients with advanced tumors greatly hamper the implementation and benefit of such treatment programs (Levina et al., 2010; Baas et al., 2011; Hsu et al., 2011). Therefore, the exploration and research of new treatment methods has received widespread attention.
There are CD system antigens on the surface of normal T cells. CD3+ T cells are all mature T cells, CD4+ T cells are helper T lymphocytes, and CD8+ T cells are suppressor T lymphocytes. The proportions of these cells in the human body are kept constant to maintain an optimal state of equilibrium and thereby achieve cellular immune surveillance (Shepherd et al., 2011). The main role of CD4 T cells, as helper T lymphocytes, is to recognize exogenous antigen peptides presented by MHC class II molecules and to differentiate into Th cells after activation (Bordón et al., 2011). The function of CD8 T cells is to inhibit the activation stage of the immune response, and their target cells are antigen-specific Th cells and B cells. Human tumor-specific cytotoxic T lymphocytes are all CD8 T cells, and changes in them can weaken the tumor-specific anti-tumor effect. Laboratory and clinical research both show that the CD4+/CD8+ ratio may well reflect the balance of host immune regulation, and a decrease in this ratio indicates reduced immune function and an increased chance of malignant tumor occurrence (Das et al., 2007). NK cells are effector cells that play a role in the early stage of tumors; requiring no antibodies or pre-sensitized lymphocytes, they can be quickly activated to suppress and destroy a variety of tumor cells, and are even more lethal after being affected by lymphokines (Kuss et al., 2004).
The immune function of lung cancer patients is low, so the body cannot effectively carry out immune defense reactions, which is an important factor leading to tumor cell immune escape and postoperative metastasis and recurrence (Hongeng et al., 2003; Shi et al., 2004; Sievers et al., 2004; Wang et al., 2006; Méndez et al., 2007; Wongkajornsilp et al., 2009). In this study, there were different types of cellular immune dysfunction in the peripheral blood of lung cancer patients: according to the cellular immune characteristics before treatment, the 511 patients were distributed into Class A (147 cases, 28.8%), Class B (128 cases, 25.0%), Class C (142 cases, 27.8%), Class D (68 cases, 13.3%) and Class E (26 cases, 5.1%). Compared with normal subjects, the cellular immune function of Class A showed no differences (P>0.05). In Class B, CD3, CD4 and CD8 were lower than in normal subjects (P<0.05), the CD4/CD8 ratio did not differ significantly, NK cells were significantly higher than normal, and CIK cells did not differ significantly. In Class C, CD3, CD4 and CD4/CD8 were lower than normal (P<0.05), CD8 and NK cells were significantly higher than normal, and CIK cells did not differ significantly. In Class D, CD3 and CD8 were lower than normal (P<0.05), CD4/CD8 was significantly higher, and CD4, NK and CIK cells did not differ significantly. In Class E, CD4 and CD4/CD8 were lower than normal (P<0.05), CD3 and CD8 were significantly higher, and NK and CIK cells did not differ significantly.
CIK cell treatment is a new strategy for adoptive tumor immunotherapy developed in recent years (Höltl et al., 1999; Blattman and Greenberg, 2004). CIK cells are a heterogeneous cell population, including the two subsets CD3-CD16+CD56+ and CD3+CD16+CD56+, obtained by co-incubating human PBMCs in vitro with a variety of cytokines, such as CD3McAb, IL-2 and IFN-γ, for a period of time; the main effector cells are CD3+CD56+ cells, known as the effector cells of tumor biological treatment with the strongest cytotoxic activity. CIK cells can kill tumor cells in three ways (Zhang et al., 2007). First, CIK cells recognize tumor cell surface ligands (via Mcli20) and form pore-forming proteins and granular enzymes that perforate and dissolve the cells, killing tumor cells directly. Second, CIK cells secrete a variety of cytokines (IL-2, IFN-γ, TNF, etc.) into the body, which not only directly inhibit tumor cells but also kill them indirectly by regulating the body's immune system, at the same time enhancing the anti-tumor function of T cells. Third, CIK cells can induce apoptosis of tumor cells through the expression of FasL, while expressing anti-apoptotic genes (Bcl-2, Bcl-xl, Survivin) that allow them to resist the counter-attack of FasL-positive tumor cells; this makes CIK cells able to tolerate the apoptosis induced by tumor cells expressing apoptosis-related factor ligands, and thus exert a long-lasting anti-tumor effect (Xu et al., 2010).
This study showed that the immune function of lung cancer patients with different types of cellular immune dysfunction improved after CIK cell therapy, but the number of treatment courses required differed. After 5 courses, the immune function improvement rate exceeded 70% in all classes, consistent with the report of Li et al. (2012) on immune function improvement in kidney cancer. This study suggests that the improvement of immune function by CIK cells is affected by two factors. The first is the immune functional status of the lung cancer patient prior to treatment: different immune functional states respond differently to CIK therapy. The second is the number of CIK cells: a single course of CIK treatment could not improve the immune function of lung cancer patients, whereas several courses of treatment achieved a stable effect. Among the five classes of patients, Class B needed the fewest courses, with only 1-2 required to achieve a stable effect, while Class C needed 2-3 courses, Class D needed 3-4 courses (with the poorest effect at the end of the 1st course), and Class E needed 2-3 courses. This indicates that CIK cell therapy programs should be reasonably arranged according to the immune functional status of lung cancer patients.
Maintaining and, if possible, extending the duration of anti-tumor immunity is as important as stimulating the body's anti-tumor immunity (Méndez et al., 2007; Wongkajornsilp et al., 2009). At present, in CIK cell therapy there is still controversy over how many cells need to be infused and how long the body's anti-tumor immune function lasts after infusion. In this study, the immune function tests showed that the recovery of immune function in lung cancer patients persisted for some time after CIK cell therapy, but the duration differed according to the number of courses and the type of immune disorder. This indicates that the CIK treatment cycle needs to be designed according to the patient's immune function and the number of treatments given, which would enable patients to maintain normal immune function in the long term, continue to fight the cancer, and achieve longer survival.
This study suggested that, at the end of the 1st course of treatment, the maintenance time for Class B was the longest, at 4.1 months (3.23-4.97), indicating that these patients needed another course of treatment in the 4th month after the end of the 1st course. The corresponding times were 2.2 months (1.83-2.57), 1.0 month (0.61-1.39) and 1.2 months (0.77-1.63) for Classes C, D and E, respectively, indicating that these patients needed another course in the 1st or 2nd month after the end of the 1st course; this differs markedly from the traditional CIK treatment schedule, in which the interval after the 1st course is 3 months before the next course. At the end of the 2nd course, the median durations for the four classes were 5.8 months (4.64-6.96), 3.6 months (2.54-4.66), 2.6 months (0.73-4.47) and 2.6 months (2.31-2.89), respectively, again significantly different from the traditional interval of 6 months after the 2nd course before the next course starts. At the end of five courses, the durations for the four classes were significantly prolonged but did not reach 1 year. This indicates that the appropriate treatment schedule should be set according to the changes in immune function during CIK cell treatment.
In conclusion, CIK cells can improve the immune function of lung cancer patients, but the degree of improvement is associated with the immune function status before treatment and the number of treatment courses. The improved cellular immune function of lung cancer patients can be maintained for some time, but the length of this maintenance period is likewise associated with the pre-treatment immune status and the number of courses. Different treatment intervals therefore need to be selected according to the individual situation during CIK cell treatment.
Figure 2. Comparison of Maintaining Time of Restoring Immune Function after the Same Treatment Course in Different Class Lung Cancer Patients | 2018-04-03T05:12:10.391Z | 2013-06-30T00:00:00.000 | {
"year": 2013,
"sha1": "5c8c50bae277b2cb4152821212dc1b31f94ec8aa",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201332479511386&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5c8c50bae277b2cb4152821212dc1b31f94ec8aa",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10416676 | pes2o/s2orc | v3-fos-license | Vector-free transmission and persistence of Japanese encephalitis virus in pigs
Japanese encephalitis virus (JEV), a main cause of severe viral encephalitis in humans, has a complex ecology, composed of a cycle involving primarily waterbirds and mosquitoes, as well as a cycle involving pigs as amplifying hosts. To date, JEV transmission has been exclusively described as being mosquito-mediated. Here we demonstrate that JEV can be transmitted between pigs in the absence of arthropod vectors. Pigs shed virus in oronasal secretions and are highly susceptible to oronasal infection. Clinical symptoms, virus tropism and central nervous system histological lesions are similar in pigs infected through needle, contact or oronasal inoculation. In all cases, the tonsils are a particularly important site of replication, in which JEV is found to persist for at least 25 days despite the presence of high levels of neutralizing antibodies. Our findings could have a major impact on the ecology of JEV in temperate regions with short mosquito seasons.
Japanese encephalitis virus (JEV) causes an important zoonotic vector-borne disease and was first isolated from a human in Japan in 1935 (ref. 1). It is currently present in East and Southeast Asia and Australia 2,3. The annual incidence of human cases is reported to be in the range of 50,000-175,000 (refs 2-4). During an epidemic, roughly 0.1-4% of infected individuals develop clinically apparent encephalitis. The mortality associated with JEV encephalitis is as high as 25-30%, and ~50% of surviving patients suffer neuropsychiatric sequelae 2,3. JEV is considered to be the most frequent viral encephalitis associated with fatal or severe outcomes 5.
JEV is vector-borne with Culex mosquitoes as its main vectors, and with waterbirds such as egrets and herons as reservoirs. However, it has been reported that pigs serve as amplifying hosts in human epidemics 2,3,6-10 . As early as the 1950s, studies found that pigs are readily infected with JEV and develop viraemia for several days 8,11 . While clinical symptoms in pigs are mild, humans and horses can develop severe disease with encephalitis. Factors favouring pigs as being the main amplifying host for JEV are a high birth rate and a rapid population turnover, resulting in constant generation of an immunologically naive population. Furthermore, an important JEV vector Culex tritaeniorhynchus preferentially feeds on pigs 2,3,9,12,13 . Fortunately, viraemia in humans and horses is probably insufficient to infect mosquitoes, and they are considered to be dead-end hosts 2 .
In the past, the temperate northern Japanese Island Hokkaido was affected by JEV epidemics, and the virus was shown to hibernate and re-emerge in the same local region 14 . Some ecological and epidemiological aspects of these outbreaks remained enigmatic. First, during several Japanese encephalitis outbreaks, no virus was isolated from locally collected mosquitoes 15,16 . Second, two geographically isolated distinct outbreaks were identified on pig farms in Hokkaido over at least 3 years, demonstrating that the virus can hibernate locally 14 . However, the underlying mechanisms were not clarified. Also, during epidemiological investigations in Taiwan, no viraemic mosquitoes were found in the period before JEV outbreaks in pigs 17 .
We questioned if vector-free transmission might be possible and, if so, could help explain some of the observations made in temperate regions. Therefore, in the frame of a pathogenesis study with pigs, we placed sentinels with intravenously (i.v.) infected pigs and found vector-free transmission of JEV in pigs. This finding was confirmed and further supported by demonstrating efficient oronasal infection with low doses of virus. Moreover, tonsils appear to play a prominent role as a source of virus replication and persistence.
Results
JEV can transmit between pigs in the absence of vectors. We observed JEV transmission from needle-infected pigs to uninfected naive pigs when three infected pigs were housed with two uninfected animals. Before infection, all piglets were healthy and alert, with normal body temperatures of 38.7-39.4°C. Body temperature in the needle-infected animals increased after 24 h, with readings up to 40.6°C; fever lasted for 4-5 days before dropping to pre-infection levels (Fig. 1a). The two contact animals developed fever 6 and 9 days after needle infection of the other three animals. In all but one animal, fever curves were double-peaked. Appetite was reduced in all animals. They produced less manure and were reluctant to move for 3-6 days. When body temperature normalized, clinical symptoms declined and finally disappeared.
Viraemia in needle-infected animals lasted 3 days and reached maximum values in the range of 10⁴ RNA U ml⁻¹ (Fig. 1b); viral titres were 3.2 × 10⁴ (2 animals) to 3.2 × 10⁵ (1 animal) tissue culture infectious dose 50 (TCID50) per ml. Viraemia was found in both contact pigs for 2 and 4 days, with maximum values around one to two orders of magnitude lower than the maximum in the needle-infected animals (Fig. 1b). Both sera were positive for virus isolation by cell culture, but their infectious titres were close to the detection limit of the assay (50 TCID50 per ml). To verify this unexpected and, to our knowledge, previously not described observation of viral transmission in a vector-free environment, we conducted a second experiment. Two animals were needle-infected and six healthy animals were kept in the same stable to act as sentinels. Clinical outcomes in the needle-infected animals were as described above, with one pig's body temperature passing 41°C. Both became viraemic in terms of both viral RNA (Fig. 1c,d) and live virus detection (1.5 and 3.2 × 10⁴ TCID50 per ml). Body temperature increased over 40°C in two of the sentinels (Fig. 1c). However, real-time reverse transcription-quantitative PCR (RT-qPCR) revealed viraemia in only one. Viraemia lasted for 3 days, and values were close to 10⁴ RNA U ml⁻¹, similar to the needle-infected pigs (Fig. 1d). The other five sentinel animals did not develop detectable viraemia or seroconversion, and RT-qPCR-positive organs were not detected during the 11-day observational period.
JEV organ tropism is independent of mode of infection. Necropsy of needle-infected animals was performed at day 11 (first experiment) or 7 (second experiment), and necropsy for sentinels was performed at day 10 or 11, which was 6-8 days after estimated transmission (post transmission). Figure 2 shows relative RNA quantities for both needle-infected and sentinel animals. Relative RNA quantities were comparable between all animals, independent of mode of infection. The lymph nodes, the ileum with its continuous Peyer's patches, and parts of the nasal cavity were positive for viral RNA. Interestingly, relative RNA levels of up to 10⁵ U g⁻¹ were found in the tonsils. These values were 1-2 orders of magnitude higher than in the other organs (Fig. 2a). In the brain tissues, values were also comparable between the needle- and contact-infected pigs. All examined regions remained positive until the end of the study period, with highest levels of up to 10³-10⁴ RNA U g⁻¹ in the (frontal) neocortex, thalamus and basal nuclei. In the brain stem and olfactory bulb, we found roughly 10 times less viral RNA compared with the other regions (Fig. 2b). By titrating lysed material from the tonsils, we confirmed the presence of live virus in all tonsils of infected pigs (Table 1).
Oronasal virus shedding by JEV-infected pigs. RT-qPCR indicated that needle-infected animals started to shed virus oronasally as early as two days post infection (p.i.) for a period of ~4 days. Animals infected by contact first shed virus 5 days after first contact with the needle-infected animals. In two of them, viral RNA in oronasal swabs was detected for 1 day only. The third animal shed virus for 3 days (Fig. 3). Swabs from the eyes, rectum and vagina/preputium, and the urine were negative, with the exception of one animal in which a foreskin swab was RT-qPCR positive at 5 days p.i. (0.6 RNA U ml⁻¹). We used cell culture to confirm that the oronasal swabs contained live virus. Most swabs collected at 4-5 days p.i. were positive (Table 2, second column). Similarly, pigs infected by contact shed live virus 6-10 days after contact, depending on the animal (Table 2, third column).
Pigs are highly susceptible to oronasal JEV infection. Considering that three out of eight in-contact animals became ill and shed virus oronasally for 1-4 days, we tested the oronasal route as a means of infection by JEV. As described in Methods, nine animals were infected oronasally using three different doses of virus. In all nine pigs, body temperatures rose after 4-9 days, reaching 41.5°C in some animals. Interestingly, two animals infected with the lowest dose (10³ TCID50) developed the highest body temperatures. By day 10 p.i., body temperatures of all animals returned to normal levels (Fig. 4a). Viraemia in all three groups was comparable, although two animals infected with the lowest dose developed viraemia 1-2 days later than the other pigs (Fig. 4b). In all pigs, viraemia lasted for 4 days. One animal, infected with the highest dose, suffered from rebound viraemia on day 16 p.i., with viral RNA detected in the serum. Only one blood sample per week was taken, and we cannot determine the duration of this second viraemia.
Animals infected via the oronasal route were also positive for JEV RNA in oronasal swabs (Fig. 4c). Swabs were positive in pigs infected with the highest dose at day 1 p.i., possibly representing input virus. Thereafter, most pigs shed virus between day 4 and 7 p.i., with relative RNA levels reaching 100 U ml⁻¹. Nevertheless, some animals had RNA-positive swabs up to 9 days p.i., which was 2 days beyond the end of the viraemic phase (Fig. 4c). Between 3 and 6 days p.i., the majority of swabs were also positive for virus isolation (Table 2). Considering the relatively low levels of virus in oronasal swabs, we decided to perform a follow-up oronasal experimental infection with doses of 10, 100 and 1,000 TCID50 per pig (Fig. 4d,e). Strikingly, all animals again became infected, with incubation times of 2-3 days and viraemia lasting 5-6 days. At 4 and 5 days p.i., body temperature in all pigs was above 39.5°C (Fig. 4d). Again, as early as 3 days p.i., some pigs had viral RNA-positive oronasal swabs; by day 7 all swabs were positive (Fig. 4f). In most pigs, oronasal virus excretion lasted 5-6 days.
[Figure legend: viral loads in tissues determined at necropsy by real-time RT-PCR and expressed as U g⁻¹ (1 U corresponding to the RNA quantity found in 1 TCID50 of a virus stock). Solid symbols represent needle-infected pigs (n = 5) killed at 7 (circles) and 11 (squares) days p.i.; open symbols represent pigs infected by contact (n = 3). Two animals, corresponding to those shown in Fig. 1b, were killed at day 11, which was 4 and 7 days after the peak of viraemia, respectively; one animal, corresponding to Fig. 1d, was killed at day 10, which was 7 days after peak viraemia. Asterisks (*) indicate significant differences calculated with a nonparametric two-tailed Mann-Whitney U-test (P < 0.05).]
Viral tropism for non-central nervous system (CNS) tissue was similar in needle- and oronasally infected pigs. At necropsy 10 days p.i., lymph nodes, ileum and tonsils from the lowest-dose-infected pigs were positive for viral RNA (Fig. 5a). In the lymph nodes and ileum, only 100-1,000 RNA U g⁻¹ were detected, while almost 100,000 RNA U g⁻¹ were found in the tonsils. No difference was observed between the 10³ and 10⁵ doses in the lymph nodes and tonsils. The trachea and nasal cavity were negative for viral RNA except in one pig (Fig. 5a). Urine samples were collected on the day of slaughter; we found one positive sample (0.5 RNA U ml⁻¹). RNA levels in the brain were comparable in tissues isolated from pigs infected with the low and middle doses. The thalamus and basal nuclei reached the highest levels, of around 1,000-10,000 RNA U g⁻¹ (Fig. 5b).
Histopathological lesions. Regardless of mode of infection, JEV induced histopathological CNS lesions typical of a viral meningoencephalomyelitis. Lesions were characterized by multifocal lymphohistiocytic perivascular cuffs affecting mainly the grey matter and, to a lesser degree, the white matter. They were associated with glial nodules and evidence of neuronal degeneration and necrosis. Frequently, a few neutrophils were present in the areas of neuronal necrosis. In addition, multifocal mild lymphohistiocytic meningitis was present. Scoring the lesions in the brain stem, cerebellum, midbrain, thalamus, hippocampus, basal nuclei, neocortex and the bulbus olfactorius indicated that the mode of infection did not fundamentally influence virus-induced pathology and the distribution of CNS lesions (Fig. 6). Note that the time after the virus had reached the CNS was unequal between the groups, because the animals were not slaughtered on the same day, the incubation period differed (Figs 1 and 4), or the time of infection was unknown. Nevertheless, the overall score, calculated as the average score across the CNS tissues analysed, was similar for all modes of infection. Lymphatic tissues including the tonsils showed slight follicular hyperplasia, which is indicative of activation, but otherwise no pathological alterations.
JEV can persist in the tonsils for at least 25 days. The levels of viral RNA were always highest in the tonsils in all animals, independent of the route and dose of infection (Figs 2 and 5). Given that the longest observational time in our initial experiment was 11 days p.i., we decided to keep a new group of animals longer to examine the potential persistence of JEV. Strikingly, on day 21 after oronasal infection with the Nakayama strain, the peripheral organs and CNS for two animals were negative for viral RNA, but 10³-10⁴ U g⁻¹ remained in the tonsils. These values were comparable to those at 7 and 11 days p.i. The lymph nodes, jejunum, trachea, olfactory bulb, neocortex and basal nuclei were positive in one animal (Fig. 7a, red squares). This animal had a second viraemia at 17 days p.i. (Fig. 4b). RT-PCR was negative in urine samples collected at 21 days p.i.
To determine if persistence of virus in the tonsils is unique to the Nakayama strain or if it can result from infection with other genotypes, we analysed six pigs infected with the Laos strain, a genotype I JEV. At necropsy on day 11, the highest RNA values were in the tonsils, whereas the CNS samples were negative or roughly two orders of magnitude lower (Fig. 7b). Strikingly, at 25 days p.i., all organs were negative for viral RNA except the tonsils, confirming JEV's ability to persist in this organ for over 3 weeks (Fig. 7c). All tonsils of infected pigs were also positive for virus isolation.
[Figure legend fragment: Fig. 1 (experiment one and two as described in Methods); viral loads were quantified by real-time RT-qPCR (1 U corresponding to the RNA quantity found in 1 TCID50 of a virus stock). Black: needle-infected animals (n = 5); red: contact-infected pigs (n = 3).]
Immune response. All infected animals mounted a rapid immune response in terms of JEV-neutralizing antibodies (Fig. 8). At 7-10 days p.i. (6-9 days after first viraemia), all animals had titres of 40-80 TCID 50 per ml. Similar titres were found in pigs infected by contact. This finding was confirmed in the oronasally infected pigs. The levels of neutralizing antibodies increased with time after infection, but they did not differ by mode of infection or JEV genotype. These results demonstrate that JEV persists in the tonsils despite the presence of an efficient humoral immune response.
Discussion
This study describes two findings concerning JEV infection in pigs, both of which may have a significant impact on our understanding of JEV's ecology, epidemiology and on approaches to controlling it. First, vector-free transmission between pigs can occur via direct contact, with animals being highly susceptible to oronasal infection. Second, the tonsils are a primary replication site of JEV, regardless of mode of infection, and JEV can persist in them for at least 25 days despite the presence of neutralizing antibodies.
Textbooks and published scientific articles describe Japanese encephalitis as being exclusively mosquito-borne, with Culex species as the main vectors (reviewed in refs 2,3). In our first experimental infection, both sentinel animals became ill, but only one out of six was infected in the second. This difference could be due to the fact that in the second experiment only two animals were needle-infected, which could have reduced the chances of contact. In fact, our facility's efficient ventilation system and low stocking density (>3 m² per animal) support transmission by a contact-dependent route rather than by aerosols. Although our study had too few animals to estimate the reproduction value of transmission, it indicates that this process is not as efficacious as with viruses that have adapted to enter through the mucosal surfaces of the airways, such as influenza virus. This is understandable, given that mosquitoes are clearly the main transmission mode of JEV. Nevertheless, it is possible that under field conditions with a dense pig population and other pathogens, the rate of vector-free transmission could be higher compared with experimental conditions with clean stables, controlled temperature and humidity, high ventilation and no crowding. Furthermore, given our experimental conditions and specific-pathogen-free (SPF) status, our pigs are indeed relatively resistant to pathogenic virus infections. Certainly, our data demonstrate that pigs are highly susceptible to oronasal infection with JEV, as a dose of only 10 TCID50 per animal was sufficient to infect all animals in this group. Interestingly, in a Rhesus macaque model used to test JEV vaccines, nasal infection was also used [18][19][20]. In those experiments, the doses causing disease were much higher, with 6.6 × 10⁶ to 2 × 10¹⁰ infectious units per animal required for infection. One study found that mice are also susceptible to oral infection with a dose of 1-2 × 10⁷ infectious units per animal, although in that study no virological data were published 21.
Importantly, although the levels of viral RNA and live virus isolated from oronasal swabs was low, mucosal virus shedding lasted up to 6 days in some animals, indicating that this could be the relevant source of virus for transmission. Our viraemia data indicate that the incubation period for pigs infected by contact was 3-5 days with respect to the development of viraemia. The incubation period was only 1 day when high oronasal virus doses were used (10 5 and 10 7 infectious units), but 2-3 days with low doses (10-1,000 units). This short incubation time could indicate that the total amount of virus transmitted by contact between pigs and resulting in an infection can be below 10 infectious units.
Different modes of infection did not result in fundamental differences in viraemia, virus excretion through the upper respiratory tract, virus tropism in the lymphoid and CNS tissues, and antibody response. If anything, viraemia lasted longer after oronasal/contact transmission. The virus doses employed for oronasal inoculation also did not appear to have a major impact on these parameters. The viraemia we observed is similar to that found by others, reaching levels of ~10⁴ infectious units per ml (refs 7,22-24). This level appears to be sufficient to transmit the virus to mosquitoes. For example, Takahashi et al. 25 found that 50% of mosquitoes ingesting 50 LD50 became infected. Thus, assuming a blood meal of 2 µl, it can be expected that a low level of viraemia would transmit virus to significant numbers of mosquitoes. Other work has demonstrated that viraemic levels comparable to those found here were sufficient to transmit virus to up to 33% of mosquitoes 22.
Considering the high viral load in the tonsils, the lymphoid tissue of the oropharynx could be a possible source of virus leading to oronasal infection. Also, the peak of viral RNA in oronasal swabs was found around 6 days p.i., which was 2-3 days after the peak of viraemia. In addition, virus-positive swabs were still found after the viraemic phase.
Several reports have demonstrated oral or nasal infection with West Nile virus (WNV), a closely related flavivirus, in a wide range of different species, including mice, wild birds, hamsters and alligators [26][27][28][29]. In humans, there is evidence for transmission via breastfeeding; this evidence is supported by data in hamsters 27,30. Furthermore, laboratory infections may have occurred through aerosol transmission 31,32. Although data supporting oronasal JEV infection in species other than pigs are rare, our data should be taken as a warning that infection via the oronasal route might be possible, and direct pig-human and bird-bird transmissions cannot be excluded. Clearly, these possibilities require future investigation.
Our second important observation was JEV's tropism for the tonsils, where viral loads were 2-3 orders of magnitude higher than in other organs. Furthermore, JEV can persist in the tonsils for at least 3 weeks. To our knowledge, JEV infection of the tonsils in other species has not been described, although in one study, the tonsils were used as a source of virus isolation in pigs 33 . We found that high viral load in the tonsils persisted well beyond the viraemic phase of infection, despite the presence of neutralizing antibodies. Even by week 3 p.i., high RNA and live virus levels were detected in tonsil homogenates. All other organs tested were negative by then.
As this finding indicates a possible persistence for more than 1 month, future studies are required to determine the occurrence and duration of JEV persistence in porcine tonsils under field conditions. Persistence may be associated with reactivation and oronasal transmission events to naive pigs, thereby affecting the epidemiology of Japanese encephalitis.
A recent review 34 notes that WNV virus persists in several mammalian species, including rhesus monkeys, hamsters, mice and humans. In monkeys, hamsters and mice, virus can persist for several weeks to months in the CNS and peripheral tissues, including lymphoid tissues and kidney. Kidney targeting by WNV is related to viruria and renal pathology in human WNVinfected patients. We did not observe renal targeting by JEV, and positive RT-PCR in urine was rare. These findings indicate tropism differences between WNV and JEV.
In fact, for recurrent JEV outbreaks in temperate regions such as Hokkaido, the mechanisms of JEV hibernation are still unexplained 14,35. In tropical regions, Japanese encephalitis is endemic throughout the year, as are mosquito vectors, and disease occurrence is clearly related to vector-borne transmission. In contrast, in temperate regions, Japanese encephalitis cases occur only in the warm season. Therefore, JEV re-emergence would require either reintroduction of JEV by migrating birds or a mechanism of virus overwintering in unknown hosts 3,35. The re-emergence of porcine Japanese encephalitis cases in Hokkaido at the same locations indicates that JEV can overwinter locally 14. In fact, more recent molecular analyses of JEV isolates from several genotypes present in temperate regions showed an important relationship between phylogeny and sampling location, favouring the concept of local overwintering. These studies indicate that the genetic diversity of JEV isolates is driven by local virus transmission cycles rather than virus introduction from distant regions, for instance by migratory birds 36,37. Researchers have proposed that overwintering occurs in vertebrate hosts such as bats, in cold-blooded species or in invertebrates such as mosquitoes and ticks, the latter involving vertical virus transmission [35][36][37]. Indeed, Culex species can overwinter locally 35, and vertical virus transmission has been demonstrated experimentally in mosquitoes [39][40][41]. Nevertheless, the winter host has not been identified, despite significant effort 35. For example, only one JEV-positive larva was found among 382,000 larvae over a period of 3.5 years in Taiwan, a fact that questions the importance of vertical transmission as an overwintering mechanism 42. In the abovementioned areas of Hokkaido Island, in which JEV remained endemic in pigs kept in distinct areas for several years, early JEV-induced abortions were observed before the mosquito season 14. Thus, alternative transmission pathways for JEV in pigs might exist, and our results should be the basis for field studies investigating the possible persistence of JEV in pigs, as well as vector-free transmission. Future studies are now urgently required to define the impact of our findings in the light of the One Health Initiative. Besides answering questions on the occurrence of vector-free transmission and virus persistence in pigs under field conditions, the cellular target of virus replication and persistence in the tonsils, the swine immune response, and the impact of persistence and vector-free transmission on virus adaptation and evolution need to be investigated.
Methods
Animal experiment. Five animal experiments were performed under biosafety level 3 (BSL3) conditions and approved by the Cantonal Ethical Committee for animal experiments (BE 118-13). In total, 28 healthy 7-week-old Swiss Large White pigs (15 castrated males and 13 females) from our specific-pathogen-free breeding facility were used. Animals were housed in groups of ≥3 inside pens of 15 m² in the containment facility of the IVI, representing a BSL3-Ag facility. Before infection, they were allowed 1 week to adapt to the new environment.
A first transmission experiment was performed in the frame of a pathogenesis study, in which three animals were infected both into the jugular vein and intradermally (i.d.) with a total dose of 10 7 TCID 50 of JEV (Nakayama strain, obtained from the National collection of pathogenic viruses, NCPV, Salisbury, UK) in a volume of 2 ml. The Nakayama strain is a human genotype III isolate. It was used after two passages on Vero cells (ATCC, Manassas, VA, USA). Two naive animals were kept together with the needle-infected animals to determine a possible vector-free transmission. All animals underwent necropsy at day 11 p.i.
In a second experiment, we confirmed the ability of JEV to transmit in the absence of vector by infecting two animals i.v. and i.d. as described above, and adding six naive pigs to the same pen after 24 h. The needle-infected animals underwent necropsy at day 7 p.i. and the contact pigs 9 days after being in contact with the needle-infected pigs.
In a third experiment, we determined the efficacy of oronasal infection. Nine animals were housed separately in groups of three. Each group was infected oronasally with either 10 3 , 10 5 or 10 7 TCID 50 of JEV (Nakayama strain). Six animals underwent necropsy at day 10 p.i., and those infected with the highest dose were kept until day 21 p.i. In a fourth experiment, nine pigs were again housed separately in groups of three and infected oronasally with a lower dose (10 1 , 10 2 or 10 3 TCID 50 of JEV Nakayama strain). For both experiments, 1 ml of infectious solution was injected carefully into different parts of the mouth without pressure using a needle-free syringe, and 1 ml was applied dropwise into the nose while holding the pig's head up.
In the fifth experiment, six animals were infected with 10⁶ TCID50 of a JEV genotype I strain derived from the JEV_CNS769_Laos_2009 strain (GenBank accession number: KC196115). It was produced as previously described using the ISA (Infectious Subgenomic Amplicons) reverse genetic method 43. Three pigs received the virus i.d. at the base of the ear and were kept for 25 days p.i., and three pigs were infected i.v. and kept for 11 days.
The following were assessed daily in all animals: body temperature, awareness, appetite, manure excretion, breathing, gait and neurological signs. Oronasal swabs were sampled daily from each animal, and vaginal/preputial, eye and rectal swabs were taken daily from pigs of experiment three and five (Sarstedt, Nümbrecht, Germany). Blood was drawn daily using monovettes (Sarstedt). Pigs were killed by electroshock and subsequent exsanguination. Sampling was performed immediately after exsanguination. It included swabs, blood, urine, as well as organs for RT-qPCR, virus isolation and histology. The following organs were sampled: peripheral lymph nodes, tonsils, ileum, jejunum, trachea and nasal cavity. The brain was taken out in toto. The following CNS parts were collected: brain stem, olfactory bulb, neocortex, thalamus and basal nuclei. For histology, organ samples were fixed in 4% buffered formalin.
Virological analyses. Organ samples were collected in 1.5-ml tubes (Sarstedt) containing 500 µl minimum essential medium (MEM; Life Technologies, Zug, Switzerland), and weighed before lysing with a BulletBlender (Next Advanced Inc., Averill Park, NY, USA). Lysed organs were centrifuged and the supernatants transferred into new tubes and frozen immediately at −80°C. For real-time RT-qPCR, samples were thawed and spiked with enhanced green fluorescent protein RNA as an internal control, prepared as previously described 44. RNA was extracted using a QIAmp viral RNA extraction kit (Qiagen AG, Hombrechtikon, Switzerland) according to the manufacturer's instructions. Real-time RT-PCR was performed as published 45, using 5′-GGTGTAAGGACTAGAGGTTAGAGG-3′ as the forward primer, 5′-ATTCCCAGGTGTCAATATGCTGTT-3′ as the reverse primer and FAM-cccgtggaaacaacatcatgcggc-TAMRA as the probe. The RT-PCR employed the SuperScript III Platinum One-Step RT-qPCR Kit (Life Technologies) and was run on a 7900HT Thermocycler (Applied Biosystems) for 50 cycles. To ensure the quality of RNA extraction and PCR reactions, results were used only if the enhanced green fluorescent protein RT-qPCR showed a CT value <28.
Viral load was quantified relatively by using RNA from a stock of Nakayama JEV with a known titre as a standard. The stock was serially diluted 10-fold, RNA was extracted and cycle threshold (CT) values were determined to draw a standard curve, which was linear (correlation coefficient R = 0.99) in the range of 41 to 12.6 CT, corresponding to 1 × 10⁻¹ to 1.2 × 10⁷ TCID50 per ml of the viral stock. No amplification was obtained with CT values higher than 42. The CT value corresponding to 1 TCID50 was defined as 1 RNA unit. Using this standard, the CT values of our samples were transformed into relative quantities as RNA U ml⁻¹. Organ samples were corrected for their weight and data calculated as relative RNA quantities in U mg⁻¹.
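To illustrate the relative quantification described above, the sketch below (Python with NumPy; the dilution series and CT values are invented for illustration and are not the study's data) fits a log-linear standard curve to a serially diluted stock and converts a sample CT value into relative RNA units, where 1 U is defined as the RNA quantity found in 1 TCID50 of the stock.

import numpy as np

# Illustrative standard curve: CT values measured on a 10-fold dilution series
# of a virus stock of known titre (log10 TCID50 per ml from 7 down to -1).
log10_titres = np.arange(7, -2, -1, dtype=float)
ct_values = np.array([13.0, 16.5, 20.0, 23.5, 27.0, 30.5, 34.0, 37.5, 41.0])

# Fit log10(titre) as a linear function of CT over the linear range.
slope, intercept = np.polyfit(ct_values, log10_titres, 1)

def ct_to_rna_units_per_ml(ct):
    # 1 RNA unit corresponds to the CT obtained for 1 TCID50 of the stock,
    # so the fitted curve returns relative RNA units per ml directly.
    return 10 ** (slope * ct + intercept)

# Example: a serum sample measured at CT 28
print(f"{ct_to_rna_units_per_ml(28.0):.0f} RNA U per ml")

For organ samples, the same value would then be divided by the tissue weight to express the result per unit mass, as the authors describe.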
To quantitate infectious virus, samples were serially diluted twofold in duplicate, starting at a dilution of 1:2, and 100 µl of each dilution was added to confluent Vero cells (ATCC) cultured in 96-well plates with MEM supplemented with 1% fetal bovine serum (Biochrome) and 0.01 M HEPES (Life Technologies). After 4 h of incubation, the inoculum was removed and replaced with fresh medium. The cells were incubated for 72 h before fixing with 4% paraformaldehyde (Polysciences, Warrington, PA, USA) for 10 min and staining with the anti-flavivirus E protein monoclonal antibody 4G2 (HB-112, ATCC) diluted in a saponin-PBS buffer (Sigma-Aldrich Chemie GmbH, Buchs, Switzerland), followed by a horseradish peroxidase-conjugated goat anti-mouse antibody (Dako, Baar, Switzerland) and a final colour reaction with 3-amino-9-ethylcarbazole (Sigma-Aldrich). Titres were calculated using the Reed and Muench formula 46.
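For readers unfamiliar with the endpoint calculation cited above, here is a minimal sketch of a Reed and Muench TCID50 estimate (Python; the well counts, dilution series and function name are hypothetical illustrations, not the assay data or code used in the study).

def reed_muench_tcid50(log10_dilutions, infected, total, inoculum_ml=0.1):
    # log10_dilutions: e.g. [-1, -2, -3, ...], most concentrated first.
    # infected / total: positive wells and inoculated wells per dilution.
    uninfected = [t - i for i, t in zip(infected, total)]
    # Reed & Muench accumulation: infected summed towards higher dilutions,
    # uninfected summed towards lower dilutions.
    cum_inf = [sum(infected[k:]) for k in range(len(infected))]
    cum_uninf = [sum(uninfected[:k + 1]) for k in range(len(uninfected))]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for k in range(len(pct) - 1):
        if pct[k] >= 50.0 > pct[k + 1]:
            # Interpolate the dilution giving exactly 50% infected wells.
            prop = (pct[k] - 50.0) / (pct[k] - pct[k + 1])
            step = log10_dilutions[k] - log10_dilutions[k + 1]
            log10_endpoint = log10_dilutions[k] - prop * step
            return (10 ** -log10_endpoint) / inoculum_ml  # TCID50 per ml
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Illustrative 10-fold series, 8 wells per dilution, 100 µl inoculum per well
print(reed_muench_tcid50([-1, -2, -3, -4, -5], [8, 8, 6, 2, 0], [8] * 5))

Note that the study's own titrations used twofold dilutions starting at 1:2; the same routine applies by passing the corresponding log10 dilutions (multiples of about -0.301).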
Histopathology. Samples were embedded in paraffin, cut to 4-µm thickness and stained with haematoxylin and eosin. Lesions in the CNS were semiquantitatively scored from 0 to 4 (0: no lesions; 1: minimal lesions; 2: mild lesions; 3: moderate lesions; 4: severe lesions). Scoring was performed by a blinded histopathologist and was based on lymphohistiocytic perivascular cuffs, neuronal necrosis, glial nodules and parenchymal infiltration by inflammatory cells.
Antibody responses. For plaque reduction neutralization tests (PRNT), sera were serially diluted twofold in medium in triplicate, starting at a 1:5 dilution. One hundred plaque-forming units per well of homologous virus were added to each well, and the serum-virus mix was gently agitated and incubated at 37°C for 30 min. Confluent Vero cells were then incubated with the serum-virus mix for 1 h at 37°C before washing with warm MEM (as above) and adding 200 µl of 1% methylcellulose medium (Sigma-Aldrich) supplemented with 100 IU penicillin and 100 µg ml⁻¹ streptomycin per well. After incubation for 48 h at 37°C, the cells were fixed and stained as described above. As a secondary antibody, a horseradish peroxidase-conjugated goat anti-pig antibody was used at 1:500 (Bethyl, Montgomery, TX, USA). PRNT50 titres were read as the last serum dilution that showed a 50% plaque-forming unit reduction. | 2016-05-14T12:51:31.487Z | 2016-02-23T00:00:00.000 | {
"year": 2016,
"sha1": "b3f5e5770f2ef2c2668f58dfe5e625532bb0eb31",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms10832.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3f5e5770f2ef2c2668f58dfe5e625532bb0eb31",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
250048570 | pes2o/s2orc | v3-fos-license | Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance
Summary The utilization of artificial intelligence (AI) applications has experienced tremendous growth in recent years, bringing forth numerous benefits and conveniences. However, this expansion has also provoked ethical concerns, such as privacy breaches, algorithmic discrimination, security and reliability issues, transparency, and other unintended consequences. To determine whether a global consensus exists regarding the ethical principles that should govern AI applications and to contribute to the formation of future regulations, this paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide. We identified at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open source database and tool. We present the limitations of performing a global-scale analysis study paired with a critical analysis of our findings, presenting areas of consensus that should be incorporated into future regulatory efforts.
Introduction
Since the end of the last "AI winter" (1987-1993), AI research and its industry have seen massive growth, whether in developed technologies, investment, media attention, or new tasks that autonomous systems are now able to perform. Looking at the history of submissions on ArXiv between 2009 and 2021,12 Computer Science-related papers have been the most commonly submitted material since 2018, increasing tenfold. Within the Computer Science category alone, "Computer Vision and Pattern Recognition," "Machine Learning," and "Computation and Language" are the most submitted sub-categories. Note that these are areas where Machine Learning is the current paradigm.
Besides the number of papers produced, never before has so much capital been invested in AI-related companies and startups, whether by governments or venture capital firms (more than 90 billion USD in 2021 in the USA alone), nor have so many AI-related patents been registered [72].
With the expansion of the AI field and industry came another boom, the "AI ethics boom," in which a never-before-seen demand for regulation and normative guidance of these technologies has been put forward. The growth of AI has brought with it the risks and side effects of its use. The implications of unregulated AI translate into ethical concerns, for example about privacy and surveillance, prejudice, and discrimination, that may be more socially harmful than economically beneficial. Philosophical concerns about the political and moral implications of the interaction between these technologies and human judgment are therefore urgent. This scenario has led several entities and companies worldwide to publish manifestos setting out their ethical principles concerning AI, 200 of which were mapped and analyzed in this meta-analysis.
Despite the number of publications on the subject, we can see the difficulty that countries have faced in defining guiding principles and rules that will regulate artificial intelligence due to the complexity of interests involved, as can be seen in discussions about the AI ACT in the European Union or even in the parliament of our country of origin, which discusses the Legal Framework for Artificial Intelligence.
That said, inspired by the work done by previous meta-analysts in the field, this study presents a systematic review of 200 documents related to AI ethics and governance to map whether there is any consensus or similarity among the ethical principles advocated by the institutions involved in the topic (industry, academia, civil society, etc.) that can or should be protected by future legislation.
Through this analysis, the reader will see that there is a need for regulation and that one of the biggest challenges the field faces today is that ethical principles cannot be universalized, making the standardization of contextual ethical parameters a real challenge in the search for regulation. Problems involving ethical boundaries and artificial intelligence systems are emerging and demanding responses from governments and private companies. However, these responses do not account for the scope of what is yet to come. Laws and regulations are still principled at best and do not focus on restricting the development of these systems.
We also observed that the largest number of documents expressing ethical concerns come from "AI-developing" countries, while nations that are "AI users" have far fewer manifestos. Another point is that most of these documents are superficial and generic with regard to practical application and are non-binding, which hinders their effectiveness.
The exploration of these and other points is the aim of this study.
Related Work
One of the first studies to offer a meta-analysis of published AI ethics guidelines was that of Jobin et al. [31]. In this study, the authors sought to investigate whether a global agreement on emerging questions related to AI ethics and governance was arising. The research identified 84 documents containing ethical guidelines for intelligent autonomous systems, some of them among the most cited guidelines in the literature, like the Organization for Economic Co-operation and Development (OECD) Recommendation of the Council on Artificial Intelligence [69], the High-Level Expert Group on AI Ethics Guidelines for Trustworthy AI [46], the University of Montréal Declaration for responsible development of artificial intelligence [44], the Villani Mission's French National Strategy for AI [36], among many others.13
12. ArXiv in Numbers 2021. https://arxiv.org/about/reports/2021_usage.
13. Jobin et al.'s [31] sample also contained documents from governmental organizations (e.g., Australian Government Department of Industry Innovation and Science [42]), private companies (e.g., SAP [55], Telefonica [53], IBM [10]), non-governmental organizations (e.g., Future Advocacy [1], AI4People [2]), non-profit organizations (e.g., Internet Society [63], Future of Life Institute [43]), academic institutions (e.g., AI Now Institute [28]), and professional associations (e.g., IEEE [52]), among other types of institutions. According to the authors, most of their sample came from private institutions (22.6%), governmental organizations (21.4%), and academic research institutions (10.7%), with other documents having varied origins (e.g., NGOs, non-profits, professional associations).
In addition to listing the most common principles among the documents analyzed,14 the authors also pointed to a "substantive divergence concerning how these principles were defined" (p. 1).15 Furthermore, the work of Jobin et al. is careful not to impose any normative guidance on how the mentioned principles should be made effective; it attempts to raise the issue and map the global picture. However, the limited sample means that their findings do not fully address the obstacles of integrating ethical guidelines into AI development.
Another work presenting a similar type of analysis is that of Hagendorff [21]. Hagendorff's meta-analysis/critical review focused on a smaller sample (21 documents) but prioritized a specific set of documents. Documents older than five years, documents referring only to a national context,16 documents not "specifically" related to AI (e.g., data ethics and the ethics of robotics), and corporate policies17 were not included in Hagendorff's study, which only chose documents that (according to his criteria) had "proven to be relevant in the international discourse" (p. 3).
Even though it works with a smaller sample, Hagendorff's findings corroborate those of Jobin et al.: the most cited principles found in his research were accountability (77%), privacy (77%), justice (77%), and transparency (68%). Both Jobin et al.'s and Hagendorff's studies also pointed to "blind spots" in the field of AI ethics, like the under-representation of ethical principles such as sustainability,18 and the under-representation of documents produced by institutions from South America, Africa, and the Middle East.19 Other points raised by Hagendorff's work [21] served as motivation for the present meta-analysis, such as:
• The lack of attention given to questions related to labor rights, technological unemployment, the militarization of AI and the creation of LAWS (Lethal Autonomous Weapons Systems), the spread of disinformation, electoral interference, and the misuse/dual use of AI technology, some of which are cited by less than half of the documents;
• The lack of gender diversity in the tech industry and AI ethics. According to Hagendorff's study, excluding the documents written by the research institute AI Now, a deliberately female-led organization, the proportion of female authors is only 31%;
• The short, brief, and minimalist treatment some documents give to normative principles. Some documents are no more than 500 words long;
• The lack of technical guidance on how to implement the defended principles in AI development (only 9% propose such implementations);
• The lack of discussion of long-term risks (e.g., AGI safety, existential risks).
Hagendorff's work on AI ethics deficits [22] and Jobin et al.'s [31] meta-analysis are valuable contributions to the field of AI ethics and provide the stepping stone for a worldwide analysis that seeks to include a large number of other noteworthy documents from several countries.
Finally, we would like to cite the work done by Fjeld et al. [15]. In their study, the authors worked with a sample of 36 documents originating from regions like Latin America, East/South Asia, the Middle East, North America, and Europe, produced by a variety of institution types, such as governmental institutions (13 documents), private companies (8), professional associations and NGOs (5), intergovernmental organizations (2), and other types of multi-sectoral initiatives (7).20 Like the study of Jobin et al. [31], Fjeld et al. [15] also cite the variability in how such principles are defined.21 However, once again, the restrictions of the sample analyzed can be seen, since it also focused on documents that specifically address Autonomous and Intelligent Systems (A/IS) (as defined by the IEEE [41]), leaving robotics and other AI-adjacent fields aside (e.g., Data Science). Documents that did not present normative directions (i.e., only descriptive samples) or that addressed only specific AI applications (e.g., facial recognition) were filtered out.
14. In their sample [31], the detected ethical principles were Transparency, Justice/Equity, Non-maleficence, Accountability, Privacy, Beneficence, Freedom & Autonomy, Trust, Dignity, Sustainability, and Solidarity. Of these 11 ethical principles, five were the most recurrent: Transparency (86%), Justice (81%), Non-maleficence (71%), Responsibility (71%), and Privacy (56%).
15. The top creators of these guidelines are European Union nations (especially France, Germany, and Italy), followed by the United States of America, the United Kingdom, Canada, Finland, Sweden, Denmark, Japan, China, India, Mexico, Australia, and New Zealand; in summary, mainly actors from the Global North.
16. However, Hagendorff maintained samples from the European Union, the USA, and China, in Hagendorff's words the AI "superpowers" (p. 3).
17. Concerning these exclusion criteria, Hagendorff allowed corporate policies of specific institutions (IEEE, Google, Microsoft, IBM) due to their "well-known media coverage" (p. 3).
18. Only 16% of the documents reviewed by Jobin et al. cite this principle; in Hagendorff's study, only 4% of the documents cite it.
19. According to the NGO AlgorithmWatch, its AI Ethics Guidelines Global Inventory contains 167 documents. None of these documents predate the year 2013, and only two have their origin tied to Southern Africa and Southern Asia (no documents produced in Latin America are listed).
Fjeld et al. also pointed out that their choice of a thirty-six-document sample size aimed to facilitate (besides all the previously mentioned exclusion criteria) a data visualization framework that could deliver a side-by-side comparison of individual documents (p. 14). Among the conclusions and recommendations for future work that the authors arrive at [15] is that ethical principles can only be interpreted inside a specific cultural context and need to be embedded in a stricter form of policy governance to have true normative strength; thus, their effectiveness in international settings remains, at best, uncertain. To summarize, we can contribute by highlighting the issue of how to make ethical guidelines compatible with AI development, expanding this debate to more parts of the globe and more AI-related fields.
There is more meta-analytical work in AI ethics that we will not cite in depth. For example, we could cite the systematic review by Zeng et al. [71] (which produced one of the repositories used in this research) or the Global AI Policy monitoring maintained by the Future of Life Institute (FLI). The studies cited here are those that worked with the largest samples (i.e., number of documents), while other studies show similar results. For a complete review of meta-analytical research on normative AI documents, we recommend the work done by Schiff et al. [57], which cites many other important works.
20. According to the authors, eight principles were the most commonly cited in their sample: Fairness/Non-discrimination (present in 100% of the analyzed documents), Privacy (97%), Accountability (97%), Transparency/Explainability (94%), Safety/Security (81%), Professional Responsibility (78%), Human Control of Technology (69%), and Promotion of Human Values (69%). They also defined these main principles as "themes," each theme containing a different group and distribution of principles that align with each other (e.g., Promotion of Human Values: Human Values and Human Flourishing, Access to Technology, Leveraged to Benefit Society). The authors cite the defense of Human Rights as a recurring theme in these documents. However, perhaps surprisingly, the institutions that bring forth the cause of human rights most frequently are not governmental (46%) or intergovernmental organizations (67%), but private institutions (88%) and civil society organizations (NGOs/non-profits) (80%), which by their very nature lack the power to make their normative claims more than suggestions or recommendations.
21. For example, in the 2018 version of the Chinese Artificial Intelligence Standardization White Paper [29], the authors mention that AI can serve to obtain more information from the population, even beyond the data that has been consented to (i.e., a violation of informed consent would not undermine the principle of Privacy), while the Indian National Strategy for Artificial Intelligence Discussion Paper (National Institution for Transforming India) [16] argues that its population must become massively aware so that they can effectively consent to the collection of personal information. The Fjeld et al. [15] study presents other, more nuanced forms of how these principles diverge in definitions.
We believe that many of the later analyses following the work of Jobin et al. [31] suffer from a small sample size. Even if the documents selected by the aforementioned studies are indeed the "most relevant," that does not mean other relevant issues are not raised in lesser-known documents; perhaps better tools for implementing normative principles in AI system design are present in those documents. To help the AI community build a better understanding of the global AI ethics landscape, in this study we sought to analyze a larger sample, while also proposing more diverse categories and typologies for the documents found.
Methodology
Building on the gaps pointed out in the previous meta-analyses, in this study we sought to combine: 1. A quantitatively larger and more diverse sample, as done by Jobin et al. [31]; our sample comprises 200 documents originating from 37 countries, spread over six continents, in six different languages. 2. A more granular typology of document types, as done by Hagendorff [21]; this typology allowed an analysis of the content of these documents that goes beyond the merely quantitative. 3. Presentation of the results in an insightful data visualization framework; we believe the data presentation by authors like Hagendorff and Fjeld et al. [15] was not "user-friendly" or clear, something we tried to overcome in our work.
We used two public repositories as primary sources for our sample: the "AI Ethics Guidelines Global Inventory" from AlgorithmWatch (AW)22,23 and the "Linking Artificial Intelligence Principles" (LAIP) Guidelines.24 The AW repository contained 167 documents, while the LAIP repository contained 90 documents.
Initially, we checked for duplicate samples between the two repositories. After disregarding duplicates, we also searched for more documents through web search engines and web scraping, using keywords such as "Artificial Intelligence Principles," "Artificial Intelligence Guidelines," "Artificial Intelligence Framework," "Artificial Intelligence Ethics," "Robotics Ethics," "Data Ethics," "Software Ethics," and "Artificial Intelligence Code of Conduct," among other related keywords. We limited our search to samples written in or translated into one of the languages our team could work with: English, Portuguese, French, German, and Spanish.
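A minimal sketch of this collection step is given below (Python; the file names, field names and keyword list are illustrative assumptions, not the scripts actually used by the authors). It merges the two repository listings, drops duplicates by a normalized title, and flags candidate documents retrieved from keyword searches.

import csv

KEYWORDS = [
    "artificial intelligence principles", "artificial intelligence guidelines",
    "artificial intelligence ethics", "robotics ethics", "data ethics",
]

def normalize(title):
    # Crude normalization so near-identical titles collapse to one key.
    return " ".join(title.lower().split())

def load_titles(path):
    with open(path, newline="", encoding="utf-8") as f:
        return [row["title"] for row in csv.DictReader(f)]

def merge_repositories(paths):
    seen, merged = set(), []
    for path in paths:
        for title in load_titles(path):
            key = normalize(title)
            if key not in seen:
                seen.add(key)
                merged.append(title)
    return merged

def keyword_hits(candidate_titles, keywords=KEYWORDS):
    # Keep web-scraped candidates whose titles mention at least one keyword.
    return [t for t in candidate_titles if any(k in t.lower() for k in keywords)]

# Example usage (hypothetical file names):
# corpus = merge_repositories(["algorithmwatch.csv", "laip.csv"])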
It is important to remember that this is an incomplete sample. Due to the language barrier, our sample consisted only of documents available in languages in which we had proficiency.
Diving into our methodological setting, we refer to "guidelines" as documents conceptualized as recommendations, policy frameworks, legal landmarks, codes of conduct, practical guides, tools, or AI principles for the use and development of this type of technology. Most of these documents present a form of principlism, i.e., normative documents based on ethical principles. Even purely practical documents/tools have certain ethical principles as their foundation (e.g., debiasing tools built upon the principle of Fairness). These principles are a foundation for how AI technologies should be used and developed. From them, documents usually create normative mechanisms/instruments, e.g., codes of conduct, ethical frameworks, governance frameworks, software development tools, impact assessment tools, etc. Now, deconstructing the expression "AI Technologies": with "AI," our scope of interest encompasses areas that inhabit the multidisciplinary umbrella that is Artificial Intelligence research, such as Statistical Learning, Data Science, Machine Learning (ML), Logic Programming/Symbolic AI, Optimization Theory, Robotics, and Software Development/Engineering.
With the term "Technologies," we refer to specific tools/techniques (e.g., rule-based systems, convolutional neural networks, Transformers), applications (e.g., image recognition software, chatbots, applied robotics), and services (e.g., the Netflix recommendation system, banks' automated fraud detection, etc.). The term refers to technologies used for automating decision processes and mimicking intelligent/expert behavior (keeping in mind that "intelligence" is not a well-defined concept backed by a strong scientific consensus). We analyzed our sample (200 documents) in two phases. In phase one, a team of ten researchers was assigned different quotas of documents. Researchers were responsible for reading, translating when needed, and hand-coding pre-established features. The first features looked for were:
• Institution responsible for producing the document;
• Country/world region of the institution;
• Type of institution (e.g., academic, non-profit, governmental, etc.);
• Year of publication;
• Principles (all values/goals/commitments/rights defined/supported by the institutions/authors that listed them in their guidelines);
• Principles description (i.e., the words used in a document to define/support a given principle);
• Gender distribution among authors (inferred through an automated analysis of first names);
• Size of the document (i.e., word count).
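For concreteness, these hand-coded features can be thought of as one record per document. The sketch below shows one possible way to represent such a record in Python; the field names and example values are illustrative assumptions, not the exact schema of our database.

```python
# Hypothetical per-document record used during hand-coding (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class GuidelineRecord:
    institution: str
    institution_type: str          # e.g., "Academic", "Governmental", "Private"
    country: str
    world_region: str
    year: int
    principles: List[str] = field(default_factory=list)              # principles cited
    principle_descriptions: List[str] = field(default_factory=list)  # paraphrased definitions
    author_first_names: List[str] = field(default_factory=list)      # input for gender inference
    word_count: int = 0

# Fictitious example entry
doc = GuidelineRecord(
    institution="Example Institute",
    institution_type="Academic",
    country="Brazil",
    world_region="South America",
    year=2019,
    principles=["Privacy", "Transparency/Explainability/Auditability"],
    word_count=4200,
)
```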
We did not break principles ("themes") into related sub-principles as in the Fjeld et al. [15] study but merely paraphrased every document's definition of the principles it defended. We used this approach to avoid creating a biased pool of sub-principles, a problem already cited by Fjeld et al. (p. 16), since the majority of documents in our sample, as in previous studies, originated in Western European and North American countries.
In the first phase, based on past works, we established a list of principles so our team could focus their search. These were: Accessibility, Accountability, Auditability, Beneficence/Non-Maleficence, Dignity, Diversity, Freedom/Autonomy, Human-Centeredness, Inclusion, Intellectual Property, Justice/Equity, Open Source/Fair Competition, Privacy, Reliability, Solidarity, Sustainability, and Transparency/Explainability. The first phase helped us refine our list of principles. We determined that similar principles could be aggregated under the same category (by expanding its name, e.g., Diversity/Inclusion/Pluralism/Accessibility), since they upheld resonant values and ideas (e.g., Inclusion, Diversity, Accessibility).
In the second phase of our analysis, we also set new principles to be contemplated. These principles were added later because: 1. We were initially unaware of their existence in the current debate.
2. They appeared to be cited a sufficient number of times (>10).
3. They could not be integrated into another category without redefining it.
The definition of these categories was subjective, but a deep dive into our sample provided the input needed. To create our "overall definition" for each category, a text mining technique called n-gram analysis was utilized, where the successive repetition of words (and groups of words) was counted in every principle category (Fig. 1). The word frequency for every principle category was calculated using sklearn.feature_extraction.text.CountVectorizer, together with a custom stop-word list to filter common words (like "the" or "and") and terms ubiquitous in our sample (like "ai" and "intelligence"); the code for our implementation can be found in this repository: https://github.com/Nkluge-correa/worldwide_AI-ethics (a minimal sketch of this counting step is shown after the list of definitions below). The defined principles helped aggregate similar and resonating values while maintaining significant typological differences. Below, the reader can find the definition that we gave to each of them:
• Accountability/Liability: accountability refers to the idea that developers and deployers of AI technologies should be compliant with regulatory bodies, also meaning that such actors should be accountable for their actions and the impacts caused by their technologies;
• Beneficence/Non-Maleficence: beneficence and non-maleficence are concepts that come from bioethics and medical ethics. In AI ethics, these principles state that human welfare (and harm aversion) should be the goal of AI-empowered technologies. Sometimes, this principle is also tied to the idea of Sustainability, stating that AI should be beneficial not only to human civilization but to our natural environment and other living creatures;
• Children & Adolescents Rights: the idea that the rights of children and adolescents must be protected. AI stakeholders should safeguard, respect, and be aware of the fragilities associated with young people;
• Dignity/Human Rights: this principle is based on the idea that all individuals deserve proper treatment and respect. In AI ethics, respect for human dignity is often tied to human rights (i.e., the Universal Declaration of Human Rights);
• Diversity/Inclusion/Pluralism/Accessibility: this set of principles advocates the idea that the development and use of AI technologies should be done in an inclusive and accessible way, respecting the different ways that the human entity may come to express itself (gender, ethnicity, race, sexual orientation, disabilities, etc.). This principle is strongly tied to another set of principles: Justice/Equity/Fairness/Non-discrimination;
• Freedom/Autonomy/Democratic Values/Technological Sovereignty: this set of principles advocates the idea that the autonomy of human decision-making must be preserved during human-AI interactions, whether that choice is individual or the freedom to choose together, such as the inviolability of democratic rights and values, also being linked to the technological self-sufficiency of Nations/States;
• Human Formation/Education: such principles defend the idea that human formation and education must be prioritized in our technological advances. AI technologies require a considerable level of expertise to be produced and operated, and such knowledge should be accessible to all. This principle seems to be strongly tied to Labor Rights. The vast majority of documents concerned with workers and work-life point to the need for re-educating and re-skilling the workforce as a mitigation strategy against technological unemployment;
• Human-Centeredness/Alignment: such principles advocate the idea that AI systems should be centered on and aligned with human values. AI technologies should be tailored to align with our values (e.g., value-sensitive design). This principle is also used as a "catch-all" category, many times being defined as a collection of "principles that are valued by humans" (e.g., freedom, privacy, non-discrimination, etc.);
• Intellectual Property: this principle seeks to ground the property rights over AI products and/or processes of knowledge generated by individuals, whether tangible or intangible;
• Justice/Equity/Fairness/Non-discrimination: this set of principles upholds the idea of non-discrimination and bias mitigation (with respect to the discriminatory algorithmic biases AI systems can be subject to). It defends the idea that, regardless of the different sensitive attributes that may characterize an individual, all should be treated "fairly";
• Labor Rights: labor rights are legal and human rights related to the labor relations between workers and employers. In AI ethics, this principle emphasizes that workers' rights should be preserved regardless of whether labor relations are being mediated/augmented by AI technologies or not. One of the main preoccupations pointed out when this principle is presented is the mitigation of technological unemployment (e.g., through Human Formation/Education);
• Open Source/Fair Competition/Cooperation: this set of principles advocates different means by which joint actions can be established and cultivated between AI stakeholders to achieve common goals. It also advocates for the free and open exchange of valuable AI assets (e.g., data, knowledge, patent rights, human resources) to mitigate possible AI/technology monopolies;
• Privacy: the idea of privacy can be defined as the individual's right to "expose oneself voluntarily, and to the extent desired, to the world." In AI ethics, this principle upholds the right of a person to control the exposure and availability of personal information when mined as training data for AI systems. This principle is also related to concepts such as data minimization, anonymity, informed consent, and other data protection-related concepts;
• Reliability/Safety/Security/Trustworthiness: this set of principles upholds the idea that AI technologies should be reliable, in the sense that their use can be verifiably attested as safe and robust, promoting user trust and better acceptance of AI technologies;
• Sustainability: this principle can be understood as a form of "intergenerational justice," where the well-being of future generations must also be counted during AI development. In AI ethics, sustainability refers to the idea that the development of AI technologies should be carried out with an awareness of their long-term implications, such as environmental costs and non-human life preservation/well-being;
• Transparency/Explainability/Auditability: this set of principles supports the idea that the use and development of AI technologies should be transparent to all interested stakeholders. Transparency can be related to "the transparency of an organization" or "the transparency of an algorithm." This set of principles is also related to the idea that such information should be understandable to nonexperts and, when necessary, subject to audit;
• Truthfulness: this principle upholds the idea that AI technologies must provide truthful information. It is also related to the idea that people should not be deceived when interacting with AI systems. This principle is strongly related to the mitigation of automated means of disinformation.
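As mentioned before the list above, the "overall definition" of each category was informed by an n-gram count over the paraphrased descriptions. The snippet below is a minimal sketch of that counting step with CountVectorizer; the stop-word list, the n-gram range, and the example texts are assumptions for demonstration and do not reproduce the repository's exact implementation.

```python
# Minimal n-gram counting sketch (illustrative parameters and texts).
from sklearn.feature_extraction.text import CountVectorizer

custom_stopwords = ["the", "and", "of", "to", "ai", "artificial", "intelligence"]

def top_ngrams(descriptions, ngram_range=(1, 3), k=10):
    """Return the k most frequent n-grams across one principle category."""
    vectorizer = CountVectorizer(ngram_range=ngram_range,
                                 stop_words=custom_stopwords,
                                 lowercase=True)
    counts = vectorizer.fit_transform(descriptions)
    totals = counts.sum(axis=0).A1                 # total frequency of each n-gram
    vocab = vectorizer.get_feature_names_out()
    return sorted(zip(vocab, totals), key=lambda x: -x[1])[:k]

# Usage: all paraphrased definitions gathered under, e.g., the Privacy category
privacy_descriptions = [
    "control over the exposure of personal data",
    "informed consent and data minimization for training data",
]
print(top_ngrams(privacy_descriptions, k=5))
```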
These 17 principles cover all of the values and concerns we could identify in our sample. This typology enabled us to encompass all normative principles, recommendations, tools, and general normative discourse expressed in our sample.
The first phase of our study also determined certain "categories/types" that were assigned to each document in the second phase. These types were determined by (1) the nature/content of the document, (2) the type of regulation that the document proposes, (3) the normative strength of this regulation, and (4) the impact scope that motivates the document.
The first type relates to the nature/content of the document:
• Descriptive: descriptive documents take the effort of presenting factual definitions related to AI technologies. These definitions serve to contextualize "what we mean" when we talk about AI and how the vocabulary utilized in this field can be understood;
• Normative: normative documents present norms, ethical principles, recommendations, and imperative affirmations about what such technologies should, or should not, be used/developed for;
• Practical: practical documents present development tools to implement ethical principles and norms, be they qualitative (e.g., self-assessment surveys) or quantitative (e.g., debiasing algorithms for ML models).
These first three categories were defined as mutually inclusive, meaning that documents could be, for example, descriptive and normative, normative and practical, all three types, only one type, etc.
The second type relates to the form of regulation that the document proposes:
• Government-Regulation: this category is designed to encompass documents made by governmental institutions to regulate the use and development of AI, strictly (legally binding horizontal regulations) or softly (legally non-binding guidelines);
• Self-Regulation/Voluntary Self-Commitment: this category is designed to encompass documents made by private organizations and other bodies that defend a form of self-regulation governed by the AI industry itself. It also encompasses voluntary self-commitments made by independent organizations (NGOs, professional associations, etc.);
• Recommendation: this category is designed to encompass documents that only suggest possible forms of governance and ethical principles that should guide organizations seeking to use, develop, or regulate AI technologies.
We defined these categories as mutually exclusive, meaning that documents could only be one of the three established categories.
The third type relates to the normative strength of the regulation mechanism proposed by the document. For this, two categories were defined (based on the definitions from the Innovative and Trustworthy AI report [27]):
• Legally non-binding guidelines: these documents propose an approach that intertwines AI principles with recommended practices for companies and other entities (i.e., soft law solutions);
• Legally binding horizontal regulations: these documents propose an approach that focuses on regulating specific uses of AI through legally binding horizontal regulations, such as mandatory requirements and prohibitions.
We defined them as mutually inclusive, meaning that documents could present both forms of regulations.
The final type relates to the impact scope that motivates the document. By impact scope, we mean the dangers and negative prospects regarding the use of AI that inspired the type of regulation suggested by the document. For this, three final categories were defined, also posed as mutually exclusive:
• Short-Termism: this category is designed to encompass documents in which the scope of impact and preoccupation focuses mainly on short-term problems, i.e., problems we are facing with current AI technologies (e.g., algorithmic discrimination, algorithmic opacity, privacy, legal accountability);
• Long-Termism: this category is designed to encompass documents in which the scope of impact and preoccupation focuses mainly on long-term problems, i.e., problems we may come to face with future AI systems. Since such technologies are not yet a reality, such risks can be classified as hypothetical or, at best, uncertain (e.g., sentient AI, misaligned AGI, superintelligent AI, AI-related existential risks);
• Short-Termism & Long-Termism: this category is designed to encompass documents in which the scope of impact is both short- and long-term, i.e., they present a "mid-term" scope of preoccupation. These documents address issues related to the Short-Termism category while also pointing out the long-term impacts of our current AI adoption (e.g., AI interfering in democratic processes, autonomous weapons, existential risks, environmental sustainability, labor displacement, the need to update our educational systems).
While in the first phase of our analysis our team reviewed the entirety of our sample (each team member with their assigned quota), in phase two a single team member reviewed all 200 documents. We concluded this approach would result in a more standardized final sample; thus, all post-processed documents passed through the same criteria (and evaluator). In cases where uncertainties between classifications arose, we reached a consensus as a team.
We sought to establish our types and categories in the most objective way possible, as presented above. However, we recognize that even objective parameters are perceived and analyzed by subjective entities (people). Even if our final result possesses evaluation biases (some of our types may still be subject to interpretation and discussion), they all come from one source, consensually validated.
As a final contribution, an abstract was written for each document in our sample, allowing the reader to quickly scan its contents. We also attached a URL to each sample, linking to the institution's website, along with any other significant attachments cited in the original document.
In the end, all documents received 13 features: Origin Country, World Region, Institution, Institution Type, Year of Publication, Principles, Principles Definition, Gender Distribution, Size, Type I (Nature/Content), Type II (Form of Regulation), Type III (Normative Strength), and Type IV (Impact Scope), plus Document Title, Abstract, Document URL, and Attachments.
We used all the information obtained during the second phase to create a database that feeds our visualization tool. The dashboard was created using the Power BI tool. A secondary, open-source dashboard has been developed using the Dash library (all resources are available at the following URL: https://nkluge-correa.github.io/worldwide_AI-ethics/). The main differences between our tool and Hagendorff's table [21] or Fjeld et al.'s [15] graphs are its interactivity and the possibility of combining different filters without being restricted to preconfigured orderings. This enables researchers to draw on it to problematize characteristics found in their regions, map trends and behaviors, or investigate categories related to their particular research focus.
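To give a sense of the kind of filter-and-recompute interactivity described above, the following is a minimal sketch using the Dash library. The CSV file, its column names, and the layout are illustrative assumptions and do not reproduce the published dashboard.

```python
# Minimal interactive-filter sketch (hypothetical "guidelines.csv" with an
# "institution_type" column and a ";"-separated "principles" column).
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

df = pd.read_csv("guidelines.csv")

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(sorted(df["institution_type"].unique()),
                 id="inst-type", placeholder="Filter by institution type"),
    dcc.Graph(id="principle-bars"),
])

@app.callback(Output("principle-bars", "figure"), Input("inst-type", "value"))
def update_chart(inst_type):
    view = df if inst_type is None else df[df["institution_type"] == inst_type]
    counts = view["principles"].str.split(";").explode().value_counts()
    return px.bar(x=counts.index, y=counts.values,
                  labels={"x": "Principle", "y": "Documents"})

if __name__ == "__main__":
    app.run_server(debug=True)
```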
Another distinguishing feature of our tool is its ability to condense large amounts of information into a single visualization panel. We chose this way of presenting our data to make it easier to interpret how certain features interact with others. Previous works demonstrate the statistical distribution of certain features, such as country of origin and type of institution. But how are these features related to each other? For example:
• Of the countries that have published the most documents, what are the main, and the least, advocated principles?
• What are the major concerns of NGOs?
• Which institutions are more concerned with the long term?
• Which governments are prioritizing stricter forms of regulation?And of these governments, what are the main principles advocated?
• Which institutions defend the least cited principles?
• What is the gender distribution among authors from governmental institutions?
The reader can access all this information by selecting and combining different categories on our panel.
Results
Our tool allows users to explore different statistics in varied forms. Users can select specific categories (e.g., specific countries or world regions) and see how the whole landscape of documents changes. The dashboards also present a detailed summary of each document (all of its typological classifications and information), as well as a page containing all paraphrased descriptions of the gathered ethical principles (e.g., all accounts of "Accountability/Liability").
When we examine our sample at the "country" level of granularity, we see that the bulk (13 countries = 77%) of our total sample is represented by the United States of America, the United Kingdom, Germany, Canada, China, Japan, France, Finland, the Netherlands, Switzerland, Belgium, Brazil, and South Korea, while a myriad of 24 countries (12.5%) represents the remainder of our sample, along with intergovernmental organizations such as the EU (9 = 4.5%) and the UN (6 = 3%).
Looking at the distribution among world regions (Fig. ??) (aggregated by continent), we see that the bulk of produced documents comes from Europe, North America, and Asia, while South America, Africa, and Oceania together represent less than 4.5% of our entire sample, with countries like Brazil (3 = 1.5%) spearheading the "rest" of our world sample (Latin America, 7 = 3.5%). If it were not for the significant participation of intergovernmental organizations, like NATO, the UN, and UNESCO, which represent 6% of our sample (13 documents), other world regions/countries would be even more underrepresented. However, this still excludes the Holy See/Vatican City and Palestine. Switching our gaze to institution types (Fig. ??), except for institutions like IBM (5), Microsoft (4), and UNESCO (3), most other institutions do not have more than two published documents. We can also see that the bulk of our sample was produced by governmental institutions and private corporations (48%), followed by CSOs/NGOs (17%), non-profit organizations (16%), and academic institutions (12.5%). However, this trend only holds if we look at the totality of our sample. If we look at documents produced by continent, for example, in North America (69), private corporations (24 = 34.7%) and non-profit organizations (18 = 26%) produced most documents, followed by governmental institutions (12 = 17.4%). Meanwhile, when we look at Europe, the global trend is restored.
An in-depth analysis segmented by country shows that the engagement of particular AI stakeholders (i.e., institution types) differs between countries. For example, in China (11), the majority of documents are produced by academic institutions (5 = 45.4%), while in Germany (20), most documents in our sample came from private corporations (6 = 30%) and CSOs/NGOs (4 = 20%).
When we examined gender distribution among authors (removing documents with unspecified authors), we performed an analysis based on the first name of each author. Given the variety/diversity that names can possess, it was necessary to use automation to predict gender encodings (male/female). To make an accurate inference, it was also necessary to extract (in addition to each author's name) the most likely nationality associated with each name. For this, we used (in addition to the country/origin of each document) nationalize.io (https://nationalize.io/), an API service that predicts the nationality of a person given their first name. After that, we grouped the names of authors who had the same origin/nationality associated with their names. Finally, we used the API services of the genderize.io platform (https://genderize.io/; as of February 2023, the service is free and no API key is required for fewer than 1,000 requests per day) to infer the gender of each name. We made each request by providing the name to be inferred and the ISO-2 code of the nationality associated with that name.
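The name-based inference chain described above can be sketched as follows, assuming the publicly documented query interfaces of nationalize.io and genderize.io (a name parameter plus an optional ISO-2 country_id); the exact response fields and request limits may differ from this sketch.

```python
# Name -> most likely nationality -> gender inference (illustrative sketch).
import requests

def infer_country(first_name):
    resp = requests.get("https://api.nationalize.io", params={"name": first_name})
    countries = resp.json().get("country", [])
    return countries[0]["country_id"] if countries else None   # most probable ISO-2 code

def infer_gender(first_name, country_id=None):
    params = {"name": first_name}
    if country_id:
        params["country_id"] = country_id
    return requests.get("https://api.genderize.io", params=params).json().get("gender")

name = "Camila"
print(infer_gender(name, infer_country(name)))   # expected output: "female"
```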
The results showed that authors with "male" names predominated in our database (549 = 66%), while 34% (281) of the names were inferred as "female." Concerning the year of publication of the documents in our sample, the majority (129 = 64.5%) were published between 2017 and 2019. What we may call the "AI ethics boom" refers to the significant production of documents in 2018, which represents 30.5% (61) of our entire sample (Fig. 4). Regarding the previously defined typological categories, when looking at the documents' Nature/Content, we found that the majority of our sample is of the normative type (96%), which also presents descriptive content in 55.5% of cases and, more rarely, practical implementations (2%).
When we look at the form of regulation proposed by the documents in our sample, more than half (56%) are only recommendations to different AI stakeholders, while 24% present self-regulatory/voluntary self-commitment style guidelines, and only 20% propose a form of regulation administered by a given state/country. This lack of convergence toward a more "government-based" form of regulation is reflected in the normative strength of these documents, where the vast majority (98%) serve only as "soft laws," i.e., guidelines that do not entail any form of legal obligation, while only 4.5% present stricter forms of regulation. Since only governmental institutions can create legally binding norms (other institutions lack this power), and they produced only 24% of our sample, some may argue that this imbalance stems from this fact. However, when filtering only the documents produced by governmental institutions, the disproportion does not go away: only 18.7% of these documents propose legally binding forms of regulation. The countries at the front of this still-weak trend are Canada, Germany, and the United Kingdom, with Australia, Norway, and the USA coming right behind.
Our last typology group is impact scope. Looking at the totality of our sample, we see that short-term (47%) and "mid-term" (i.e., short-term & long-term = 52%) preoccupations prevail over more long-term ones (2%). When we filter our sample by impact scope and institution type, it appears that private corporations think more about the short term (33%), governmental institutions about the short/long term (28%), and academic (66%) and non-profit organizations (33%) about the long-term impacts of AI technologies.
Finally, examining the distribution of principles across our total sample, we arrive at the following results: the top five principles advocated in the documents of our sample are similar to the results shown by Jobin et al. [31] and Hagendorff [21], with the addition of Reliability/Safety/Security/Trustworthiness (78%), which was also in the top five of the Fjeld et al. [15] meta-analysis (80%) (Fig. 5). Looking at the principle distribution filtered by continent, the top five principles remain the same in both North America and Europe, but the Asian continent introduces the principle of Beneficence/Non-Maleficence as its 5th most cited principle (74%), putting Accountability/Liability in 6th place (70%). Filtering our results by country, we see no change in the top five principles when comparing the USA and the UK. However, looking below the top five, we begin to see differences, such as Freedom/Autonomy/Democratic Values/Technological Sovereignty (38%) and Beneficence/Non-Maleficence (34.4%) being the 6th and 7th most cited principles in the USA, and Open Source/Fair Competition/Cooperation (45.8%) and Diversity/Inclusion/Pluralism/Accessibility (41.6%) being the 6th and 7th most cited principles in the UK.
When examining the principle distribution filtered by institution type, we can also find many insights. For example, looking at our total sample, we notice that the main preoccupation of governmental institutions (worldwide) is the need for transparent systems (89.5%), private corporations mainly advocate for Reliability (87.5%), and CSOs/NGOs primarily defend the principle of fairness (88.2%).
Finally, regarding the "principle definition divergence," i.e., divergent forms of defining the observed ethical principles, we present some of the cases that most sparked curiosity. First, let us look at our most cited principle: Transparency/Explainability/Auditability. Consider the definition proposed in "ARCC: An Ethical Framework for Artificial Intelligence" [65]:
• "Promote algorithmic transparency and algorithmic audit, to achieve understandable and explainable AI systems. Explain the decisions assisted/made by AI systems when appropriate. Ensure individuals' right to know, and provide users with sufficient information concerning the AI system's purpose, function, limitation, and impact."
In comparison with the one provided in "A practical guide to Responsible Artificial Intelligence (AI)" [49]:
• "To instill trust in AI systems, people must be enabled to look under the hood at their underlying models, explore the data used to train them, expose the reasoning behind each decision, and provide coherent explanations to all stakeholders promptly. These explanations should be tailored to the different stakeholders, including regulators, data scientists, business sponsors, and end consumers."
Both definitions seem similar, but the devil is in the details. Only the first definition entails the concept of auditing, which means (in some interpretations) a third-party review of the system in question. Also, while the first document mentions that "one must explain," "ensure the right," and "provide enough information for people," clearly implying the idea of a "duty to explain" (without specifying who should explain), coupled with the "right to know," the second document says that people have "to be able to look under the hood" (also without specifying who should be able to look), without invoking the idea of a right or a duty. Nevertheless, only the second one proposes that this knowledge should be tailored to and accessible to different stakeholders, since an explanation fit for a machine learning engineer may not be understandable to a consumer.
Keeping in mind that the concept of transparency is a fundamental idea in AI (especially in machine learning research), what kinds of differences may occur when we look at "not so well defined" principles, like human-centeredness? In "Data, Responsibly (Vol. 1) Mirror, Mirror" [33], we find the following recommendation:
• "Maybe what we need instead is to ground the design of AI systems in people. Using the data of the people, collected and deployed with an equitable methodology as determined by the people, to create technology that is beneficial for the people."
While in "Everyday Ethics for Artificial Intelligence" [11], the following norm is suggested:
• "AI should be designed to align with the norms and values of your user group in mind."
The first document mentions ideas like "the use of an equitable methodology" and "technology that is beneficial for the people." This idea of "people" seems to refer to a large and diverse group (perhaps "all people"). Meanwhile, the second specifically states "your user group in mind," which could mean "a small and select group of people," if that is what the designers have in mind as "their users." Many other differences can be found in our sample, for example:
• "Tieto's AI ethics guidelines" [12] takes a different take on explainability, saying its systems "can be explained and explain itself," placing some of the responsibility for explainability on the AI system itself, making it a "stakeholder" in the accountability chain (a curious approach);
• "The Toronto Declaration" [44] gives an extensive and non-exhaustive definition of what "discrimination" means under international laws, while most other documents limit themselves to citing the concept, leaving open to interpretation which types of "discrimination" are permissible;
• In "Artificial Intelligence and Machine Learning: Policy Paper" [63], fairness is related to the idea that "AI provides socio-economic opportunities for all" (benefits), while in "Trustworthy AI in Aotearoa: AI Principles" [45], fairness is also defined as "AI systems do not unjustly harm" (impacts), which we can relate to the difference between certain notions of algorithmic fairness;
• While some documents (e.g., "Telefónica's Approach to the Responsible Use of AI" [53]) state how privacy and security are essential for AI system development, only a few (e.g., "Big Data, Artificial Intelligence, Machine Learning, and Data Protection" [25]) define what "good privacy criteria" are (e.g., data minimization);
• As a final example, while most documents interpret accountability/liability as "developers being responsible for their projects" (e.g., "Declaration of Ethical Principles for AI in Latin America" [35]), some documents also place this responsibility on users, and even on the algorithms "themselves" (e.g., "The Ethics of Code: Developing AI for Business with Five Core Principles" [48]).
Besides the ones mentioned above, many other forms of analysis are possible, for example, the shift of attention between principles over the years, or more detailed "multi-filtered" forms of analysis, such as:
• What is the principle most defended by private corporations worldwide? (Reliability)
• What is the principle with the fewest practical documents proposing solutions for its specific problems in North America? (Children & Adolescents Rights)
• Which country is at the front in terms of legally binding regulations aided by practical tools to assist AI developers? (United Kingdom)
All these results are available in our panel.
Discussion
The first point we would like to explore is the apparently persistent skew in the distribution of documents across world regions/countries. Even with a sample size twice as large as the one analyzed by Jobin et al. [31], we seem unable to escape this result. But should our results be viewed as a "true snapshot" of the current global AI ethics landscape? Other academic sources provide some perspective on the subject, and, in the end, the pictures do not match. According to Savage [56], from 2016 to 2019, "China's output of AI-related research increased by just over 120%, whereas output in the USA increased by almost 70%. In 2019 China published 102,161 AI-related papers, and the USA published 74,386."
Based on our analysis of the AI Index 2022 Annual Report [72], the top three countries by Vibrancy Ranking score (https://aiindex.stanford.edu/vibrancy/) are the US, China, and India. With more than 52 billion USD invested in the AI field in 2021, it is understandable that almost a third of our sample (58 documents) comes from the USA. But what about China and India? According to Zhang et al. [72], China has far surpassed the USA in terms of journal/conference publications and citations, while most of the "AI talent concentration" is found in India (followed by countries like the US, South Korea, Israel, and Singapore). However, China represents only 5.5% of our total sample (India, 0.5%).
Does this mean these places, like many others still underrepresented in our sample, are not paying attention to the current AI ethics debate? We believe this would be a wrong conclusion. We argue that a vibrant, unique, and culturally shaped discussion exists in places we are still unaware of, either due to language barriers or other factors (recall that we focused on ethical guidelines, not academic papers). For example, Kiemde and Kora [34], after acknowledging the dominance of Western values over the African population and the diaspora, whose contributions to the debate remain scarce (or not mainstream), bring an insightful discussion about the state of the art of AI use in Africa (which is still controlled and managed by Western monopolies), the foreign abuses suffered by the African population through AI experimentation [18], and the current state of affairs surrounding AI ethics and governance.
According to Kiemde and Kora [34], 17 of the 55 African Union (AU) member states possess data protection and privacy legislation, while Mauritius has announced the establishment of a National AI Council, also being the first African state to present an AI strategy/roadmap. The authors also present in their review a collection of published papers and documents about AI ethics in Africa [19,54,66,20] and other underrepresented countries [5], which helps to show that there is AI ethics in Africa, and probably in all other states that did not show up in our sample; it simply does not come in the format we were initially looking for.
However, the insights gathered from our sample are not entirely misleading. According to the AI Index of 2022 [72], private investment in AI reached an all-time high, surpassing the USD 90 billion mark, and became more centralized (fewer companies funded/started, with a few companies receiving a greater portion of the total funding). This could help explain why so much of our sample (24%), tied with governmental institutions, comes from the private sector. Most AI technologies are created in industry, and this industry, seeing the demands for regulation and accountability, quickly reacted by proposing its own form of regulation: self-regulation (i.e., "we promise we will do well"). Many such promises are perhaps genuine, but when governments and private institutions have "the same weight" in our sample, attention to the matter seems warranted. The impasse between private and public interests remains a question that demands a proactive legislative push, with measurable gauges of ethical practice beyond broad guidelines.
This fact may become more alarming when we look at the proportion of government documents that opt for "soft" forms of regulation (91.6%). The critique that "ethical principles are not enough to govern the AI industry" is not a new one [31,37,50,21,13]. However, some defend the "principlism" approach as something good [59]: a kind of embryonic state that precedes stricter forms of regulation. Seger [59] argues that principles can work as a valuable starting point in discussions around regulation, helping us bring cultural context into the formation of a new rule system.
Also, even considering that most countries in our sample seem to opt for legally non-binding forms of regulation (according to [27], 14 countries of the European Union, namely Denmark, Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain, and Sweden, have stated a preference for soft-law solutions), there seems to be a growing adoption of stricter, legally binding solutions. The idea that "Ethics" and "Compliance" are separate domains seems to be gaining ever-growing acknowledgment in countries such as Canada, Germany, and the United Kingdom (which comprise 66.6% of our total "legally binding" sample), while according to Zhang et al. [72], the number of AI-related bills passed into law grew from just one in 2016 to 18 in 2021, with Spain, the UK, and the USA being the top three "AI legislators" of 2021. This shows that we currently have more regulations related to AI than we ever had. Nevertheless, pushing for or achieving legally binding regulations for AI products does not end the ethical conundrum, since legislation may be topical and narrow in scope.

Another factor worth contextualizing is the "2018 AI ethics boom," i.e., the fact that almost a third of our sample (30.5%) was published in 2018 (64.5% if extended from 2017 to 2019). The AI Index report also points to this trend: since 2014, there has been a five-fold increase in publications related to AI ethics, and topics like algorithmic fairness have stopped being only academic objects of research and have become actual areas of R&D in the AI industry [72]. It is also interesting to see the shift of interest during the timeline we analyzed. In 2014, the top-cited principles were Fairness, Reliability, and Dignity (Transparency was not even in the top 10 at this time), and in 2016, Accountability, Beneficence, and Privacy received more attention (Accountability being the number one concern of documents published in 2017). But in 2018, Transparency (Explainable AI/XAI, mechanistic interpretability) became the dominant topic of concern.
What factors could explain this shift in attention? We can start by analyzing historical landmarks in those years that may be relevant to the field. AI research has seen a new wave of interest after its last winter, since a deep neural network won the ImageNet 2012 Challenge, considered by many as the moment that cemented deep learning as the new paradigm in AI research. In subsequent years, we saw many remarkable feats that this type of system could perform, like a self-taught reinforcement learning agent that became the world's "best Go player" [60] and machine learning models that can create other machine learning models [74].
Deep learning systems rapidly became utilized in many real-world applications, making a technology with little theoretical backing (but strong empirical support) widely adopted in many critical and sensitive areas of our society, something that would later prove prone to many unwanted consequences. These consequences may be a possible source of the sudden AI ethics boom we experienced between 2017 and 2019. Some high-profile cases worth mentioning are the use of the COMPAS software, for which, in 2016, Angwin et al. [4] showed that "blacks are almost twice as likely as whites to be labeled a higher risk but not re-offend"; the first case, in 2018, of a human killed by an Uber self-driving car [8]; and, in the same year, the Cambridge Analytica case, which gained considerable media attention, where personal data was used without consent for personal profiling and targeting for political advertising [30]. We can also mention relevant works that helped cement the AI ethics field as a popular area of research, like the celebrated book "Weapons of Math Destruction" [47]. All these events, and many others, helped bring life to a field that, according to the Google Books Ngram Viewer, had almost no mention before 2012 and, after 2014, saw a significant increase.
Perhaps some of these events could explain the swings of attention in AI ethics. And perhaps transparency, or machine learning interpretability, assumed the top position of concern for being an older and more substantial area of study in machine learning [38], i.e., an area that could present solid techniques and results for its problems. In a more critical sense, one could say that "it is better to show concern for a problem that we have some idea of how to start solving (interpretability) than for problems we have little to no idea of how to approach (truthfulness and labor rights)." As Hagendorff [21] asks: "What does a 'human-centered' AI look like?" (p. 8).
Meanwhile, we can see that only 55.5% of documents (111) seek to define the object of their discourse, i.e., "we are talking about autonomous intelligent systems, and this is what we understand as an autonomous intelligent system." This is a curious phenomenon, more so if we acknowledge that there is no consensual definition of what "Artificial Intelligence" is and is not [39]. There are many interpretations and contesting definitions, which may prove to be a challenge for regulating organizations, since there needs to be a rigorous definition of what we are trying to regulate. If you choose to define "AI" as only "systems that can learn," you will leave outside your scope of regulation an entire family of systems that do not learn (e.g., rule-based systems) but can still act "intelligently" and autonomously (e.g., UAVs, unmanned aerial vehicles).
We would also like to point out the seemingly low attention given to the "long-term impacts" of AI (1.5%). Even though there is a considerable amount of work produced on AI safety [6,62,61,3,24,14,32,23], the terms "safety," "alignment," or "human-level AI" are many times generically dismissed as not serious or, as Stuart Russell would say, as "myths and moonshine" [51]. Possible explanations for this could be: (a) the AI community does not find these problems real; (b) the AI community does not find these problems urgent; (c) the AI community thinks we have more urgent problems at hand; or even (d) the AI community does not know about issues like "alignment" or "corrigibility." If we look at the distribution of papers submitted to NeurIPS 2021, approximately 2% were safety-related (e.g., AI safety, ML fairness, privacy, interpretability).

Finally, let us shift our attention from "what is being said" to "who is saying it." There are significant gender differences between the male and female authors in our sample. The fact that 64% (128 documents) of our sample have unidentified authors may still mask this inequality. Academic institutions (62% male, 38% female) and non-profit organizations (65% male, 34% female) approximate gender parity but still fall short of a 1:1 parity ratio.
It is important to note that it is hard to find present-day data about gender disparity, since many organizations do not provide this information. More importantly, gender prediction methods still carry a "gender bias error rate" [67] and possible misclassifications. Although the tools we used can be regarded as accurate [58,73], gender predictions by name are blind to non-binary gender accounts and disregard self-declaration (e.g., non-binary, genderfluid, queer, or transgender).
With an understanding of the method's limitations for gender prediction, of the ethical implications of gender encoding in data analysis, and of AI gender bias analysis (a fundamental ongoing discussion in the AI community), we considered that the data contribute to making the gaps and biases increasingly apparent. Therefore, below we present some statistics to contextualize this kind of disproportion.
Going back to the AI Index report of 2022 [72], we see, to our surprise, that when AI skills penetration is filtered by gender, among the top 15 countries listed the rates for females are higher than those for males in India, Canada, South Korea, Australia, Finland, and Switzerland. However, this does not mirror the rest of the globe. For example, in the AI Now report of 2018, Whittaker et al. [68] showed that 80% of the professors at the world's leading universities (e.g., Stanford, Oxford) are male. According to the US National Center for Education Statistics (NCES), between 2008 and 2017 women earned only 32% of undergraduate degrees in STEM (even though more women graduate in the US than men, 60% more) and 18% of degrees in Computer Science. In the UK, women account for only 16% of the tech industry [26], and in Silicon Valley, the male/female proportion is 4:1 [9].
Google is one of the few big tech companies that make their internal demographic distribution publicly available [17]. Looking at it, we can see that there is still a gender gap, especially if we filter gender by race. In their 2022 report, Google proudly stated that "Black+ representation grew 2x faster than Googlers overall"; however, men (62.5%) are still hired more than women (37.5%) at Google (globally), leadership positions are still predominantly held by men (69.4%), and Black, Latin, and Native American women represent only 19.2% of their female "Googlers." Although the data provided by the 200 documents in our meta-analysis allowed us to show a gap between male and female authorship distribution, we recognize there is a broader limitation in the discussion of gender representation and gender bias in AI, such as the inclusion of transgender and non-binary individuals, sexist data/algorithmic bias, discrimination, and so forth [7].
The underrepresentation of women and minority groups in technology sectors should be approached in a multidisciplinary and multimodal (numerical, textual, visual, etc.) manner [40], including various levels of analysis, such as historical, political, and technological. In addition to considering the presence of gender biases in AI systems, it is necessary to recognize the importance of diversity within the social and economic justice debate for gender parity [70]. We hope to start a conversation in a future meta-analysis on gender matters, which seems important to ethics in the field.
This work represents a mere fraction of the true global landscape on this matter. Much data remains to be collected and typologies remain to be improved, and we hope to have helped lay the groundwork for future meta-analysts.
Conclusion
In this work, we sought to bring new data, insights, tools, and typologies, along with our interpretation/description of our sample. These descriptions sometimes went against other statistical data outside our original distribution and scope of focus. Thus, we sought to substantiate our interpretations with information beyond what we collected in our analysis. We believe this shows the, perhaps obvious, necessity of looking "outside of the data" and contextualizing it. To do so, we aimed to collect data from documents throughout the world, which are diverse and multifaceted. Although this diversity is not fully represented in the data available, we argue that these 200 documents paint the picture of a world in need of clear and enforceable rules for AI development.
From these analyses, it was possible to identify at least 17 groups of principles listed among the 200 guidelines analyzed, with the six most frequent being present in more than 50% of the guidelines. This information certainly serves as a guide for the ongoing discussions on how to regulate artificial intelligence, indicating which objectives/minimum rights should be protected by future legislation. Therefore, we consider that we achieved the initial goal of this research.
The development space of systems and applications that affect so many people is still a "black box" for most. All stakeholders should be aware of the inner workings of the processes that regulate and control their surroundings. Changing our currently deregulated AI industry may require a push for stronger government regulation, combined with a change in the culture of AI development. Many developers are already aware of the risks and flaws of these types of technologies, flaws that endanger the most vulnerable among us.
We recognize the limitations that we, as researchers, face. This is reflected in our findings, where diversity, information about non-hegemonic countries, and broader gender and LGBTQIA+ representation remain deficient. AI governance is still a new and open field that demands greater attention and exploration, and this research sought to bring new tools to future explorers. As a final statement, we remind the reader that this is an ongoing project, and we hope to keep refining our sample.
Figure 2: Number of published documents by country.
Figure 5: Number of times an aggregated principle was cited. | 2022-06-27T01:16:11.297Z | 2022-06-23T00:00:00.000 | {
"year": 2023,
"sha1": "6690f43a609564c8f455a1e17dbf1fec9f240a56",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2666389923002416/pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "d000b12f3377133d42b079dd9ec7098c63b916fc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
3457930 | pes2o/s2orc | v3-fos-license | A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level and texture features of docked ships and their adjoining dock regions are nearly indistinguishable, most popular detection methods are limited in computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) is employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.
Introduction
High-resolution optical remote sensing images have become an important data source in many marine applications. Due to their large coverage and high efficiency, such images have been extensively used in ship detection tasks such as dynamic harbor surveillance, maritime management, ship rescue, and smuggling activity monitoring [1][2][3].
In particular, ocean-going vessels and inshore ships are considered typical ship detection scenes. A number of previous studies have focused on ocean-going vessel detection and usually showed good performance [4][5][6][7][8][9]. In addition, among inshore ship detection scenes, anchored ships, which are moored in the harbor but not connected to a dock, have backgrounds similar to ocean-going vessel scenes. Given these characteristics, anchored ships were detected effectively in [10,11]. However, in inshore ship scenes, compared with anchored ships, ships berthed at a dock, which are called "docked ships," are rarely focused on. This is primarily because of the high degree of similarity in gray-level information and texture between the dock and the docked ship, which are almost connected. These factors make it challenging to accurately detect docked ships in harbor regions.
Therefore, methods have recently been developed to address these intractable problems. Based on the detection approach, these methods can be divided into three categories. The first category is based on a priori information. Long et al. [12] employ a priori geographic information to rapidly locate harbors. An accurate geographic information system (GIS) contributes to the segmentation between the sea and harbor land and facilitates separating inshore ships from the harbor. The second category is based on water-land segmentation and contour extraction. These methods, which rely on primary image features such as gray-level information and textures, are proposed in [13][14][15]. An alterable included angle code-based method is proposed by Jiang et al. in [13], in which the alterable included angle code algorithm is simplified by the evaluation parameter of the broken line. Xu et al. [14] employ the invariant generalized Hough transform, which can adapt to translation, scale, and rotation of ships, to extract the ship shape. A method based on shape and context information is presented by Liu et al. [15], in which successive shape analyses are proposed to achieve accurate localization of docked ships. Recent model-based methods can be considered to belong to a third category. Xu et al. [16] propose a new detection framework based on the robust invariant generalized Hough transform to detect inshore ships, which can adapt to the deformations of various ships. A saliency and S-HOG (histogram of oriented gradients) descriptor is presented by Qi et al. [17] to detect ships in an unsupervised manner. This method characterizes the gradient symmetry of ship sides to distinguish between ships and false alarms.
The methods introduced above offer superior detection in certain scenes; however, they have some limitations. Although the methods in the first category are effective, they are limited to specific areas for which accurate GIS information is available and cannot be used in unknown areas. During the detection process, water-land segmentation results play an important role in the methods of the second category. However, the gray-level information and textures of inshore areas are complex and are typically influenced by shadows and sunlight illumination; thus, the segmentation results are unstable. Furthermore, it is challenging for these algorithms to extract continuous contours of docked ships. Ultimately, in practice, it is difficult to effectively distinguish ships from their harbor regions. Recently, many model-based methods that adapt to deformation have been presented to detect inshore ships, which can be classified as belonging to the third category. However, these models are not applicable to targets that are partly covered by the shadows of higher ship superstructures. The detection accuracy is also influenced by the different location angles of docked ships and the complexity of ports. Moreover, these methods are computationally demanding because detailed scans of the entire image are required to obtain ship-like regions.
In this work, a hierarchical framework for inshore ship detection is designed to overcome the aforementioned problems; Figure 1 shows the workflow of the proposed method. The framework has two major parts: (1) a candidate region extraction phase, which runs rapidly and misses few targets, and (2) a candidate region identification phase, which achieves high precision and a low false alarm rate. In the candidate region extraction phase (part (1)), a rapid screening method is presented to extract candidate regions of inshore ships. Specifically, candidate regions of anchored ships in water areas can easily be extracted by fusing gray information and texture information. Furthermore, an omnidirectional intersected two-dimension scanning (OITDS) method is presented to quickly extract candidate regions of docked ships from coastal areas in large scenes without GIS information. In the candidate region identification phase (part (2)), we propose a decision mixture model (DMM) to effectively differentiate ships from candidate regions. Owing to its robustness to deformation, the deformable part model (DPM) proposed by Felzenszwalb et al. [18,19] is used to construct the key part sub-model and the whole ship sub-model of a single ship. Then, a ship surrounding correlation context sub-model is designed. Finally, the scores of these three sub-models in each candidate region are fused into a decision template (DT), as proposed in [20], to effectively discriminate between ships and false alarms. Previous work could not extract valid candidates when contours are discontinuous; the proposed DMM resolves this issue. The DMM uses a small number of sub-models to describe a ship and ensures robust identification across different ship appearances. To improve the detection accuracy, the feature information from the three sub-models is fused in the DMM to identify ships. Therefore, the DMM not only deals with typical scenes of docked ships at different berthing angles but also adapts to non-optimal scenes, such as docked ships partially obscured by shadows from a higher ship's superstructure and scenes in which the gray features of a docked ship are extremely similar to the harbor area.
Extraction of Candidate Regions
Distinct contrasts occur between anchored ships and their surrounding water; therefore, extracting candidate regions of anchored ships is simple. However, the similar features between docked ships and docks increase the difficulty of extracting candidate regions of docked ships. Docked ships always present a protruding characteristic in relatively smooth areas along the coast; therefore, when searching for this characteristic, the rapid and low-error method OITDS is designed to extract candidate regions of docked ships. Additional details are presented below.
Rapid Water-Land Segmentation
Water-land segmentation can help extract the candidate regions of two types of inshore ships. To improve efficiency, down-sampling is used to input large-scale images. After sampling, the gray feature distribution and texture neighborhood variance distribution from the sampling images are calculated. Based on the peak characteristics of these two features, an adaptive segmentation threshold could be obtained to preliminarily extract the harbor water areas. According to the ship-like shape and size characteristics of most ships, false candidate regions that are obviously incorrect can be deleted. Finally, water segmentation binary images are obtained. A more complete description can be found in [10]. Suppose that F(x, y) is the location feature of a pixel I(x, y) in these binary images. The feature of the pixel located in the land area is labeled F(x, y) = 1, and the feature of the pixel located in the water area is labeled F(x, y) = 0.
After water-land segmentation, the obvious candidate regions of anchored ships can be obtained from the binary images. Due to the similar features and the close connection between the ship and dock, a docked ship is often misjudged as a land area. Therefore, further separation is necessary to distinguish docked ships from the obtained land area, as presented in the following section.
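A minimal sketch of this segmentation step is given below. It assumes an 8-bit panchromatic image in which harbor water appears darker and smoother than land, and it approximates the peak-based adaptive threshold of [10] with Otsu's criterion; the function name, window size and variance threshold are illustrative choices rather than part of the original method.

```python
import numpy as np

def water_land_segmentation(img, var_win=5, var_thresh=20.0):
    """Rough water-land mask F(x, y): water pixels -> 0, land pixels -> 1.

    A sketch only: the adaptive gray threshold is chosen by maximizing the
    between-class variance (Otsu), and a neighbourhood variance map serves
    as the texture feature (water regions are assumed smooth and dark)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_score = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0
        m1 = (levels[t:] * p[t:]).sum() / w1
        score = w0 * w1 * (m0 - m1) ** 2
        if score > best_score:
            best_t, best_score = t, score

    # Neighbourhood variance (E[x^2] - E[x]^2 over a var_win x var_win window).
    pad = var_win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    sq_sum = np.zeros(img.shape, dtype=float)
    mean = np.zeros(img.shape, dtype=float)
    for dy in range(var_win):
        for dx in range(var_win):
            patch = padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            sq_sum += patch ** 2
            mean += patch
    mean /= var_win ** 2
    var = sq_sum / var_win ** 2 - mean ** 2

    # A pixel is treated as water only if it is both dark and smooth.
    water = (img < best_t) & (var < var_thresh)
    return (~water).astype(np.uint8)   # F(x, y): land = 1, water = 0
```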
OITDS of Docked Ship Candidate Regions
A docked ship presents a protruding characteristic in a smooth area along the coast. Considering this characteristic, intersected two-dimension scanning (ITDS) is used to extract candidates from the binary images of water-land segmentation. First, intersected scanning in the vertical and horizontal directions finds the coastline, i.e., the sea-land alternating pixels where two adjacent pixels have F(x, y) changing from "0" to "1" or from "1" to "0" in the binary images. The F(x, y) values of these coastline pixels are then changed to "2". Once the coastline is determined, bulges surrounded by coastline can be found because of their protruding characteristics. Thus, in each vertical and horizontal direction, the positions between each pair of coastline pixels may be part of a bulging area. If these location features are labeled "1", these positions are parts of a bulge on the coast, so these pixels are relabeled as "3" and considered candidates of a docked ship. Eventually, the pixels labeled F(x, y) = 3 are accumulated and the suspected bulge areas that connect the smooth coastal area and the water area are obtained. Thus, these pixels labeled "3" represent docked ship candidates.
Due to the various locations of docked ships and their different location angles, ITDS cannot easily capture all the docked ships. To overcome this problem, an omnidirectional intersected two-dimension scanning (OITDS) technique is designed that is based on the omnidirectional rotation iteration of ITDS, as shown in Figure 2. This method rotates the binary images to scan as many suspected protruding bulges as possible in different directions. Furthermore, the results of all the directions are synthesized to obtain the vast majority of suspected protruding bulges in the coastal area. In addition, based on the geometric appearance information of ships, such as their length, width and aspect ratios, these features are used to roughly discriminate the scanned results. After primary discrimination, the minimum enclosing rectangles of these bulges are set as the suspected candidate regions (SCRs).
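The sketch below illustrates one ITDS pass on the binary mask, using the labelling scheme described above (0 water, 1 land, 2 coastline, 3 bulge candidate), together with a simple omnidirectional wrapper. The function names, the angle set and the use of scipy's rotation are illustrative assumptions, not the original implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def itds(F):
    """One intersected two-dimension scan of a water(0)/land(1) mask.

    Coastline pixels are relabelled 2, and land runs enclosed between two
    coastline crossings of a scan line are relabelled 3 (bulge candidates)."""
    out = F.copy()
    for axis in (0, 1):                              # horizontal, then vertical
        view = out if axis == 0 else out.T
        for row in view:
            land = row >= 1                          # labels 1/2/3 count as land
            change = np.where(land[:-1] != land[1:])[0]
            if change.size < 2:
                continue
            for a, b in zip(change[:-1], change[1:]):
                if land[a + 1]:                      # land run bounded by water
                    seg = row[a + 1:b + 1]
                    seg[seg == 1] = 3
            for c in change:                         # mark the land-side pixel
                row[c if land[c] else c + 1] = 2
    return out

def oitds(F, angles=(0, 45, 90, 135)):
    """Omnidirectional ITDS: rotate, scan, rotate back and accumulate the
    bulge candidates found in every direction (the angle set is illustrative)."""
    acc = np.zeros(F.shape, dtype=bool)
    for a in angles:
        r = (rotate(F.astype(float), a, order=0, reshape=False) > 0.5).astype(np.uint8)
        bulge = (itds(r) == 3).astype(float)
        acc |= rotate(bulge, -a, order=0, reshape=False) > 0.5
    return acc
```

Connected components of the accumulated mask can then be filtered by length, width and aspect ratio before the minimum enclosing rectangles are taken as SCRs.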
Acquirement of Identifiable Candidate Region
To facilitate the subsequent candidate region identification processing, particularly the analysis and identification of the ship surrounding correlation context, we need to acquire identifiable candidate regions. The geometric center point O of the suspected protruding bulge in each SCR is calculated, and point A, the bulge point at the maximum distance from the center point O, is identified. The length of the line between A and O is l, and the angle α between OA and the horizontal direction is calculated. Along the long axis, extending to a point l/2 from both ends of the SCR, a new rectangle is structured as shown in Figure 3. The original-resolution gray scale image can be rotated by the angle α to determine the upright area of the new rectangle in the gray image and construct the new candidate region for the bulge.
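The geometric quantities O, A, l and α used in this construction can be computed directly from the bulge mask, as in the short sketch below; the subsequent rotation and cropping step is only indicated in the trailing comment.

```python
import numpy as np

def candidate_geometry(bulge_mask):
    """Centre O, farthest point A, long-axis length l and angle alpha (degrees)
    for one suspected protruding bulge given as a boolean mask."""
    ys, xs = np.nonzero(bulge_mask)
    O = np.array([xs.mean(), ys.mean()])            # geometric centre (x, y)
    pts = np.stack([xs, ys], axis=1).astype(float)
    d = np.linalg.norm(pts - O, axis=1)
    A = pts[np.argmax(d)]                           # farthest bulge pixel
    l = d.max()
    alpha = np.degrees(np.arctan2(A[1] - O[1], A[0] - O[0]))
    return O, A, l, alpha

# The upright candidate region is then cut from the original-resolution image
# after rotating it by -alpha about O (e.g. with scipy.ndimage.rotate) and
# extending the box by l/2 beyond both ends of the SCR's long axis.
```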
Candidate Region Identification Based on Decision Mixture Model
Based on the candidate regions, a DMM strategy is proposed to distinguish between ships and false alarms in this section. First, a DPM is employed to build the key part sub-model and the whole ship sub-model of a ship. Second, according to the context features of the ship bow and stern, a local context for docked ship discrimination is designed via the surrounding correlation sub-model. Finally, the DMM is used to fuse the scores of the three sub-models to achieve reliable ship detection in the candidate regions.
Key Part and Whole Ship Sub-Models
Although various ships are observed in the inshore area, the bow and hull of a single ship have relatively constant structural features. Therefore, these structural features can be described by deformable models. Specifically, the bow is recognized as a typical ship feature and is considered the key part of a ship for identification. Excellent adaptability to object deformation has been demonstrated in other works [21][22][23]; therefore, the DPM introduced in [18,19] is employed to train a key part sub-model and a whole ship sub-model for the proposed DMM framework.
First, histogram of oriented gradient (HOG) feature maps for each input image are constructed by calculating 8 × 8 adjacent pixel blocks according to Dalal et al. [24]. A pyramid map of features at the λth level for each input image, H, is constructed at a multi-scale resolution. In this work, both the key part feature sub-model and the whole ship feature sub-model can be defined by the (n + 2) tuple

(F_0, (F_1, v_1, d_1), . . . , (F_n, v_n, d_n), b),

where F_0 is a root model (root filter), (F_i, v_i, d_i) is a series of part models, n is the number of part models, and b is a real-valued bias term. In each part model, F_i is a part filter and v_i is a two-dimensional vector that indicates the relative position between the anchor position of part i and the root position. In addition, d_i is a four-dimensional vector that specifies the coefficients of a quadratic function defining a deformation cost for each possible position of part i relative to v_i. Both the root filter and the part filters are sized w × h and calculated in H. The part filters are HOG [24] detectors reshaped as linear filters. The root filter is designed to determine the approximate position of the suspected object, and the part filters are designed to determine the precise information of the suspected parts. Therefore, the root filter is calculated at the coarse level, and the part filters are calculated at the finer levels of the pyramid. In addition, the deformation cost is what realizes the deformation of the filters.

An object hypothesis specifies the location of the root filter and the part filters in H, z = (p_0, . . . , p_n), where p_i defines the level and position of the i-th filter in H. The score of an object hypothesis is given by the scores of the filters at their locations, minus a deformation cost that depends on the relative position of each part to the root, plus the bias:

score(p_0, . . . , p_n) = Σ_{i=0..n} F_i · φ(H, p_i) − Σ_{i=1..n} d_i · φ_d(dx_i, dy_i) + b,

where φ(H, p_i) denotes the features of H at p_i, (dx_i, dy_i) gives the displacement of the i-th part filter relative to its anchor location, and φ_d(dx_i, dy_i) = (dx_i, dy_i, dx_i^2, dy_i^2) is the deformation feature of the i-th part filter. A more complete description can be found in [18,19]. Both the key part sub-model and the whole ship sub-model are constructed by the method introduced above. Specifically, these two sub-models exploit the complementarity between the whole ship and its key part in the identification stage. In other words, the sub-models are conducive to overcoming certain non-ideal conditions in which a ship is covered by shadows from a higher superstructure. In addition, the sub-models are also adaptive to the intra-class deformations of various ships.
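The following is a schematic sketch of the hypothesis score just described. It assumes the filter responses have already been computed by cross-correlating the HOG pyramid with the root and part filters, and it omits the bias term and the search over root positions; the function name and argument layout are illustrative, not the DPM reference code.

```python
import numpy as np

def dpm_score(root_resp, part_resps, d, anchors, root_pos):
    """Score of one object hypothesis for a root-plus-n-parts model.

    root_resp  : response of the root filter at root_pos (scalar)
    part_resps : list of 2-D response maps, one per part filter, at the finer level
    d          : (n, 4) deformation coefficients d_i
    anchors    : (n, 2) anchor offsets v_i relative to the root
    root_pos   : (x, y) of the root filter at the coarse level
    """
    total = root_resp
    for i, resp in enumerate(part_resps):
        ys, xs = np.mgrid[0:resp.shape[0], 0:resp.shape[1]]
        anchor = np.asarray(root_pos) * 2 + anchors[i]   # parts live at 2x resolution
        dx, dy = xs - anchor[0], ys - anchor[1]
        phi_d = np.stack([dx, dy, dx ** 2, dy ** 2], axis=-1)
        cost = phi_d @ d[i]                              # quadratic deformation cost
        total += np.max(resp - cost)                     # best placement of part i
    return total
```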
Ship Surrounding Correlation Context Sub-Model
Due to the different docking locations and the variety of berthing angles of docked ships, identifying a uniform scene-level context representation for a docked ship is difficult. However, in the majority of docking situations, the surrounding areas of both the bow and stern have obvious context features. Compared with protruding artificial constructions located in coastal areas, the areas surrounding both the bow and stern of a ship are usually water. Furthermore, the gray information and texture features of these two local water regions are similar to those of the water in the inshore area. Thus, a ship surrounding correlation context sub-model is built to describe this feature.
Vertical candidate regions captured from Section 2.3 are translated into binary images based on the adaptive segmentation method introduced in Section 2.1. Depending on the vertical and horizontal symmetry axes, each binary image of the candidate region is evenly divided into a number of blocks (I_1, I_2, . . . , I_N); in this work, we set N = 4. In each block, the area that can be considered water is labeled I_k, as shown in Figure 4. Subsequently, features are calculated from the corresponding position of I_k in the original candidate region. Let F_1(I_k), F_2(I_k), . . . , F_M(I_k) represent the set of features, where M is the number of features. The gray mean and standard deviation are expressed as F_1(I_k) and F_2(I_k), respectively. The contrast feature and correlation feature are extracted from the gray level co-occurrence matrix and are represented by F_3(I_k) and F_4(I_k), respectively. Consequently, the neighborhood descriptions are four-dimensional, M = 4: two of the dimensions are gray descriptors and the other two are texture descriptors.

In addition, it is important to ensure that the surrounding areas of the bow and stern have similar characteristics. This is quantified through a diversity measure DF(i, k), where R_k(i) is the average of the features F_i extracted from the water areas other than I_k and DF(i, k) is the diversity between I_k and the other water areas with respect to the same feature F_i. Finally, a similarity function SDF is used to judge the similarity of the water areas across blocks, where SDF is the similarity value aggregated over each I_k, M is the number of descriptions and N is the number of blocks. This type of similarity function denotes the correlation context of the majority of docked ships. In contrast, harbor wharfs, quay cranes and other facilities, which easily produce disturbances during detection, do not exhibit this type of similarity in coastal areas.
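A sketch of the block descriptors and of one possible SDF aggregation is shown below. The GLCM is computed for a single horizontal neighbour offset for brevity, and since the exact SDF equations are not reproduced here, the aggregation uses a simple normalized absolute-difference diversity as a hypothetical instantiation; all names and constants are assumptions.

```python
import numpy as np

def block_features(block, levels=16):
    """Gray and texture descriptors F_1..F_4 of one water block (8-bit input)."""
    g = (block.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (g[:, :-1].ravel(), g[:, 1:].ravel()), 1)   # horizontal pairs
    glcm /= max(glcm.sum(), 1)
    i, j = np.mgrid[0:levels, 0:levels]
    mu_i, mu_j = (i * glcm).sum(), (j * glcm).sum()
    si = np.sqrt(((i - mu_i) ** 2 * glcm).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * glcm).sum())
    contrast = ((i - j) ** 2 * glcm).sum()
    corr = (((i - mu_i) * (j - mu_j) * glcm).sum() / (si * sj)) if si * sj > 0 else 0.0
    return np.array([block.mean(), block.std(), contrast, corr])

def sdf(blocks):
    """Aggregate similarity of the N water blocks around a candidate.

    Hypothetical instantiation: mean normalized absolute difference of each
    block's feature vector from the average of the remaining blocks, so a
    smaller value means the water areas are more alike."""
    F = np.stack([block_features(b) for b in blocks])           # (N, M)
    N, M = F.shape
    div = 0.0
    for k in range(N):
        R_k = F[np.arange(N) != k].mean(axis=0)                 # average of the others
        div += np.abs(F[k] - R_k).sum() / (np.abs(R_k).sum() + 1e-9)
    return div / (M * N)
```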
DMM Based on Decision Template
For each candidate region, by normalizing the outputs of the above three sub-models, the DT ensemble method in [20] is employed to achieve reliable identification of suspected ship candidates as described below.
Training of DMM
First, for each candidate region, identification scores are calculated using the above three sub-models. Different dynamic ranges are observed for each sub-model output; therefore, the outputs are normalized to [0, 1].
The three sub-models introduced above are used as decision factors {D_1, D_2, D_3} in this section: the key part sub-model is the factor D_1, the whole ship sub-model is D_2 and the context sub-model is D_3. There are two classes of suspected objects in a candidate region: the ship object, denoted w_1, and the false alarm, denoted w_2. Let Z_j = {z_{j,1}, z_{j,2}, . . . , z_{j,m}, . . . , z_{j,N}} denote a training database of class w_j, where z_{j,m} is a database sample and N is the number of training samples. The decision contour matrix is employed to denote the decision habit of D_i with respect to w_j and is defined as

DP(z_{j,m}) = [d_{i,j}(z_{j,m})], i = 1, 2, 3, j = 1, 2,

where DP(z_{j,m}) is the decision contour matrix and d_{i,j} is the detection result of decision factor D_i for class w_j. When the result is closer to w_j, the numerical value of d_{i,j} is closer to 1; otherwise, it is closer to 0. Afterward, the decision template DT_j is used to represent the class of suspected ship object w_j; it is defined as the average of the decision contour matrices over the corresponding training data,

DT_j = (1/N) Σ_{m=1..N} DP(z_{j,m}),

which presents the general decision habit of the decision factors with respect to w_j. According to this formula, the DMM of a ship object is represented as DT_1, while a false alarm is represented as DT_2.
Ship Identification Based on DMM
The DMM is used to confirm ship objects in the candidate regions z_ROI. By calculating the decision contour matrix DP(z_ROI) of a candidate region, the similarity between the candidate and DT_j can be measured by the Euclidean distance

µ_j(z_ROI) = ||DT_j − DP(z_ROI)||,

where µ_j(z_ROI) is the similarity value between DP(z_ROI) and DT_j. According to the two categories w_j, two Euclidean metrics {µ_1(z_ROI), µ_2(z_ROI)} are computed. The minimum value µ_k(z_ROI) indicates that w_k is the class of the suspected object in z_ROI; in this way, a ship object can be confirmed in z_ROI. When there is a ship in z_ROI, the minimum bounding rectangle of the ship is obtained, and that rectangle is shown in the remote sensing image at the original resolution.
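A compact sketch of the decision-template training and identification steps is given below. It assumes the normalized sub-model outputs have already been collected into decision contour matrices, and it uses the Frobenius (Euclidean) distance for µ_j, consistent with the Euclidean metrics described above; the array layout and function names are assumptions.

```python
import numpy as np

def train_decision_templates(profiles, labels, n_classes=2):
    """DT_j = mean decision contour matrix of the training samples of class w_j.

    profiles : (num_samples, 3, n_classes) array of DP(z_{j,m}) matrices,
               i.e. the normalized outputs d_{i,j} of the three sub-models.
    labels   : (num_samples,) integer class indices (0 = ship, 1 = false alarm).
    """
    labels = np.asarray(labels)
    return np.stack([profiles[labels == j].mean(axis=0) for j in range(n_classes)])

def identify(dp_roi, DT):
    """Assign a candidate to the class whose template is nearest in
    Euclidean (Frobenius) distance, and return the distances as well."""
    dists = np.linalg.norm(DT - dp_roi[None], axis=(1, 2))
    return int(np.argmin(dists)), dists
```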
Results and Evaluation
A number of experiments are designed and evaluation methods are presented in this section. Eighty harbor remote sensing images were gathered from Google Earth. These images were approximately 6000 × 8000 pixels in size with resolutions from 1 m to 1.7 m per pixel. A large number of ships with different shapes and berthing angles can be observed in these images. For sub-model and DMM training, ship candidate region patches were captured from 50 images to build the positive sample training database. Note that each of these image patches contains only one ship, and together these patches cover the vast majority of ship shapes. In addition, image patches of interfering false alarms, such as ports, convex coasts and islands, were collected at sizes ranging from 400 × 100 pixels to 600 × 200 pixels; these patches were used to compile a negative sample training database. The remaining 30 images were used to build the testing database, which contained a total of 350 ships at different berthing angles and of multiple shapes in inshore areas. Herein, the performance metrics are defined as recall, the ratio of correctly detected ships to the total number of real ships, and precision, the ratio of correctly detected ships to the total number of detected targets. Please note that the total number of real ships is the sum of both anchored ships and docked ships.
Key Parameter Analysis
In the proposed detection method, the key part sub-model and the whole hull sub-model play critical roles in the DMM. In Figure 5, the first row shows the key part sub-model and the second row shows the whole ship sub-model.
There is one key parameter n, which represents the number of part filters in the sub-model. The ability of the model to describe the target is closely related to this parameter. In addition, this parameter influences the capacity of the model to resist object deformation. Thus, experiments on this key parameter were structured to assess how the detection performance is affected by the value of n. We tested the key part sub-model and the whole ship sub-model separately on the test database we built, which contains 30 images with 350 real ships. As presented in Figure 6, the recall and precision curves of the detection results exhibit a certain trade-off.
When a large recall is required, the precision value must be reduced, and vice versa. To obtain the best detection result, a stable detection effect should be selected in which both the recall and the precision are at a high level. Values of n = 4, 6, 8, 10 were tested; the detection results are shown in Figure 6. A detection-sensitivity threshold, used to filter the object detection scores, is the tuning parameter of this experiment. The best detection effect is obtained for both the key part sub-model and the whole ship sub-model when n = 8. Consequently, n = 8 in this work.
Detection Result Analysis
A typical detection result of the proposed algorithm in a large-scale remote sensing harbor image is presented in Figure 7. In addition, selected typical local results from the large-scale harbor image are taken as examples. The proposed method could successfully address diverse harbor locations, complex illumination conditions and different ship location angles. Furthermore, this approach also has a better adaptability to shadow interference that occurs on water and ship decks. Therefore, this method has good robustness for complex scenes. However, it is unable to detect a few ships because of more specific interference factors (e.g., dock facilities) connected to the bow and stern at the same time, as shown in Figure 7a. These specific interference factors may lead to a low score in the ship surrounding correlation context sub-model. Thus, this scene would affect the final decision.
To demonstrate the advantages of this method, it was compared with other typically used methods as applied in [15,16] as well as a basic DPM method. The comparison results are shown in Table 1.
To ensure the fairness of the experiment, the same database was used for all the methods, and the parameters of the contrastive algorithms were adjusted to the optimal state.
Table 1 compares the detection results: the ship detection-based method of [15] achieves a recall of 73.5 and a precision of 81.3, and the method of [16] achieves a recall of 80.

As shown in Table 1, the basic DPM, the method applied in [15] and the method applied in [16] do not achieve better detection results than the proposed method. Method [15] achieves lower recall in these experiments because it analyzes binary images obtained after water-land segmentation; these segmentation results are influenced by the shooting and illumination angles, and the method requires a large number of parameters in the analysis stage. Method [16] has lower precision because it relies on contour information, particularly bow contour information, and considers the structural characteristics of whole hulls less. In contrast, the proposed method exploits the complementary bow and hull features of a ship, works on gray images directly and obtains a decision template via automatic learning. The basic DPM method, which is based on only one hull sub-model, has the lowest recall and precision because it lacks consideration of bow features and context features.
The proposed method also has fast processing speed. These three methods always focus on image patches and must meticulously scan all regions in the image patches. When dealing with large-scale harbor remote sensing images, their calculation process is intensive. However, the proposed method works by structuring a hierarchical detection framework; therefore, extracting candidate regions is a coarse-to-fine process. The small-sized images of local candidates are roughly captured from large-scale images. Thus, the precise detection function could gradually focus on the target area and it does not require globally detailed computing processes. This algorithm structure effectively reduces the calculations required to perform detection.
Conclusions
In this paper, a hierarchical framework for inshore ship detection in large-scale harbor remote sensing images is presented. To improve efficiency, a novel scanning method, OITDS, is proposed for candidate region extraction. This method ensures that candidate regions can be extracted from large-scale images at high speed and with few errors. To enhance the robustness of detection in complex scenes, a DMM strategy is presented to confirm the captured candidate regions. The whole ship features, the key part features and the context features of ships are fused in the DMM strategy. This comprehensive decision method fully considers the adaptability to target deformation and increases robustness in unsatisfactory scenes. Experiments on large-scale harbor remote sensing images verify that the proposed method is effective and robust when applied to unsatisfactory scenes; compared with typical methods, it also achieves better detection results. This method aims at rapidly obtaining the berthing situation of inshore ships in a port of interest from wide-area remote sensing images, and it is particularly suited to situations that traditional information acquisition methods cannot handle, such as non-cooperative targets. In the future, based on the proposed detection method, we plan to identify and classify inshore ships, and additional data for different types of ships in inshore areas will be analyzed. The proposed method will benefit integrated harbor management and support shipping management and other activities.
"year": 2017,
"sha1": "78dbcdae839e468ea95a08d80f63c09ea1347ee5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/17/7/1470/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da634c82b0c6c74e6807b302fab1f4dbeacda9b7",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Crossing Reduction of Sankey Diagram with Barycentre Ordering via Markov Chain
Sankey diagram is popular for analyzing primary flows in network data. However, the growing complexity of data, and hence of crossings in the diagram, begins to reduce its readability. In this work, we studied the NP-hard weighted crossing reduction problem of the Sankey diagram in both the common parallel form and the circular form. We expect to obtain an ordering of entities that reduces the weighted crossings of links. We proposed a two-staged heuristic method based on the idea of barycentre ordering and used a Markov chain to formulate the recursive process of obtaining such an ordering. In the experiments, our method achieved 300.89 weighted crossings, compared with the optimum of 278.68 from an integer linear programming method. Also, we obtained far fewer weighted crossings (87.855) than the state-of-the-art heuristic method (146.77). We also conducted a robustness test which provided evidence that our method performs consistently against changes of complexity in the dataset.
Introduction
Sankey diagram depicts flows among entities in a system, where the thickness of a flow represents the flow quantity. Although the Sankey diagram was originally introduced in [6] to display the energy flows of a steam engine, its excellence in emphasizing dominant flows in a network makes it popular for analyzing sequence data, such as a vendor-to-customer network or page-viewing paths in Google Analytics ([5]). To better display data with increasing complexity, methods that automatically adjust the layout to enhance readability are in much demand. According to [1], a proper layout of the Sankey diagram should meet the following three criteria: minimum edge intersection, short-as-possible edge lengths and straight edges. In this paper, we focus on the first criterion: reducing the number of edge crossings.
The most common form of a Sankey diagram, called the parallel form, is a horizontally-layered diagram with entities assigned to different vertical layers and flows with various thicknesses connecting entities between layers (normally from left to right). By viewing entities as vertices and flows as edges, we can transform the hierarchical structure of a Sankey diagram into a layered graph. In this way, layout optimization of a Sankey diagram can be generalized as the layered graph drawing problem formulated by [8], which is further decomposed into several sub-problems in [7]. Among these sub-problems, our work focuses on the NP-hard (proven in [2]) crossing minimization problem, where an ordering of vertices for each layer is determined to achieve minimum edge crossing. However, our problem differs from the classic crossing minimization problem in that each edge crossing is weighted, as the intersected edges bear different weights from the thicknesses of the flows.
For the classic crossing minimization problem (without considering edge weights), the most famous heuristic method is the barycentric (BC) method proposed by [7]. In this method, each vertex is sorted within its layer in ascending order of the barycentre of the vertices connecting to it. For crossing reduction with weighted edges, [1] gives an approach combining the BC method and linear programming. This combined method uses the ordering produced by the BC method as input to a linear programming method whose objective is to minimize the weighted sum of distances between connected nodes. That is, the linear programming method improves the placement of nodes within each layer determined previously by the BC method. However, the resultant layout is non-optimal since the heuristic BC method is used without considering edge weights. For an exact solution, there is an integer linear programming (ILP) model with the objective of minimizing the sum of weighted crossings [2]. The attained optimal layout, in comparison to the layout obtained from the BC method, shows a remarkable improvement in crossing reduction and readability.
Another form of the Sankey diagram with increasing popularity is the cycle form, with flows travelling in the opposite direction (right to left) representing reversed data flows such as the recycling of resources. In this case, we study the crossing reduction problem for a specific type of the cycle form where the reverse flows only exist between the last (rightmost) layer and the first (leftmost) layer. There is not yet an algorithm for the crossing reduction problem on this type of Sankey diagram.
In this work, we first propose a two-staged heuristic method to reduce weighted crossings in the parallel form of the Sankey diagram (Section 2). In the first stage, we design a Markov Chain Method in which we formulate the process of obtaining an ordering as a Markov chain and solve it with the eigenvector corresponding to the second largest eigenvalue of the Markov transition matrix.
The solution is serviceable but non-optimal. For the second stage, we design a recursive Partition Refinement Method to further improve the ordering from the first stage. In this method, a vertex is given a range within the level, and it contributes different values within this range when used to calculate the barycentres of the connected vertices. We iterate the ordering in a back-and-forth manner among layers until the positions of vertices remain unchanged.
In the following Section 3, we show a modified version of the above method that is applicable to reducing weighted crossings in our specified cycle form of the Sankey diagram. In the beginning, we ignore the connection between the first and the last level such that the formulated graph becomes parallel again, allowing us to obtain a partially calculated barycentre ordering. Subsequently, the second stage of this amended method undergoes a circular iteration route to also include the connection between the first and last levels.
In the Experiment Section, we first show the effect of our method on the parallel form by comparing the resultant ordering with those of the exact ILP method, the heuristic BC method and the combined method. The comparison includes the visual effects as well as the weighted and non-weighted numbers of crossed edges. We find that our method is able to produce much better orderings than the two heuristic methods. In the process, we also compare the difference between the orderings produced in the two stages to demonstrate the effectiveness of both stages. For the cycle form, as there are no other methods for comparison, we apply our modified method on an artificial dataset with a known optimal ordering. The result shows that in this case we are able to achieve the optimal ordering even in the first stage. We also select a non-optimal output from Stage 1 by using a different parameter set to verify the effectiveness of the second stage, which still produces the optimal ordering. Finally, we conduct a robustness test where we vary the complexity of the dataset and use the result from the ILP method as comparison. The test result verifies the stability of our method against the change of datasets.
Method on the Parallel Form
We start by formulating a Sankey diagram in parallel form with n layers as an n-level layered graph G. We regard each entity in the Sankey diagram as a vertex v_i and let V denote the set of all vertices in G. We say v_i and v_j belong to the same level in the graph if the corresponding entities lie in the same layer in the diagram. The set of vertices in the i-th level is denoted as V_i, and V_1, V_2, ..., V_n form a partition of V. Without loss of generality, we assume that in a Sankey diagram all links are formed between successive layers. For links connecting nodes belonging to non-successive levels, we follow the practice in [2] and [1] where dummy entities are added to all crossed levels such that the "long link" becomes the composition of several "short links" of the same thickness as the "long link" itself. Consequently, we say an undirected edge (v_i, v_j) exists only if v_i and v_j belong to successive levels and are connected by a link in the diagram.
We then have the edge set of G as E = {(v_i, v_j) : v_i and v_j lie in successive levels and are connected by a link in the diagram}. With our assumption, E can be partitioned into n − 1 subsets where each subset E_i is the set of edges connecting vertices between V_i and V_{i+1}. The weight of an edge (v_i, v_j), denoted as w(v_i, v_j), follows from the thickness of the corresponding link in the Sankey diagram.
For the n-level layered graph G = (V, E, n), its ordering σ is the set {σ_1, ..., σ_n} where σ_i denotes the ordering of vertices in V_i. With the formulated graph G(V, E, n), our aim is to find an ordered graph G(V, E, n, σ) with reduced weighted crossing. We measure the weighted crossing of an ordered graph by K, the sum of products of the weights of crossed edges. Its calculation is a variation of the method for obtaining the crossing number K in [8] and is described in the following. Given ordering σ, we first define for each E_i a weighted interconnection matrix M^(i) of size |V_i| × |V_{i+1}|, whose entry m^(i)_{j,k} equals w(v_j, v_k) if the j-th vertex of V_i and the k-th vertex of V_{i+1} are connected and 0 otherwise. In particular, we use M^(i)_{j,:} and M^(i)_{:,k} to denote the j-th row vector and the transposed k-th column vector of M^(i).
To obtain the weighted crossing of E_i for a particular pair of orderings σ_i and σ_{i+1}, we need to reorder the rows and columns of M^(i) such that they comply with the given orderings. Therefore, we define for each ordering σ_i a transformation matrix A_i with a^(i)_{j,k} equal to 1 if the j-th vertex in σ_i has index k and 0 otherwise. Then the transformed matrix, denoted as M̃^(i), can be derived from the equation

M̃^(i) = A_i M^(i) A_{i+1}^T. (1)

The weighted crossing of E_i can therefore be calculated by the formula

K_i = Σ_{1 ≤ j < l ≤ |V_i|} Σ_{1 ≤ k < m ≤ |V_{i+1}|} m̃^(i)_{j,m} m̃^(i)_{l,k}, (2)

since two edges cross exactly when the edge starting from the upper vertex of V_i ends at the lower vertex of V_{i+1} and vice versa. Subsequently, the total weighted crossing number of the ordered graph G with ordering σ is defined as

K(G(V, E, n, σ)) = Σ_{i=1..n−1} K_i. (3)

It has been shown in [7] that barycentre ordering can effectively reduce the crossing number K. Here the barycentre of a vertex is the weighted average of the position values of its connected vertices. A barycentre ordering places each vertex at its barycentre while avoiding the trivial case where all vertices share one barycentre value. To find such an ordering, [7] also gives the heuristic Barycentre (BC) Method, in which the weights among connected vertices are equal regardless of the different weights of the corresponding edges. However, in our case a crossing involving an edge with a larger weight contributes more to the total weighted crossing number K(G) and therefore has a higher priority to be avoided when deciding the ordering. That is, the weight of a connected vertex is proportional to the weight of the corresponding edge. Moreover, to define the positions of vertices, we view each level as a vertical line of height 1, and each vertex point on the vertical line takes a position value within [0, 1]. We further define the position vector u^(i) of the i-th level, where u^(i)_j is the position value of the j-th vertex in V_i. For a vertex v_j in one of the middle levels, all its connected vertices form a neighboring vertex set N(v_j). We further partition this set into the left neighboring vertex set N_L(v_j) and the right neighboring set N_R(v_j), containing vertices belonging to the (i − 1)-th level and the (i + 1)-th level respectively. For v_j in the first or last level, N(v_j) consists of one one-sided neighboring set and an empty set for the other side. For each of the parted neighboring sets, we have the vertex barycentre by the following equations:

B_L(v_j) = M^(i−1)_{:,j} u^(i−1) / ||M^(i−1)_{:,j}||_1, (4)

B_R(v_j) = M^(i)_{j,:} u^(i+1) / ||M^(i)_{j,:}||_1, (5)

where ||x||_1 is the l_1 norm of x. Subsequently, we have the two-sided barycentre of v_j as the average of the two one-sided barycentres:

B(v_j) = (B_L(v_j) + B_R(v_j)) / 2. (6)

Given position vector u^(i), σ_i is the descending order of the entries in u^(i), with the first vertex in the ordering placed uppermost in the level. Subsequently, to find a barycentre ordering is to find the corresponding position vectors where each entry is the barycentre of the corresponding connected vertices. To this end, we design a two-stage algorithm. In Stage 1, we introduce a Markov Chain Method which produces an ordering where most vertices satisfy the requirement for a barycentre ordering. In Stage 2, we give a Partition Refinement Method to refine the ordering from Stage 1 towards a complete barycentre ordering.
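A direct implementation of the weighted crossing count K(G) is sketched below. It uses the reconstructed quadruple sum of equation (2) for clarity rather than efficiency, and it reorders each interconnection matrix by indexing instead of multiplying by the transformation matrices; the function name and input layout are assumptions.

```python
import numpy as np

def weighted_crossings(M_list, orders):
    """Total weighted crossing K(G).

    M_list : list of interconnection matrices M^(i), shape (|V_i|, |V_{i+1}|)
    orders : list of per-level orderings sigma_i, each a list of original
             vertex indices given from top to bottom
    """
    K = 0.0
    for i, M in enumerate(M_list):
        Mt = M[np.ix_(orders[i], orders[i + 1])]      # reorder rows and columns
        r, c = Mt.shape
        # Edges (j -> m) and (l -> k) cross when j < l and k < m; each
        # crossing contributes the product of the two edge weights.
        for j in range(r):
            for l in range(j + 1, r):
                for k in range(c):
                    for m in range(k + 1, c):
                        K += Mt[j, m] * Mt[l, k]
    return K
```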
Stage 1: the Markov Chain Method
In Stage 1, we start with one-sided barycentres, and eventually we want each vertex to have equal left and right barycentres so as to achieve a two-sided barycentre ordering. Given position vector u^(i), where i ∈ [1, n − 1], we can update the position vector u^(i+1) such that

u^(i+1) = L^(i) u^(i), (7)

where the left transition matrix L^(i) is the row-normalized (M^(i))^T, so that each entry of u^(i+1) equals the left barycentre of the corresponding vertex in V_{i+1}. However, we cannot determine an ordering if multiple entries in u^(i+1) have the same barycentre value. Consequently, we add a random matrix S_L^(i) of the same size as L^(i) with normalized rows (row entries summing to 1) to equation 7 by a factor α_1 ∈ [0, 1]. Then equation 7 becomes

u^(i+1) = L̃^(i) u^(i), (8)

where we have the modified left transition matrix as

L̃^(i) = (1 − α_1) L^(i) + α_1 S_L^(i). (9)

Similarly, given position vector u^(i) with i ∈ [2, n], we update u^(i−1) such that each entry of u^(i−1) equals the right barycentre of the corresponding vertex in V_{i−1}; the right transition matrix R^(i−1) is the row-normalized M^(i−1). Here we define the modified right transition matrix

R̃^(i−1) = (1 − α_1) R^(i−1) + α_1 S_R^(i−1), (10)

with S_R^(i−1) a random row-normalized matrix of the same size as R^(i−1). Then we have the matrix form of the process as

u^(i−1) = R̃^(i−1) u^(i). (11)

In addition, both modified transition matrices have the following properties: (1) all entries are non-negative, as the entries of M^(i) are non-negative edge weights; (2) all row sums equal 1. In what follows, we write L^(i) and R^(i) for the transition matrices actually used in the iteration, i.e., including the random components.
With the above, given position vector u^(1), with equation 7 we have u^(2) = L^(1) u^(1), which means that vertices in V_2 are placed based on the positions of the nodes in V_1 such that each node in V_2 is at its left barycentre. Then, with u^(2) calculated, we can calculate u^(3) and so forth. Finally, we can obtain

u^(n) = L^(n−1) L^(n−2) · · · L^(1) u^(1). (12)

Therefore, by propagating the position vector u^(1) through the product of the transition matrices from L^(n−1) to L^(1), we are able to place all vertices in all levels except the first at their left barycentres based on u^(1).
On the other hand, given position vector u^(n), we can also get from equation 11 the following equation:

u^(1) = R^(1) R^(2) · · · R^(n−1) u^(n). (13)

Note that in this way all vertices in V_i with i ∈ [1, n − 1] are placed at their right barycentres based on u^(n).
From the above, we can formally describe the Markov Chain Method. We set an initial position vector u^(1) and use it to calculate u^(n) from equation 12, with which we can reversely update u^(1) by equation 13. We repeat the above process until it converges, i.e., both u^(1) and u^(n) remain unchanged in the iteration. This indicates that each vertex in V is placed at both its left and right barycentres, meaning that B_L(v_j) = B_R(v_j) for any v_j ∈ V. From equation 6, we then have B(v_j) = B_L(v_j) = B_R(v_j) for all v_j defined above, showing that we are able to achieve a barycentre ordering with this process. Let L = L^(n−1) · · · L^(1) and R = R^(1) · · · R^(n−1); we simplify the above process as

u^(1) = R u^(n) = R L u^(1) := T u^(1). (14)

Here we define the transition matrix T = RL. We show in the following that it is a right stochastic matrix, i.e., a non-negative real square matrix with each row summing to 1, the form used to represent the transition probabilities of a Markov chain. Firstly, T is of size |V_1| × |V_1|. Secondly, we use the following lemma (Lemma 1): the product of two non-negative row-normalized matrices is again non-negative and row-normalized. Indeed, for P = AB with A and B row-normalized, we have

p_{i,j} = Σ_k a_{i,k} b_{k,j}, (15)

and the sum of the i-th row of P is

Σ_j p_{i,j} = Σ_k a_{i,k} Σ_j b_{k,j} = Σ_k a_{i,k} = 1. (16)

With Lemma 1, it follows that the product of more than two row-normalized matrices is still row-normalized. Thus, L, R and therefore T are row-normalized matrices. This completes the proof that T is a right stochastic matrix. As a result, we rewrite equation 14 in the form of a Markov chain u^(1)_{k+1} = T u^(1)_k, where convergence is guaranteed.
To solve a Markov chain π_{n+1} = P π_n, we need to find the stationary distribution π which satisfies π = Pπ, i.e., π is invariant under the transition matrix P. Normally, the stationary distribution is the first right eigenvector x_1 corresponding to the largest eigenvalue λ_1 = 1 of T. However, in our case, all entries in x_1 are identical, which means that all vertices in V_1 would be placed at the same position. Consequently, vertices in other levels would also be placed at the same position, as their barycentres remain unchanged. The resultant order is not a barycentre ordering by our previous definition.
On the other hand, the second largest sign-less eigenvalue of T, λ_2, gives a heuristic solution for the Markov chain. One example of the usage of λ_2 comes from [4], where the author aims to use a Markov chain to solve the NP-complete state clustering problem. In that case, the eigenvector associated with the largest eigenvalue yields a trivial solution having all vertices in one cluster. The second largest eigenvalue, however, gives an eigenvector that generates a satisfying approximation of the proper clustering. In our case, we find that x_2 is also a competent alternative to x_1. The sign of the eigenvalue is insignificant here, as it just inverts the resultant ordering upside down without changing the crossings of edges.
Given u^(1) from solving the Markov chain, we obtain all other position vectors by u^(i+1) = L^(i) u^(i) and subsequently the ordering σ. This completes the Markov Chain Method.
The Markov Chain Method cannot yield a complete barycentre ordering for the following two reasons: (1) the random component in the transition matrix T, inherited from the modified left/right transition matrices, affects the final ordering; (2) the use of the second largest eigenvector is not optimal in nature. We propose a solution for the first problem. We solve the second problem in Stage 2 using the Partition Refinement Method.
Adding a random component to the left/right transition matrix avoids the problem of having multiple vertices with the same barycentre, for which an ordering is impossible to determine. However, the calculated barycentres incorporate the randomness, which then passes on to the subsequent levels. As a result, the output of the Markov Chain Method is different each time. In this case, we repeat the Markov Chain Method a predefined N times, and each time we calculate and record the output ordering σ_k and its weighted crossing K(G(V, E, n, σ_k)). Then we choose the ordering with minimum weighted crossing as the best-in-N ordering of Stage 1. This is facilitated by the fact that there are various efficient methods for calculating the second largest eigenvalue and the corresponding eigenvector. While, by the nature of randomness, the best-in-N ordering is still not a complete barycentre ordering, the larger N is, the smaller the weighted crossing number of the best-in-N ordering.
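A minimal sketch of one run of the Stage-1 procedure is given below. It builds the modified transition matrices, extracts the eigenvector of the second-largest (sign-less) eigenvalue of T = RL, and propagates u^(1) forward to obtain the orderings; the best-in-N repetition, the choice of α_1, and the random seed handling are omitted or chosen illustratively.

```python
import numpy as np

def stage1_ordering(M_list, alpha=0.1, seed=0):
    """One Stage-1 run: per-level orderings from the second eigenvector of T = RL.

    M_list : interconnection matrices M^(i), shape (|V_i|, |V_{i+1}|)
    alpha  : weight of the random row-normalized component (assumed equal for
             the left and right transition matrices in this sketch)
    """
    rng = np.random.default_rng(seed)

    def row_norm(A):
        s = A.sum(axis=1, keepdims=True)
        s[s == 0] = 1.0
        return A / s

    Ls, Rs = [], []
    for M in M_list:
        L = row_norm(M.T.astype(float))            # left barycentre update
        R = row_norm(M.astype(float))              # right barycentre update
        Ls.append((1 - alpha) * L + alpha * row_norm(rng.random(L.shape)))
        Rs.append((1 - alpha) * R + alpha * row_norm(rng.random(R.shape)))

    Lprod = Ls[0]
    for L in Ls[1:]:
        Lprod = L @ Lprod                          # L = L^(n-1) ... L^(1)
    Rprod = Rs[0]
    for R in Rs[1:]:
        Rprod = Rprod @ R                          # R = R^(1) ... R^(n-1)
    T = Rprod @ Lprod                              # |V_1| x |V_1| stochastic matrix

    vals, vecs = np.linalg.eig(T)
    second = np.argsort(-np.abs(vals))[1]          # second-largest sign-less eigenvalue
    u = np.real(vecs[:, second])

    # Propagate u^(1) forward (equation 7) and order each level by descending position.
    orders = [np.argsort(-u)]
    for M in M_list:
        u = row_norm(M.T.astype(float)) @ u
        orders.append(np.argsort(-u))
    return orders
```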
Stage 2: Partition Refinement Method
In Stage 1, we view each vertex as a point in its level. As a result, a vertex contributes a single position value when the barycentres of all its connected vertices are calculated. Stage 2, on the other hand, allows a vertex to give each connected vertex an individual position value within a certain range for the calculation of the barycentre. That is, instead of representing a vertex by a point, each vertex is given a range within its level, called a block, while the end of each edge incident to this vertex is represented as a point in this range. For v_k ∈ N(v_j), we use P(v_j, v_k) to denote the point of the edge (v_j, v_k) on the block of v_j, and p(v_j, v_k) for the position value of P(v_j, v_k).
Naturally, p(v_j, v_k) depends on the positions, and therefore the orders, of v_j and v_k in their own levels. First of all, the i-th level is split into |V_i| blocks of equal height, and the j-th block from top to bottom is assigned to the j-th vertex in the ordering σ_i. Thus, the range of p(v_j, v_k) is determined by the order of v_j. Furthermore, the value of p(v_j, v_k) within the block follows from the position of v_k, specifically from the value of P(v_k, v_j). The j-th block in the i-th level has base b_{i,j} = (j−1)/|V_i| and height h_i = 1/|V_i|. We obtain the initial position of P(v_j, v_k) from the order of v_j in the best-in-N ordering from Stage 1. For v_q, v_p ∈ N_L(v_j), the ordering of the points P(v_j, v_q) and P(v_j, v_p) complies with that of v_q and v_p. The ordered points of the vertices in N_L(v_j) are then distributed evenly within the block from top to bottom, dividing the block into |N_L(v_j)| + 1 equal segments; the points of the vertices in N_R(v_j) are treated in the same way. Let v_j have order λ in σ_i and let a connected vertex v_k have order µ in N_L(v_j); then p(v_j, v_k) = b_{i,λ} + µ · h_i / (|N_L(v_j)| + 1). (19) With σ from Stage 1, we can assign the vertices based on their order in the level and subsequently obtain initial values for all points with the above equation. The iteration then begins by calculating the barycentres of the middle levels from left to right. Starting with the second level, we obtain the barycentre of each vertex from both the first and the third levels. For the calculation of B_L(v_j), rather than using the position vector u^(i−1) as in Equation 4, we use a position vector unique to this v_j, whose entry for each v_k ∈ N_L(v_j) is the point value p(v_k, v_j); B_L(v_j) is computed from these values, and the two-sided barycentre is then derived from Equation 6. With all barycentres of the vertices in the second level calculated, we obtain a new ordering σ_2 complying with the descending order of the barycentres. On the basis of the new σ_2, we reassign a block to each vertex. Aside from the change in the range of values, the new position of a point P(v_j, v_k) is also affected by the position of the point P(v_k, v_j). Instead of distributing the points evenly within the block as before, we let P(v_j, v_k) take the value of p(v_k, v_j) after scaling it to the height of the block. Thereupon, letting the order of v_j in the new σ_2 be λ', Equation 19 becomes p(v_j, v_k) = b_{i,λ'} + h_i · t, where t ∈ [0, 1] is the relative position of P(v_k, v_j) within the block of v_k. The calculation for the remaining middle levels follows in the same way. For each vertex in the last level, its barycentre is equal to its left barycentre. After updating the ordering and the points of the last level, we go backwards and repeat the operation on the middle levels from right to left. The procedure on the first level is similar to that of the last. This completes one iteration.
This iteration is considered to have converged to a complete barycentre ordering when the ordering remains unchanged for a predetermined number of M iterations. However, as mentioned before, a complete barycentre ordering is not necessarily an optimal one. It is therefore possible that we encounter an ordering with a smaller weighted crossing than the final complete barycentre ordering. In light of this possibility and our ultimate aim of reducing the weighted crossing, we record the weighted crossing of each iteration whose resulting ordering differs from its predecessor. This allows us to choose the ordering with minimum weighted crossing produced during the process as the output ordering of Stage 2.
Modified Method on the cycle form
In this section, we modify the method introduced in the previous section to reduce the weighted crossing of the specified cycle form of the Sankey diagram. We start by formulating the cycle form of the Sankey diagram as a circular layer graph. We refer to the links connecting the last and the first level as the binding links and use E_n to denote the corresponding edge set. By ignoring the binding links, the cycle form becomes a parallel one; adding E_n back yields the graph G for the original cycle form. For Stage 1, we supply the parallel form (i.e., G without the binding links) to the Markov Chain Method. In this way, Stage 1 yields the best-in-N ordering without considering the binding links. Subsequently, we utilize Stage 2 to take the binding links back into consideration.
In Stage 2, the first modification we make to the Partition Refinement Method concerns the calculation of the barycentre. In the previous section, the barycentre of a vertex in V_1 is simply the barycentre of its right neighboring set. Now each vertex in V_1 also has a left neighboring set, consisting of the connected vertices in the last level, so that Equation 6 becomes applicable. Similarly, vertices in the last level also have their own right neighboring set for calculating their barycentres.
Given the circular nature of graph G, the route of the iteration also changes. Instead of adopting the "back and forth" manner of the previous section, after obtaining a new σ_n we proceed to apply the method to the first level again. The modified Partition Refinement Method is summarized as pseudocode in Algorithm 1.
Algorithm 1: Modified Partition Refinement Method
Require: The best-in-N ordering σ from Stage 1. The maximum repeat number M.
Ensure: Improved ordering σ.
1: for each m ∈ [1, M] do
4:   Use τ^(i) and ^(i) to update r^(i) and l^(i);
5:   Calculate the barycentres of the vertices in V_i and update the position vector u^(i);
6:   Use the new u^(i) to update the ordering σ_i;
7:   end for
8: end for
9: return the refined ordering σ
Result
We demonstrate the efficiency of our method on the parallel form by comparing our result with that of the state-of-the-art heuristic method in [1]. Moreover, we show that our method produces near-optimal results by demonstrating the small gap between our performance and that of the optimal ILP method in [2]. Besides visual comparison, we also measure the performance of the above methods by both the weighted and the non-weighted crossing of the output ordering. In particular, the resultant orderings of the three compared methods are obtained from the corresponding Sankey diagrams provided in their articles. To increase the sensitivity of our method towards edges with considerably small weights, we take the base-10 logarithm of the edge weights in the above tests to reduce their differences. For our adapted method on the cycle form, we apply it to a graph with zero crossings to see whether it can achieve the optimal result in this case. We also conduct a robust test to validate the consistency of our method against varying graph complexity.
Test against the State-of-the-Art Heuristic Method
In [1], the authors applied their method to Canada's energy usage data from 1978, and Figure 1a displays their resultant Sankey diagram. We apply our method to the same dataset. In Stage 1, we set α_1 = 0.1 and N = 100. In Stage 2, we set α_2 = 0.1 and M = 100, and obtain a converged result after only 10 iterations. Figure 1b and Figure 1c give the Sankey diagrams with the orderings from Stage 1 and Stage 2 of our algorithm, respectively. Table 1 summarizes the weighted and non-weighted crossings of the orderings from all three Sankey diagrams in Figure 1.
Test against ILP Method and BC Method
This test examines the difference between the output of our method and the optimal output of the ILP method in [2], to see whether we obtain a near-optimal result. Moreover, we compare our result against that of the BC method, which shares our idea of finding a barycentre ordering, so that we can demonstrate our ability to produce a better barycentre ordering.
In this test, we use the same dataset as in [2]: the "World Greenhouse Gas Emissions" data from the World Resource Institute [3]. From [2] we have the results of both the ILP method and the BC method on this dataset. For our method, we supply α_1 = 0.01 and N = 100 to Stage 1, and α_2 = 0.1 and M = 100 to Stage 2, which converges after only one iteration. For the output orderings of the two compared methods and the two stages of our method, we calculate their weighted and non-weighted crossings and summarize them in Table 2. We also plot the four orderings as Sankey diagrams in Figure 2 for visual comparison.
Comparing Figure 2b and Figure 2d, we see that we are still at a relatively small distance from the optimal layout. Table 2 shows that the output from Stage 2 has a smaller non-weighted crossing number but still a larger weighted crossing number than the ILP output.
On the other hand, our output surpasses that of the BC method both visually and on the two crossing measurements. Also, the BC method is an iterative method and therefore requires time to achieve a near-optimal ordering. In contrast, the output from Stage 1 already suffices as a near-optimal ordering. Moreover, the number of iterations needed for convergence in Stage 2 is also small.
Test of Modified Method on the Cycle Form
In this test we apply the modified algorithm to an artificial dataset in cycle form. This dataset has the same number of levels and the same number of vertices in each level as the dataset in the previous test against the ILP method and the BC method. In particular, this artificial dataset has a known optimal layout with zero crossings. In this test case, we find that Stage 1 alone is able to produce an optimal layout with N = 100 and α_1 = 0.01. To test the effectiveness of Stage 2, we reset N = 50 in Stage 1 and obtain orderings with K = 4 and K = 2. Stage 2 then refines the best-in-N ordering to the optimal ordering within 2 iterations with α_2 = 0.01.
Robust Test
With this test, we aim to show the stability of our method across cases of different complexity. The complexity of a graph is measured by the number of levels (denoted by n) and the number of vertices in each level (denoted by V). Consequently, we vary both n and V, and for each pair we generate ten different random cases. The total number of edges of a random case is considered an estimate of its complexity. All test cases run with α_1 = 0.01 and N = 100 for Stage 1, and α_2 = 0.1 and M = 100 for Stage 2. We record for each case the weighted crossing produced by both Stage 1 and Stage 2, as well as the optimal result from the ILP method for comparison. Letting a result from our method be x and the corresponding optimal result be y, we measure their difference by the ratio r = (x + ε)/(y + ε), (20) where ε is a very small number that avoids the denominator being 0 when y = 0. The reason we do not use the difference between x and y directly is that, as the complexity of the graph increases, the resultant weighted crossings and therefore the differences also increase. We summarize the results of the robust test in Figure 3. It shows that most results from Stage 1 are less than twice the ILP result, and Stage 2 further improves the results to no more than 1.5 times the ILP result. We believe this demonstrates the consistency of our method's performance on cases of various complexity.
Conclusion
In this paper, we investigate the NP-hard weighted crossing reduction problem for the Sankey diagram. Besides the common parallel form of the Sankey diagram, we also study a particular circular form in which the first and the last layers are connected.
Our heuristic method, which aims to find a barycentre ordering, is composed of two stages. The first stage employs the Markov Chain Method, and the second stage serves to improve Stage 1's output with the Partition Refinement Method. We also adapt this method to reduce the weighted crossing in the specified circular form of the diagram.
From our experiments, we conclude that our method performs nearly as well as the ILP method and surpasses the existing heuristic methods. In terms of the weighted crossing measurement, in the ILP experiment our method achieved 300.89 weighted crossings, very close to the 278.68 weighted crossings of the ILP method, while the BC method has a weighted crossing number of 1220.07. Also, we obtained only 87.855 weighted crossings where the state-of-the-art heuristic method attained 146.77. Visually, we were able to obtain high readability even for complicated seven-layer data. We also performed a robust test which verified the stability of our method against the changing complexity of the dataset.
Lemma 1. Let A be an l × m row-normalized matrix and B an m × n row-normalized matrix. Then their product P = AB is also a row-normalized matrix.
Proof. Write A = (a_ij) and B = (b_ij). Then p_ij = Σ_{k=1}^{m} a_ik b_kj, and the sum of the i-th row of P is Σ_{j=1}^{n} p_ij = Σ_{k=1}^{m} a_ik Σ_{j=1}^{n} b_kj = Σ_{k=1}^{m} a_ik = 1. Hence every row of P sums to 1, i.e. P is row-normalized.
Figure 1: Three Sankey diagrams corresponding to the state-of-the-art heuristic method and the two stages of our method: (a) the Sankey diagram produced by [1]; (b) the diagram produced by Stage 1; (c) the diagram produced by Stage 2.
Figure 3: Summarization of the results for the robust test.
Table 1: Summarization of the crossing measurements of the orderings from the three Sankey diagrams in Figure 1.
Based on the comparison of both crossing measurements, we see that even without the refinement of Stage 2, the output from Stage 1 already surpasses that of the heuristic method, and the improvement in both measurements from Stage 1 to Stage 2 validates the effectiveness of Stage 2. To be specific, we find that Stage 2 is able to resolve some unnecessary crossings in Stage 1 that result from the additional random component. On the other hand, Stage 1 gives a satisfactory semi-barycentre ordering, so that Stage 2 only takes a few iterations to converge.
Table 2: Summarization of the measurements of the orderings from the three Sankey diagrams in Figure 2.
"year": 2019,
"sha1": "6bee003e6c6c67ee972c372fa9cd98695269c693",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6bee003e6c6c67ee972c372fa9cd98695269c693",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Rank Aggregation Using Scoring Rules
To aggregate rankings into a social ranking, one can use scoring systems such as Plurality, Veto, and Borda. We distinguish three types of methods: ranking by score, ranking by repeatedly choosing a winner that we delete and rank at the top, and ranking by repeatedly choosing a loser that we delete and rank at the bottom. The latter method captures the frequently studied voting rules Single Transferable Vote (aka Instant Runoff Voting), Coombs, and Baldwin. In an experimental analysis, we show that the three types of methods produce different rankings in practice. We also provide evidence that sequentially selecting winners is most suitable to detect the"true"ranking of candidates. For different rules in our classes, we then study the (parameterized) computational complexity of deciding in which positions a given candidate can appear in the chosen ranking. As part of our analysis, we also consider the Winner Determination problem for STV, Coombs, and Baldwin and determine their complexity when there are few voters or candidates.
Introduction
Rank aggregation, the task of aggregating several rankings into a single ranking, sits at the foundation of social choice as introduced by Arrow [1951]. Besides preference aggregation, it has numerous important applications, for example in the context of meta-search engines [Dwork et al., 2001], of juries ranking competitors in sports tournaments [Truchon, 1998], and multi-criteria decision analysis.
One of the best-known methods for aggregating rankings is Kemeny's [1959] method: A Kemeny ranking is a ranking that minimizes the average swap distance (Kendall-tau distance) to the input rankings. It is axiomatically attractive [Young and Levenglick, 1978, Can and Storcken, 2013, Bossert and Sprumont, 2014] and has an interpretation as a maximum likelihood estimator [Young, 1995], making it well-suited to epistemic social choice that assumes a ground truth.
However, Kemeny's method is hard to compute [Bartholdi et al., 1989, Hemaspaandra et al., 2005], which makes the method problematic to use, especially when there are many candidates to rank (for example, when ranking all applicants to a university). Even if computing the ranking is possible, it is coNP-hard to verify if a ranking is indeed a Kemeny ranking [Fitzsimmons and Hemaspaandra, 2021]. Thus, third parties cannot easily audit, interpret, or understand the outcome, making systems based on Kemeny's method potentially unaccountable. This limits its applicability in democratic contexts.
These two drawbacks motivate the search for computationally simpler and more transparent methods for aggregating rankings. There is a significant literature on polynomial-time approximation algorithms for Kemeny's method [Coppersmith et al., 2006, Kenyon-Mathieu and Schudy, 2007, Ailon et al., 2008, van Zuylen and Williamson, 2009], but these algorithms are typically not attractive beyond their approximation guarantee. In particular, they would typically not fare well in an axiomatic analysis, and are unlikely to be understood by and appealing to the general public (many are based on derandomization).
Instead, we turn to one of the fundamental tools of social choice: positional scoring rules. These rules transform voter rankings into scores for the candidates. For example, under the Plurality scoring rule, every voter gives 1 point to their top-ranked candidate. Under the Veto (or anti-plurality) scoring rule, voters give −1 point to their last-ranked candidate and zero points to all others. Under the Borda scoring rule, every voter gives m points to their top-ranked candidate, m − 1 points to their second-ranked candidate, and so on, giving 1 point to their last-ranked candidate. We study three ways of using scoring rules to aggregate rankings:
• Score: We rank the candidates in order of their score, higher-scoring candidates being ranked higher.
• Sequential-Winner : We take the candidate c with the highest score and rank it top in the aggregate ranking. We then delete c from the input profile, re-calculate the scores, and put the new candidate with the highest score in the second position, and so on.
• Sequential-Loser: We take the candidate c with the lowest score and rank it last. We then delete c, re-calculate the scores, and put the new candidate with the lowest score in the second-to-last position, and so on.
Ranking by score is the obvious way of using scoring rules for rankings, and so it has been studied in the social choice literature [Smith, 1973, Levenglick, 1977]. Sequential-Loser captures as special cases the previously studied rules Single Transferable Vote (also known as Instant Runoff Voting, among other names), Coombs' method, and Baldwin's method. These are typically used as voting rules that elect a single candidate, but they can also be understood as rank aggregation methods. On the other hand, despite being quite natural, Sequential-Winner methods appear not to have been formally studied in the literature (to our knowledge).
Our Contributions
Axiomatic Properties (Section 4) Based on the existing literature, we begin by describing some axiomatic properties of the methods in our three families. For example, we check which of the methods are Condorcet or majority consistent, and which are resistant to cloning. We also consider independence properties and state some characterization results.
Simulations (Section 5) To understand how and whether the three families of methods practically differ from each other, and how they relate to Kemeny's method, we perform extensive simulations based on synthetic data (sampled using the Mallows and Euclidean models). We find that, for Plurality and Borda, ranking by score and Sequential-Loser usually produce very similar results, whereas Sequential-Winner offers a new perspective (which is typically closer to Kemeny's method). Moreover, we observe that Sequential-Loser rules seem to be particularly well suited to identify the best candidates (justifying their usage as single-winner voting rules), while Sequential-Winner rules are best at avoiding low quality candidates.
Computational Complexity (Section 6) The rules in all three of our families are easy to compute in the sense that their description implies a straightforward algorithm for obtaining an output ranking. However, for the sequential rules there is a subtlety: During the execution of the rule, ties can occur. It matters how these are broken, because candidates could end up in significantly different positions. For high-stakes decisions and in democratic contexts, it would be important to know which output rankings are possible. Thus, we study the computational problem of deciding whether a given candidate can end up in a given position. This and related problems have been studied in the literature under the name of parallel universe tie-breaking, including theoretical and experimental studies for some of the rules in our families [Conitzer et al., 2009, Brill and Fischer, 2012, Mattei et al., 2014, Freeman et al., 2015, Wang et al., 2019. We extend the results of that literature and find NP-hardness for all the sequential methods that we study. We show that the problem becomes tractable if the number of candidates is small. In contrast, for several methods we find that the problem remains hard even if the number of input rankings is small. Curiously, for few input rankings, methods based on Plurality, Borda, or Veto each induce a different parameterized complexity class.
Let C = {c 1 , . . . , c m } be a set of m candidates. A ranking of C is a linear order (irreflexive, total, transitive) of C. We write L(C) for the set of all rankings of C.
A (ranking) profile P = (≻_1, . . . , ≻_n) is a list of rankings. We sometimes say that the rankings are voters.
For a subset C' ⊆ C of candidates and a ranking ≻ ∈ L(C), we write ≻|_{C'} for the ranking obtained by restricting ≻ to the set C'. For a profile P, we write P|_{C'} for the profile obtained by restricting each of its rankings to C'.
A social preference function f is a function that assigns to every ranking profile P a non-empty set f(P) ⊆ L(C) of rankings. Here, f(P) may be a singleton, but there can be more than one output ranking in case of ties. For a ranking ≻, we say that f selects ≻ on P if ≻ ∈ f(P).
For a ranking ≻ ∈ L(C) and a candidate c ∈ C, let pos(≻, c) = |{d ∈ C : d ≻ c}| + 1 be the position of c in ≻. For example, if pos(≻, c) = 1 then c is the most-preferred candidate in ≻. We write cand(≻, r) ∈ C for the candidate ranked in position r ∈ [m] in ≻ ∈ L(C).
For an integer m ∈ N, a scoring vector s^(m) = (s_1, . . . , s_m) ∈ R^m is a list of m numbers. A scoring system is a family of scoring vectors (s^(m))_{m∈N}, one for each possible number m of candidates. For the sake of conciseness, we sometimes write s instead of (s^(m))_{m∈N}. We will mainly focus on three scoring systems:
• Plurality with s^(m) = (1, 0, . . . , 0) for each m ∈ N,
• Veto with s^(m) = (0, . . . , 0, −1) for each m ∈ N,
• Borda with s^(m) = (m, m − 1, . . . , 1) for each m ∈ N.
Given a profile P over m candidates, the s-score of candidate c ∈ C is score_s(P, c) = Σ_{i∈[n]} s^(m)_{pos(≻_i, c)}. We say that a candidate is an s-winner if it has maximum s-score, and an s-loser if it has minimum s-score. For a scoring system s, we denote by s* the scoring system where we reverse each scoring vector and multiply all its entries by −1, i.e., for each m ∈ N and i ∈ [m], we have (s*)^(m)_i = −s^(m)_{m−i+1}. Note that (s*)* = s for every s, that Plurality* = Veto, that Veto* = Plurality, and that Borda* is the same as Borda, up to a shift.
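For illustration, the three scoring systems, the reversal operation s*, and the s-scores of a profile can be written down in a few lines of Python. This is only a sketch; candidates are represented by strings, and the function names are ours:

```python
from typing import Dict, List, Sequence

Ranking = List[str]            # candidates listed from most to least preferred
Profile = List[Ranking]

def plurality(m: int) -> List[float]:
    return [1.0] + [0.0] * (m - 1)

def veto(m: int) -> List[float]:
    return [0.0] * (m - 1) + [-1.0]

def borda(m: int) -> List[float]:
    return [float(m - i) for i in range(m)]          # m, m-1, ..., 1

def reversed_system(s: Sequence[float]) -> List[float]:
    """Scoring vector of s*: reverse s and multiply every entry by -1."""
    return [-x for x in reversed(s)]

def scores(profile: Profile, vector: Sequence[float]) -> Dict[str, float]:
    """s-score of every candidate: sum over the voters of the points for its position."""
    result = {c: 0.0 for c in profile[0]}
    for ranking in profile:
        for position, candidate in enumerate(ranking):   # position 0 = top choice
            result[candidate] += vector[position]
    return result

assert reversed_system(plurality(4)) == veto(4)          # Plurality* = Veto
```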
For two rankings ≻_1 and ≻_2, their swap distance (or Kendall-tau distance) κ(≻_1, ≻_2) is the number of pairs of candidates on whose ordering the two rankings disagree, i.e., κ(≻_1, ≻_2) = |{(c, d) ∈ C × C : c ≻_1 d and d ≻_2 c}|. Note that the maximum swap distance between two rankings is (m choose 2) = m(m − 1)/2. Given a profile P, Kemeny's rule selects those rankings which minimize the average swap distance to the rankings in P, so it selects arg min_{≻ ∈ L(C)} Σ_{i∈[n]} κ(≻, ≻_i). We refer to the selected rankings as Kemeny rankings.
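The swap distance and a brute-force computation of all Kemeny rankings (feasible only for a handful of candidates, given the m! search space) can be sketched as follows; again, candidates are assumed to be strings:

```python
from itertools import permutations
from typing import List

Ranking = List[str]
Profile = List[Ranking]

def swap_distance(r1: Ranking, r2: Ranking) -> int:
    """Kendall-tau distance: number of candidate pairs ranked in opposite order."""
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos2[r1[i]] > pos2[r1[j]])

def kemeny_rankings(profile: Profile) -> List[Ranking]:
    """All rankings minimizing the total (equivalently, average) swap distance."""
    best_cost, best = None, []
    for perm in permutations(profile[0]):
        cost = sum(swap_distance(list(perm), vote) for vote in profile)
        if best_cost is None or cost < best_cost:
            best_cost, best = cost, [list(perm)]
        elif cost == best_cost:
            best.append(list(perm))
    return best
```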
Scoring-Based Rank Aggregation
We now formally define the three families of scoring-based social preference functions that we study.
Definition 3.1 (s-Score). Let s be a scoring system. For the social preference function s-Score on profile P, we have ≻ ∈ s-Score(P) if and only if for all c, d ∈ C with score_s(c, P) > score_s(d, P), we have c ≻ d.
Definition 3.2 (Sequential-s-Winner). Let s be a scoring system. For the social preference function Sequential-s-Winner on profile P over candidate set C, we have ≻ ∈ Seq.-s-Winner(P) if and only if
• the top choice c = cand(≻, 1) is an s-winner in P, and
• if |C| > 1, then ≻|_{C\{c}} ∈ Seq.-s-Winner(P|_{C\{c}}).
Definition 3.3 (Sequential-s-Loser). Let s be a scoring system. For the social preference function Sequential-s-Loser on profile P over candidate set C, we have ≻ ∈ Seq.-s-Loser(P) if and only if
• the bottom choice c = cand(≻, |C|) is an s-loser in P, and
• if |C| > 1, then ≻|_{C\{c}} ∈ Seq.-s-Loser(P|_{C\{c}}).
Example 3.4. Let P be the following ranking profile: Then for the three methods based on Plurality, we have:
We sometimes view Seq.-s-Winner (or Seq.-s-Loser) rules as round-based voting rules, where in each round an s-winner (or an s-loser) is deleted from the profile and added in the highest (or lowest) position of the ranking that has not yet been filled. Notably, if there are multiple s-winners (or s-losers) in one round, each selection gives rise to a different output ranking. Seq.-Plurality-Loser is also known as STV, Seq.-Veto-Loser as Coombs, and Seq.-Borda-Loser as Baldwin.
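The round-based view translates directly into code. The following sketch implements the two sequential Plurality methods with a fixed tie-breaking order and runs them on a small made-up profile (this profile is our own illustration, not the profile of Example 3.4):

```python
from typing import List

Ranking = List[str]
Profile = List[Ranking]

def plurality_scores(profile: Profile) -> dict:
    s = {c: 0 for c in profile[0]}
    for vote in profile:
        s[vote[0]] += 1                      # one point for the top-ranked candidate
    return s

def restrict(profile: Profile, keep: set) -> Profile:
    return [[c for c in vote if c in keep] for vote in profile]

def sequential_plurality(profile: Profile, pick_loser: bool, tie_break: Ranking) -> Ranking:
    """One ranking selected by Seq.-Plurality-Winner (pick_loser=False) or by
    Seq.-Plurality-Loser, i.e. STV (pick_loser=True); ties broken by tie_break."""
    remaining = set(profile[0])
    chosen_order = []                        # winners from the top, or losers from the bottom
    while remaining:
        s = plurality_scores(restrict(profile, remaining))
        target = min(s.values()) if pick_loser else max(s.values())
        chosen = next(c for c in tie_break if c in remaining and s[c] == target)
        remaining.remove(chosen)
        chosen_order.append(chosen)
    return list(reversed(chosen_order)) if pick_loser else chosen_order

P = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"], ["b", "a", "c"], ["c", "b", "a"]]
print(sequential_plurality(P, pick_loser=False, tie_break=["a", "b", "c"]))  # ['a', 'b', 'c']
print(sequential_plurality(P, pick_loser=True,  tie_break=["a", "b", "c"]))  # ['b', 'a', 'c']
```

Note that the two methods already disagree on this tiny profile.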
Sequential-Winner and Sequential-Loser rules are formally closely related: If a candidate is an s-winner in some profile P, then it is an s*-loser in the reverse profile rev(P). Hence, we can conclude the following: Lemma 3.5. Let s be a scoring system. Then for each ranking profile P and for every ranking ≻ ∈ L(C), we have: ≻ ∈ Sequential-s-Winner(P) ⇐⇒ rev(≻) ∈ Sequential-s*-Loser(rev(P)).
For example, this lemma establishes a close connection between Seq.-Veto-Winner and Seq.-Plurality-Loser, as a ranking ≻ is selected under Seq.-Veto-Winner on profile P if and only if rev(≻) is selected under Seq.-Plurality-Loser on profile rev(P). This equivalence will prove useful in our axiomatic analysis and in our complexity results.
Axiomatic Properties
In this section, we will briefly and informally discuss some axiomatic properties and characterizations of the methods in our three families. A more formal treatment appears in Appendix A. See Table 1 for an overview.
A desirable property of a ranking aggregation rule is that if one candidate is deleted from the profile, then the relative ranking of the other candidates does not change (independence of irrelevant alternatives, IIA). Arrow's [1951] impossibility theorem shows that this property cannot be satisfied by unanimous non-dictatorial rules. Young [1988] proves that Kemeny's method satisfies a weaker version that he calls local IIA: removing the candidate that appears in the first or last position in the Kemeny ranking does not change the ranking of the other candidates. Splitting this property into its two parts, we can easily see from their definitions that Seq.-s-Winner satisfies independence at the top, and Seq.-s-Loser satisfies independence at the bottom.
Another influential axiom is known as consistency or reinforcement. A rule f satisfies reinforcement if whenever some ranking ≻ is chosen in two profiles, ≻ ∈ f(P) ∩ f(P'), then it is also chosen if we combine the profiles into one, and in fact f(P + P') = f(P) ∩ f(P'). All the methods in this paper satisfy reinforcement. Notably, Young [1988] shows that Kemeny is the only anonymous, neutral, and unanimous rule satisfying reinforcement and local IIA. Focusing on Seq.-s-Loser, Freeman et al. [2014] define reinforcement at the bottom to mean that if the same candidate c is placed in the last position of the selected ranking in two profiles, then c is also placed in the last position of the selected ranking in the combined profile. They show that independence at the bottom and reinforcement at the bottom characterize Seq.-s-Loser rules (under mild additional assumptions). Using Lemma 3.5, a simple adaptation of their proof shows that Seq.-s-Winner rules can be similarly characterized by independence at the top and reinforcement at the top. (s-Score methods do not satisfy similar independence properties; they have been characterized by Levenglick [1977] and Smith [1973].) Refining their characterization of Seq.-s-Loser rules, Freeman et al. [2014] characterize Seq.-Plurality-Loser (aka STV) as the only Seq.-s-Loser rule satisfying independence of clones [Tideman, 1987], Seq.-Veto-Loser (aka Coombs) as the only one that, in case a strict majority of voters have the same ranking, copies that ranking as the output ranking, and Seq.-Borda-Loser (aka Baldwin) as the only one always placing a Condorcet winner in the first position. Using Lemma 3.5, we can similarly characterize Seq.-Plurality-Winner as the only method in its class that copies a majority ranking.
Simulations
We analyze our three families of scoring-based ranking rules for Plurality and Borda on synthetically generated profiles.
Setup
To deal with ties in the computation of our rules, each time we sample a ranking profile over candidates C, we also sample a ranking ≻_tie ∈ L(C) uniformly at random and break ties according to ≻_tie for all rules. To quantify the difference between two rankings ≻_1, ≻_2 ∈ L(C), we use their normalized swap distance, i.e., their swap distance κ(≻_1, ≻_2) divided by the maximum possible swap distance (m choose 2) between two rankings.
(Normalized) Mallows We conduct simulations on profiles generated using the Mallows model [Mallows, 1957] (as observed by Boehmer et al. [2021], real-world profiles often seem to be close to some Mallows profile). This model is parameterized by a dispersion parameter φ ∈ [0, 1] and a central ranking ≻* ∈ L(C). A profile is then assembled by sampling rankings i.i.d. so that the probability of sampling a ranking ≻ ∈ L(C) is proportional to φ^{κ(≻, ≻*)}. We use the normalization of the Mallows model proposed by Boehmer et al. [2021], which is parameterized by a normalized dispersion parameter norm-φ ∈ [0, 1]. This parameter is internally converted to a dispersion parameter φ such that the expected swap distance between a sampled vote and the central vote is norm-φ · (m(m − 1)/4). Then norm-φ = 0 results in profiles only containing the central vote, and norm-φ = 1 leads to profiles where all rankings are sampled with the same probability, so that on average rankings disagree with the central ranking ≻* on half of the pairwise comparisons. Choosing norm-φ = 0.5 leads to profiles where rankings on average disagree with ≻* on a quarter of the pairwise comparisons.
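For small numbers of candidates, the Mallows model can be sampled exactly by enumerating all m! rankings and weighting them by φ^{κ(·, ≻*)}. The sketch below takes the (unnormalized) dispersion φ as given; the conversion from norm-φ to φ used by Boehmer et al. [2021] is not reproduced here:

```python
import random
from itertools import permutations
from typing import List

def swap_distance(r1: List[str], r2: List[str]) -> int:
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos2[r1[i]] > pos2[r1[j]])

def sample_mallows(reference: List[str], phi: float, n_votes: int,
                   rng=random) -> List[List[str]]:
    """Sample n_votes rankings with Pr[ranking] proportional to
    phi ** swap_distance(ranking, reference); exact enumeration, small m only."""
    rankings = [list(p) for p in permutations(reference)]
    weights = [phi ** swap_distance(r, reference) for r in rankings]
    return [list(rng.choices(rankings, weights=weights, k=1)[0]) for _ in range(n_votes)]
```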
Comparison of Scoring-Based Ranking Methods
We analyze the average normalized swap distance between the rankings selected by our three families of scoring-based ranking methods on profiles containing 100 rankings over 10 candidates. For this, we sampled 10 000 profiles for each norm-φ ∈ {0, 0.1, . . . , 0.9, 1} and depict the results in Figure 1(a). Let us first focus on Plurality: We find that the rankings produced by Seq.-Plurality-Loser and Plurality-Score are quite similar, whereas the ranking produced by Seq.-Plurality-Winner is substantially different. This observation is particularly strong for norm-φ ≤ 0.3: In such profiles, all the rankings are similar to each other. Accordingly, many candidates initially have a Plurality score of zero, and thus there are many ties in the execution of Plurality-Score and Seq.-Plurality-Loser (for the latter, ties occur in more than half of the rounds). Thus, the rankings computed by the two rules fundamentally depend on the (shared) random tie-breaking order tie . In contrast, for Seq.-Plurality-Winner, for norm-φ ≤ 0.3, no ties in its execution appear. In particular, Seq.-Plurality-Winner is thereby able to meaningfully distinguish the weaker candidates on these profiles. Turning to norm-φ ≥ 0.3 (where more candidates have non-zero Plurality score and thus the tie-breaking is no longer as important), Seq.-Plurality-Loser and Plurality-Score are still clearly more similar to each other than to Seq.-Plurality-Winner; this indicates that Seq.-Wi. rules indeed add a new perspective to existing scoring-based ranking rules.
Switching to Borda, the rankings returned by the three methods are quite similar. This is intuitive given that Borda scores capture the general strength of candidates in a profile much better than Plurality scores. Thus, the Borda score of a candidate also changes less drastically in case some candidate is deleted. Increasing norm-φ, the selected rankings become more different from each other (as profiles get more chaotic, leading to more similar Borda scores of candidates). Interestingly, for larger values of norm-φ, Borda-Score has the same (small) distance to the other two rules, whereas Seq.-Borda-Winner and Seq.-Borda-Loser are more different.
Comparison to Kemeny Ranking
To assess which method produces the "most accurate" rankings, we compare them to Kemeny's method. For 10 000 profiles for each norm-φ ∈ {0, 0.1, . . . , 0.9, 1}, in Figure 1(b), we show the average normalized swap distance of the Kemeny ranking to the rankings selected by our rules.
For Plurality, independently of the value of norm-φ, Seq.-Plurality-Winner produces the ranking most similar to the Kemeny ranking, then Seq.-Plurality-Loser and lastly Plurality-Score, indicating the advantages of sequential rules. What sticks out is that for norm-φ ≤ 0.3, Seq.-Plurality-Loser and Plurality-Score are far away from the Kemeny ranking. As discussed above, the reason is that, for both methods, large parts of the ranking are simply determined by the random tie-breaking order in such profiles. In contrast, Seq.-Plurality-Winner is not affected, and its output ranking is very close to the Kemeny ranking until norm-φ ≤ 0.5 (when their average normalized distance is only 0.004). For a larger dispersion parameter and in particular for norm-φ ≥ 0.7, the distance from the Kemeny ranking become more similar for our three methods. This behavior is intuitive, recalling that for norm-φ = 1, profiles are "chaotic", with many different rankings having comparable quality.
For Borda, the rankings produced by the three methods are all around the same (small) distance from the Kemeny ranking. This distance increases steadily from 0 for norm-φ = 0 to around 0.1 for norm-φ = 1.
Further Simulations
In Appendix B, we describe the results of further experiments. For instance, we analyze in which parts of the computed ranking the considered methods agree or disagree most. We find that for both Plurality and Borda, for the top positions the Kemeny ranking agrees frequently with the Seq.-Loser rule. For the bottom positions it agrees with the Seq.-Winner rule. This suggests that one should use Seq.-Loser for identifying the best candidates and Seq.-Winner for avoiding the worst candidates. Moreover, Seq.-Winner and Score agree more commonly on the top half of candidates, whereas Seq.-Loser and Score agree more commonly on the bottom half of candidates.
We repeat all our experiments on profiles sampled from Euclidean models. Obtaining similar results, this confirms that our general observations from above also hold for profiles sampled from other distributions. We also analyze the influence of the number of voters and candidates on the results, observing that increasing the number of voters leads to an increased similarity of the rankings for Plurality and Borda, whereas increasing the number of candidates leads to an increased similarity for Borda but not for Plurality. We also consider additional scoring vectors. For instance, we find that for Veto the roles of Seq.-Loser and Seq.-Winner are reversed, which is to be expected, recalling Lemma 3.5.

                                   (unparameterized)    n                                    m
Sequential-Plurality-Loser (STV)   NP-c. (Thm. 6.2)     FPT (Obs. 6.4)                       FPT (Thm. 6.1)
Sequential-Veto-Loser (Coombs)     NP-c. (Thm. 6.6)     W[1]-h. (Thm. 6.6), XP (Thm. 6.7)    FPT (Thm. 6.1)
Sequential-Borda-Loser (Baldwin)   NP-c. (Thm. 6.8)     NP-c. for n = 8 (Thm. 6.8)           FPT (Thm. 6.1)

Table 2: Our results for Sequential-Loser rules. All hardness results hold for Winner Determination; all algorithmic results also apply to Position-k Determination. The unparameterized NP-hardness results in the first column were already stated or proven by Conitzer et al. [2009] and Mattei et al. [2014].

Complexity
We study various computational problems related to Sequential-Winner and Sequential-Loser rules. By breaking ties arbitrarily, it is easy to compute some ranking that is selected by such a rule. However, in some (high-stakes) applications, it might not be sufficient to simply output some ranking selected by the rule. For instance, some candidate could claim that there also exist other rankings selected by the same rule where that candidate is ranked higher. To check such claims, and understand which rankings can be selected in the presence of ties, we need an algorithm that, for a given candidate d and position k, decides whether d is ranked in position k in some ranking selected by the rule. Accordingly, we introduce the following computational problem:

Position-k Determination for social preference function f
Given:
A ranking profile P over candidate set C, a designated candidate d ∈ C, and an integer k ∈ [|C|].
Question: Is there a ranking selected by f on P in which d is in position k, i.e., is there ≻ ∈ f(P) with pos(≻, d) = k?
Where possible, we will design (parameterized) algorithms that solve this problem. We also prove hardness results, which will apply even to restricted versions of this problem that are most relevant in practice. Specifically, we would expect candidates to mainly be interested if they can be ranked highly. Thus, we introduce the Top-k Determination problem, where we ask whether a given candidate can be ranked in one of the first k positions. 2 Lastly, the special case of both problems with k = 1 is of particular importance: The Winner Determination problem asks whether the designated candidate is ranked in the first position in some ranking selected by the rule.
For the three Sequential-Loser rules, it is known that their Winner Determination problem is NP-complete. For STV, this was stated by Conitzer et al. [2009], and for Baldwin and Coombs, this was proven by Mattei et al. [2014]. We will see that the corresponding Top-k Determination problems for the Sequential-Winner rules are also NP-complete. Thus, since almost all of our problems turn out to be NP-hard, we take a more fine-grained view. In particular, we will study the influence of the number n of voters and the number m of candidates on the complexity of our problems. This analysis is not only of theoretical interest but also practically relevant, as in many applications one of the two parameters is considerably smaller than the other (e.g., in political elections m is typically much smaller than n, while in applications such as meta-search engines or ranking applicants, n is often much smaller than m). Tables 2 and 3 provide overviews of our results.
Parameter Number of Candidates
We start by considering the parameter m, the number of candidates. It is easy to see that Position-k Determination for all Sequential-Winner and Sequential-Loser rules is fixed-parameter tractable with respect to m (by iterating over all m! possible output rankings). However, it is possible to improve the dependence on the parameter in the running time.
Theorem 6.1. For every scoring system s, Position-k Determination can be solved in
• O(2^m · nm^2) time and O(m^k · nm^2) time for Sequential-s-Winner, and
• O(2^m · nm^2) time and O(m^{m−k} · nm^2) time for Sequential-s-Loser.
Proof (algorithm). We present an algorithm for Seq.-s-Winner (the results for Seq.-s-Loser directly follow from this by applying Lemma 3.5). We solve the problem via dynamic programming. We call a subset C' ⊆ C of candidates an elimination set if there is a selected ranking where the candidates from C' are ranked in the first |C'| positions. We introduce a table T with an entry T[C'] for each subset C' ⊆ C of candidates; T[C'] is set to true if C' is an elimination set. We initialize the table by setting T[∅] to true. Now we compute T for each subset C' ⊆ C in increasing order of the size of the subset using the following recurrence: we set T[C'] to true if there is a candidate c ∈ C' such that T[C' \ {c}] is true and c is an s-winner in P|_{C\(C'\{c})}.
After filling the table, we return "true" if and only if there is a subset C' ⊆ C \ {d} with |C'| = k − 1 such that T[C'] is true and d is an s-winner in P|_{C\C'}. By filling the complete table we get a running time in O(2^m · nm^2). However, it is sufficient to only fill the table for subsets of size at most k − 1, resulting in a running time in O(m^k · nm^2).
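The dynamic program of this proof is short enough to state as code. The sketch below follows the recurrence over elimination sets for Sequential-s-Winner; `scoring(m)` is assumed to return the scoring vector for m candidates (e.g., the Plurality or Borda vectors from the earlier sketch):

```python
from itertools import combinations
from typing import Callable, List, Sequence, Set

def s_winners(profile: List[Sequence[str]], remaining: Set[str],
              scoring: Callable[[int], Sequence[float]]) -> Set[str]:
    """Candidates with maximum s-score in the profile restricted to `remaining`."""
    vec = scoring(len(remaining))
    sc = {c: 0.0 for c in remaining}
    for vote in profile:
        for pos, c in enumerate([x for x in vote if x in remaining]):
            sc[c] += vec[pos]
    best = max(sc.values())
    return {c for c, v in sc.items() if v == best}

def position_k_seq_winner(profile: List[Sequence[str]], d: str, k: int,
                          scoring: Callable[[int], Sequence[float]]) -> bool:
    """Can candidate d appear in position k in some Sequential-s-Winner ranking?
    Dynamic programming over subsets of size < k, as in the proof of Theorem 6.1."""
    candidates = set(profile[0])
    table = {frozenset(): True}                       # T[emptyset] = true
    for size in range(1, k):
        for subset in combinations(candidates, size):
            fs = frozenset(subset)
            table[fs] = any(
                table.get(fs - {c}, False)
                and c in s_winners(profile, candidates - (fs - {c}), scoring)
                for c in fs
            )
    return any(
        table.get(frozenset(sub), False)
        and d in s_winners(profile, candidates - set(sub), scoring)
        for sub in combinations(candidates - {d}, k - 1)
    )
```

As indicated in the proof, the Sequential-s-Loser case follows via Lemma 3.5 by reversing the profile and the scoring system and asking about position m − k + 1.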
Sequential Loser
We study Seq.-Plurality/Veto/Borda-Loser (aka STV, Coombs, and Baldwin). The Winner Determination problem is NP-hard for all three rules. Table 2 shows an overview of our results. In particular, we get a clear separation of the rules for the number n of voters:
• Seq.-Plurality-Loser admits a simple FPT algorithm,
• Seq.-Veto-Loser is W[1]-hard but in XP,
• Seq.-Borda-Loser is NP-hard already for 8 voters.
Plurality
Conitzer et al. [2009] stated that Winner Determination for Seq.-Plurality-Loser (aka STV) is NP-hard. This result has been frequently cited and used. The proof was omitted in the conference paper, and to our knowledge no proof has ever appeared in published work. To aid future research, we include a simple reduction here.
Theorem 6.2. Winner Determination for Sequential-Plurality-Loser (aka STV) is NP-complete.
Proof. We reduce from the NP-hard variant of Satisfiability where each clause contains at most three literals and each literal appears exactly twice [Berman et al., 2003]. Let ϕ be a formula fulfilling these restrictions with clause set F = {c_1, . . . , c_m} and variable set X = {x_1, . . . , x_n}. Let L be the set of literals over X (each variable contributing its positive and its negative literal). We construct a ranking profile whose candidate set consists of the literal candidates L, the clause candidates F, and two further candidates w and d. For this ranking profile, in every execution of Sequential-Plurality-Loser the first n eliminated candidates must be a subset L' ⊆ L of literals such that for every variable we select either its positive literal or its negative literal (but not both); in other words, a literal belongs to L' if and only if its negation does not. To see this, note that all literal candidates initially have a Plurality score of 64, which is the lowest Plurality score in the profile, and that all other candidates have a higher Plurality score. Thus, in the first round an arbitrary literal of some variable x is eliminated. This increases the Plurality score of the opposite literal to over 120. In the second round, we again have to eliminate an arbitrary literal (this time a literal of a variable different from x). We repeat this process for n rounds until, for each variable, exactly one of its two literals has been eliminated. We claim that an execution of Sequential-Plurality-Loser eliminates d last if and only if the assignment that sets all literals from L' to true satisfies ϕ.
Suppose ϕ is satisfied by some variable assignment α, and consider an execution of Sequential-Plurality-Loser that begins by eliminating the n literals set to true in α. After this, the scores of the remaining candidates are: (i) d has 100 points, (ii) w has 99 points, (iii) c_j for j ∈ [m] has between 100 and 104 points (as at least one of the literals occurring in c_j has been eliminated), and (iv) each literal ℓ ∈ L set to false by α has 124 points.
In the next round, w is eliminated, reallocating its 99 points to d. Then, in the next m rounds, each clause candidate c j is eliminated, in each round reallocating its points to d. Finally, the remaining literals are eliminated, also each reallocating their points to d. Thus, d is the last remaining candidate and ranked in the first position in the selected ranking.
Let L' ⊆ L be the set of literals eliminated in the first n rounds in some execution of the Sequential-Plurality-Loser rule (recall that for every variable exactly one of its two literals belongs to L'). Suppose that the assignment α setting all literals from L' to true does not satisfy ϕ. After the literals from L' have been eliminated, the scores of the remaining candidates are: (i) d has 100 points, (ii) w has 99 points, (iii) c_j for j ∈ [m] where α satisfies c_j has between 100 and 104 points, (iv) c_j for j ∈ [m] where α does not satisfy c_j has 98 points, and (v) each literal ℓ ∈ L set to false by α has 124 points.
Thus, in the next round, one of the unsatisfied clauses is eliminated, redistributing its 98 points to w bringing the score of w to 197. Because all but 100 voters prefer w to d, the Plurality score of d will never exceed the score of w in consecutive rounds, so d cannot be eliminated last.
Motivated by this hardness result, we now turn to the problem's parameterized complexity. We have already seen in Theorem 6.1 that the problem is solvable in O(2^m · nm^2) time. Indeed, we show that unless the Exponential Time Hypothesis (ETH) is false, we cannot hope to substantially improve the exponential part of this running time.
Theorem 6.3. If the ETH is true, then Winner Determination for Sequential-Plurality-Loser (aka STV) cannot be solved in 2^{o(m)} · poly(n, m) time.
Turning to the number n of voters, we can observe that initially only at most n candidates have a non-zero Plurality score. All other candidates (which are not ranked first in any ranking) will be eliminated immediately, without thereby changing the Plurality scores of other candidates. After these eliminations, we are left with at most n candidates. This makes it easy to see that Position-k Determination is fixed-parameter tractable with respect to n (by using Theorem 6.1).
Veto
We now turn to Seq.-Veto-Loser (aka Coombs). Mattei et al. [2014] showed that the Winner Determination problem for this rule is NP-hard. We give an alternative NP-hardness proof that also implies an ETH-based lower bound for the parameter m.
Theorem 6.5. Winner Determination for Sequential-Veto-Loser (aka Coombs) is NP-complete. If the ETH is true, then the problem cannot be solved in 2^{o(m)} · poly(n, m) time.
For the parameter n, we show that the problem is W[1]-hard with respect to the number of voters. This is shown via an involved reduction from Multicolored Independent Set. This result suggests that Seq.-Veto-Loser behaves quite differently from Seq.-Plurality-Loser, even if these two rules might seem "symmetric" to each other.
Theorem 6.6. Winner Determination for Sequential-Veto-Loser (aka Coombs) is W[1]-hard with respect to the number n of voters.
However, on the positive side, Winner Determination and even Position-k Determination are solvable in polynomial time if the number of voters is a constant. The intuition behind this result is that for Seq.-Veto-Loser, the "status" of an execution is fully captured by the bottom list of the ranking profile, i.e., a list containing the bottom-ranked candidate of each voter. Indeed, if we know the current bottom list, we can deduce exactly which candidates have been eliminated thus far. As there are only m^n possibilities for the bottom list, dynamic programming yields an XP algorithm for Position-k Determination.
Theorem 6.7. Position-k Determination for Sequential-Veto-Loser is in XP with respect to the number n of voters.
Borda
We conclude by studying Seq.-Borda-Loser (aka Baldwin). Mattei et al. [2014] proved that Winner Determination for this rule is NP-hard, adapting an earlier reduction about hardness of manipulation due to Davies et al. [2014]. In fact, by giving a construction based on weighted majority graphs and using tools from Bachmeier et al. [2019], we prove that this NP-hardness persists even for only n = 8 voters. This result suggests that the Borda scoring system leads to the hardest computational problems.
Theorem 6.8. Let n ≥ 8 be a fixed even integer. Then Winner Determination for Sequential-Borda-Loser (aka Baldwin), restricted to instances with exactly n voters, is NP-complete. In addition, if the ETH is true, then the problem cannot be solved in 2^{o(m)} · poly(m) time.
Sequential Winner
In this subsection, we briefly summarize our results for Seq.-Plurality/Veto/Borda-Winner, which to the best of our knowledge have not been previously studied (for formal statements and proofs see Appendix D). As Winner Determination is trivial for these rules, we focus on Top-k Determination. Table 3 displays an overview of our results. For all three rules, it turns out that Top-k Determination is NP-hard and W[1]-hard with respect to k. In contrast, for the parameter n, the picture is again more diverse: For Borda, we once more get NP-hardness for a constant number of voters (n = 8), while Plurality and Veto switch their role (we have a fixed-parameter tractable algorithm for Veto and W[1]-hardness for Plurality). Recalling the equivalence from Lemma 3.5, this switch is unsurprising. Indeed, similar reductions are used here as for the corresponding results for Sequential-Loser for the other scoring system.
Future Directions
There are many directions for future work. In our complexity study, we have focused on the analysis of the space of possible outcomes. However, if we are happy to break ties immediately (e.g. by some fixed order), one could focus on finding the fastest algorithms for computing the output ranking. Interestingly, it is known that computing STV is P-complete [Csar et al., 2017], so its computation is unlikely to be parallelizable. An additional challenging open problem is to determine whether the hard problems we have identified become tractable if preferences are structured, for example single-peaked. Note that for single-peaked preferences, it is known that Coombs becomes a Condorcet extension and is easy to compute [Grofman and Feld, 2004, Prop. 2]. Further, ranking candidates by Borda score is known to give a 5-approximation of Kemeny's method [Coppersmith et al., 2006]. This raises the question of whether other rules from our families provide similar approximation guarantees. Other specific questions left open by our work are whether ETH lower bounds can be obtained for additional problems, and whether they can be strengthened to SETH bounds. Finally, one could try to extend our results to other scoring vectors, and potentially prove dichotomy theorems.
A. Additional Material for Section 4
In the main body, in Section 4, we have given an informal overview of axiomatic properties satisfied by the rules in our three families. In this appendix, we give formal statements of these results. In particular, we will give formal definitions of the relevant axioms.
Let us introduce some additional notation. For two ranking profiles P = (≻_1, . . . , ≻_n) and P' = (≻'_1, . . . , ≻'_{n'}), both defined over the same candidate set, we write P + P' = (≻_1, . . . , ≻_n, ≻'_1, . . . , ≻'_{n'}) for the ranking profile obtained by concatenating the two lists. For an integer k, we write kP = P + · · · + P for the profile obtained by concatenating k copies of P. For a set S ⊆ L(C) of rankings, we write cand(S, r) = {cand(≻, r) : ≻ ∈ S} for the set of candidates that appear in position r in at least one of the rankings in S. If ≻ ∈ L(C) is a ranking and ρ : C → C is a permutation of the candidate set, then ρ(≻) is the ranking where for all pairs a, b ∈ C of candidates, we have ρ(a) ρ(≻) ρ(b) if and only if a ≻ b. For a set S ⊆ L(C) of rankings, we write ρ(S) = {ρ(≻) : ≻ ∈ S}. Let f be a social preference function, defined for profiles with any number of voters and over all possible candidate sets. (We assume this large domain to be able to state axioms that reason about variable agendas (i.e., different candidate sets) and about variable electorates (i.e., different numbers of voters).) Whenever we do not specify otherwise, in the following axioms we implicitly quantify over all possible finite sets C of candidates.
We begin with some basic axioms.
• The rule f is anonymous if for all profiles P = (≻_1, . . . , ≻_n) and all permutations σ : [n] → [n], we have f(≻_1, . . . , ≻_n) = f(≻_{σ(1)}, . . . , ≻_{σ(n)}). Thus, reordering the rankings does not change the outcome.
• The rule f is neutral if for all profiles P = (≻_1, . . . , ≻_n) and all permutations ρ : C → C, we have f(ρ(P)) = ρ(f(P)). Thus, a relabeling of candidates leads to the same relabeling of the output.
• The rule f is unanimous if for all rankings ≻ ∈ L(C) and all profiles P = (≻, . . . , ≻), where all rankings in P are equal to ≻, we have f(P) = {≻}.
• The rule f is continuous (sometimes known as the overwhelming majority axiom) if for any two profiles P and P' over the same candidate set, there exists an integer k such that f(P' + kP) ⊆ f(P).
The following are axioms about combining profiles.
• The rule f satisfies reinforcement if for all profiles P and P' over the same candidate set, we have f(P + P') = f(P) ∩ f(P') whenever the intersection is non-empty.
• The rule f satisfies reinforcement at the top if for all profiles P and P' over the same candidate set, we have cand(f(P + P'), 1) = cand(f(P), 1) ∩ cand(f(P'), 1) whenever the intersection is non-empty.
• The rule f satisfies reinforcement at the bottom if for all profiles P and P' over the same candidate set C, we have cand(f(P + P'), |C|) = cand(f(P), |C|) ∩ cand(f(P'), |C|) whenever the intersection is non-empty.
The following are independence axioms, describing that the output should not change when deleting certain candidates.
• The rule f satisfies independence at the top if for all profiles P and all candidates a ∈ cand(f(P), 1) that can appear in first position in f(P), we have that for all rankings ≻ ∈ L(C \ {a}), ≻ ∈ f(P|_{C\{a}}) if and only if the ranking ≻', obtained by placing a at the top of the ranking ≻, is a member of f(P).
• The rule f satisfies independence at the bottom if for all profiles P and all candidates a ∈ cand(f(P), |C|) that can appear in last position in f(P), we have that for all rankings ≻ ∈ L(C \ {a}), ≻ ∈ f(P|_{C\{a}}) if and only if the ranking ≻', obtained by placing a at the bottom of the ranking ≻, is a member of f(P).
Next, we introduce axioms that require a rule to follow the view of a majority of voters.
• The rule f places Condorcet winners at the top if for all profiles P = (≻_1, . . . , ≻_n) in which there exists a candidate a ∈ C such that for all other candidates b ∈ C \ {a} a majority of voters prefers a to b (i.e., |{i ∈ [n] : a ≻_i b}| > n/2), every ranking in f(P) places a in the first position. Thus, in profiles where a Condorcet winner exists, all output rankings must place it in the first position.
• The rule f copies a majority ranking if for all profiles P = (≻_1, . . . , ≻_n) in which there exists a ranking ≻ that makes up more than half of the profile (i.e., |{i ∈ [n] : ≻_i = ≻}| > n/2), we have f(P) = {≻}.
We will state Tideman's independence of clones property later.
We will now state some axiomatic characterization results, taken or adapted from the literature. The first is a characterization of Kemeny's rule.
Theorem A.1 (Young, 1988). A social preference function f satisfies anonymity, neutrality, unanimity, reinforcement, independence at the top, and independence at the bottom, if and only if it is Kemeny's rule.
Next, there is an existing characterization of Sequential-Loser rules.
Theorem A.2 (Freeman et al., 2014). A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the bottom, and independence at the bottom, if and only if there exists a scoring system s = (s^(m))_{m∈N} such that f equals Sequential-s-Loser.
Proof. This is exactly Lemma 1 from Freeman et al. [2014], except that instead of continuity they use a condition called "continuity at the bottom"; this condition is weaker than continuity as we have defined it, because f(P' + kP) ⊆ f(P) implies that cand(f(P' + kP), |C|) ⊆ cand(f(P), |C|).
We will now "turn around" Theorem A.2 to obtain an axiomatic characterization of Sequential-Winner rules. To do so, we will use Lemma 3.5. For a social preference function f , let us write f * for the social preference function defined as follows: f * (P ) = rev(f (rev(P ))) for every profile P .
One can also easily deduce the following equivalences.
Lemma A.3. Let f be a social preference function, and let f* be defined as above. Then the following equivalences hold: • f satisfies anonymity (resp., neutrality, unanimity, continuity, reinforcement, copying a majority ranking) if and only if f* satisfies the respective axiom.
• f satisfies reinforcement at the top (resp., at the bottom) if and only if f * satisfies reinforcement at the bottom (resp., at the top).
• f satisfies independence at the top (resp., at the bottom) if and only if f * satisfies independence at the bottom (resp., at the top).
Theorem A.4. A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the top, and independence at the top, if and only if there exists a scoring system s = (s^(m))_{m∈N} such that f equals Sequential-s-Winner.
Proof. It is routine to check that Seq.-s-Winner satisfies the mentioned axioms for every scoring system s. Now let f be a social preference function satisfying these axioms. By Lemma A.3, the social preference function f * then satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the bottom, and independence at the bottom. By Theorem A.2, there exists a scoring system s such that f * equals Seq.-s-Loser. By Lemma 3.5, f equals Seq.-s * -Winner, as desired.
Freeman et al. [2014] also provided characterizations of specific rules. For example, they characterize Seq.-Borda-Loser (aka Baldwin) as the only Sequential-Loser rule that places Condorcet winners at the top.
Theorem A.5 (Freeman et al., 2014, Theorem 3). A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the bottom, independence at the bottom, and places Condorcet winners at the top if and only if f equals Sequential-Borda-Loser.
If desired, one could similarly characterize Seq.-Borda-Winner as the only Sequential-Winner rule that places Condorcet losers at the bottom, using Lemma A.3. Freeman et al. [2014] also characterize Seq.-Veto-Loser (aka Coombs) as the only Sequential-Loser rule that copies a majority ranking.
Theorem A.6 (Freeman et al., 2014, Theorem 2). A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the bottom, independence at the bottom, and copies majority rankings if and only if f equals Sequential-Veto-Loser.
Using Lemmas 3.5 and A.3, we can deduce a characterization of Seq.-Plurality-Winner as the only Sequential-Winner rule that copies majority rankings.
Theorem A.7. A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the top, independence at the top, and copies majority rankings if and only if f equals Sequential-Plurality-Winner.
Turning towards Tideman's [1987] independence of clones property, Freeman et al. [2014] adapt the axiom to the ranking context. Given a profile P, we say that C′ ⊆ C is a clone set if in every ranking in P, the candidates in C′ appear consecutively (i.e., for each ranking ≻ in P and each a ∈ C \ C′, we either have a ≻ c for all c ∈ C′ or c ≻ a for all c ∈ C′).
• The rule f satisfies independence of clones (with top replacement) if the following property is satisfied. Let P be a profile and let C′ be a clone set. Let P′ be a profile obtained from P by replacing the set C′ by a single candidate a ∈ C′. Then for all rankings σ ∈ L(C), we have σ ∈ f(P) if and only if σ′ ∈ f(P′), where σ′ is the ranking obtained from σ by deleting the candidates in C′ and putting a at the position of the highest-ranked member of C′ in σ.
Freeman et al. [2014] prove that within the family of Sequential-Loser rules, Seq.-Plurality-Loser (aka STV) is the only rule satisfying this axiom.
Theorem A.8 (adapted from Freeman et al., 2014). A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the bottom, independence at the bottom, and independence of clones (with top replacement) if and only if f equals Sequential-Plurality-Loser.
By invoking Lemmas 3.5 and A.3, we can again obtain a related characterization of Seq.-Veto-Winner, but using a slightly different version of the clones axiom. In particular, let us define the axiom independence of clones (with bottom replacement) exactly as before, except that the definition should end in saying "putting a at the position of the lowest-ranked member of C′ in σ."
Theorem A.9. A social preference function f satisfies anonymity, neutrality, unanimity, continuity, reinforcement at the top, independence at the top, and independence of clones (with bottom replacement) if and only if f equals Sequential-Veto-Winner.
In the main body (and in particular Table 1), we have taken the "official" version of independence of clones to be the one with top replacement.
B. Additional Material for Section 5
B.1. Setup: Euclidean Model
In addition to ranking profiles generated from the Mallows model, we have also considered profiles generated from the Euclidean model. This model is parameterized by the dimension d ≥ 1. To sample a Euclidean profile, for each candidate and voter we sample a point from the d-dimensional hypercube [0, 1]^d uniformly at random. In the corresponding profile, each voter ranks the candidates in increasing order of their Euclidean (ℓ_2) distance to the voter.
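A sampler for this model is straightforward; the sketch below (function and variable names are ours) draws voter and candidate points uniformly from [0, 1]^d and ranks candidates by ℓ_2 distance, breaking exact distance ties arbitrarily via the sort.

```python
import math
import random

def sample_euclidean_profile(n_voters: int, n_cands: int, d: int, rng=random):
    """Sample one profile from the d-dimensional Euclidean model."""
    voters = [[rng.random() for _ in range(d)] for _ in range(n_voters)]
    cands = {c: [rng.random() for _ in range(d)] for c in range(n_cands)}
    profile = []
    for v in voters:
        # rank candidates by increasing Euclidean (l2) distance to the voter
        order = sorted(cands, key=lambda c: math.dist(v, cands[c]))
        profile.append(tuple(order))
    return profile

# Example: 100 voters, 10 candidates, dimension d = 2
profile = sample_euclidean_profile(100, 10, 2)
```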
B.2. Comparison of Ranking Methods: Euclidean Model
In the main body, in Section 5.2 and Section 5.3, we have analyzed the relation between rankings selected by the different rules on profiles generated using the Mallows model. To verify our results, we reran these experiments on profiles generated using the Euclidean model. Specifically, for each dimension d ∈ {1, 2, 3, 4, 5, 10, 15, 20}, we generated 10 000 profiles with 100 voters and 10 candidates. We depict the results in Figure 2 (which is analogous to Figure 1).
Comparison of Scoring-Based Ranking Methods We start by analyzing Figure 2(a), where we compare the rankings selected by our different scoring-based rules. The general trend here is quite similar to Mallows profiles for a large dispersion parameter (see Figure 1(a)): For Borda, the agreement of the three methods is much higher than for Plurality, with Borda-Score producing rankings close to the other two. For Plurality, Seq.-Plurality-Loser and Plurality-Score again produce similar results, whereas the ranking produced by Seq.-Plurality-Winner differs more. The general level of disagreement between the rules for Plurality is remarkably high here. For d = 1, the difference between Seq.-Plurality-Winner and the other two methods is around 0.4, which is almost 0.5 (the expected distance of two rankings drawn uniformly at random). Moreover, even for larger d, the level of disagreement remains high and is in particular around the level of disagreement for Mallows profiles with parameter norm-φ = 1. This is somewhat surprising, as profiles produced by the Mallows model with norm-φ = 1 are "maximally chaotic" and thus give the rules only limited information to distinguish the strength of candidates.
Comparison to Kemeny Ranking
We now turn to the comparison of scoring-based ranking rules to Kemeny's method (Figure 2(b)). For Plurality, as we have seen for Mallows profiles, Seq.-Plurality-Winner produces the best results, followed by Seq.-Plurality-Loser, and lastly Plurality-Score. Considering the influence of the dimension d, the differences between the methods' distances to the Kemeny ranking are more or less the same for all dimensions, with d = 1 being the only exception: At d = 1, Seq.-Plurality-Loser and Plurality-Score are at normalized swap distance 0.43 from the Kemeny ranking, whereas Seq.-Plurality-Winner is at distance 0.27, highlighting again that Euclidean profiles with d = 1 are particularly challenging and that Seq.-Plurality-Winner does best. In contrast, for Borda, the rankings produced by the three methods are all around the same small distance from the Kemeny ranking (mostly independently of the dimension).
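The distances quoted above are normalized swap (Kendall tau) distances. A sketch of the normalization we have in mind — so that two uniformly random rankings are at expected distance 0.5 — is shown below; the helper name is ours.

```python
from itertools import combinations

def normalized_swap_distance(sigma, tau):
    """Kendall tau (swap) distance between two rankings over the same
    candidates, divided by the maximum m*(m-1)/2."""
    m = len(sigma)
    pos_in_tau = {c: i for i, c in enumerate(tau)}
    disagreements = sum(
        1
        for a, b in combinations(sigma, 2)   # a is ranked above b in sigma
        if pos_in_tau[a] > pos_in_tau[b]     # ...but below b in tau
    )
    return disagreements / (m * (m - 1) / 2)
```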
B.3. Similarity in Different Vote Parts
To shed some further light on the relation of the different methods, we next analyze in which parts of the computed ranking the considered methods agree or disagree most. For this, for two rankings σ, τ ∈ L(C), we define the position displacement in position i ∈ [|C|] as
(1/2) · ( |i − pos(τ, cand(σ, i))| + |i − pos(σ, cand(τ, i))| ).
The position displacement quantifies how far away the candidates ranked in position i in one ranking are ranked in the other ranking. As an example, for two particular rankings σ and τ the position displacement in position 1 is 1, whereas in position 2 it is (1/2)·(|2−3|+|2−4|) = 3/2, because the candidates ranked second in σ and τ appear in positions 3 and 4, respectively, of the other ranking. In Figure 3, we show the average position displacement on 10 000 profiles with 100 voters and 10 candidates sampled from the Mallows model with norm-φ = 0.8. (We chose norm-φ = 0.8 in order to ensure that the tie-breaking rule plays no critical role, while still keeping some structure in the profile.) First, the general picture for Plurality and Borda is similar in the sense that for all comparisons of rules, the shape of the respective curves is similar. Thus, all observations described in the following hold for both Plurality and Borda. Second, all four methods have a generally higher agreement on the top and bottom positions than on the middle positions. Third, focusing on the comparison of the different scoring-based methods (Figure 3(a)), by design Seq.-Winner and Score always place the same candidate in the first position. However, the agreement of the rules remains high in the second position and decreases continuously until position 7. This indicates the intuitive behavior that the more candidates are present in the current round, the higher is the correlation between the scores of the candidates in the initial profile and their score in this round. Comparing Seq.-Loser to Score, a reverse effect is present. For Seq.-Loser compared to Seq.-Winner, we have an almost symmetric curve with a generally slightly higher agreement on the top than on the bottom. Fourth, we turn to the comparison of the scoring-based methods and the Kemeny ranking (Figure 3(b)).
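The definition translates directly into code. The following sketch (with our own helper names) assumes rankings are tuples listing candidates from best to worst and positions are 1-based.

```python
def pos(ranking, cand):
    """1-based position of cand in ranking."""
    return ranking.index(cand) + 1

def position_displacement(sigma, tau, i):
    """Average displacement of the two candidates occupying position i."""
    a = sigma[i - 1]   # candidate ranked at position i in sigma
    b = tau[i - 1]     # candidate ranked at position i in tau
    return 0.5 * (abs(i - pos(tau, a)) + abs(i - pos(sigma, b)))

# e.g. position_displacement(("a", "b", "c", "d"), ("b", "d", "a", "c"), 2) == 1.5
```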
B.4. Number of Ties in Executions of the Rules
The main motivation for our complexity analysis is that ties might occur in the execution of our rules. To better understand whether ties actually occur in practice (and to provide evidence for the explanation we gave in the main body about the behavior of Plurality-Score and Seq.-Plurality-Loser for small dispersion parameters), we conducted the following experiment. We again sampled 10 000 profiles from the Mallows model, for each norm-φ ∈ {0, 0.1, . . . , 1}. For each profile, we executed our rules, as usual breaking ties according to a randomly sampled tie-breaking order, and checked in each round whether a tie is present. The average number of rounds in which a tie occurred is shown in Figure 4.
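For concreteness, a sketch of such an execution with tie counting is shown below, assuming the operational reading of Sequential-s-Winner used throughout this appendix: in each round, a candidate with maximal score in the remaining profile is removed and appended to the output ranking. All identifiers are ours.

```python
def seq_winner_with_ties(profile, cands, vec_for, tie_order):
    """Run Sequential-s-Winner with a fixed tie-breaking order.

    vec_for(m) returns the scoring vector for m candidates; the function
    returns the selected ranking and the number of rounds with a tie."""
    remaining, out, tied_rounds = set(cands), [], 0
    while remaining:
        vec = vec_for(len(remaining))
        score = {c: 0 for c in remaining}
        for ranking in profile:
            for position, c in enumerate([c for c in ranking if c in remaining]):
                score[c] += vec[position]
        best = max(score.values())
        winners = [c for c in remaining if score[c] == best]
        if len(winners) > 1:
            tied_rounds += 1
        pick = min(winners, key=tie_order.index)   # break ties by tie_order
        out.append(pick)
        remaining.remove(pick)
    return tuple(out), tied_rounds
```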
For Plurality, we previously observed that on Mallows profiles with a small dispersion parameter, Seq.-Plurality-Loser and Plurality-Score produce very similar rankings. We mentioned that this could be explained by a large number of ties (that get resolved via the same tie-breaking order). This observation is clearly confirmed here (see Figure 4). For Euclidean models, the number of rounds with a tie mostly does not vary with the dimension, and there are many fewer ties than for the Mallows model. In particular, for Sequential-Plurality-Winner, the average number of rounds with a tie is around 0.7, for Plurality-Score it is around 1, and for Sequential-Plurality-Loser it is around 1.4. Again we see that Sequential-Plurality-Winner produces the fewest ties among the Plurality-based rules. Moreover, for Borda the number of ties is again much lower than for Plurality, with Borda-Score producing the fewest ties, while Seq.-Borda-Winner and Seq.-Borda-Loser give results very similar to each other. Specifically, for Borda-Score the average number of rounds with a tie is around 0.05, and for Seq.-Borda-Winner and Seq.-Borda-Loser it is around 0.2.
B.5. Influence of Profile Size
So far, we have focused on profiles with n = 100 voters and m = 10 candidates. Now, we examine the influence of the size of our profile on the results. First, we will analyze the influence of varying the number of voters and second, we will analyze the influence of varying the number of candidates.
Varying the number of voters In Figure 5, we depict the pairwise difference of our ranking methods for Plurality and the Kemeny ranking, for profiles with 10 candidates and a varying number of voters. The profiles are generated using the Mallows model with norm-φ = 0.8 (Figure 5(a)) and the Euclidean model with d = 10 (Figure 5(b)); for each number of voters, we generated 10 000 profiles. For both generation models, the different methods become pairwise more similar with a higher number of voters. For Mallows, the distance between the rankings decreases steeply, while for the Euclidean model the decrease (after n = 100) is slower. Generally speaking, increasing the number of voters gives us additional information about the strengths of the candidates and reduces the probability of artifacts. For Mallows profiles, there exists a clear ordering of the candidates in terms of their strengths (namely the central order), and additional voters clarify this situation. For Euclidean profiles, candidates are less clearly distinguishable, which explains why the four approaches do not all "converge" to the same ranking as the number of voters increases (unlike for Mallows). For both models, the ordering of pairs of our three scoring-based ranking methods in terms of their similarity is independent of the number of voters (the same is also the case for their ordering with respect to their similarity to the Kemeny ranking). The general trends described above are also present if we use Borda instead of Plurality.
Varying the number of candidates We now turn to analyzing the influence of the number of candidates. In Figure 6, we depict the pairwise distances between our methods for Plurality (Figure 6(a)) and Borda (Figure 6(b)) in profiles with 100 voters and a varying number of candidates. The profiles are generated using the Mallows model with norm-φ = 0.8. For the Euclidean model with d = 10, the results are similar, so we omit them. For each m ∈ {5, 10, 25, 50, 75}, we generated 100 profiles. Note that our use of normalized distances is particularly convenient when comparing results with differing numbers of candidates. We start by examining the results for Plurality (Figure 6(a)). Here, as the number of candidates increases, the average number of Plurality points per candidate decreases and in particular more candidates get a Plurality score of zero. This leads to more ties in the execution of Seq.-Plurality-Loser and Plurality-Score, which (being broken according to the same tie-breaking order) keeps these two methods similar. For the pairs involving Seq.-Plurality-Winner (e.g., comparing Seq.-Plurality-Winner to Plurality-Score), they become less similar as the number of candidates increases. This is because Seq.-Plurality-Winner is able to distinguish candidates who initially have low Plurality scores and does not need to rely on tie-breaking. Indeed, when Seq.-Plurality-Winner starts to rank the weak candidates, all other candidates have already been deleted and thus the average Plurality score of the weak candidates is higher and more informative.
Comparing our three scoring-based methods to the Kemeny ranking, we see that the distance increases with more candidates. This is because decreasing the average number of Plurality points per candidate makes it harder for our three scoring-based methods to distinguish the strengths of candidates; intuitively speaking, as we increase the number of candidates, the information provided by only examining the first position decreases (but, as discussed above, this effect is smaller for Seq.-Plurality-Winner).
For Borda (Figure 6(b)), the general trend is reversed. With small exceptions, the higher the number of candidates, the more similar are the rankings produced by the different methods. One effect that potentially contributes to this is that for a higher number of candidates the range of awarded points increases thereby allowing for a clearer distinction of the candidates. Nevertheless, for both Plurality and Borda, the ordering of the pairs of methods remains largely unaffected by changing the number of candidates.
Overall, the results from this section suggest that while the size of the profile in question influences the level of similarity of the different methods, the general trends observed in the previous sections hold mostly independent of the size of the profile.
B.6. Further Voting Rules
In this section, we briefly examine two additional scoring systems. In accordance with our theoretical analysis, we start by examining ranking methods based on the Veto scoring system with scoring vector (0, . . . , 0, −1). Figure 7(a) shows the results of our experiment for the Mallows model (again we generated 10 000 profiles for each norm-φ ∈ {0, 0.1, . . . , 0.9, 1}); it can be compared to the analogous plots for Plurality (see Figure 1(a) and Figure 1(b)). This comparison shows that for Veto, Seq.-Winner is similar to Score, whereas for Plurality, Seq.-Loser is similar to Score. Given these two extremes, we wanted to check the behavior of our rules in between them. Thus, we introduce a new scoring system, which we call Half. This scoring system assigns one point to the first m/2 candidates and zero points to all other candidates, where m is the number of candidates. We depict the results for profiles sampled from the Mallows model in Figure 7(b) (again we generated 10 000 profiles for each norm-φ ∈ {0, 0.1, . . . , 0.9, 1}). Indeed, in this case Seq.-Winner and Seq.-Loser are both at the same distance to Score and at the same distance to the Kemeny ranking. Nevertheless, naturally the rankings produced by Seq.-Half-Winner and Seq.-Half-Loser are still different. Remarkably, our three scoring-based ranking methods based on Half produce rankings that are closer to the Kemeny ranking than the rankings produced by any of our methods based on Plurality or Veto. This indicates the advantage of allowing voters to distinguish between many candidates to identify candidate strength.
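For reference, the scoring vectors used in this section can be written down as follows (a sketch with our own naming; whether Half rounds m/2 up or down is an assumption on our side — the sketch rounds down).

```python
def scoring_vector(name: str, m: int):
    """Scoring vector for m candidates."""
    if name == "plurality":
        return [1] + [0] * (m - 1)
    if name == "veto":
        return [0] * (m - 1) + [-1]
    if name == "borda":
        return list(range(m - 1, -1, -1))
    if name == "half":
        # one point for the first m // 2 positions, zero for the rest
        return [1] * (m // 2) + [0] * (m - m // 2)
    raise ValueError(f"unknown scoring system: {name}")
```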
C. Additional Material for Section 6.2
[Main] Theorem 6.1. For every scoring system s, Position-k Determination can be solved in
• O(2^m · nm^2) time and O(m^k · nm^2) time for Sequential-s-Winner, and
• O(2^m · nm^2) time and O(m^(m−k) · nm^2) time for Sequential-s-Loser.
Proof. It remains to prove the correctness of the algorithm described in the main body. For this, it is sufficient to prove that the recurrence relation is correct. To this end, assume a subset C′ ⊆ C of candidates is an elimination set as witnessed by the selected ranking σ, and let c := cand(σ, |C′|). Then, as witnessed by σ, C′ \ {c} is an elimination set, and no candidate has a higher s-score than c after deleting all candidates from C′ \ {c}. If T[C′] is set to true because C′ \ {c} is an elimination set (as witnessed by σ) for some candidate c ∈ C′, then C′ is clearly an elimination set, as we can eliminate the candidates from C′ \ {c} in the first |C′| − 1 rounds (breaking ties according to σ) and c in round |C′|.
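A direct (exponential-time) rendering of this dynamic program for Sequential-s-Winner is sketched below. It enumerates candidate subsets only up to size k, so it follows the O(m^k)-style variant of Theorem 6.1; all identifiers are ours, and vec_for(m) is assumed to return the scoring vector for m candidates.

```python
from itertools import combinations

def s_winners(profile, present, vec_for):
    """Candidates with maximal s-score in the profile restricted to `present`."""
    vec = vec_for(len(present))
    score = {c: 0 for c in present}
    for ranking in profile:
        for position, c in enumerate([c for c in ranking if c in present]):
            score[c] += vec[position]
    best = max(score.values())
    return {c for c in present if score[c] == best}

def can_place_at_position_k(profile, cands, vec_for, d, k):
    """Is there an execution of Sequential-s-Winner ranking d at position k?"""
    cands = frozenset(cands)
    elim = {frozenset(): True}   # T[C']: C' can occupy the first |C'| positions
    for size in range(1, k):
        for subset in map(frozenset, combinations(cands, size)):
            elim[subset] = any(
                elim[subset - {c}]
                and c in s_winners(profile, cands - (subset - {c}), vec_for)
                for c in subset
            )
    # d can take position k iff some elimination set of size k - 1 avoids d
    # and d is an s-winner once that set has been removed
    return any(
        elim[subset] and d in s_winners(profile, cands - subset, vec_for)
        for subset in map(frozenset, combinations(cands - {d}, k - 1))
    )
```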
C.1. Plurality
Before we present our ETH-based lower bound for the parameter m, we state some relevant results.
Proposition C.1 (Amiri, 2021, extending results about subcubic vertex cover by Johnson and Szegedy, 1999, and Komusiewicz, 2018). If the Exponential Time Hypothesis (ETH) is true, there does not exist an algorithm solving Cubic Vertex Cover in time 2^o(n) · poly(n), where n is the number of vertices.
Corollary C.2. If the ETH is true, there does not exist an algorithm solving Regular Clique (the Clique problem restricted to graphs where every vertex has the same degree) in time 2^o(n) · poly(n), where n is the number of vertices.
Proof. Suppose there was such an algorithm. Let G = (V, E) be a cubic graph, and k be a target size for a vertex cover. Then the complement graph of G is regular. Recall that a set T ⊆ V is a vertex cover in G if and only if V \ T is an independent set in G, if and only if V \ T is a clique in the complement graph. Hence, by applying the algorithm to find a clique of size at least n − k in the complement graph, we can find a vertex cover of size at most k in G in time 2^o(n) · poly(n), contradicting Proposition C.1.
[Main] Theorem 6.3. If the ETH is true, then Winner Determination for Sequential-Plurality-Loser (aka STV) cannot be solved in 2^o(m) · poly(n, m) time.
Proof. We reduce from Cubic Vertex Cover (given a graph where each vertex has degree three and an integer t, the problem asks whether there is a vertex cover of size t, that is, a set of t vertices such that each edge is incident to at least one of these vertices).
Given a cubic graph G with n vertices and 3n/2 edges, and some integer t, we will create an equivalent instance of Winner Determination for Seq.-Plurality-Loser. Since our reduction will only use 2n + 3n/2 + 3 candidates, this implies the claimed ETH-result by Proposition C.1.
Candidates For each vertex i, we create two vertex candidates v_i and v′_i. For each edge j, we create one edge candidate e_j. Moreover, we reference the edge candidates corresponding to the edges incident to vertex i by e^i_1, e^i_2, and e^i_3. Finally, we have the candidates d, w, and q.
Idea The idea is as follows. In the first n rounds, for each i ∈ [n] either v_i or v′_i is eliminated.
Eliminating v_i will correspond to selecting the ith vertex as part of the vertex cover. In round n + 1, in order to make candidate d win the election, we must eliminate candidate w, because this is the latest point where it has a smaller Plurality score than d. Eliminating any other candidate that can be eliminated in round n + 1 will result in w having a score larger than the score of d, and this can never change afterwards. To be able to eliminate w in round n + 1, however, we must ensure that the candidates e_j, j ∈ [3n/2], and candidate q have at least the same score as w. This can only happen if the set {i | v_i is eliminated before round n + 1} is a vertex cover. (Whenever some v_i is eliminated, the edge candidates corresponding to the edges incident to vertex i each gain one point and reach at least the same score as w.) Moreover, the set {i | v′_i is eliminated before round n + 1} must be of size at least n − t, ensuring that we have selected only t vertices to be part of the vertex cover. (Whenever some v′_i is eliminated, candidate q gains three points; q needs in total at least 3(n − t) additional points to reach at least the same score as w in round n + 1.)
Voters
We have the following voters. This completes the construction.
Key Observations Observe that in round ℓ, ℓ ∈ [n], either v_i or v′_i for some i ∈ [n] must be eliminated, since these candidates have score 60n, while every other candidate has score at least 99n − 3(n − t) > 96n. If some vertex candidate is eliminated, then the respective other vertex candidate gains 60n − 3 additional points and, hence, will not be eliminated before all edge candidates. In round n + 1, either some edge candidate e_j (with score between 99n − 1 and 99n + 1), candidate q (with score between 96n and 102n), or candidate w (with score 99n) is eliminated. If any candidate different from w is eliminated in round n + 1, then w gains more than 95n additional points and will finally win the election. If candidate w is eliminated in round n + 1, then candidate d wins the election. Independent of whether candidate w or some other candidate was eliminated in round n + 1, all edge candidates and candidate q will be eliminated next. Candidate w (or, if w has been eliminated, candidate d) receives the votes of all candidates eliminated after round n. If w is not eliminated, then d still has score 105n and will be eliminated next. Then, the remaining vertex candidates are eliminated and finally either w or d wins.
Correctness We show that d is a winner for Seq.-Plurality-Loser of the constructed profile if and only if graph G contains a vertex cover of size t.
For the "if"-part, assume that there is a vertex cover of size t. Without loss of generality, let the first t vertices denote such a vertex cover. To see that d is a winner of the election, consider the following elimination order. In round , ∈ [t], eliminate candidate v . In round , t + 1 ≤ ≤ n, eliminate candidate v . Now, each edge candidate has score at least 99n and also candidate q has score 99n. Thus, we next eliminate w, then q and then the edge candidates in an arbitrary order that is consistent with their scores (some may have score 99n being covered once, some have score 99n + 1 being covered twice). Finally eliminate the remaining vertex candidates (with scores between 120n − 6 and 120n − 3) so that only candidate d remains and wins. It is easy to verify that this elimination ordering is indeed consistent with the Plurality scores in the respective rounds. For the "only if"-part, recall the idea and key observations. Assuming d wins, it must be that w is eliminated in round n + 1. To do this, V * = {i | v i is eliminated before round n + 1} must be a vertex cover, since each edge candidate must have gained at least one additional point in the first n rounds to have a score of 99n in round n + 1 where w is eliminated. Moreover, since candidate q also needs score at least 99n in round n + 1 to allow w to be eliminated in this round, it must hold that {i | v i is eliminated before round n + 1} is of size at least n − t. Thus, V * is a vertex cover of size at most t.
C.2. Veto
[Main] Theorem 6.5. Winner Determination for Sequential-Veto-Loser (aka Coombs) is NP-complete. If the ETH is true, then the problem cannot be solved in 2^o(m) · poly(n, m) time.
Proof. We reduce from Regular Clique, i.e. Clique restricted to regular graphs (where all vertices have the same degree). Our reduction will imply NP-hardness and also the claimed ETH-result by Corollary C.2. Let (G, k) be an instance of Regular Clique, where G = (V, E) is regular and each vertex has degree r. We construct a profile on candidate set C = {d, w}∪V ∪{s v : v ∈ V } (the candidates from {s v : v ∈ V } act as dummy candidates). The question is whether d is a Coombs winner.
In these votes, "· · · " is replaced by all unmentioned candidates according to some common canonical order in which d is ranked first. To avoid doubt, for each edge {v, u} ∈ E, we introduce two votes of the bottom type, one with v u and one with u v. For convenience (to avoid talking about negative numbers), we say that the bottom count of a candidate is the number of times the candidate is ranked in last position. Thus the bottom count is the negative of the veto score, and Coombs proceeds by eliminating candidates with the highest bottom count. Throughout the proof, for a vertex v ∈ V , we write Nghbhd(v) = {u ∈ V : {u, v} ∈ E} for the neighborhood of v in G.
The intuition behind the construction is that to avoid eliminating d (so as to make d a winner), we need to first eliminate w, despite its initially low bottom count. The way to increase its count is to eliminate vertices forming a dense subgraph, since then many 'edge' votes (of the bottom type) are transferred to w. The rule will start by eliminating vertices. Once some are eliminated, it is then only possible to eliminate vertices which are adjacent to all previously eliminated vertices. Thus, elimination sequences encode cliques.
We start by proving the forward direction. Suppose G contains a clique T = {v_1, . . . , v_k} of size k. Note that in the initial profile, the bottom counts are:
• d has count k(k − 2) + r + 1,
• w has count r + 1,
• every v ∈ V has count k(k − 2) + r + 1,
• every s_v, v ∈ V, has count 0.
Hence, v_1 has (weakly) maximal bottom count and can be eliminated; by eliminating v_1, each neighbor of v_1 gains one bottom vote (from the edge vote in which v_1 was ranked last). Thus, we can eliminate v_2 (since it is a neighbor of v_1). After this, since we have eliminated both endpoints of the edge {v_1, v_2}, the votes of the two voters corresponding to that edge are transferred to w.
In fact, we can eliminate candidates in the order v_1, . . . , v_k. After we have eliminated candidates v_1, . . . , v_p, p ≤ k, we have the following situation:
• d has count k(k − 2) + r + 1 + p,
• w has count r + 1 + p(p − 1) (the p eliminated clique vertices span p(p − 1)/2 edges, each contributing two bottom votes),
• every remaining vertex candidate has count at most k(k − 2) + r + 1 + p (it gains one bottom vote for each eliminated neighbor), and every s_v still has count 0.
In particular, after all of v_1, . . . , v_k have been eliminated, w has count k(k − 1) + r + 1 = k(k − 2) + r + 1 + k, which matches the count of d and is at least the count of every other candidate.
Hence, w is a candidate with highest bottom count, and can thus now be eliminated. All its votes are transferred to the last (uneliminated) candidate in the canonical order, call it x. Because previously w had maximum bottom count, x now has maximum bottom count and can be eliminated. This argument applies repeatedly, and hence from now on the remaining candidates are eliminated following the canonical order from the bottom up, finishing with d (which was placed first in the canonical order), so d is a Coombs winner.
Conversely, suppose d is a Coombs winner. Thus, there is a way to eliminate candidates so that d is eliminated last. Now, at each of the first k elimination steps (assuming that d is eliminated last) the set of eliminable candidates is a subset of {d} ∪ V. To see this by induction, note that it holds at the first step. Further, after we have eliminated some candidates W ⊆ V with |W| = p < k, writing e_W for the number of edges between vertices in W, the resulting bottom counts are
• d has count k(k − 2) + r + 1 + p,
• w has count r + 1 + 2 · e_W,
• every dummy candidate s_v has count 0.
Since 2 · e_W ≤ 2 · (p choose 2) = p(p − 1) < k(k − 2) + p (as p < k), the bottom count of w is smaller than the count of d, so the candidate w cannot be eliminated in round p; also, clearly no candidate s_v can be eliminated. Thus, only a subset of {d} ∪ V can be eliminated at this step. In fact, we see that a vertex v ∈ V \ W can be eliminated at this step if and only if v is adjacent to all vertices from W. It follows that if during the first k steps we do not eliminate d, then we eliminate k vertices. These vertices must form a clique in G.
[Main] Theorem 6.6. Winner Determination for Sequential-Veto-Loser (aka Coombs) is W[1]-hard with respect to the number n of voters.
Proof. We prove this statement by proving the hardness of an equivalent problem for Seq.-Plurality-Winner: Specifically, we show that it is W[1]-hard parameterized by n to decide whether given a ranking profile P and a designated candidate d there is a selected ranking by Seq.-Plurality-Winner where d is ranked last. Then by applying Lemma 3.5 the theorem follows.
In some round, we call the elimination of a candidate c valid if c is a Plurality winner in the election from this round.
We say that a candidate is present in some round if the candidate has not been deleted in some previous round.
For a set S and an element s ∈ S, we write S − s as a shorthand notation for S \ {s}.
Construction We prove hardness by a reduction from the W[1]-hard Multicolored Independent Set problem parameterized by the solution size ℓ. In Multicolored Independent Set, we are given an ℓ-partite graph (V_1 ∪ V_2 ∪ . . . ∪ V_ℓ, E) and the question is whether there is an independent set X of size ℓ with X ∩ V_j ≠ ∅ for all j ∈ [ℓ]. To simplify notation, we assume that V_j = {v^j_1, . . . , v^j_ν} for all j ∈ [ℓ]. We refer to the elements of [ℓ] as colors and say that a vertex v has color j ∈ [ℓ] if v ∈ V_j.
Figure 8: Global rankings from the construction for Theorem 6.6.
For each j ∈ [ℓ] and i ∈ [ν], we introduce two vertex candidates c^j_i and q^j_i. Moreover, for each edge e ∈ E, we introduce an edge candidate f_e. Let F = {f_e : e ∈ E}. For j ∈ [ℓ] and i ∈ [ν], let F^j_i be the set of all edge candidates corresponding to edges incident to v^j_i. Moreover, we introduce blocker candidates B = {b_1, . . . , b_ℓ}. Lastly, we add dummy candidates G = {g_1, . . . , g_4} and T = {t_1, . . . , t_4}, and add the designated candidate d.
For a subset C′ of candidates, let [C′] be the lexicographic strict ordering of the candidates in C′; in particular, [T] is t_1 ≻ t_2 ≻ t_3 ≻ t_4. We now describe the ranking profile. We complete each ranking by appending the remaining candidates in an arbitrary order. First, for each color j ∈ [ℓ], we introduce two color rankings and refer to them as the first and second color ranking for color j. Moreover, we introduce four global rankings as depicted in Figure 8 (note that the last three rankings only "differ" in the beginning).
Lastly, we add five dummy rankings (these will be the rankings that contribute to the Plurality score of d at some point). All rankings are completed arbitrarily. The general intuition is that we need to eliminate g_1 in some "early" round where all candidates still have a Plurality score of at most one, as g_1 is ranked before d in only one ranking. The elimination of g_1 then immediately triggers some uniquely determined follow-up eliminations (specifically of all candidates in G ∪ T). After that, d has a Plurality score of five. As d needs to be eliminated last, all candidates that are still present after this need to have a Plurality score of at least five when they are eliminated. As each edge candidate appears in only four rankings before d, this implies that all edge candidates must already have been deleted. Moreover, in Statement 5 of Claim C.3, we prove that for each color j ∈ [ℓ], there is some i_j ∈ [ν] such that c^j_{i_j} and q^j_{i_j} are still present in the round where g_1 is eliminated. As all edge candidates must have been deleted before, from this one can conclude that {v^1_{i_1}, . . . , v^ℓ_{i_ℓ}} is an independent set.
Forward Direction For the forward direction, assume that we are given a multicolored independent set V′ = {v^1_{i_1}, . . . , v^ℓ_{i_ℓ}}.
We now describe a valid elimination order of the candidates for which d is eliminated last. We proceed in four phases.
In the first phase, every candidate has a Plurality score of at most one in each round, and it is thus valid to delete a candidate if it is ranked first in at least one ranking. For each j ∈ [ℓ], we do the following: We delete all candidates that are ranked before c^j_{i_j} in the first color ranking for color j, in the order in which they appear in this ranking. After that, we delete all candidates that are ranked before q^j_{i_j} in the second color ranking for color j, in the order in which they appear in this ranking.
As V′ is an independent set, after this, for each j ∈ [ℓ], candidate c^j_{i_j} is ranked first in the first color ranking for color j and q^j_{i_j} is ranked first in the second color ranking for color j. At the end of phase one, every candidate has a Plurality score of at most one. Subsequently, in the second phase, we start by eliminating candidate g_1. After that, for the next few rounds the elimination order is unique because one candidate is the unique Plurality winner. Specifically, we afterwards eliminate g_2, then g_3, then g_4, then t_1, then t_2, then t_3, and lastly t_4. Afterwards, d is ranked in the first position in the five dummy rankings, and the global rankings have changed accordingly. In the third phase, for j = 1 to j = ℓ, we first eliminate c^j_{i_j} and then q^j_{i_j} (and subsequently increment j by one). To argue why all these eliminations are valid, observe that in each round in this phase the Plurality score of all candidates is at most five: For d this follows directly from the fact that d appears after the blocker candidates in all but five rankings. For the blocker candidates, observe that in each round in the third phase at least one vertex candidate is present. Thus, no blocker candidate is ever ranked in first place in one of the last three global rankings. Ignoring the last three global rankings and all rankings where the blocker candidates appear after d, each blocker candidate appears in at most three rankings before all other blocker candidates. Thus, each blocker candidate has a Plurality score of at most three in each round of this phase. To see why vertex candidates have a Plurality score of at most five, fix some j ∈ [ℓ]. For candidate q^j_{i_j} the statement clearly holds, as q^j_{i_j} appears in only five rankings before d. For candidate c^j_{i_j}, observe that c^j_{i_j} appears before d in six rankings; however, in one of these rankings c^j_{i_j} is ranked after q^j_{i_j}. As q^j_{i_j} is eliminated after c^j_{i_j}, the Plurality score of c^j_{i_j} is at most five in each round.
Using that the Plurality score of all candidates in each round in the third phase is upper-bounded by five, we now argue why all eliminations are valid. For this, let us examine the situation for j = 1 (so the first round in the third phase): As c^1_{i_1} is ranked first by the four global rankings and the first color ranking for color 1, c^1_{i_1} has Plurality score five and eliminating it is valid. Subsequently, q^1_{i_1} is ranked first by the last three global rankings and both color rankings for color 1. Thus, q^1_{i_1} has Plurality score five and eliminating it is valid. The same argument also applies for increasing j, establishing the validity of this phase. After the third phase, only the blocker candidates and d remain.
In the fourth phase, we eliminate b_i for i = 1 to i = ℓ. The designated candidate d has a Plurality score of five in each round of the fourth phase. Note that after the third phase, b_1 has six Plurality points and is thus the unique Plurality winner. After eliminating b_1, b_2 has eight Plurality points and is thus the unique Plurality winner. The same reasoning applies until i = ℓ. Afterwards, only d is left, which completes the argument.
Backward direction For the backward direction, assume that there is an execution of Seq.-Plurality-Winner such that d is eliminated in the last round. We will now reason about the elimination order of the candidates in this execution of Seq.-Plurality-Winner. Let x be the round in which candidate g 1 is eliminated. We will now prove a series of claims that are the cornerstone of the proof of correctness: Claim C.3. Assume that d is eliminated last and let x be the round in which g 1 is eliminated. Then, 1. Every candidate has at most Plurality score one in round x.
2. All candidates from G ∪ T are present in round x. Every candidate which is not part of G ∪ T and that is present in round x has a Plurality score of at least five in the round in which it is eliminated.
3. All candidates from F have been eliminated before round x.
4. For each j ∈ [ℓ], there are i, i′ ∈ [ν] such that c^j_i and q^j_{i′} are present in round x.
5. For each j ∈ [ℓ], there is some i ∈ [ν] such that both c^j_i and q^j_i are present in round x.
Proof. Proof of Statement 1. As g 1 is only ranked once before d and d is eliminated last, g 1 has a Plurality score of one in round x. Thus, all candidates have a Plurality score of at most one in round x.
Proof of Statement 2. We prove the following, from which Statement 2 directly follows, as d is the last candidate to be eliminated: In rounds x to x + 7, exactly the candidates G ∪ T are eliminated. The Plurality score of d is at least five in every round after round x + 7.
Assume that no candidate from G ∪ T has been eliminated before round x. Then, after g 1 is eliminated, in round x + 1 g 2 has Plurality score two and all other candidates have a Plurality score of at most one. Thus, g 2 will be eliminated. Following this reasoning, g 3 will be eliminated in round x + 2, g 4 in round x + 3, t 1 in round x + 4, t 2 in round x + 5, t 3 in round x + 6, and t 4 in round x + 7. Afterwards d is ranked first in the five dummy rankings.
For the sake of contradiction, assume that some candidate from G ∪ T is eliminated before round x. Let h be the first candidate from G ∪ T that is eliminated. Then in the round where h is eliminated h has a Plurality score of one, as in all rankings where h is not ranked first and appears before d it is ranked after some candidates from G ∪ T . As the elimination of h distributes one Plurality point to some other candidate from G ∪ T , the elimination of h triggers the above described elimination procedure from this candidate onwards ultimately leading to d gaining at least one Plurality point. Accordingly, d has a Plurality score of at least two in round x in this case, a contradiction to Statement 1.
Proof of Statement 3. Assume that for some e ∈ E, f e is eliminated after round x − 1. Observe that each edge candidate appears only in four rankings before d (the four color rankings of colors of its endpoints). Thus, before d is eliminated the Plurality score of f e is at most four, a contradiction to Statement 2 and d being eliminated last.
Proof of Statement 4. Fix some j ∈ [ℓ]. We prove the statement in three steps by first excluding that no vertex candidate for color j is present, then excluding that only vertex candidates of the form q^j_i are present, and finally excluding that only vertex candidates of the form c^j_i are present. From these three parts the statement directly follows. First, assume for the sake of contradiction that for all i ∈ [ν], c^j_i and q^j_i have been eliminated before round x. By this and Statement 3, it follows that either some blocker candidate or d is ranked first in the two color rankings for color j in round x. However, this implies that this candidate has a Plurality score of at least two in round x, a contradiction to Statement 1. Second, assume for the sake of contradiction that for all i ∈ [ν], c^j_i has been eliminated before round x but that there is some i ∈ [ν] such that q^j_i is present in round x. We make a case distinction based on whether q^j_i is the only present vertex candidate for this color or not: We claim that if there is an i′ ∈ [ν] − i such that q^j_{i′} is present in round x, then the Plurality score of both q^j_i and q^j_{i′} is at most four in any round before d is eliminated. To see this, note that there are only five rankings in which q^j_i and q^j_{i′} are ranked before d and that in fact these are the same five rankings for both. However, for the two color rankings for color j it holds that in one of them q^j_i is ranked before q^j_{i′} and in the other q^j_{i′} is ranked before q^j_i. Thus, as long as both q^j_i and q^j_{i′} are not eliminated, both of them can have at most four Plurality points. As q^j_i and q^j_{i′} are present in round x, using Statement 2 it follows that neither q^j_i nor q^j_{i′} can reach a Plurality score of five before d's elimination, a contradiction to d being eliminated last.
Otherwise, for all i′ ∈ [ν] − i, q^j_{i′} has been eliminated before round x (and by our initial assumption, for all i′ ∈ [ν], c^j_{i′} has also been eliminated before round x). Using Statement 3, it follows that q^j_i is ranked first in both color rankings for color j in round x, a contradiction to Statement 1.
Third, assume for the sake of contradiction that for all i ∈ [ν], q^j_i has been eliminated before round x but that there is some i ∈ [ν] such that c^j_i has not been eliminated before round x. We make a case distinction similar to the one above. We claim that if there is an i′ ∈ [ν] − i such that c^j_{i′} is present in round x, then both c^j_i and c^j_{i′} have at most four Plurality points in any round before d is eliminated. Assume without loss of generality that i < i′. Both c^j_i and c^j_{i′} are only ranked before d in the two color rankings for color j and the four global rankings. However, in four of these six rankings c^j_i is ranked before c^j_{i′} (i.e., the first color ranking for color j and the last three global rankings), while in the other two c^j_{i′} is ranked before c^j_i. Thus, as long as c^j_i, c^j_{i′}, and d are present, both c^j_i and c^j_{i′} have at most four Plurality points. With the help of Statement 2, we again reach a contradiction to d being eliminated last.
Otherwise, for all i′ ∈ [ν] − i, c^j_{i′} has been eliminated before round x. Using Statement 2, it follows that c^j_i is ranked first in both color rankings for color j in round x, a contradiction to Statement 1.
Proof of Statement 5. Fix some j ∈ [ℓ]. Let i be the smallest index such that c^j_i is present in round x (from Statement 4 we know that such an i needs to exist). Assume for the sake of contradiction that there is some i′ < i such that q^j_{i′} is present in round x. We argue that in this case neither c^j_i nor q^j_{i′} can ever reach five Plurality points before d is eliminated, a contradiction to Statement 2 and d being eliminated last: The only rankings in which one of c^j_i and q^j_{i′} appears before d are the two color rankings for color j and the four global rankings. However, in the first color ranking and the last three global rankings q^j_{i′} appears before c^j_i, whereas in the other two rankings c^j_i appears before q^j_{i′}. Thus, as long as the other candidate is present, neither of the two can get a Plurality score of five.
Finally, assume for the sake of contradiction that q^j_i is not present in round x, implying that it was eliminated in round y for some y < x. Together with our above observation that excludes the presence of q^j_{i′} for all i′ < i in round x, and Statement 4, this implies that there is some i′ > i such that q^j_{i′} is present in round x. Clearly, q^j_i was ranked in first position by at least one ranking in round y. As all candidates from G ∪ T are still present in round x (Statement 2), q^j_i is not ranked first in one of the global rankings in round y. The only remaining two rankings where q^j_i appears before d are the two color rankings for color j. However, as c^j_i and q^j_{i′} are present in round y and q^j_i is ranked behind c^j_i in the first color ranking and behind q^j_{i′} in the second color ranking for color j, q^j_i is also not ranked first in these rankings in round y, a contradiction.
By Statement 5 of Claim C.3, we get that for each color j ∈ [ℓ], there is some i_j ∈ [ν] such that c^j_{i_j} and q^j_{i_j} are present in round x. We claim that V′ = {v^1_{i_1}, . . . , v^ℓ_{i_ℓ}} is an independent set in the given graph. Assume for the sake of contradiction that there are two colors j ≠ j′ ∈ [ℓ] such that e = {v^j_{i_j}, v^{j′}_{i_{j′}}} ∈ E. From Statement 3 of Claim C.3, we get that f_e has been eliminated before round x. The only rankings where f_e is ranked before d are the four color rankings for colors j and j′. In each of these rankings, f_e is ranked between c^j_{i_j} and q^j_{i_j} or between c^{j′}_{i_{j′}} and q^{j′}_{i_{j′}}. As all these four candidates are still present in round x, it follows that f_e was never ranked in the first position in any ranking before round x, a contradiction to f_e being eliminated before round x. Thus, V′ is an independent set, which is clearly multicolored.
[Main] Theorem 6.7. Position-k Determination for Sequential-Veto-Loser is in XP with respect to the number n of voters.
Proof. Assume we are given a set C of m candidates and a ranking profile P = (≻_1, . . . , ≻_n) of n voters. Knowing the bottom-list (the tuple of candidates currently ranked last by the individual voters) for each round in an execution of Sequential-Veto-Loser is clearly sufficient for us to reconstruct the selected ranking. Even more, if we only know the bottom-list in some round, then we can reconstruct which candidates have been deleted in previous rounds: these are exactly the candidates that appear behind the currently bottom candidate of a voter in its original vote (note that they clearly need to be deleted and that no further candidates could have been deleted, because they have never appeared in the last place and thus never had the lowest Veto score). To formalize this, for a tuple x = (x_1, . . . , x_n) ∈ C^n, let D(x) be the set of candidates that are ranked behind x_i in ≻_i for some voter i ∈ [n]. Intuitively speaking, we need to delete all candidates from D(x) to make x a bottom-list of our profile. However, in case D(x) contains some of the candidates appearing in x, x will actually not be the bottom-list of the resulting profile. Accordingly, we call x valid if x is the bottom-list of P|_{C\D(x)}.
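These two notions are easy to compute. The sketch below (identifiers ours) represents the profile as a list of tuples, each listing a voter's candidates from best to worst, and x as a tuple with one entry per voter.

```python
def deleted_set(profile, x):
    """D(x): candidates ranked behind x[i] in voter i's original ranking, for some i."""
    out = set()
    for ranking, bottom in zip(profile, x):
        out.update(ranking[ranking.index(bottom) + 1:])
    return out

def is_valid(profile, x):
    """x is valid iff x is the bottom-list of the profile restricted to C \\ D(x)."""
    deleted = deleted_set(profile, x)
    # after removing D(x), candidate x[i] is last for voter i unless x[i] itself
    # lies behind some x[j] in another vote, i.e. unless x[i] is in D(x)
    return not any(bottom in deleted for bottom in x)
```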
Using this notation, we solve the problem via dynamic programming. For this, we introduce a table T[i, c, c_1, . . . , c_n] for i ∈ [k] and c, c_1, . . . , c_n ∈ C. Moreover, we add a dummy cell T[0, ∅, cand(≻_1, m), . . . , cand(≻_n, m)], which we set to true. An entry T[i, c, c_1, . . . , c_n] is true if there is an execution of Sequential-Veto-Loser resulting in a ranking σ such that c is ranked in position i in σ and the bottom-list of the profile after round i is (c_1, . . . , c_n).
For increasing i ∈ [k], we fill table T by setting T[i, c, c_1, . . . , c_n] to true if (c_1, . . . , c_n) is valid and there exist candidates c′, c′_1, . . . , c′_n ∈ C such that
• D((c_1, . . . , c_n)) = D((c′_1, . . . , c′_n)) ∪ {c},
• no candidate appears more often than c among c′_1, . . . , c′_n, and
• T[i − 1, c′, c′_1, . . . , c′_n] is true.
After T is filled, we can simply check whether there are candidates c_1, . . . , c_n ∈ C such that T[k, d, c_1, . . . , c_n] is true, in which case we return yes, and no otherwise.
C.3. Borda
We now prove that Winner Determination for Sequential-Borda-Loser is NP-hard, even for a constant number of voters. We begin with a useful observation for reasoning about Sequential-Borda-Loser.
Remark C.4 (Weighted majority graph, C2-Borda scores). Every ranking profile induces a weighted majority graph (aka the C2-graph), which is an edge-weighted directed graph whose vertex set is the set of candidates, and for c, d ∈ C, the weight of the edge c → d is w_cd = |{i ∈ N : c ≻_i d}| − |{i ∈ N : d ≻_i c}|. Given a ranking profile and its induced weighted majority graph, the C2-Borda score of an alternative c ∈ C is Σ_{d ∈ C\{c}} w_cd. It is well known that the Borda score of an alternative is an affine transformation of its C2-Borda score (indeed, the C2-Borda score is twice the difference between the candidate's Borda score and the average Borda score of the candidates). Hence, to obtain the output of Sequential-Borda-Loser at a given profile, we only need to know the profile's weighted majority graph.
Figure 9: A bilevel graph as described in Lemma C.5.
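The remark translates into a few lines of code. The following sketch (our naming) computes the weighted majority graph and the C2-Borda scores from a profile given as a list of rankings, each listing candidates from best to worst.

```python
from itertools import combinations

def weighted_majority_graph(profile, cands):
    """w[(c, d)] = #{voters ranking c above d} - #{voters ranking d above c}."""
    w = {}
    for c, d in combinations(cands, 2):
        c_over_d = sum(1 for r in profile if r.index(c) < r.index(d))
        w[(c, d)] = c_over_d - (len(profile) - c_over_d)
        w[(d, c)] = -w[(c, d)]
    return w

def c2_borda_scores(w, cands):
    """C2-Borda score of c: sum of w[(c, d)] over all other candidates d."""
    return {c: sum(w[(c, d)] for d in cands if d != c) for c in cands}
```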
In the construction of our reduction, we build a weighted majority graph to reason about the scores of candidates. Using well-known results of McGarvey [1953] and Debord [1987], any arc-weighted digraph (in which all arc weights have the same parity; in our reductions we only ever use weights 0 and 2) can be realized as the weighted majority graph of a ranking profile, and this profile can be constructed in polynomial time. In some of our reductions, we will need to prove that a particular digraph can be constructed using a small number of rankings. For this, we use the following lemma.
Lemma C.5 (Erdős and Moser, 1964; Bachmeier et al., 2019). We call an edge-weighted digraph G = (V, A) bilevel if we can partition its vertex set as V = (C_1 ∪ · · · ∪ C_s) ∪ (D_1 ∪ · · · ∪ D_s), where all subsets are pairwise disjoint but some of them may be empty, such that the arcs run from the C-sets to the D-sets as depicted in Figure 9, with all arcs having weight 2.
If G is bilevel, then it can be induced as the weighted majority graph of a 2-voter profile.
Proof. Consider the sets C_1, . . . , C_s, D_1, . . . , D_s as sets that are linearly ordered in some arbitrary way, and construct two rankings ≻_1 and ≻_2 by arranging these blocks appropriately (cf. Figure 9). It is easy to check that these two voters induce G as their weighted majority graph.
We are now ready to prove the theorem. [Main] Theorem 6.8. Let n ≥ 8 be a fixed even integer. Then Winner Determination for Sequential-Borda-Loser (aka Baldwin), restricted to instances with exactly n voters, is NP-complete. In addition, if the ETH is true, then the problem cannot be solved in 2^o(m) · poly(m) time.
Proof. We will give the proof for the case n = 8. It can be extended to any larger even number of voters by repeatedly adding 2 opposite rankings (one the reverse of the other) to the profile constructed in the reduction. Adding opposite rankings does not change the induced weighted majority scores. Thus, the C2-Borda scores of the candidates do not change, and hence this does not change the result of Sequential-Borda-Loser. We reduce from Cubic Vertex Cover. (Then the ETH-based claim follows using Proposition C.1.) Let G = (V, E) be a graph with q vertices where each vertex v ∈ V is incident to exactly 3 edges, and let t be the target vertex cover size. We construct an instance of the Winner Determination problem as follows.
Figure 10: An illustration of the reduction of Theorem 6.8. All arcs have weight 2. Red superscripts denote the difference between the weight of outgoing and ingoing arcs for a candidate.
The candidate set consists of one candidate for each vertex, one candidate for each edge, a designated candidate d, and dummy candidates grouped into the sets B, F, G, H, and K. The ranking profile will be constructed so as to induce a desired weighted majority graph, where all arcs will have weight 2; the arcs are as depicted in Figure 10. We now describe how to write the weighted majority graph (C, A) as a sum of 4 bilevel graphs (as defined in Lemma C.5). For each vertex v, we label the three edges incident to it arbitrarily as e^1_v, e^2_v, e^3_v. The arc set A can be split into four pairwise disjoint arc sets, each of which describes a bilevel graph and whose union is A. By invoking Lemma C.5, we get a profile P containing 8 voters with the depicted weighted majority graph. The C2-Borda scores (see Remark C.4) in this profile are:
• d has score −8 (since it is beaten by 4 H-candidates),
• each b ∈ B has score 2t − 16,
• each v ∈ V has score −14 (since v is beaten by 3 edge candidates and by 4 B-candidates),
• each e ∈ E has score −6 (since e beats 2 vertex candidates but is beaten by 4 H-candidates and 1 G-candidate).
• each h ∈ H has score 2|E| − 8 (since it beats |E| edge candidates and d, but is beaten by 5 K-candidates),
• candidates in F, G, and K will always have non-negative scores (since they are beaten by no candidates).
Suppose that T = {v_1, . . . , v_t} is a vertex cover of G. Then the following is a valid elimination ordering according to Seq.-Borda-Loser, where d is eliminated last and thus ranked first in the corresponding selected ranking.
• Eliminate T : In the first t rounds, vertex candidates have the lowest C2-Borda score (−14), so we can eliminate T in an arbitrary ordering. Each time we eliminate a vertex candidate, the score of the B-candidates goes down by 2, and the score of all incident edge candidates also goes down by 2.
• Eliminate B: Starting in round t + 1, candidates in B have the uniquely lowest C2-Borda score of −16, so we can eliminate B in an arbitrary ordering. As we do so, the scores of the remaining vertex candidates go up.
• Eliminate E: The remaining vertex candidates in V \ T currently have score −6 because B has been eliminated. Edge candidates have a C2-Borda score of either −8 or −10 depending on whether T contains one or both of the endpoints of the edge (note that for each edge at least one endpoint has been deleted because T is a vertex cover). These are the lowest C2-Borda scores, so we can eliminate all edge candidates (in an arbitrary ordering except that the −10 edges get eliminated first). As we eliminate edge candidates, the scores of H-candidates go down.
• Eliminate H: Just after we finish eliminating E, the score of H has dropped to −8, which is the lowest C2-Borda score. So we can eliminate H in an arbitrary order.
• Eliminate F ∪ G ∪ K: At this point, all remaining candidates have C2-Borda score 0 (and they have no arcs between them, so eliminating candidates does not change the scores). So we can eliminate all candidate except d in an arbitrary order.
Conversely, suppose there is a ranking selected by Seq.-Borda-Loser where d is eliminated last. In the first t − 1 rounds, only vertex candidates can be eliminated, since only they have the lowest C2-Borda score of −14. But then, in round t, the B-candidates also have score −14. We distinguish two cases: whether another vertex is eliminated, or whether a B-candidate is eliminated.
• Case 1: In round t, a vertex candidate is eliminated. Define T to be the set of vertices whose candidates were eliminated in the first t rounds. Starting in round t + 1, the B-candidates have score −16, which is uniquely lowest, so they will then all be consecutively eliminated.
• Case 2: In round t, a B-candidate is eliminated. Then the C2-Borda score of the remaining vertex candidates goes up to −12 while the remaining B-candidates have score −14, which makes them the candidates with the uniquely lowest C2-Borda score. Thus, in the following rounds, all the B-candidates will be consecutively eliminated. Define T to be the set of vertices whose candidates were eliminated in the first t − 1 rounds.
In either of the two cases, we have now reached a stage where for a set T of at most t vertices (to be precise, either t or t − 1 vertices), the corresponding vertex candidates have been eliminated, followed by the elimination of set B. We will prove that T is a vertex cover. After the above-described eliminations, since B is eliminated, the remaining vertex candidates in V \ T have score −6. Candidate d has score −8. Edge candidates have score −6 if neither of their endpoints were contained in T , and otherwise they have a score of −8 or −10. All other candidates have score −6 or higher. Thus, in the next rounds, edges that were covered have their candidates eliminated. While this happens, the score of H-candidates goes down, but they do not become eliminateable before all covered edge candidates are eliminated. If in this way all edge candidates are eliminated, then T was a vertex cover and we are done. If there is an edge that is not covered by T , then after all covered edge candidates are eliminated, we end up in a situation where d has score −8, but all other candidates have score −6 or higher, so d would need to be eliminated next, a contradiction.
D. Additional Material for Section 6.3
We start by observing that for each scoring system s, our general problem Position-k Determination is in XP, as we can simply guess which candidates are ranked in the first k positions (and in which order) in the selected ranking and then verify whether this gives rise to a valid execution of Sequential-s-Winner.
Observation D.1. For every scoring system s, Position-k Determination for Sequential-s-Winner is in XP.
D.1. Plurality
In this section, we consider Seq.-Plurality-Winner. We start by showing that Top-k Determination is NP-hard and W[1]-hard with respect to k.
Proposition D.2. Top-k Determination for Sequential-Plurality-Winner is NP-hard and W[1]-hard with respect to k.
Proof. We reduce from Independent Set, which is W[1]-hard when parameterized by the solution size. Given a graph G = (V, E) with |V| = ν, and an integer ℓ, Independent Set asks whether there is an independent set of size ℓ in G (we assume without loss of generality that ν > 2, ℓ > 2, |E| > 1, and ℓ < ν). From an instance of Independent Set, we construct an instance of Top-k Determination as follows. We add a candidate c_v for each vertex v ∈ V. Moreover, we introduce 2ν³(ℓ − 1) + ν³ dummy candidates, a designated candidate d, a blocker candidate b, and an edge candidate e.
We now turn to the description of the ranking profile. We first add the following rankings: Moreover, for each v ∈ V, as long as c_v has less than 2ν²(ℓ − 1) + ν² Plurality points in the current profile, we add a ranking where c_v is ranked first and a dummy candidate that has so far never appeared in second place is ranked second. We set k := ℓ + 2.
Note that dummy candidates will clearly not be eliminated in the first k rounds so we can ignore them.
Assume that V′ = {v_1, …, v_ℓ} is an independent set. Then, we eliminate in the first ℓ − 1 rounds the candidates c_{v_1}, …, c_{v_{ℓ−1}}. Note that by these eliminations only the Plurality scores of dummy candidates, of b, and of e have changed. After round ℓ − 1, e has score 2ν²(ℓ − 1). Moreover, b has score at most 2ν²(ℓ − 1) + (ℓ − 1) · ν. Thus, the vertex candidates still have the highest score and we can eliminate c_{v_ℓ}. After round ℓ, candidate b clearly has the highest Plurality score, so we eliminate b. The elimination of b redistributes 2ν² points to d, at most ν points to each vertex candidate, and no points to e (because V′ is an independent set and thus in each of the voters of the third type at least one vertex candidate ranked before e is still present). Thus, we can eliminate d in round ℓ + 2.
Conversely, assume that there is an execution of Seq.-Plurality-Winner such that d is eliminated in round ℓ + 2 or before. As argued before, clearly in the first ℓ rounds vertex candidates need to be deleted. Let V′ ⊆ V be the subset of vertices corresponding to these vertex candidates. We claim that V′ needs to be an independent set. As argued above, b will be eliminated in round ℓ + 1. Thus, d has a Plurality score of 2ν² in round ℓ + 2. In round ℓ + 2, candidate e is also ranked in the first position in the 2ν² votes of the second type. Moreover, if there is an edge {v, u} ∈ E with v, u ∈ V′, then we have eliminated all candidates ranked before e in the corresponding vote, giving e a Plurality score of at least 2ν² + 1. This leads to a contradiction, as this implies that e has a higher score than d in round ℓ + 2.
Moreover, also when parameterizing by the number n of voters we still get W[1]-hardness, by reusing some ideas from the reduction for Winner Determination for Seq.-Veto-Loser from Theorem 6.6.
Theorem D.3. Top-k Determination for Sequential-Plurality-Winner is W[1]-hard with respect to the number n of voters.
Proof. We prove hardness by a reduction from the W[1]-hard Multicolored Independent Set problem parameterized by the solution size ℓ.
Construction. In Multicolored Independent Set, we are given an ℓ-partite graph (V_1 ∪ V_2 ∪ … ∪ V_ℓ, E) and the question is whether there is an independent set X of size ℓ with X ∩ V_j ≠ ∅ for all j ∈ [ℓ]. To simplify notation, we assume that V_j = {v^j_1, …, v^j_ν} for all j ∈ [ℓ]. We refer to the elements of [ℓ] as colors and say that a vertex v has color j ∈ [ℓ] if v ∈ V_j. Moreover, let |E| = µ.
We construct an instance of our problem by setting k := µ + ℓ · (ν − 1) + 1. We start by describing the candidate set. For each j ∈ [ℓ] and i ∈ [ν + 1], we introduce a vertex candidate c^j_i (notably, the candidates c^j_{ν+1} do not correspond to a vertex but act more like dummy candidates). Moreover, for each edge e ∈ E, we introduce an edge candidate f_e. For j ∈ [ℓ] and i ∈ [ν], let F^j_i be the set of all edge candidates corresponding to edges incident to v^j_i. Further, for j ≠ j′ ∈ [ℓ], let F^{j,j′} be the set of all edge candidates corresponding to edges between a vertex of color j and a vertex of color j′ (note that there are no edges between vertices of the same color). Moreover, we introduce k blocker candidates B = {b_1, …, b_k}. Lastly, we add our designated candidate d.
For a subset C′ of candidates, let [C′] be the lexicographic strict ordering of the candidates in C′.
Turning to the input rankings, we introduce for each color j ∈ [ℓ] 2 copies of two types of color rankings: Moreover, for each pair of colors j ≠ j′ ∈ [ℓ], we introduce the following critical rankings: We complete all rankings arbitrarily.
Forward Direction. For the forward direction, assume that we are given a multicolored independent set V′ = {v^1_{i_1}, …, v^ℓ_{i_ℓ}}. We now describe a valid elimination order of the candidates for which d is eliminated in round k. For j = 1 to j = ℓ, we do the following: We delete all candidates that are ranked before c^j_{i_j} in the first type of color rankings for color j, in the order in which they appear in this type of ranking. Afterwards, we delete all candidates that are ranked before c^j_{i_j + 1} in the second type of color ranking for color j, in the order in which they appear in this type of ranking. Thus, all candidates ranked before the blocker candidates in the color rankings, except c^j_{i_j}, c^j_{i_j + 1}, and candidates from F^j_{i_j}, got deleted. Thus, as V′ is an independent set, all edge candidates got deleted. As for each color two vertex candidates remain, this implies that µ + ℓ · (ν − 1) = k − 1 candidates got deleted. The resulting profile looks as follows: Notably, d is a Plurality winner in this profile, so we can eliminate d in round k.
Backward Direction. For the backward direction, assume that there is an execution of Seq.-Plurality-Winner for which d is ranked in one of the first k positions in the selected ranking. Let k* ≤ k be the round in which d is eliminated. Notably, as there exist 2 identical rankings, the Plurality winner in each round needs to have a Plurality score of at least 2. Moreover, as d only appears in the 2 critical rankings in one of the first k* positions, this implies that d needs to be ranked first in all critical rankings in round k* and that all candidates have a Plurality score of at most 2 in round k*. From the first part of this observation it follows that all edge candidates need to be deleted before round k*. Using this and that all candidates have Plurality score at most 2 in round k*, we get that for each color j ∈ [ℓ], at least two vertex candidates corresponding to vertices of this color are present in round k*: If there is a color with no remaining vertex candidates in round k*, then some blocker candidate will be ranked in the first position in the 2 · 2 rankings corresponding to this color in round k*. If there is a color with only one vertex candidate from this color remaining, then this candidate is ranked in the first position in the 2 · 2 rankings corresponding to this color in round k*. Now, for each color j ∈ [ℓ], let i_j be the smallest i such that c^j_i is present in round k* and let t_j ≠ i_j be some other index such that c^j_{t_j} is present in round k* (by our above observation both of these need to exist). We claim that {v^1_{i_1}, …, v^ℓ_{i_ℓ}} is an independent set in the given graph. Assume for the sake of contradiction that there are two colors j ≠ j′ with e := {v^j_{i_j}, v^{j′}_{i_{j′}}} ∈ E. We claim that in this case f_e has not been eliminated before round k*: Recall that a candidate can only get eliminated if its Plurality score is at least 2. Moreover, as f_e only appears in one of the first k* positions in one of the critical rankings, this means that in the round in which f_e is eliminated it needs to be ranked in the first position in one of the color rankings for either color j or j′. However, note that in all of these rankings f_e is either ranked between c^j_{i_j} and c^j_{t_j} or between c^{j′}_{i_{j′}} and c^{j′}_{t_{j′}}. As all these four candidates are still present in round k* (and thus also in all previous rounds), it follows that f_e was never ranked in the first position in one of the color rankings, a contradiction to all edge candidates being deleted before round k*.
From Theorem 6.7, using the equivalence between Seq.-Loser and Seq.-Winner from Lemma 3.5, we can directly conclude the following:
Proof. We reduce from Hitting Set, where given a universe U, a family of sets S, and an integer ℓ, the question is whether there is an ℓ-subset of the universe containing at least one element from each set from S, i.e., U′ ⊆ U with |U′| = ℓ and S ∩ U′ ≠ ∅ for all S ∈ S. Hitting Set is W[2]-hard when parameterized by ℓ. Let ν := |U| and µ := |S|. For an element u ∈ U, let S_u denote the family of sets in which u appears. That is, S_u = {S ∈ S | u ∈ S}.
We construct an instance of Top-k Determination as follows. For each element u ∈ U, we introduce an element candidate c_u. For each set S ∈ S, we introduce a set candidate e_S. Lastly, we introduce a blocker candidate b and the designated candidate d. We set k := ℓ + 1.
For convenience (to avoid talking about negative numbers), we say that the bottom count of a candidate is the number of times the candidate is ranked in last position. Thus the bottom count is the negative of the veto score, and Sequential-Veto-Winner proceeds by eliminating candidates with the lowest bottom count.
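As a small illustration (ours, not from the paper), the bottom-count bookkeeping used in this proof can be computed as follows, where, as used in the sequential setting, a candidate's bottom count is the number of rankings in which it is the last candidate that has not yet been eliminated.

    def bottom_counts(remaining, profile):
        # Bottom count of c = number of rankings whose last remaining candidate is c.
        counts = {c: 0 for c in remaining}
        for ranking in profile:
            for c in reversed(ranking):
                if c in remaining:
                    counts[c] += 1
                    break
        return counts

    def veto_winners(remaining, profile):
        # Sequential-Veto-Winner may eliminate, in each round, any candidate whose
        # bottom count is currently lowest (equivalently, whose Veto score is highest).
        counts = bottom_counts(remaining, profile)
        lowest = min(counts.values())
        return {c for c, v in counts.items() if v == lowest}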
We now turn to the description of the ranking profile. We first add the following rankings:
· · · ≻ b ≻ e_S ≻ c_u, ∀u ∈ U and S ∈ S_u
· · · ≻ b ≻ c_{u′} ≻ c_u, ∀u ∈ U and u′ ∈ U \ {u}
· · · ≻ b ≻ e_S, ∀S ∈ S and i ∈ [ν + µ + ℓ − 1]
· · · ≻ d, ∀i ∈ [ν + µ + ℓ]
· · · ≻ b, ∀i ∈ [ν + µ + ℓ + 1]
Note that with these rankings, for each element u ∈ U, the candidate c_u has bottom count at most ν + µ. For each element u ∈ U, we add several copies of the ranking · · · ≻ b ≻ c_u until c_u has bottom count exactly ν + µ. Thus, the bottom counts of the candidates in the initial profile are as follows:
• For each u ∈ U, c_u has a count of ν + µ.
• For each S ∈ S, e_S has a count of ν + µ + ℓ − 1.
• The designated candidate d has a count of ν + µ + ℓ.
Initially, all element candidates c_u are Veto winners, having the lowest bottom count. Note that the bottom count of uneliminated candidates can only increase over time. Thus, for d to be a Veto winner in some round, each element candidate needs to be either deleted or ranked last in at least ℓ additional rankings, and also each set candidate needs to be either deleted or ranked last in at least one additional vote.
Let U′ = {u_1, …, u_ℓ} be a hitting set of size ℓ. Then, in the first round we eliminate c_{u_1}, increasing the count of each other element candidate by 1, and increasing the count of each set candidate corresponding to a set from S_{u_1} by 1. All remaining element candidates are still Veto winners and we continue eliminating c_{u_i} for i = 2, …, ℓ. After round ℓ, each remaining element candidate has count ν + µ + ℓ. Moreover, as U′ is a hitting set, each set candidate also has count at least ν + µ + ℓ. This means that d is a Veto winner in round ℓ + 1 = k and we can eliminate it.
Conversely, assume that there is an execution of Seq.-Veto-Winner such that d is eliminated in round ℓ + 1 or earlier. As each element candidate either needs to be ranked last in ℓ additional rankings or deleted, in the first ℓ rounds element candidates need to be eliminated. Let U′ ⊆ U be the subset of elements that correspond to the eliminated element candidates. Then, in case U′ does not form a hitting set, there is a set candidate that still has count ν + µ + ℓ − 1 in round ℓ + 1, and thus in particular a lower count than the designated candidate d, a contradiction. By applying Lemma 3.5 to Observation 6.4, we get that Position-k Determination for Seq.-Veto-Winner is fixed-parameter tractable with respect to n.
D.3. Borda
We conclude by studying Seq.-Borda-Winner. Recall Remark C.4 which showed that it suffices to reason about the weighted majority graph induced by a profile.
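As a hedged illustration of the bookkeeping used in the remainder of this section (the names are ours, not the paper's), the C2-Borda score of a candidate can be read off the weighted majority graph as the total weight of its outgoing arcs minus the total weight of its incoming arcs, restricted to candidates that have not yet been eliminated:

    def c2_borda_scores(remaining, arcs):
        # arcs: dict mapping an ordered pair (x, y) to the weight of the majority arc x -> y.
        scores = {c: 0 for c in remaining}
        for (x, y), w in arcs.items():
            if x in remaining and y in remaining:
                scores[x] += w  # outgoing arc counts positively
                scores[y] -= w  # incoming arc counts negatively
        return scores

Eliminating a candidate simply removes it from the set of remaining candidates, which is why the scores in the proofs below change by plus or minus 2 per relevant elimination when all arcs have weight 2.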
Theorem D.8. Top-k Determination for Sequential-Borda-Winner is NP-complete for n = 8.
Proof. We reduce from Cubic Vertex Cover. Let G = (V, E) be a graph with q vertices where each vertex v ∈ V is incident to exactly 3 edges, and let t be the target vertex cover size. We construct an instance of the Top-k Determination problem as follows.
The candidate set consists of one candidate for each vertex, one candidate for each edge, a designated candidate d, a dummy candidate f, candidates B = {b_1, …, b_6}, and candidates H = {h_1, …, h_{q+3−t}}. Let C = V ∪ E ∪ {d} ∪ {f} ∪ B ∪ H. We set k = t + 7. The ranking profile will be constructed so as to induce a desired weighted majority graph, where all arcs will have weight 2. The arcs are as follows: The constructed weighted majority graph is depicted in Figure 11. We now describe how to write the weighted majority graph (C, A) as a sum of 4 bilevel graphs (as defined in Lemma C.5). For each vertex v, we label the three edges incident to it arbitrarily as e^1_v, e^2_v, e^3_v. Consider the following arc sets:
(Figure 11: An illustration of the reduction of Theorem D.8. All arcs have weight 2. Red superscripts denote the difference between the weight of outgoing and ingoing arcs for a candidate.)
It is clear that each of these sets describes a bilevel graph, that they are pairwise disjoint, and that A = A_1 ∪ A_2 ∪ A_3 ∪ A_4. By invoking Lemma C.5, we get a profile P containing 8 voters with the depicted weighted majority graph. The C2-Borda scores (for a definition, see Remark C.4) in this profile are:
• d has score 2,
• each b ∈ B has score 6 − 2t,
• each v ∈ V has score 6 (since v is incident to 3 edges, and beats 6 b-candidates),
• each e ∈ E has score 4 (since e is incident to 2 vertices).
• each h ∈ H as well as f have negative scores (and can only have non-positive scores throughout the elimination process because they do not beat any candidates) and will not be selectable in the first k rounds.
Suppose that T = {v_1, …, v_t} is a vertex cover of G. Then the following is a valid start of an elimination ordering, with d eliminated in round k = t + 7: First, we eliminate all candidates from T in some arbitrary ordering, then all candidates from B in some arbitrary ordering, and then d. Explicitly, in the first t rounds, the maximum Borda score of a candidate is 6 and all vertex candidates have a Borda score of 6 (no other candidates have a Borda score of 6 in these rounds). Thus we can select members of the vertex cover T in each of these rounds. In round t + 1, the remaining vertex candidates and the candidates in B have the maximum Borda score of 6. Thus, we can eliminate candidates in B (while doing so, the Borda scores of vertex candidates decrease, so candidates in B continue having the maximum Borda score). After all 6 candidates in B are eliminated, we are in the following situation with respect to the remaining candidates' C2-Borda scores:
• d has score 2,
• each e has score at most 2 (since we have eliminated a vertex cover, and thus have eliminated at least one candidate that e beats),
• each remaining v ∈ V has score −6; each h ∈ H and f have non-positive scores.
Hence, at this point, candidate d has the highest Borda score and can be eliminated. Conversely, suppose there is a ranking selected by Sequential-Borda-Winner where d is eliminated in round k = t + 7 or earlier. As observed above, in the first t rounds, only vertex candidates can be eliminated. Let T = {v_1, …, v_t} be the set of vertices whose candidates are eliminated in these rounds. From round t + 1 until t + 6, all candidates in B have score at least 6 (it cannot go lower because the candidates that B beats, namely H, cannot be eliminated). Because d has score only 2, all the 6 candidates in B are eliminated before d. This brings us to round t + 7 where by assumption d is eliminated. Hence at this point, no candidate has score higher than 2. In particular, for every edge e ∈ E, its score is less than 4. This can only have happened if at least one of the vertices incident to e has been eliminated and is thus part of T. It follows that T is a vertex cover.
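To make the elimination dynamics used in these proofs concrete, the following small sketch (our own illustration, not the authors' code) simulates one possible execution of Sequential-Borda-Winner directly on a weighted majority graph; ties may be broken arbitrarily, which is why the proofs argue about all possible choices rather than a fixed tie-breaking rule.

    def sequential_borda_prefix(candidates, arcs, k):
        # arcs maps ordered pairs (x, y) to the weight of the majority arc x -> y.
        remaining = set(candidates)
        prefix = []
        for _ in range(k):
            scores = {c: 0 for c in remaining}
            for (x, y), w in arcs.items():
                if x in remaining and y in remaining:
                    scores[x] += w
                    scores[y] -= w
            winner = max(remaining, key=lambda c: scores[c])  # one arbitrary tie-break
            prefix.append(winner)
            remaining.remove(winner)
        return prefix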
Theorem D.9. Top-k Determination for Sequential-Borda-Winner is W[2]-hard with respect to k.
Proof. We reduce from Hitting Set, using a similar construction as in Theorem D.8. Let U be a given universe of elements and let S be a given family of subsets of U. We are also given an integer t, and the question is whether there is a t-subset of the universe containing at least one element from each set from S, i.e., U′ ⊆ U with |U′| = t and S ∩ U′ ≠ ∅ for all S ∈ S. Let q = |U|.
We construct an instance of the Top-k Determination problem as follows.
We first give an incomplete description of the constructed instance. Later, we will add some dummy candidates that have no influence except that they increase the Borda scores of some of the candidates to a desired level. The candidate set consists of one candidate for each element, one candidate for each set S ∈ S, a designated candidate d, and a set B = {b_1, b_2} of two blocking candidates. Let C_base = V ∪ E ∪ {d} ∪ B (again, we will add to this set later). The ranking profile will be constructed so as to induce a desired weighted majority graph, where all arcs will have weight 2 (using standard arguments; McGarvey, 1953; Debord, 1987). The arcs are as follows: A_base = {(S, e) ∈ S × U : e ∈ S} ∪ (U × B). The constructed weighted majority graph is depicted in Figure 12. The C2-Borda scores (for a definition, see Remark C.4) in this profile are:
• d has score 0,
• each S ∈ S has score 2|S|,
• each e ∈ U has score 2|{S : e ∈ S}| − 4,
• each b ∈ B has score −2|U|.
no candidate has score higher than α − 2. In particular, for every set S ∈ S, its score is less than α. This can only have happened if at least one of the elements of S has been eliminated and is thus part of T. It follows that T is a hitting set. | 2022-09-20T01:16:01.154Z | 2022-09-19T00:00:00.000 | {
"year": 2022,
"sha1": "36814dc10e6f759c87520b080a6dd339f74c32be",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "36814dc10e6f759c87520b080a6dd339f74c32be",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252392771 | pes2o/s2orc | v3-fos-license | Efficacy of Levobupivacaine Versus Levobupivacaine Plus Dexmedetomidine Infiltration for Post-Tonsillectomy Analgesia: A Randomized Controlled Trial
Background The study evaluated the analgesic effects of levobupivacaine infiltration in the tonsil bed, and a combination of levobupivacaine and dexmedetomidine, in patients undergoing tonsillectomy. Methods Ninety children (ages 3 to 7 years) who were scheduled for a tonsillectomy were allocated randomly into two groups. (L Group): peritonsillar infiltration with 0.25% levobupivacaine (2 ml + 0.5 ml saline 0.9% per tonsil). (LD Group): levobupivacaine 0.25% (2 ml) plus dexmedetomidine 1 μg/kg diluted in 1 ml saline 0.9% (0.5 ml in each tonsil), administered by peritonsillar infiltration (2.5 ml per tonsil) following intubation 3–5 minutes before the operation. To avoid bias, a total volume of 2.5 ml was infiltrated in each tonsil. The first analgesic request time was the primary outcome, with postoperative pain score, total analgesic consumption, total oral intake, sedation, and side effects as secondary outcomes. Results The first rescue analgesia time in the LD group was longer (644.31 ± 112.89 min) than in the L group (551.51 ± 146.16 min, P-value <0.001). The number of patients who required >1 analgesic dose in the L group (n = 13) was higher than in the LD group (n = 5). The LD group consumed a lower total dose of IV paracetamol in the first 24 hours postoperatively (321.89 ± 93.25 mg) than the L group (394.89 ± 183.71 mg, P-value < 0.050). On the first day postoperatively, patients in the LD group had a higher total oral intake (P < 0.001). Except for a slight increase in laryngospasm in the L group, there were no side effects. Conclusions In children, peritonsillar infiltration of levobupivacaine and dexmedetomidine improved postoperative pain control after adenotonsillectomy. The topically applied levobupivacaine and dexmedetomidine were associated with no systemic effects, greater total oral intake on the first day postoperative, and higher family satisfaction.
Introduction
Tonsillectomy is one of the most popular surgical interventions in children [1]. Patients commonly experience pain with swallowing after this treatment [2]. Sore throats postoperatively are a serious issue since they can cause decreased oral intake and dehydration, both of which are hazardous [3].
Late postoperative risks from a sore throat include epithelial loss at the surgical site and necrosis of soft tissue, bleeding in dehydrated persons, acute pain, and a delay in recovery. Several tonsillectomy studies have demonstrated that a local anesthetic (LA) injection, routinely given before the procedure to reduce pain stimulation during the procedure, is effective [4,5].
Various treatments have been attempted, with varied results, to minimize postoperative discomfort and boost patient and parent satisfaction following adenotonsillectomy. As a result, peritonsillar local anesthetic infiltrations, especially with long-acting bupivacaine, are also employed [4]. The ability of local anesthetics not only to block transmission from peripheral pain receptors following tissue damage but also to prevent central nervous system sensitization is why they are used in the perioperative stage [6].
Levobupivacaine is an amide-type long-acting local anesthetic that is gentler on the cardiac and nervous systems [7].
Dexmedetomidine (DEX) is utilized in pediatric patients for analgesia and sedation, as well as in the intensive care unit and during noninvasive (MRI) and invasive (endoscopy and cardiac catheterization) procedures [8]. It can also reduce opioid use and anesthetic requirements, as well as decrease the development of delirium [9] and postanesthesia shivering, according to adult studies [9][10][11]. The objective of this study was to compare the analgesic effects of levobupivacaine infiltration in the tonsil bed with a combination of levobupivacaine and dexmedetomidine in patients undergoing tonsillectomy.
Materials and Methods
The Medical Research Ethics Committee of the Faculty of Medicine provided ethical permission for this randomized, prospective, double-blind comparative study on September 29, 2019 (Approval no: 17300316). It was registered on ClinicalTrials.gov (NCT04113720) and followed the Helsinki Declaration guidelines. All the patients' guardians gave written informed consent after the study's purpose was outlined.
The study enrolled 90 patients (ages 3 to 7 years), classified as American Society of Anesthesiologists (ASA) physical status I-II, who were scheduled for elective tonsillectomy with or without adenoidectomy (possibly via surgical retraction and bipolar diathermy).
Exclusion criteria were: previous peritonsillar abscess, obstructive sleep apnea syndrome (whether or not established by a polysomnography test), cardiovascular, liver, or kidney disease, unsatisfactory preoperative peripheral arterial oxygen saturation, coagulation disorders, relevant drug allergies, neurological or psychiatric illness, patients who take analgesics regularly or who had taken analgesics within 24 hours before surgery, and finally, patients who have difficulty perceiving anesthesia.
Randomization and blinding: randomization occurred the morning of the procedure before general anesthesia was administered.
Ninety patients were randomly assigned into two groups (n = 45) via a computer-generated randomization technique. The first group (Group L) received levobupivacaine 0.25% via peritonsillar infiltration (2 ml + 0.5 ml saline 0.9% per tonsil) following intubation, 3-5 minutes before the operation. The second group (Group LD) received levobupivacaine 0.25% (2 ml) plus dexmedetomidine 1 μg/kg diluted in 1 ml saline 0.9% (0.5 ml in each tonsil), administered by peritonsillar infiltration (2.5 ml per tonsil) after intubation, 3-5 minutes before the beginning of the operation. To avoid bias, a total volume of 2.5 ml was infiltrated in each tonsil. The selected dose of dexmedetomidine (1 μg/kg) was consistent with prior studies that confirmed the analgesic efficacy of dexmedetomidine at 1 μg/kg rather than 0.5 μg/kg or 0.75 μg/kg [12]. The research medicines were prepared in identical syringes by a nurse who was not participating in the study. All of the syringes had numbers ranging from 1 to 90, and these numbers were saved in opaque envelopes. Only the one anesthesiologist who packed the envelopes had access to the codes on the envelopes. All study personnel, including patients' guardians, were unaware of the treatment assignment.
2.1. Anaesthetic Technique. All children were required to fast for at least 6 hours before surgery, with clear fluids permitted until 2 hours before anesthetic induction. Blood pressure, heart rate, oxygen saturation, and end-tidal CO2 were assessed in the operating room. The anesthetic protocol was fixed. All subjects were preoxygenated with 100% oxygen for 3 minutes using a facemask. To induce anesthesia, incremental sevoflurane doses (in 1.5% steps) up to 7% were utilized in a 70% oxygen/air mixture. Dexamethasone (0.2 mg/kg, maximum dose of 8 mg) and an intravenous antibiotic were given. No NSAIDs, opioids, or paracetamol were used throughout the procedure.
After achieving neuromuscular block with cis-atracurium 0.3 mg/kg, the endotracheal tube was introduced, and anesthesia was continued with sevoflurane at 2.5% in a 70% oxygen/air mixture. Before the surgery, the study medicines were injected pericapsularly via the tonsil bed and peritonsillar tissue in a fan-shaped pattern from the top to the lower pole of the tonsillar fossa via a syringe with a 25-gauge spinal needle. When the anesthetic gases were switched off at the end of the surgery, the neuromuscular blockade was reversed with 0.05 mg/kg neostigmine and 0.02 mg/kg atropine, and the patients were placed in the recovery position. The children were extubated awake and transported to the postanesthesia care unit (PACU) after the protective airway reflexes were confirmed to have returned. Supplemental oxygen was withheld if the child could maintain a SaO2 >95% in ambient air for 5 minutes. After achieving an Aldrete score of 9 or higher, participants were discharged from the PACU to the ward [13].
Assessment Parameters
(i) The patient's demographic and clinical data: age, sex, weight, height, and ASA class. (ii) Operating room data include: (i) vital signs, such as noninvasive arterial blood pressure, heart rate, and peripheral arterial oxygen saturation, which were continuously monitored and noted before, during, and after the administration of the study drugs, as well as at 15, 20, 25, and 30 minutes during the operation; (ii) time of anesthesia (from initiation of anesthesia till extubation); (iii) operation time (from the beginning of the operation to the end of bleeding control).
(iv) Time to extubation (from the cessation of anesthesia to extubation).
(iii) PACU and ward data include: (i) hemodynamic parameters: heart rate, mean arterial blood pressure, and peripheral arterial oxygen saturation were measured and noted in the PACU (time of PACU arrival is 0 min) and at 15, 30, 45, and 60 min postoperatively. (ii) Pain assessment: via the Children's Hospital of Eastern Ontario Pain Scale (CHEOPS) [14], on arrival to the PACU at 0, 30, 60, 90 min, and at 2, 6, 10, 12, and 24 h after recovery from anesthesia. IV paracetamol 15 mg/kg was given for rescue analgesia if two consecutive assessments, separated by a 5 min waiting period, produced a CHEOPS score >6. All parents received a call from the same convalescent nurse the day after surgery, asking if they had seen any after-effects. The patients were then observed for a week to see if any problems emerged. Upon discharge, all children were given oral paracetamol (20 mg/kg) as needed (a maximum of four times in 24 hours).
Surgical Work.
Complete bed dissection was conducted using cold instruments in all patients, with no radiofrequency, diathermy, or laser usage. In all cases, the lower pole was ligatured (with 2-0 silk), and hemostasis was obtained with bipolar cautery. The surgeon used sharp adenoid curettes to perform the adenoidectomy (if necessary), and he palpated the adenoid bed. The curettage was repeated if necessary to ensure complete eradication [16]. A single otolaryngologist surgeon performed all procedures.
Outcomes.
The time to the first rescue analgesia call was the primary outcome. Secondary outcomes involved the effect of peritonsillar infiltration on postoperative recovery in children undergoing tonsillectomy and adenoidectomy, such as pain scores, total analgesic consumption, hemodynamics, total oral intake, and sedation, as well as recording any adverse effects over the 24-hour trial period.
Statistical Analysis.
Power of the study: the trial's primary outcome was the period of postoperative analgesia as measured by the first call for analgesics. A target sample size was determined based on a pilot study's findings. According to a power analysis, a sample size of 41 patients in each group would have 95% power to detect a difference of 0.8 effect size in the time to the first request for rescue analgesics between the two groups at the 0.05 level of significance. To account for patient dropout, a total of ninety individuals were registered.
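The reported figure of 41 patients per group can be checked with a standard normal-approximation formula for a two-sample comparison; this is our own back-of-the-envelope sketch (assuming a two-sided alpha of 0.05), not the power-analysis software actually used by the authors.

    from math import ceil
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.95):
        # Two independent groups: n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
        z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for a two-sided alpha of 0.05
        z_beta = norm.ppf(power)            # about 1.645 for 95% power
        return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    print(n_per_group(0.8))  # -> 41, in line with the reported per-group target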
Data Analysis.
The Shapiro-Wilk test was performed to assess the distribution of the baseline variables. Continuous variables, reported as mean (SD), were examined with the Student's t-test and one-way analysis of variance (ANOVA) with post-hoc multiple comparisons. Nonparametric data from the two groups, reported as median (range), were compared by the Mann-Whitney U test. Categorical data, described as numbers and percentages, were examined with the Chi-square or Fisher exact test. A P value of 0.05 was considered statistically significant. IBM SPSS Statistics Version 20 (SPSS Inc., Chicago, IL, USA) was utilized for all statistical analyses.
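As an illustrative sketch only (the study itself used SPSS), the test-selection logic described above could look as follows in Python with scipy; the function names and the two-group layout (group_l, group_ld) are our assumptions, not part of the original analysis code.

    from scipy import stats

    def compare_continuous(group_l, group_ld, alpha=0.05):
        # Shapiro-Wilk normality check on each group, then a parametric or
        # non-parametric two-group comparison accordingly.
        _, p_l = stats.shapiro(group_l)
        _, p_ld = stats.shapiro(group_ld)
        if p_l > alpha and p_ld > alpha:
            return stats.ttest_ind(group_l, group_ld)    # Student's t-test (mean, SD)
        return stats.mannwhitneyu(group_l, group_ld)     # Mann-Whitney U test (median, range)

    def compare_categorical(contingency_table):
        # Counts and percentages (e.g., laryngospasm per group) via the chi-square test.
        return stats.chi2_contingency(contingency_table)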
Results
Amongst 99 participants who were screened for eligibility, ninety patients were recruited for the study; each group contained forty-five patients (Figure 1). There were no significant differences among the groups of participants regarding age, weight, height, gender, time of operation, and anesthesia (Table 1).
3.1. The Rescue Analgesia and Analgesic Consumption.
The CHEOPS score rose earlier in the L group, which required rescue analgesia at 551.51 ± 146.16 min, whereas in the LD group the CHEOPS score started to increase and rescue analgesia was required at 644.31 ± 112.89 min.
Not only was the time to the first rescue analgesic dose significantly shorter in the L group (P value <0.001), but the number of patients who required more than one rescue analgesic dose was also higher in the L group (n = 13, 28.9%) than in the LD group (n = 5, 11.1%) (P value 0.050).
Over 24 hours, postoperative rescue analgesia was provided with an IV paracetamol bolus of 15 mg/kg as needed or if the CHEOPS score was >6. The mean total paracetamol dose of rescue analgesia taken in the first 24 hours postoperatively was considerably lower in the LD group (321.89 ± 93.25 mg) than in the L group (394.89 ± 183.71 mg, P < 0.050) (Table 2). The extubation time was significantly longer in the LD group (6.2 ± 0.7 min) than in the L group (5.1 ± 0.8 min, P < 0.001), leading to a slight prolongation of anesthesia time in the LD group (42.8 ± 3.8 min) compared with the L group (41.8 ± 3.1 min), but with no significant difference (Table 1). The maximum mean values for total oral intake over 24 hours, for liquids and semisolids, were achieved in the LD group (725.33 ± 95.12 ml and 630.9 ± 139.39 ml, P < 0.001), compared to the L group (570 ± 131 ml and 481.6 ± 123.28 ml), respectively (Table 2).
Hemodynamics: no significant differences were recorded among the groups in the mean MAP at the other time points, or in the mean heart rate or SpO2 at any studied time point (data not displayed). The HR and NIBP were stable during the whole procedure.
In the Postoperative Period.
The pain was evaluated using the CHEOPS score to assess the need for rescue analgesia. The CHEOPS scores were significantly lower in the LD group. We found that during the initial 120 min, i.e., from baseline to 120 min, the differences were not significant (P value > 0.050). The difference in CHEOPS scores between the two groups was significant at the 6th and 10th hour postoperatively, with P = 0.007 and 0.000, respectively. The LD group had lower CHEOPS scores at almost all time intervals (Table 3).
Postoperative Sedation.
Sedation was evaluated using the Ramsay sedation score in the first 240 minutes after surgery and decreased with time in both groups. The mean sedation values in the LD group were higher than those in the L group at almost all time points, but there was no significant difference (Table 3).
Side Effects.
Of the 90 patients, 11 experienced side effects: 6 vomited one time (2 in group L and 4 in group LD), and 5 suffered from excessive secretion (3 in group L and 2 in group LD), with no statistical differences between the groups. No active intervention was performed. Laryngospasm was significantly more frequent in the L group; it was observed in five children in the L group, required no active interference, and was self-limiting. However, no child had laryngospasm in the LD group (Table 4). No patient reported prolonged additional oxygen demand, respiratory depression, tachycardia, arrhythmia, hypo- or hypertension, or tonsillar bed hemorrhage.
Over one week of follow-up, no postoperative bleeding and no primary or secondary infection of the tonsillar bed were reported, and the tonsillar bed healed optimally one week after the operation without any complaints from the patients.
Patients' Satisfaction.
Patients' satisfaction, assessed with the Likert scale, was adequate (very satisfied, satisfied, and neutral) in almost 97.7% of the LD group as compared to 86.6% in the L group, P < 0.050 (Table 4).
Discussion
The most important finding of this study was that a peritonsillar combination of levobupivacaine and dexmedetomidine (1 μg/kg), administered intraoperatively (LD group) before the start of the adenotonsillectomy operation, improved postoperative analgesia, increased the time to the first analgesic call, and decreased the need for postoperative analgesia. In addition, compared with levobupivacaine alone, this mixture resulted in higher total oral intake and increased family satisfaction on the day after surgery without raising the risk of problems.
In children, determining the severity of pain is critical for treatment and follow-up. It is difficult to adequately quantify pain in children because their cognitive and verbal communication skills are lacking. As a result, employing established criteria to monitor findings should lead to accurate pain diagnosis and treatment [17]. In this study, we attempted to offer a precise evaluation using CHEOPS.
According to Jebeles et al., post-tonsillectomy pain is attributed to noxious stimulation of C-fiber inputs in the peritonsillar area [18], and it is induced by nerve inflammation and irritation, as well as spasms of the exposed pharyngeal muscles. After surgery, the pain does not disappear entirely until the mucous membrane covers the muscles [19]. Adding dexmedetomidine to ropivacaine for local anesthesia infiltration enhanced analgesic efficiency and increased the degree of pain relief after tonsillectomy and adenoidectomy, according to Hao et al. [20], which is similar to our findings. Others found that managing postoperative pain following adenotonsillectomy by infiltration with local anesthetics reduced morbidity and improved satisfaction [21]. However, their limited assessment only lasted for the first 24 hours, and they did not continue to track patients or standardize surgical techniques and instruments, which are critical factors in postoperative pain assessment, as this study did. In the present study, the same procedure (total bed dissection ± curettage adenoidectomy) was used in all patients to avoid biased results. The α2-adrenergic agonist's analgesic properties might be mediated through supraspinal, spinal, and peripheral actions [22]. The decline in analgesic requests in the current study was in agreement with earlier studies in adults [23] and pediatrics [24], which concluded that intraoperative IV dexmedetomidine significantly reduced the postoperative need for opioid analgesics. The difference in this study is the use of IV paracetamol as rescue analgesia, because our institute protocols favor nonopioid analgesia for post-tonsillectomy pain.
In this trial, laryngospasm was significantly more frequent with levobupivacaine alone, showing the potential advantage of dexmedetomidine in upper airway surgery when paired with local anesthesia. The smooth muscle relaxation induced by local dexmedetomidine infiltration supports this. Cholinergic EFS-induced contractions and acetylcholine release were reduced by the α2-adrenoceptor agonist dexmedetomidine, indicating the existence of inhibitory α2-adrenoceptors on the prejunctional side of the postganglionic junction between cholinergic neurons and smooth muscles. Exogenous acetylcholine-induced contraction and C-fiber-mediated contraction were both reduced by dexmedetomidine, indicating a direct influence on airway smooth muscle and an underlying mechanism for cough suppression, respectively [25]. A previous study conducted by El-Anwar et al. also found that laryngospasm was higher in the levobupivacaine group [26]. The absence of systemic effects with the peritonsillar injections of levobupivacaine and dexmedetomidine in this study suggests that a direct local effect is possible. However, we could not rule out a central analgesic impact due to systemic absorption, given the rich blood supply in this location. We could not determine plasma levels of levobupivacaine and dexmedetomidine to compare with clinical outcomes, which might have established local effects. More research is needed to discover the appropriate analgesic doses of levobupivacaine and dexmedetomidine in children and to explain the local side effects. The lack of a preoperative gag reflex measurement is one of the study's limitations. It is possible that the lack of a gag reflex after surgery is due to a lack of one before surgery, which can happen in some people. However, assessing swallowing difficulties and parental satisfaction was subjective and could be influenced by other factors, such as the patient's effort.
In future studies, we need to determine the optimal dosage requirements for other pediatric subpopulations.
In Conclusion
In children, peritonsillar infiltration of levobupivacaine and dexmedetomidine markedly improved postoperative pain control after adenotonsillectomy. The topically applied levobupivacaine and dexmedetomidine were associated with no systemic side effects, a higher net oral intake on the first day after surgery, and a better level of family satisfaction.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-09-21T15:02:36.105Z | 2022-09-19T00:00:00.000 | {
"year": 2022,
"sha1": "6d9e324049f874b771c8d9f8e0776d816329f1fd",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/prm/2022/9958668.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abf4621bb05e1d749bc6fc09c44159872d0df7a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252783849 | pes2o/s2orc | v3-fos-license | “Balancing within a closed family system”: a grounded theory study of how family life is affected by having a family member with an eating disorder
Background This qualitative study explores how having an adult daughter or sister with anorexia nervosa or bulimia nervosa affects the family's daily life. Previous research has focused on the family's role in the development of an eating disorder, while more recently the focus has been on the illness's impact on the family. Caring for an individual with an eating disorder can involve distress, guilt, extra burdens, and unmet needs. By interviewing the family members of adults with eating disorders, this study aims to gain insights into how the family members experience the impact of the eating disorder over time. Methods A constructivist grounded theory approach was employed to gather and analyse data to develop a theory on how families experience life with an adult daughter or sister with an eating disorder. Semi-structured individual interviews were conducted with 21 parents and siblings from across Norway. Results The participants reported a lack of professional help and the families had to handle the challenges related to the illness themselves, hence being left to themselves as a family was identified as the participants' main concern. To manage this concern the families had to balance a pattern of care within the closed family system, and this comprises the core category of the study. Families balanced a pattern of care by shifting roles, adapting meal routines, and maintaining openness within the family. Conclusions The findings indicate that families need clarification of roles and responsibilities in relation to the health services in caring for the individual with an eating disorder. More research on taking a family perspective concerning adults with eating disorders is needed.
Background
The family's role in the development and maintenance of eating disorders has been researched for decades [1][2][3][4]. In more recent years focus has also been on the impact that eating disorders have on the family. In 1978 Minuchin and colleagues identified characteristics of enmeshment, overprotection, rigidity, and avoidance or non-resolution of conflict in families where one member developed anorexia [1]. Familial influence on anorexia and bulimia was reviewed in 1987 by Strober and Humphrey, who found that no single mechanism or pathway of influence was associated with eating disorders, although certain genetically determined personality factors might predispose the individual to greater sensitivity and vulnerability [5]. Polivy and Herman identified similar findings in a review in 2002, by claiming that familial characteristics could be viewed as not causative in themselves but as additional vulnerability factors in the development of eating disorders. In other words, certain personality features and sociocultural pressure were involved in the development of the illness [6].
Whitney and Eisler [7] reviewed family experiences and processes concerning caring for an individual with an eating disorder and found that blaming the family for the development of eating disorders was poorly supported by empirical evidence. However, the families could be stuck in unhelpful interactions and lose sight of their own resources, which might unintentionally contribute to the maintenance of the eating disorder [7]. Further research on treatment for eating disorders supported this view. Eating disorders should be seen as the 'common enemy' by family members. Family members need to help detach themselves from behaviours that enable or accommodate the eating disorder, and this has accordingly become an important part of treatment for eating disorders across the age and diagnostic spectrum [8,9]. Help and support for families to ease their burden in coping with an eating disorder should be prioritized over accusations and blame [10]. In a review on family functioning from 2014, it was found that families of individuals with eating disorders were reported to have worse family functioning than control families, although little evidence has been found for a typical pattern of dysfunction. The authors suggested that the poorer family functioning could be present before the onset of the eating disorder and become worse when the family had to cope with having a member with this illness [11].
Depression, anxiety, and worry are primary emotional reactions in families affected by an eating disorder. The caregiver burden on families affected by an eating disorder can be immense due to carer tasks and negative impact on everyday life and the relationship between the family members [12][13][14][15][16]. The all-consuming demands of the disorder impact the carer's ability to attend to and maintain relationships and activities outside the family unit. And the lack of understanding about the illness and support provided by both other family members and the health services exacerbates the family's feelings of being alone [15]. The family system can be characterised as being closed when there is little contact with others outside the family, while the family members are very close and loyal to each other [17]. Families experience a need for professional help and assistance in relation to their own emotional reactions, and parents report difficulties in responding to their own needs and the needs of other family members [12][13][14][18][19][20]. Siblings in families with eating disorders report issues such as insufficient care and negative changes in family life, and the illness takes up considerable time and space [21][22][23].
Those caring for individuals with eating disorders have reported extensive unmet needs, mainly regarding support and counselling by health services [12,14,16]. An association has been found between the duration of the illness and the caregiving experience, as the caregivers must adapt to the illness to a greater degree when it persists [12,16,24]. Hopes of recovery and external assistance tended to be higher during the early phases of the illness [12,[25][26][27]. Individuals with eating disorders are reported to find independence and adult responsibility difficult, hence they often end up living with their primary family longer than their peers [26,28,29].
This study is part of a larger project on the experiences and strategies of being a parent or sibling of an adult woman with an eating disorder. The current article is based on findings from two respective sub-studies on parents' and siblings' experiences [30,31]. To explore the impact of the eating disorder over time, we therefore wanted to interview families of adults with eating disorders. The participants reported a lack of professional help, and they were left to themselves as families having to handle the challenges related to the illness. The families established a system for taking care of the family member with the eating disorder. Both the parents and the siblings assumed the role of caregiver, the whole family adapted their meal routines, and the families managed to keep an openness within the family. These findings show that families need clarification of roles and responsibilities in relation to the health services in caring for a family member with an eating disorder.
By interviewing the family members of adults with eating disorders, we aim to gain insights into how the participants have experienced the impact of the eating disorder over time. Moreover, as argued above, most of the focus has been on families with children and adolescents with eating disorders, hence more knowledge about the adult group is needed. The aim of this study is to generate a theory about how family life is affected by an eating disorder. How does having a daughter/sister with an eating disorder affect the family?
Design
This qualitative study used a constructivist grounded theory approach to gather and analyse data to develop a theory on how families experience life with a daughter/sister with an eating disorder [32]. A grounded theory approach was chosen because it is considered appropriate when exploring actions and interactions in particular settings when the aim is to explain rather than describe [32,33]. The avoidance of a pre-formulated hypothesis means that the explanation is generated from the data, that is, the analysis is 'grounded' in the data. The simultaneous collection and analysis of data help the authors focus on developing concepts about the data, with the analyses influencing further data collection [32,33]. Memos, reflections on the research process, and ideas for an initial analysis, were used actively throughout the process. According to the principles of grounded theory, an iterative process of data collection, coding, and analysis was employed. Participants' main concern was identified, and their 'solution' to the main concern is the content of the core category [32]. In constructivist grounded theory it is acknowledged that what one discovers in data is part of one's own perspective [32]. This version of grounded theory differs from the original grounded theory, which emphasized that the researcher should have an objective approach to data [34,35]. According to Charmaz, researchers are part of what is studied, and theories are constructed through researchers' involvement and interaction with the participants, as well as research practice. The constructivist approach offers an interpretation, not an exact picture, of what is studied [32].
The data collection method should be based on the research question, as should access to data [32]. For this study, individual semi-structured interviews were considered appropriate for gathering data. The data in this study are based on the interviews with parents and siblings from the sub-studies by N.N. et al. [30,31]. The data were reanalysed with the main research question: How does having a daughter/sister with an eating disorder affect the family? (The complete interview guide is included under data collection).
Co-researchers
Two co-researchers (N.N,N.N) were involved as consultants in the project: one with experience of having had an eating disorder, the other with experience of being the mother of a daughter with an eating disorder. The co-researchers were involved in developing the research questions and the interview guide and participated in discussing and planning the project as a whole. N.N participated in the final analysis of the results. Their ability to view the results from a different angle, based on their experience, was a valuable contribution throughout the research process.
Participants
Participants were parents, and siblings, of women (over the age of 18) with anorexia or bulimia. The onset of the illness was for the majority in their early teens; for a few, it was their early twenties. The daughters/sisters were between 20 and 32 years, and their mean age was 24.71. All the daughters/sisters had long duration (± 10 years) of illness and all the participants described their daughter/sister as fluctuating between better and worse periods during their eating disorder. Four of the daughters/sisters were perceived as now having control over their eating disorder and half of the group were currently under treatment. Twenty-one individual interviews were conducted, with parents and siblings from eleven families. The participants were asked about age, living situation, family, and marital status. Parents in six of the eleven families were married couples, while the remaining were divorced. Participants were recruited by one inpatient and three outpatient eating disorders and general psychiatric units, and two organisations providing support to patients and families regarding eating disorders. The units and organisations conveyed information about the project to prospective participants, who voluntarily signed up to participate. Participants were recruited from different counties within Norway. Table 1 presents characteristics of the participants.
Data collection
Semi-structured interviews were conducted by asking a few interview questions so that participants had the opportunity to steer the direction of the interview [32]. An interview guide was used, with a few predetermined open-ended questions. The questions are presented in Table 2.
The guide was adjusted as new themes were brought up [32]. By following up on codes from the early interviews, more pointed questions were developed [36]. One example is the theme of changed family dynamics, which was brought up by both parents and siblings in the first interview. They talked much about the family becoming changed as a result of everyone's attention being placed on the family member with the eating disorder. In further interviews questions regarding new family patterns and roles were then asked. As the findings evolved, further interviews were conducted to refine the categories [32,36], hence more participants were recruited from the recruiting units and organisations.
Thirteen interviews were conducted face-to-face and the remainder by telephone (due to …). The participants preferred telephone interviews to the option of video conferencing interviews. All interviews were audio recorded and transcribed verbatim. They each lasted between 25 and 150 min, with a majority lasting about one hour. Interviews and transcriptions were conducted by the first author. The quotes from the participants, used in this article, were translated from Norwegian.
Data analysis
The data were analysed following the principles of constructivist grounded theory [32]. First, the transcriptions were initially coded, sticking closely to the data by studying the words, lines, and segments. In vivo codes, participants' own terms for words or sentences, were used where appropriate. The next stage of the analysis was focused coding. Here the initial codes were studied and compared. The focus was on what the initial codes said and the comparison between them. The focused codes were constructed based on the frequency of the initial codes or their significance for the aim of the study. Decisions needed to be made about which initial codes made the most analytic sense to clearly categorize the data. This phase more clearly represents the voice of the researcher, as the focused codes are based not only on the participants' statements but on relevance to the topic. The focused codes were then compared and merged into preliminary categories, and these categories consisted of groups of focused codes with similar characteristics [32].
Several memos were written during the process of analysis and the development of the categories. One example is a memo regarding parents' and siblings' expressions of feelings of responsibility for the family member with the eating disorder: Many of the siblings also expressed that they felt a great deal of responsibility for taking care of the sister with the illness. Did this imply that the parents and the siblings both perceived the same level of responsibility? It remained however unclear what the emerging categories were explaining, as they needed to be elaborated and refined. Hence more interviews were conducted. After further data collection, a main category with three related subcategories was identified. The last interviews did not add any particularly new information to the categories, other than some additional details. The mostly long interviews yielded rich data, which provided in-depth insights into the phenomenon being explored.
Table 2 Interview guide questions:
• What kind of experiences do you have as part of the family of a daughter/sister with an eating disorder?
• Who comprises your family, and what is the ED diagnosis of your daughter/sister?
• How does it affect/have affected your family living close to a family member with this illness?
• Are there/have there been any main challenges in your family?
• Are you using/have used certain kinds of strategies for handling the situation as a family of a daughter/sister with an eating disorder?
• Do you have experience with health services in this context? In that case, what experiences do you have?
The first author conducted the coding process. After constant comparison between data, codes, and preliminary categories, a core category was developed. To ensure a consistent process, the development of the core category was conducted in collaboration between the authors N.N,N.N,N.N,N.N.
Ethical considerations
Participants gave informed written consent to participate voluntarily in the project and had the opportunity to withdraw at any time. The consent for participation had to first be given by the daughter/sister affected by an eating disorder, as requested by the ethical committee. Participants' confidentiality was protected by anonymizing the data. Information about their identities was stored securely in accordance with data legislation and university procedures.
Balancing a pattern of care within a closed family system
All the participants talked about the family member's eating disorder as a process, from the evolvement of the illness to a still ongoing situation. Though the majority no longer lived together with the family member with the illness, their daily life was still influenced by the daughter/sister. The family being left to themselves was identified as the participants' main concern. When the daughter/sister developed an eating disorder, the family tried to handle the challenges related to the illness themselves, because of the lack of professional help. The families experienced being left to themselves in this situation: They (the health services) were so dismissive, I got the impression that it was not worth spending money on eating disorders… It was not a priority. We were left astray (Mother fam. 7). As the family adapted to the family member with the illness, its social life also narrowed. Due to generally limited knowledge and understanding of eating disorders, the families felt insecure talking to others outside the family.
To manage the main concern the family had to balance a pattern of care within the closed family system; this comprises the core category of the study. The families found themselves in a demanding situation balancing several caregiving tasks: I do not think the health services have any idea how much work it is, they put a huge responsibility on the parents. Nutrition, activities to be followed, logged, talked about… it is a full-time job (Mother fam. 7). The family system that was being established took care of the family members with the illness as well as made it bearable for the rest of the family members: It is difficult in relation to the ill one, and in relation to her siblings, it can be challenging in relation to your partner… (Father fam. 1). The degree of facilitating the illness, which could imply accommodating the eating disorder, was also perceived as a balancing act: Maybe we use too much energy in facilitating and we do not get any response from her (the daughter). So there is a little balancing act as I perceive it (Father fam. 2).
The family balanced a pattern of care by shifting roles, adapting the meal routine, and maintaining an openness within the family. Roles shifted as both the siblings and the parents adapted to the family member with the illness. The siblings took on responsibility far beyond being a sibling, while the parents took on extensive caregiving tasks, like professional health carers, to help their ill daughters. Considerable time was spent on facilitating meals, trying to meet the demands and needs of the family member with the eating disorder. Maintaining an openness within the family was fundamental for making it possible to function within the closed family system. Figure 1 presents the main concern and the core category with its subcategories from the analysis.
Shifting roles
The dynamic in the family changed after the daughter/sister became ill, and both the parents' and siblings' roles were shifted.
Adapting the meal routine
Adapting meals comprised a significant part of the families' daily life. Several of the parents tried adapting the meals so that the daughter would eat something. A constant struggle often unfolded that could entail several conflicts: All the time these negotiations when it comes to food (Mother fam. 6). The siblings also became involved in the food routine that was adapted to their sister with the eating disorder. Some experienced this food routine as having an actual impact on the rest of the family's eating habits: We have always eaten what we wanted, but suddenly there was such a focus on food, generally, the topic of food was consistent in every dinner at home. We all got more careful about what we actually ate, which we had never been before (Sister fam. 3). Being a witness to bulimic behaviour during family meals was found especially challenging: We eat together, and you see that she always leaves the table first and you know that she is going to vomit, and often you hear it… (Sister II fam. 1).
As time went by, the families perceived facilitating around meals as exhausting, but some had learned to live with the situation and had accepted it: I just let it go, I did not even bother to be annoyed about it, just thought, ok; we all just have to eat vegetables and chicken then. … just have to put up with it (Sister fam. 2). The families did the best they could to ensure a certain food intake for the family member with the illness. At the same time, they did not get much confirmation concerning whether what they did was right or wrong. The parents in particular were often left in uncertainty concerning whether they should or could have pushed their daughter more when it came to variety of food and intake: Our concern was, how much can we push her, can we require something of her? (Mother fam. 5).
Maintaining openness within the family
Even though the family became isolated, openness within the family was considered a strength that made it bearable for the participants to maintain the pattern of care: We are a strong family, we are very attached to each other and talk about both the unpleasant and pleasant things, so the openness in our home has been a strength of ours (Mother fam. 2). Closer cohesion and better communication were considered positive outcomes in handling the eating disorder. Some of the family members were perceived as becoming more caring and understanding during this process, as this sister experienced her father: I notice that he has become much more gentle. And much more responsive to feelings, that maybe did not get that much space in our upbringing (Sister fam. 8). Some of the families had participated in multi-family therapy, and they acknowledged that this intervention helped the family to communicate better and more openly. By acquiring information about the illness and advice on how to handle it, the family members felt they received a common understanding of the illness. While multi-family therapy, support groups for eating disorders, and a few other interventions helped openness within the family, the family remained a closed system. Due to a lack of help, or help that was inaccessible, for the family as well as for the individual with the eating disorder, the caring pattern was maintained within the family. Some of the siblings eventually left their family home because they needed to distance themselves from the illness and the caring pattern; the remaining family members then maintained the pattern. The majority of the participants would have wanted more general information about the eating disorder from the health services, family therapy, or conversations that included the whole family, as well as individual support for themselves. The families' feeling of being left alone in their situation, despite several attempts to get help, is reflected by a quote from this mother: My experience over the years is that we stood very much alone, the feeling of loneliness in everything concerning having a daughter with that illness, there was very little understanding from others (Mother fam. 3).
Discussion
Our findings suggest that the families of individuals with eating disorders experienced that their family became a closed system, as they had to handle the challenges related to the illness themselves because of a lack of professional help from outside. The families had to balance a pattern of care: the parents and the siblings shifting roles regarding responsibility and care, adapting the meal routine, and maintaining openness within the family by having open communication.
The parents in the current study used most of their resources on the daughter with the eating disorder, and as reported in previous research, parents had difficulties in responding to the needs of other family members [12][13][14][18][19][20]. Several of the siblings in the study assumed new roles entailing more responsibility in the family, and they perceived themselves positioned at the same level as the parents within the family. A study on siblings of adult sisters with anorexia revealed that siblings felt a strong sense of loyalty to their parents, whom they perceived as being heavily burdened by the illness, while also wanting to protect their affected sibling from family conflict, perceiving themselves as mediators in the family [21]. When a family member becomes severely ill, the whole family often needs to restructure [37]. In an attempt to achieve balance and peace within the family, the other family members often forsake their own needs and feelings [37].
Families impacted by eating disorders have been observed to have more cohesion, a lower family hierarchy, more constraining family rules, and less conflict compared with control families [11]. The families in the current study appeared to have features of enmeshment, as defined by Minuchin et al., where sub-system boundaries are poorly differentiated [1]. For example, children can become elevated in the hierarchy to join the parent sub-system [1], which appeared as a pattern in the families in the current study. The boundaries that keep family members overinvolved with each other and separated from the world are typically well-defined and strong, while the boundaries within the family often are diffuse [1]. For the participating families, the dynamics changed as a result of the family having to restructure to handle the eating disorder. The family members did not describe their family patterns as previously being characterised by enmeshment, in contrast to Minuchin who was referring to a long-standing pattern that was seen as part of the transactional characteristics that were a prerequisite for the eating disorder [1].
The family's lack of identified professional support is very likely to play into these dynamics more than preexisting dysfunction. Because of the lack of professional support, the families were left to themselves to handle the challenges related to the eating disorder. Whitney & Eisler [7] reported similar findings on how the families reorganise during the early stages of the illness. Often families know little about the illness itself and how to access the appropriate help. The families made concerted efforts to try to understand the illness and support the ill family member. This demanded much of the family's resources, especially the parents', and siblings found themselves sidelined and did not receive the attention they needed [7].
To balance a pattern of care within the family, adapting meals became a central part of the families' daily life. Previous research has reported that interactions around food and meals increasingly dominate the relationships between family members. Parents of individuals with eating disorders expressed particular concern for siblings and other family members, who witnessed the confrontation and struggles around mealtimes [7]. The siblings from the Dimitropoulos et al. study experienced a lack of normalcy in their relationship with their affected sibling and their family, especially surrounding meals, and they perceived that the parents accommodated meals and food to the sibling with the illness [21]. This is similar to the findings in the current study, and though some of these siblings even adopted the new eating routine themselves, they resigned themselves to it in order to keep a calmer atmosphere within the family. A study on accommodation of symptoms in anorexia nervosa found that the carers ended up in a cycle, vacillating between accommodation and confrontation. Those caring for an individual with an eating disorder over a longer duration reportedly attempted to adopt a decreased focus on food and a less confrontational approach [28]. Helping families detach from behaviours that enable or accommodate the illness is an important part of treatment for eating disorders across the age and diagnostic spectrum [9]. Whether the accommodations in the current study led to maintenance of the illness remains an unanswered question. These families described being in a cycle and maintained the accommodated pattern to cope with the demanding situation.
When a family member suffers from an eating disorder the illness often becomes the central organising principle that changes the daily routines [38]. How the family handles these challenges depends on its resilience and resources [38], as well as the right care and support [15]. The participants in the current study considered open communication within the family a coping strategy in handling daily life, and this was seen as a positive outcome of the eating disorder. The families that had participated in multi-family therapy and support groups said it helped the family to better communicate and gain a common understanding and approach to the illness. However, due to a lack of continued support interventions, this openness was mainly kept within the family. Multifamily therapy for young adults with eating disorders, including parents and siblings, has generally reported positive experiences of connecting to others in similar situations and improving family members' understanding of the illness, helping them to understand each other [39][40][41]. Support groups have also been reported to be highly valued by carers for providing understanding and exchanging experiences [14]. Practical information adapted to the carers' needs, from healthcare professionals, and family-based treatment and training in how to relate to and deal with the illness, are reported interventions that led to moderate reduction in emotional stress and burden for the caregivers. It is considered good practice to work jointly with the carers and provide them with appropriate information regardless of the patient's age [8]. A variety of psychoeducational interventions can improve carer coping and reduce distress, burden, and expressed emotions, and these changes are sustainable over time [42]. To make such changes sustainable over time, one could imagine that consistent interventions, as requested by the participating families, are crucial.
However, research on families of individuals with eating disorders has generally reported a lack of understanding and limited available formal support, which increased feelings of social isolation and frustration [14]. The openness further strengthened the unity within the participating families and thus their capability to care for the family member with the eating disorder, which in turn led to maintenance of the pattern of care.
Strengths and limitations
A potential limitation of the study is that some of the participants were from the same families, while others were single participants from different families. One could imagine that participants from the same family would share some main viewpoints, or even feel restricted in terms of sharing their views during the interviews. Nevertheless, a high degree of consistency was found in the sibling group, as well as among the parents, regardless of which family they belonged to. A second limitation could be that the consent for participation had to be given by the family member affected by the eating disorder, so it is possible that these participants do not represent families affected by major conflicts or trauma. Third, the group of individuals with eating disorders mainly comprised individuals with anorexia, and this could have limited the diversity in the material.
By interviewing both parents and siblings we were able to present a family view, as previous research has recommended [11,14]. The study's inductive and open approach encouraged the participants to emphasize what was important to them [32]. The co-researchers' involvement throughout the research process, with their ability to view the results from a different angle based on their own experiences, was considered to strengthen the credibility of the study [43,44].
Implications
The findings indicate that even though the family member affected by an eating disorder is an adult, the family often carries a great caregiving burden, which they need to offload. The health services should be aware that the families perceive the help and support provided as inaccessible and inadequate, both for the family member with the eating disorder and for the rest of the family. Interventions should also be adapted individually to the family members, while still protecting the integrity of the adult individual with the illness. Clarification of roles and responsibilities for the families in relation to the health services in caring for the individual with eating disorders is needed.
Previous research indicates that interventions such as multi-family therapy for adults have positive outcomes for both the individual with an eating disorder and the family [39][40][41]. It has been argued that irrespective of the stage of the illness, the impact on the caregivers should determine the type of interventions needed [19,24,45]. Future research should look deeper into how to support the families of adults with eating disorders to disrupt the pattern of becoming left to themselves, and also whether these patterns could be maintaining the eating disorder. More knowledge about the challenges of taking a family perspective concerning adults with eating disorders is needed.
Conclusions
This study contributes to understanding how family life is affected by having an adult family member with an eating disorder. From the participating families' perspectives, the families were being left to themselves caring for the family member with the illness due to the lack of professional help. The families became closed systems having to balance a pattern of care within the family. Roles became shifted as the participants took on extensive responsibility and caregiving tasks. Adapting the meal routine became a major part of the families' daily life. An openness was kept within the family which made it possible to function within the closed family system. The results emphasize that the families need to offload the caregiving burden, and that help from the health services must be accessible. Support interventions should also be individually adapted to the family members. | 2022-10-11T13:35:34.156Z | 2022-10-10T00:00:00.000 | {
"year": 2022,
"sha1": "d5f68f3d9f24012973eafd5d5f362767e70045b1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "f3257ff3b274268c76ad9c44b56f87d0c4d2052b",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15565185 | pes2o/s2orc | v3-fos-license | Dual instrument for in vivo and ex vivo OCT imaging in an ENT department
A dual instrument is assembled to investigate the usefulness of optical coherence tomography (OCT) imaging in an ear, nose and throat (ENT) department. Instrument 1 is dedicated to in vivo laryngeal investigation, based on an endoscope probe head assembled by compounding a miniature transversal flying spot scanning probe with a commercial fiber bundle endoscope. This dual probe head is used to implement a dual channel nasolaryngeal endoscopy-OCT system. The two probe heads are used to provide simultaneously OCT cross section images and en face fiber bundle endoscopic images. Instrument 2 is dedicated to either in vivo imaging of accessible surface skin and mucosal lesions of the scalp, face, neck and oral cavity or ex vivo imaging of the same excised tissues, based on a single OCT channel. This uses a better interface optics in a hand held probe. The two instruments share sequentially, the swept source at 1300 nm, the photo-detector unit and the imaging PC. An aiming red laser is permanently connected to the two instruments. This projects visible light collinearly with the 1300 nm beam and allows pixel correspondence between the en face endoscopy image and the cross section OCT image in Instrument 1, as well as surface guidance in Instrument 2 for the operator. The dual channel instrument was initially tested on phantom models and then on patients with suspect laryngeal lesions in a busy ENT practice. This feasibility study demonstrates the OCT potential of the dual imaging instrument as a useful tool in the testing and translation of OCT technology from the lab to the clinic. Instrument 1 is under investigation as a possible endoscopic screening tool for early laryngeal cancer. Larger size and better quality cross-section OCT images produced by Instrument 2 provide a reference base for comparison and continuing research on imaging freshly excised tissue, as well as in vivo interrogation of more superficial skin and mucosal lesions in the head and neck patient.
Introduction
Head and neck cancer is primarily a mucosal disease of the upper aerodigestive tract with 90% of tumors arising as squamous carcinomas from epithelial membranes of the oral and nasal cavities, the pharynx and larynx. Squamous cell carcinoma of the larynx has been the most frequent malignant tumor of the upper aerodigestive tract in Europe. It is a preventable disease resulting from interplay of numerous etiological factors such as chronic consumption of tobacco and/or alcohol, environmental carcinogens, socioeconomic status, occupational hazards, dietary factors and genetic susceptibility. For early-stage laryngeal cancer, both surgery and radiotherapy are effective treatment modalities, offering a high rate of local control and cure. The introduction of new fiber-optic and rigid endoscopic techniques with stroboscopy has greatly enhanced the diagnostic and dynamic assessment of tumors of the upper aerodigestive tract, particularly the larynx [1].
Optical Coherence Tomography (OCT) is an established non-invasive optical biopsy method, capable of imaging ranges of 2-3 mm into tissue [2][3][4][5]. By using principles of low coherence light interferometry, OCT can be used to distinguish normal from unhealthy laryngeal mucosa in patients [6][7][8]. Miniaturized fiber-optic probes are a key component for emerging clinical applications of OCT and offer new possibilities to image diseased tissue deep within the body. Single-mode optical fibers are a base requirement for endoscopic or catheterized deployment of biomedical fiber sensors and imaging systems [9]. Fiber imaging bundles have been incorporated into different OCT systems using a variety of optical configurations that eliminate mechanical scanning components required for endoscopic OCT applications [10][11][12].
Forward imaging OCT probes have been used, with mechanisms such as scanning microelectromechanical system (MEMS) to redirect the light beam [13][14][15][16], or piezoelectric cantilevers to deflect the fiber [17][18][19]. Different office-based miniaturized OCT probe configurations have been developed and implemented for in vivo examination of the human larynx. Systematic OCT imaging of laryngeal structures has provided information on the thickness of the epithelium, integrity of the basement membrane and structure of the lamina propria. Microstructural features identified included glands, ducts, blood vessels, fluid collection/edema and transitions between pseudostratified columnar and stratified squamous epithelium. Office-based OCT systems as imaging modalities to study the larynx have the potential to guide biopsies, direct therapy and to monitor disease [7,20,21,22].
Most clinical OCT studies outside ophthalmology have involved the use of systems designed and built by research groups focused on enhancing the resolution, image acquisition rates and functionality. Until recently, there have been no commercially available "turn-key" OCT systems for use in the head and neck and furthermore most studies to date have used research devices designed and constructed by associated university optics research laboratories. A commercially available OCT system for use in the head, neck, and upper aerodigestive tract (Niris, Imalux, Cleveland, OH) using a flexible probe (placed in contact or near contact with the area of interest) and inserted through a rigid laryngoscope has been reported [23]. The Niris imaging probe is 2.7 mm in diameter and can work through endoscopes or independently depending on the procedure. The probe can be inserted into the working port of a variety of rigid and flexible endoscopes. The system has a maximum imaging speed of 8 frames per second and a limited useful life-span of approximately 200 patient procedures [24].
Another report refers to an in vivo imaging study of the human larynx in awake patients using a commercially available flexible fiber-optic naso-laryngoscope simultaneously with a flexible OCT probe, where a "slide-on" channel endosheath was used [25].
To overcome the size, speed and limited life time issues, we propose and demonstrate the use of a 1.9 mm diameter forward-viewing flexible OCT endoscope probe in a 1300 nm swept source (SS) -OCT configuration for in vivo endoscopic imaging of human laryngeal mucosa. The OCT probe was designed and fabricated at the Institute of Applied Physics, Russia and consists of, at the distal end, a 13 mm long cylinder made of stainless steel. The OCT probe is housed together with a commercial fiber-bundle endoscope inside a single channel endosheath. The endosheath is supplied by Standard Medical Ltd., UK and is closed with a transparent optic window at its tip (CAG 0266, length 22.5 cm, shaft inner diameter 5.5 mm, distal end inner diameter 5.4 mm).
To our knowledge, this is the first report of the use of a single channel endosheath with an optically transparent window at the tip in a dual channel nasolaryngeal endoscopy-OCT assembly for in vivo human larynx investigation. Because the in vivo investigation requires a sufficiently small diameter OCT probe, the transversal resolution is larger than 20 microns laterally while the image size is less than 1 mm. In order to investigate the possible detrimental effect on diagnosis of utilizing images provided by such a thin OCT probe, a handheld probe single channel instrument was produced to provide, for comparison, images from excised tissue. The hand-held probe uses a galvo-scanner and can collect OCT cross sections with a lateral size larger than 5 mm, with better transversal resolution (<10 μm).
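The trade-off between probe diameter and transversal resolution follows from standard Gaussian-beam focusing; the relation below is a generic textbook estimate rather than a design equation taken from either probe, and the beam diameter inferred from it is an illustrative assumption only.

```latex
\[
  \delta x \;\approx\; \frac{4\lambda}{\pi}\,\frac{f}{d}
\]
```

Here λ is the centre wavelength, f the focal length (comparable to the ~3 mm working distance of the endoscopic probe) and d the beam diameter at the focusing optics. For δx above 20 μm at λ = 1.3 μm and f ≈ 3 mm, this implies an effective beam diameter of only about 0.2-0.3 mm, the sort of aperture a 1.9 mm probe with a miniature lens can accommodate, whereas the 15 mm hand-held probe can support a wider collimated beam and therefore a finer (<10 μm) spot.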
OCT imaging instrumentation
The system is schematically presented in Fig. 1. Light from a swept source (Axsun Technologies Ltd, 1310 nm center wavelength, 12 mm coherence length, 106 nm FWHM bandwidth in the range (1256.6 nm-1362.8 nm) with an average output power of 18 mW and a scanning rate of 50 kHz) is provided at its FC/APC receptacle. Either the FC/APC fiber input, FC/APC1 of the OCT1 channel (Instrument 1), or the FC/APC2 of the OCT2 in the single channel hand held probe (Instrument 2) can be connected to the swept source. The fiber from the FC/APC1 end leads to a fiber optical circulator (C) coupled with a 2 x 2 directional coupler (DC) having 95/5 splitting ratio, with 95% of the power from the swept source directed to the sample arm where the endoscope OCT probe connects to.
Because the OCT head in the OCT channel of the dual channel Instrument 1 operates based on principles of common path interferometry, only a single output fiber is sufficient, FC/APC outOCT1 connected to the photo-detector unit, PhD. The fiber from the FC/APC2 end leads to a triple splitter coupler array, where the first splitter is a 660/1310 nm Wavelength Division Multiplexer (WDM) coupler. The other input from the WDM of the three splitter array is connected to the red aiming beam. The output of the WDM is connected to the 2 x 2 optical coupler architecture (first coupler 80/20 splitting ratio, with 20% of the power directed to the sample arm, where the OCT probe is connected, and 80% to the reference arm and second coupler 50/50 splitting ratio for balance detection).
The signal collected by the photo-detector, PhD (Santec Model BPD-200 DC 80 MHz) is digitized by a 12 bit waveform digitizer (Alazar ATS9350 -500 MS/s, 12 bit PCI Express Digitizer) while "in-house" created software in Labview (National Instruments, Austin, Texas) has been used to produce, display and record the images. For our study on the human larynx, which has similar thermal and photochemical properties to skin, the optical power directed onto the tissue was around 7 mW at 1300 nm, and was achieved by using an extra patch cord and two fixed in-fiber attenuators (20 dB). The optical power of 7 mW is below the American National Standards Institute (ANSI Z136.1) [26] safe occupational exposure level for the skin. Similar power levels for in vivo imaging of the human epiglottis and human vocal fold were used by other groups [19,26,27].
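The imaging software itself was written in Labview; the short NumPy sketch below is only an illustrative outline of the standard swept-source OCT reconstruction chain (fixed-pattern removal, optional resampling to a uniform wavenumber grid, spectral windowing, FFT and logarithmic compression). All function names, array shapes and parameter values are assumptions made for the example and are not taken from the authors' implementation.

```python
import numpy as np

def reconstruct_bscan(fringes, k_nonlinear=None, dc_remove=True):
    """Minimal swept-source OCT B-scan reconstruction (illustrative sketch).

    fringes     : 2-D array (n_alines, n_samples) of digitised fringes,
                  one row per wavelength sweep (A-line).
    k_nonlinear : optional 1-D array giving the wavenumber at each sample;
                  if provided, fringes are resampled onto a uniform k grid
                  before the Fourier transform.
    Returns a log-scaled intensity B-scan of shape (n_alines, n_samples // 2).
    """
    data = fringes.astype(float)

    if dc_remove:
        # Subtract the mean spectrum across A-lines to suppress the fixed
        # pattern (DC and reference autocorrelation terms).
        data -= data.mean(axis=0, keepdims=True)

    if k_nonlinear is not None:
        # Resample each sweep so the samples are evenly spaced in wavenumber.
        k_uniform = np.linspace(k_nonlinear[0], k_nonlinear[-1], data.shape[1])
        data = np.stack([np.interp(k_uniform, k_nonlinear, row) for row in data])

    # Spectral apodisation reduces side lobes of the axial point-spread function.
    data *= np.hanning(data.shape[1])

    # FFT along the wavenumber axis yields the depth (A-scan) profile.
    a_scans = np.fft.fft(data, axis=1)[:, : data.shape[1] // 2]
    return 20.0 * np.log10(np.abs(a_scans) + 1e-12)

if __name__ == "__main__":
    # Synthetic example: a single reflector produces a cosine fringe whose
    # frequency encodes its depth.
    n_alines, n_samples, depth_bin = 512, 1024, 120
    k = np.linspace(0.0, 1.0, n_samples)
    fringes = np.tile(0.5 * np.cos(2 * np.pi * depth_bin * k), (n_alines, 1))
    # Fixed-pattern removal is disabled here because identical synthetic
    # A-lines would otherwise cancel out completely.
    image = reconstruct_bscan(fringes, dc_remove=False)
    print(image.shape, int(image[0].argmax()))  # peak near depth bin 120
```

Swept sources of this class can also provide an optical k-clock so that the digitised samples are already linear in wavenumber, in which case the resampling step can be skipped.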
Problems connected with guidance
The image to be collected with a probe of less than 2 mm diameter is less than 2 mm in lateral size. By simply looking at the OCT image it is difficult to orientate and identify the specific area/sub-site of tissue imaged. Therefore, for reference, a guidance image needs to be provided to the OCT channel, and this is enabled using the camera in a commercial endoscope system paired with a visible illuminating source. The 1300 nm beam is scanned together with the visible beam such that the lateral scanner projects a visible line onto the tissue, which is then picked up by the camera in the commercial endoscope. A red diode laser at 635 nm operational wavelength was used to illuminate the object under investigation. This provided less than 0.5 mW on the tissue. Figure 1(b) shows a photograph of the distal end of the OCT endoscope probe unit (diameter of 1.9 mm, length of the rigid part of 13 mm). The OCT endoscope probe has a cylindrical form and is composed of stainless steel containing copper wire, NdFeB magnets and a GRIN lens, with an output window composed of quartz, followed by Teflon tubing, which leads to and connects with the OCT engine. The OCT probe is built in a common path configuration where reference light is derived from the output quartz window. A detailed description of the probe operation can be found in [28]. Figure 1(c) shows a photograph of the entire OCT probe consisting of the optical fiber and driver for the magnetic scanner in the OCT probe head.
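The reason a single collection fibre suffices for this probe can be sketched with the generic spectral-domain interference relation for a common-path geometry; the symbols below are standard textbook notation and are not drawn from the probe's design documentation.

```latex
\[
  I(k) \;\propto\; S(k)\left[\, R_w + R_s + 2\sqrt{R_w R_s}\,\cos(2k\,\Delta z) \right]
\]
```

Here S(k) is the source spectral density over wavenumber k, R_w the reflectivity of the quartz output window acting as the reference, R_s that of a sample interface, and Δz their optical path difference. A Fourier transform of I(k) over k places the interface at depth Δz, so no separate reference arm is needed; the same relation explains why the endosheath cap, sitting a fixed distance beyond the window, also appears as a surface in the cross-section image.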
A Karl Storz flexible fiber-optic nasendoscope of 3.5 mm outer diameter (Karl Storz, 11101 RP2 Rhino-Pharyngo-Laryngo-fiberscope) was considered fit for this purpose. The endoscope provides an en face image while the OCT channel in the dual channel instrument delivers a cross-section image. In order to simplify image display arrangements, a software program was devised to allow simultaneous monitor display of the two channel images, one produced by SS-OCT and the other by the commercial endoscope.
In order to obtain ex vivo OCT images on tissue specimens immediately following excision, a second system was co-assembled (Instrument 2), equipped with a "hand-held" OCT probe imaging assembly. The handheld probe is composed of a rigid, hollow plastic tube assembly, measuring ~20 cm in length, with an outer diameter of 15 mm and contains a Cambridge Technology 6110 transversal scanner with a lens-associated collimator, of 30 mm focal length. These are connected via Thorlab microbench devices with a hand-held platform assembly, shown in Fig. 1(d), and then via a protected fiber to the OCT engine.
The two instruments remain permanently connected to the red laser, whilst the swept source, photo-detector unit and imaging computer are all sequentially switched from one instrument to the next. To switch from the dual channel OCT/endoscope to the "hand-held" OCT channel, the input fiber FC/APC1 or FC/APC2 is inserted into the FC/APC output of the Axsun source and the output fiber FC/APC outOCT1 is connected to an input of the balanced detector. When using Instrument 2, the two output fibers of the hand held probe OCT2 channel, FC/APC O1OCT2 and FC/APC O2OCT2, are inserted into the FC/APC connectors of the photo-detector unit. The switch from one instrument to the other only requires for fiber connections to be changed. The two instruments are all assembled inside a rigid metallic box with a locking lid, which in turn is placed on a trolley with castors to allow rolling and portability of the dual instrument arrangement as required in the busy ENT clinic.
Two different Labview software programs have been created for the two instrument arrangements, to serve either the dual channel instrument or the single channel "hand-held" OCT instrument. The dual channel instrument produces simultaneous en face endoscopic images with cross section SS-OCT images in vivo, whilst the "hand-held" OCT instrument produces SS-OCT images only, either ex vivo of excised tissue or in vivo from accessible surface skin and mucosal lesions on the scalp, face, neck and oral cavity.
Imaging procedure of the dual channel instrument
The platform used for the nasendoscope OCT application (via nasopharyngeal tract) involves a slight modification of the current standard nasendoscopy procedure practiced by clinicians in the ear, nose and throat (ENT) clinic under topical local anesthesia. The current procedure uses an endoscope consisting of a multi-fiber bundle producing an image on a proximal camera (Xion Medical, Compact Camera). The bundle is introduced into a disposable, single-use protective medical grade endosheath (consisting of a single closed channel and optically clear window at the distal tip), in order to cover almost the entire insertable working tube length of the flexible endoscope. Flexible nasendoscopy is performed routinely this way on patients presenting for ENT consultation and is a simple procedure, usually completed within 1-2 minutes once topical local anesthesia has taken effect and requiring no more than verbal consent from the patient. The nasendoscopy modification involves placement of our own assembled novel miniaturized OCT probe to be run in parallel with a 3.5 mm outer diameter commercially available multi-fiber endoscope (typically used for this investigation). At present, commercially available and routinely used flexible nasendoscopes vary widely in their outer diameters from 2.2 mm to 6 mm or more.
The rigid part of the OCT probe is 13 mm long and 1.9 mm wide. The OCT probe is placed distally alongside the standard endoscope bundle inside a single channel, closed system endosheath (Fig. 2). The use of endoscope sheath covers is generally popular amongst many ENT clinicians as the sheaths are quick to change between patients (allowing quick patient turnaround in busy clinics) and easy to use. There is a very small risk of endoscopic contamination if the endosheath is breached in vivo, which is an exceptionally rare event. The disposable endosheath has been advocated by manufacturers and national specialty bodies (ENT-UK) to be a safe and effective alternative to chemical disinfection systems for simple diagnostic nasendoscopy, effective against bacterial and viral contamination and having been shown to maintain its integrity after patient use [29]. A photo of the disposable endosheath closed with a transparent optic window at its tip is shown in Fig. 3(a). The two combined probe heads sit enclosed within the medical-grade disposable endosheath, as presented in the photos of Fig. 3(b). Essential for the operation of the system was the correct orientation of the scanning direction of the OCT probe. Looking at the projected line on a target, the OCT probe was rotated inside the endosheath until the line crossed the projection of the endoscope end. The correct orientation is shown in Fig. 2(e). This ensured that the visible red reference line was seen by the endoscope camera for a wider range of distances between the tissue and the cap.
The two probe heads cannot be pushed together into the endosheath with tips flush aligned, as the sum of their external diameters is exactly the internal diameter of the cap. We have found that the most suitable technique was to insert the endoscope along the sheath with the OCT probe lagging behind by a distance longer than the cap length, usually 5 mm to 2 cm ( Fig. 3(b) top). With the fiber-optic endoscope inserted almost flush with the cap, the OCT probe head is then maneuvered/pushed forward with slight bending of the distal endosheath so that the two probe heads then sit together flush inside the cap (Fig. 3(b) bottom).
Ex vivo testing
Ex vivo OCT images of human larynx taken with a laboratory-based 1310 nm SS-OCT benchtop system were obtained and compared with those provided by the single channel Instrument 2 and are presented in Fig. 4. It is clearly seen that comparable OCT images of fine tissue-related structures are possible using both the bench top and portable endoscopic systems.
Following NHS R&D/NRES Research and Ethics Committee approval (Central London REC1, Reference no: 10/H0718/55) and informed patient consent, formalin-fixed laryngeal tissue biopsies taken from laryngectomy patients at NWLH were examined ex vivo using a desktop OCT imaging system [30] and comparison made to images taken of the same (as well as other fresh non-formalin fixed laryngeal tissue specimens) with the portable single channel Instrument 2. For laboratory imaging, laryngeal tissue samples were fixed in 10% formalin solution, dissected into small portions of about 3 mm maximum dimension and then sealed in custom-made plastic imaging chambers containing 10% formalin solution. The samples were sealed in the chamber by a 170 microns thick flat sheet of borosilicate glass with the seal being provided by a layer of high vacuum silicone grease between the plastic and the coverslip, then oversealed with DPX mounting medium.
Phantom model testing of dual channel instrument
Ex vivo testing of the OCT probe/nasendoscope assembly on an anatomically correct, adult human airway model (utilized for training junior doctors and simulating a standard ENT nasendoscopy office procedure) was performed as a prelude to the in vivo investigations and laryngeal application.
The AirSim Advance Larynx model, (Trucorp Ltd) acted as an appropriate phantom model for ex vivo testing of the dual channel endoscopic OCT arrangement. This model allows the necessary nasendoscopy simulation exercise in safety and the phantom exhibits an external "laryngeal" window opening that allows a direct external view through the "neck tissue" into the "endolarynx." Initially, there was a learning curve with some operator exercise necessary for appropriate handling of the combined endoscopic OCT probe head and sheathed fiber optic endoscope assembly. The two probe heads, not being collinear, resulted in the projected red reference line for OCT incident imaging at the target tissue, going out of the endoscopic field of view when the capped tip of the arrangement was placed too close to the tissue. In order to reduce the minimum distance at which the red line would disappear from view, the scanning OCT probe head was rotated inside the endosheath so that the scanned line crossed the projection line of the endoscope bundle, as explained above.
The sheathed combined dual channel probe was passed along the floor of the nasal cavity, navigated around the contours of the soft palate and nasopharynx (taking a maximal 30 degree arc) to enter the larynx and hover a few mm above the vocal cords (Figs. 5(a) and 5(b)). The dual channel OCT probe/endoscope inserted into the disposable endosheath for in vivo investigation on a volunteer is shown in Fig. 5(c). The transparent cap closing the endosheath is approximately 100 microns thick. This comes up in the OCT image, so the probe has to be placed at a sufficient distance from tissue to display the useful part of the tissue in the image below the lines due to the cap surface. This was not difficult as the working distance of the OCT probe was ~3 mm, much larger than the cap thickness. During our preliminary ex vivo tests, the reflection from the endosheath tip in the OCT image was deemed useful, as it has indicated the distance between the OCT probe and the cap. Several times, the OCT probe had slipped axially inside the sheath and this could immediately be seen by the surface in the image corresponding to the cap, moving to larger depths in the OCT cross-section image. In such situations, the OCT probe was pushed back inside the endosheath until the reflections were moved to the top of the image.
Initial tests on the anatomically correct rubber phantom, have shown that the integral distal endoscope tip tilting/leverage mechanism is still able to function with the OCT probe sitting adjacent to the commercial endoscope probe head. Our tests have shown that this is sufficient to allow the OCT probe to be inserted together with the nasendoscope housed within the endosheath, the camera providing guidance as the tip is progressed through the nasal cavity and flexed around the contour of the soft palate and nasopharynx to visualize the larynx below.
Preliminary in vivo investigations
Following NHS R&D/NRES Research and Ethics Committee approval (Central London REC1, Reference no: 10/H0718/55) a feasibility in vivo study was performed using the novel dual channel OCT endoscopy arrangement. Clinical investigation of the optical imaging arrangement was performed on 2 volunteers initially (Fig. 5(c)) followed by clinical recruitment of 3 patients attending Northwick Park Hospital ENT-Head & Neck service who were noted to have suspicious larynx appearances on endoscopy. Following informed consent, these patients had endoscopic OCT images taken in the ENT out-patient department under topical local anesthesia before going on to have biopsies of the same larynx tissue under general anesthesia.
In vivo B-scan OCT images of laryngeal lesions were acquired using Instrument 1 (commercial fiber bundle endoscope appearance of left vocal cord lesion with simultaneous acquisition of B-scan images for the same lesion through the novel endoscopic OCT probe) (Figs. 6 (a) and 6(b)).
After use of the tool, between each patient, the endosheath was disposed of and a new one used to cover the dual probe head assembly before its use on the next patient. Figure 6(c) demonstrates the histology for the same laryngeal lesion (invasive squamous cell carcinoma (SCC) of the left true vocal cord; stain is H&E). Figure 6(d) displays an ex vivo OCT B-scan image of the same left true vocal cord lesion freshly laser excised 20 minutes earlier in the operating theatre under general anesthesia, the image having been taken using Instrument 2 in the assembly (i.e. with the hand-held probe) so as to allow comparison with the in vivo endoscopic OCT B-scan image taken with Instrument 1. Comparing the in vivo endoscopic OCT image of the same lesion, taken with the dual channel Instrument 1, shows subjectively improved contrast over the ex vivo image from Instrument 2. Although the spatial pixel resolution appears worse with the in vivo Instrument 1 than with the ex vivo Instrument 2, this is compensated to some degree by the improved tissue contrast and interesting patterns of light and dark areas that are seen. These complex contrast features in the in vivo images, together with surface keratin formations, are occasionally visible, depending on where the probe has approached or touched the target area.
A significant problem noted in this preliminary work with the in vivo image acquisition is the lack of stability when maneuvering the distal endoscopic tip assembly towards a lesion of interest, particularly in the context of an awake, moving and swallowing patient. This is compounded by a lack of orientation for the operator when the red reference light beam disappears out of the distal endoscope direct field of view. It was not possible, even approximately, to correlate the position of various aspects of the lesion to the OCT B-scan image and it was entirely taken on trust that the lesion itself was imaged at the time the capped tip approached and was directed onto it.
Furthermore, the glimpses of the vocal cord lesion seen in the OCT video sequence are fleeting, and freeze-frame analysis of the video post-procedure was necessary to try and discern the contrast features of relevance. This has adverse implications for real-time in vivo OCT interpretation. Although there was potentially relevant structural contrast seen, the inability to accurately correlate this to the excised lesion, plus the lack of a reference in vivo endoscopic OCT data set (normal vocal cords), mean that it is not possible at this stage to diagnose the vocal cord lesion as an invasive carcinoma through endoscopic OCT. Further work will be required to acquire a large and thoroughly studied in vivo reference data set prior to any definite conclusions being drawn about the overall usefulness of OCT as a form of histo-diagnostic tool.
Conclusions
A portable dual-function instrument was assembled and deployed in a busy ENT department to serve two goals: (a) to perform dual-channel simultaneous in vivo nasendoscopic white light and OCT tissue investigations and (b) to provide a single channel "hand-held" OCT system for in vivo OCT images of accessible skin and mucosal lesions in the head and neck region, as well as ex vivo images of tissue biopsies immediately following their excision. The two elements of the assembly complement each other in terms of functionality and portability to the clinical environment. In Instrument 1, due to its small size, the novel nasendoscopic OCT probe exhibits lower transversal resolution and a smaller image size. Instrument 2 has better transversal resolution and a larger image size, useful for both ex vivo OCT imaging of excised tissue and in vivo OCT imaging of directly and superficially accessible areas of skin and mucosa on the scalp, face, neck and oral cavity.
The performance of the sheathed dual channel nasendoscopy-OCT instrument for in vivo endoscopic human laryngeal imaging is presented as a pilot study and preliminary step towards the development of endoscopic OCT as a laryngeal cancer screening tool.
Together, the two instruments satisfy the needs of a busy ENT clinic engaged in both translatable clinical research and advancement of clinical care. The swap of the 5 fiber ends and the change of the software program takes less than 5 minutes to enact and may also be implemented using three optical switches with little penalty in terms of power loss, due to the high transmittance of modern optical switches (such as Thorlabs 1x2 Switch, OWS12-1310-SM, insertion loss <0.7 dB, back-reflection 55 dB). Alternative arrangement possibilities could include the use of an optical switch only for connecting the swept source to either of the two systems, individual photo-detectors on the two instruments and an electronic switch between the two photo-detector units. | 2018-04-03T02:47:38.073Z | 2012-11-28T00:00:00.000 | {
"year": 2012,
"sha1": "dce97077caacedfa21716958d3a625e84a0c2657",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1364/boe.3.003346",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dce97077caacedfa21716958d3a625e84a0c2657",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54876371 | pes2o/s2orc | v3-fos-license | Exploring Influence and Autoethnography: A Dialogue Between Two Counselling Psychologists
This article utilises a dialogical approach to explore the potential of autoethnography as a research method for counselling psychology while using the method to reflect on what it means to have influence as a researcher. We use a collaborative autoethnographical approach to explore the themes of influence, curiosity, rich insight and sincerity. We attempt to bring honesty and transparency to our collaborative dialogue about our previous work on vicarious trauma (VT) and secondary traumatic stress (STS), as well as how our themes are revealed in the different paths we have taken as counselling psychologists since our earlier collaboration. We consider what it means to influence, to be influential, and to be influenced. Through our dialogue, we try to speak with authenticity about our experiences as colleagues, counselling psychologists, scientist practitioners, and human beings. We discuss both the potential contribution of autoethnographical approaches and the challenges of using these methods, for counselling psychologists.
Introduction
Autoethnography is a methodological approach that is relatively new to counselling psychology, despite having been developed and used extensively within the field of sociology.
As O'Riordan (2014, p. 3) describes, autoethnography is: "an autobiographical genre of writing and research that displays multiple layers of consciousness, connecting the personal to the cultural" (Ellis & Bochner, 2000, p. 739). It is a reflexive form of ethnography in which the researcher's interaction with the object of study is taken into account (Davies, 2008) and in which the author's gaze is turned and bent back upon itself (Babcock, 1980). Its main purpose is to link the micro and the meta (Boyle & Parry, 2007); more specifically, it is used to describe and systematically analyse (graphy) personal experience (auto) in order to understand cultural experience (ethno) (Ellis, 2004;Holman Jones, 2005). As such, it involves the production of highly personalised accounts in which authors draw on their own experiences to extend the understanding of a particular culture (Reed-Danahay, 1997).
As autoethnography is an established method in other social sciences, we, as counselling psychologists, have an opportunity to consider what the methodological approach offers to our field and how it can best be used. In the spirit of exploring new methods, we were invited by the Editorial Team to write an autoethnography for this Journal about our influence in the field, as our article Vicarious traumatization and secondary traumatic stress: A research synthesis, published in 2006 in Counselling Psychology Quarterly (Baird & Kracen, 2006), has been one of the journal's most frequently cited articles. The Editorial Team recommended addressing quality criteria (specifically, sincerity, rich insight, resonance, and contribution) as detailed by O'Riordan (2014). For a detailed description of these quality indicators, interested readers are referred to O'Riordan (2014). In this paper, we chose to prioritise some of the criteria over others, specifically sincerity and rich insight. Therefore, the three themes we address throughout this article are these quality indicators, as well as the issues of influence and curiosity.
While we (Katie and Amanda) have experience conducting qualitative research, we had not had the opportunity to engage in autoethnography before being invited to write this article. Therefore, we looked to the research literature for advice about how to proceed, especially with two writers working together. As discussed by Chang, Ngunjiri, and Hernandez (2013), we developed a collaborative autoethnography using a dialogical approach. As summarised by the authors, collaborative autoethnography is "…engaging in the study of self, collectively; it is a process and product of an ensemble performance, not a solo act" (Chang, Ngunjiri, & Hernandez, 2013, p. 11).
Using a dialogical approach, our discussions took place in numerous verbal and written exchanges. We spoke by Skype regularly from July through October 2015 and then again between February and August 2017, and we posed questions to each other in drafts of this manuscript. We revised the manuscript numerous times, including after getting reviewer and editorial feedback, which continued the process of data collection, collaborative dialogue, analysis, and modification. Our conversations and revisions led to a deeper understanding of our topic "[for]…the dialogical process enables co-researchers to go deeper and also to discover areas of similarities and differences" (Chang, Ngunjiri, & Hernandez, 2013, p. 48). This enabled us to reflect together and independently on the themes of this paper: the quality indicators, what it means to have "influence", and the role of curiosity in driving our research and our development as counselling psychologists.
We are aiming to both explore autoethnography as a research method and use the method to explore the impact (and our response to the impact) of our earlier collaboration. Our dialogue attempts to capture both of these purposes, through responding to each other and to the questions we were asked to consider. Our dialogue starts in Katie's voice, and, thereafter, we use headings to clarify who is speaking.
Our Autoethnography
Katie: It's a rainy, cool spring day in Dublin, and I come to my office in the hospital where I work. I've just finished co-facilitating a three-and-a-half-hour group with people who have experienced complex trauma. Even though I debriefed with my co-facilitator, my mind feels full of the pain, frustration, and sheer courage of the people with whom I've spent the last hours. I'm both saddened by their experiences and hopeful that our group is going to be helpful for them. I sit down, take a few breaths and then decide to try and catch up on my email before I leave for the day. There is an email from an address I don't recognise, which I read quickly and then have to re-read several times. It seems that someone has decided I have done something "influential" in my career as a counselling psychologist, and they are inviting me to write and submit an article about it. The email references an article (Baird & Kracen, 2006) that I wrote over 10 years ago, and I am surprised to learn that it is considered influential. I Google the definition of influence and find that the Oxford dictionary defines it as "the capacity to have an effect on the character, development, or behaviour of someone or something, or the effect itself" ("Influence," n.d.).
The email I've received asks me to consider the invitation, noted previously, to write an autoethnographical piece. As I didn't do the work that has been credited as influential alone, I want to try to include my co-author and friend, Amanda, as I consider this opportunity. Amanda and I had met when we worked together in a university counselling service, and as two Americans living and working in Dublin, Ireland, we discovered that we had many interests in common. In 2001, I asked Amanda to join me in working on a systematic review of the literature on vicarious trauma (VT) and secondary traumatic stress (STS). This review, previously cited, is the subject of the email I've received. Although Amanda and I have lived in different countries for many years, we have remained friends and our shared interest in VT and STS has remained a thread in both of our careers.
I decide to take a chance and find out more and, thus, begins the journey that has resulted in you reading this article that is both about autoethnography and is an autoethnographical piece. Amanda: I was shocked and amused to learn that we had been asked to write this paper in the first place.
Respectfully, it feels a bit absurd. This may be the familiar 'imposter syndrome' (Clance & Imes, 1978). Katie, how do you feel about the proposed research format, specifically co-writing an autoethnography?
Katie: First of all, it's quite pleasurable to think I influenced your decision about counselling psychology. I feel I've done something good for our profession! Even though counselling psychology has made and continues to make remarkable contributions to societies and to psychology, I think it still suffers from being less well known and understood than some other specialties within psychology.
It's interesting that we are both so surprised that our article has been cited by many other researchers working in the areas of vicarious trauma and secondary traumatic stress. If I think about some of the challenges that we encountered when we did our research synthesis, this begins to make sense. Because our originally planned methodology was a meta-analysis, we struggled to find studies that met the criteria for rigour that was needed at that time to include a study in a meta-analysis. Since then, new statistical approaches to the challenges of meta-analysis have been developed, but in 2001, we were faced with a decision to exclude much of the research literature or be innovative and adapt our methods by using a level of evidence approach. Our synthesis summarised the findings about two issues that are quite important for counselling psychologists and other helping professionals. It seems that many people are still writing about these constructs (as evidenced by citations in Google Scholar). I am hopeful that they find our early work normalising and empowering.
I had the thought, as Delamont (2007) argued, that I might not be an interesting enough subject for ethnography. Like you, I feel inexperienced at this kind of writing and share many of your concerns. I feel that same wariness of exposing myself and my experiences (as well my institution) as you wrote about above, and am relieved to find that this is something others have felt when doing autoethnography (Wall, 2008).
That said, something feels very true to me about naming the act of co-constructing knowledge (Gergen, 1985) that we engage in when we write from our own voices. Earlier in my career I was so hungry for learning about research that I decided I would commit myself to being proficient in both quantitative and qualitative methods. I would not be the kind of scientist-practitioner limited by positivism, or even post-positivism. I even worked to introduce lectures on qualitative research to those I learned with and taught. Sometime, about 8 years ago, I became more a consumer than a producer of research, and my gaze narrowed to studies that helped me "know" what might be helpful interventions for people. I hadn't read any autoethnographic work until I embarked on this, and had not considered it as a research method in counselling psychology. Coming back to the idea of influence, I think certain kinds of research have been privileged to have quite a high level of influence over the past decades and other methodologies have been relegated (by some) to less influential status; their contribution seen as somehow less important. I like challenging this. I think that autoethnography has the potential to offer a very honest and empowering way of understanding many issues of concern to counselling psychologists, and had I known about it back in the early 2000's, I might have written a very different article; one that took into account my own experiences of vicarious trauma as a young mental health professional working with children in a public setting.
For me, this raises questions about that choice that we made when we decided to try and synthesise the research findings on VT and STS, and how differently we might approach these concepts now. It strikes me that both VT and STS reflect another kind of influence, the kind that arises from a therapeutic relationship, as Pearlman and Saakvitne (1995) described. Writing the paper with you, before starting my clinical work, highlighted potential concerns for me, and I believe it helped me better attend to issues of self-care. Even now, as I shift careers from a hospital-based position to an academic position, I'm proud that I'm making a change as a strategy to enhance my self-care, as I'd been neglecting my health, and my well-being and relationships were beginning to suffer due to administrative expectations.
Since we published our paper, I have written and researched related issues over the years, but mostly focused on people with cancer, oncology professionals, and psychology trainees in psycho-oncology. I am interested in how people with a potentially deadly disease cope and sometimes even thrive, and simultaneously how healthcare providers manage the interpersonal and occupational demands of their jobs. I have written about and presented on well-being among oncologists (Kracen, 2010), managing compassion fatigue (Kracen & Deshields, 2013), resilience among individuals coping with cancer (Deshields, Heiland, Kracen, & Dua, 2016), and clinical supervision in psycho-oncology (Kracen, 2013). Physician burnout is a significant cause of concern (Shanafelt et al., 2015); dare I get too ambitious here, but I believe a special issue of a qualitative journal devoted to autoethnographic accounts of physicians' challenges and resiliency could be a significant contribution to challenging physician shame and understanding physician well-being. When the research is complete, the thesis completed by the student and the Viva passed, another piece of work starts for me and my colleagues as we work with the student to turn the research into an article so that our findings are disseminated beyond ourselves. A challenge for me would be finding additional time to write, as I find that it is difficult to find the time and focused 'head space' for writing when working in a very busy hospital. I would also like to pursue research into the opposite side of the coin from the one we looked at: vicarious posttraumatic growth (VPTG) (Cohen & Collens, 2013) and vicarious resilience (VR; Hernández, Gangsei, & Engstrom, 2007). I find myself thinking along these lines and feeling this happening much more these days. I feel quite inspired by the strength and resiliency of the people with whom I work, and even as I find it painful at times to bear witness to the horrible things we can do to each other as human beings, I feel privileged to be able to see the wisdom and healing that we are capable of. Not many people in the world get to see and be influenced by that side of humanity. I'd like to learn more about what fosters VPTG and VR in trainees and supervisees (and myself!) and perhaps explore what supervisory interventions can lead to this kind of influence.
These topics, VPTG and VR, may easily lend themselves to autoethnography, as the topics seem to call out for the rich personal accounts that could illustrate most clearly what it means to be, for example, more resilient, as a result of exposure to working with people who have lived through trauma.
Amanda, you mentioned that you left your primarily clinical role partly in order to attend to your own needs. Bravo! What currently informs your research agenda?
Amanda: Thanks for the "bravo," Katie. As we discuss influence here in this paper, it truly was the effects of working as a counseling psychologist with my patients who helped me clarify what is most important to me. Like you suggested, I find it to be a privilege to work so closely with people. In my specialty, I have the opportunity to be with people diagnosed with cancer as they face potentially life-threatening diseases and, as does happen, to be with them as they approach death. I have never heard of the specific concepts you mentioned (VPTG and VR) but they are consistent with my experiences of working in oncology as well as what I have read in the literature (e.g., Granek et al., 2012) and heard anecdotally from colleagues. Actually, just an hour ago, I learned of the natural death of one of my patients, who I worked with regularly for over 3 years. She had a previous history of trauma and also survived over 10 years with a cancer diagnosis and numerous treatments. She, like others do, chose to live in a meaningful way despite ongoing suffering. This patient, as well as many others, inspire me to try to live fully. I often think of my favorite quote from the poet Mary Oliver (1992) pretending to be a ghost, and so I take a moment and play with him. This constant seeking of balance and meaning will likely lead me to do less research ultimately, but hopefully, I can be content with my small contributions.
Additionally, I want research to be enjoyable. Actually, it was another patient, an esteemed researcher, who shared with me that he does not engage in research unless it is "Fun and with FUN people." He influenced me in many ways, but that comment shifted how I evaluate potential projects. Katie, you make the cut, of course! And so finally, continuing this theme of influence, I want to keep exploring, especially in rich qualitative approaches, how we, as individuals and as healers (therapists, medical providers, etc.) influence each other.
Since we've started working on this article, I've collaborated with two counselling psychologists at the University of Denver on a qualitative study of career development of psychologists working in cancer care in the United States. Additionally, I'm pleased that we recently received a small grant to conduct another qualitative study; specifically, we will be interviewing postdoctoral fellows in oncology about their experiences in clinical supervisory relationships. These projects are meaningful and enjoyable to me. Autoethnography can serve many purposes, such as questioning what the "truth" is meant to be (Ellis & Bochner, 2000), giving voice to experiences that might otherwise go unheard (Muncey, 2005), resisting dominant discourses (Ellis & Bochner, 2000), or promoting dialogue (Ellis, 2000). As Wall (2008) suggests, autoethnography provides opportunities that are consistent with the philosophy of counseling psychology, particularly empowering the disempowered, being culturally sensitive, advocating for social justice concerns, and championing feminist and intersectional voices. Having said that, it is important to retain the need for our research methods to be suited to the questions we are trying to ask. Many of our questions are not about our own experience, but about the experiences of people who are suffering, who are oppressed, who are in need of help, and who are trying to learn. The methods we use should be methods that will make contributions and give voice to these people perhaps more often than to ourselves.
As we close the article, I find myself going back to the idea of influence, as engaging in this process seems to have influenced us both in several ways. Perhaps through stimulating our thinking in new ways, ripples of influence and contribution will continue.
Amanda: Katie, I agree. Throughout this article and the process of writing it together as a dialogue, we have explored the concepts of quality indicators (sincerity and rich insight), curiosity, and influence. I came across this fascinating article by Dwayne Custer (2014, p. 8), and he writes about autoethnography: Each individual human being creates society and culture. We are the world both in the literal and figurative sense. Responsibility becomes more than just a duty to family or country-it becomes a living reality, first to ourselves and then outward to others. Autoethnography communicates the "self" to the world and the world is deeply augmented.
I believe that we have strived in this article to expose ourselves, thus, creating 'society and culture' through the act of writing together. My hope is that we have shared our experiences as counselling psychologists living in different countries, pursuing research for different goals, and ending up at different 'places' in our careers. With honesty, sincerity, and insight, we have offered what we believe we know about ourselves and our roles as counselling psychologists. While from my perspective it may not seem that the "world is deeply augmented" as a result of our efforts, I do hope that we have generated a ripple of influence that will contribute to a larger tide.
As we've recently learned by virtue of being involved in this process, it is hard to predict how much influence any piece of work will have. Perhaps someone will read this and decide to embark on an autoethnographical project that truly will create a lasting change or deeply augment the world. As I finish up this article on a cool autumn day in St. Louis, Missouri, I welcome readers to be curious themselves and determine the value and influence that our autoethnography provides.
Funding
The authors have no funding to report. | 2018-12-14T18:23:09.003Z | 2018-01-30T00:00:00.000 | {
"year": 2018,
"sha1": "62cfb4e7023fa163ec324cda16701db6f221f438",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5964/ejcop.v6i1.122",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "62cfb4e7023fa163ec324cda16701db6f221f438",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
92981823 | pes2o/s2orc | v3-fos-license | How Elastic Demand Affects Bidding Strategy in Electricity Market : An Auction Approach
With the deepening of electricity reform, auctions become increasingly frequent and the number of generators surges, so it becomes difficult to analyze generators' behaviors. Since analytical market equilibria are hard to find, previous studies obtained approximate equilibria instead by market simulations, which are sensitive to initial estimations, and the simulation results are chaotic in some cases. In this paper, a multi-unit power bidding model is proposed to reveal the bidding mechanism under the clearing pricing rule by employing an auction approach, for which initial estimations are nonessential. A normalized bidding price is introduced to construct the generator's price-related bidding strategy. Nash equilibria are derived that depend on the marginal cost and the winning probability, which are computed from the bidding quantity, transmission cost and demand distribution. Furthermore, we propose a comparative analysis to explore the impact of uncertain elastic demand on the performance of the electricity market. The result indicates that there exists market power among generators, leading to decreases in social welfare even under competitive conditions, but elastic demand is an effective way to restrain generators' market power. The feasibility of the models is verified by a case study. Our work provides decision support for generators and a direction for improving market efficiency.
Introduction
Following the attempts in the U.S., Britain, Australia and Russia, many other countries are deepening the massive reforms of the electricity market [1]. Since the significant document 'Several Opinions on Further Deepening the Reform of Electric Power System (No.9 document)' was issued in March 2015, China's electric power market is gradually introducing competition and establishing a market-oriented power trading platform, allowing new kinds of participants to emerge [2]. Consumers exhibit higher sensitivity to price, and market demand varies from fixed demand to uncertain elastic demand with the participation of large bargaining customers [3]. On the supply side, many small and medium-sized electricity generators are involved in the competition, entailing fierce competitions and frequent auctions among them [4]. Governments in various countries, such as China, England, Spain and others, perform wholesale electricity transactions via Uniform Clearing Price (UCP) auction mechanisms to balance the supply and demand and improve the economic efficiency [5][6][7]. Therefore, with the increasingly open electricity market, it is crucial to effectively analyze the behavior of electricity generators and seek market equilibrium prices accurately.
To study this related issue, many scholars focus on modeling and analyzing generators' behavior patterns in a competitive electricity market since 2000 [8][9][10][11][12][13][14][15][16][17][18][19].Due to the difficulty in obtaining the analytical market equilibrium, the approximate market equilibriums concerning generators' behaviors are sought by market simulations, such as agent-based simulation [9,10], evolutionary simulation [11], and hybrid iterative simulation [12,13].For instance, Wang et al. [11] proposed an evolutionary game approach to analyze the bidding process based on price-responsive demand.The generators update their beliefs of opponents' strategies and optimize their bid based on the updated information.By repeated adaptive learning, the electricity market eventually converges to the equilibrium, where no players can increase their profit by changing their strategies unilaterally.However, market simulations do not provide systematic approaches to building bidding strategies [16].In addition, market simulations have become a complex process to seek market equilibrium in the frequent large-scale auction due to the surge of participants and the increase of market uncertainty.Some experiments indicated that the equilibrium may not be found and the results were directly related to initial estimations in some cases [11,17].In other words, most iterative algorithms are strictly connected to the initial estimations since the results would be chaotic if the initial estimations were improper.
To fill these gaps, auction models provide analytical rationale and explanation about how market equilibrium can be decided via strategic bidding behavior.Unlike simulation technology which emphasizes learning processes, auction models solve the optimal bidding problem by considering the interaction of generators, and achieve the economic equilibrium of power market through a Nash equilibrium.Auction approaches avoid initial estimations and the time limit of simulation technology.The existing researches related to bidding strategies by auction models, always concentrate on bidding behaviors based on the assumption that demand is fixed and inelastic [20][21][22][23][24][25][26][27][28], which does not always hold in practice, especially in the market-oriented market.For instance, Hao [20] modeled bidding behaviors and assumed that demand, which is known to all electricity generators, was fixed.The results showed that those bidders would exert their market power to bid below their marginal cost so as maximize the expected profits.Similarly, Li and Shahidehpour [23] proposed a novel bidding model to discuss the Nash equilibrium in the electricity market.Based on this proposed model, Banaei et al. [24] discussed wind generator's bidding strategy and Rahimiyan and Baringo [25] researched ISO's scheduling problem.However, the literatures on the key role of the market demand in the bidding strategy are insufficient.Characteristics of electricity demand, a key factor to strategic behaviors, such as seasonality, time-fluctuation and price-responsiveness, has received less attention [13].Compared to other commodities, the demand elasticity of electricity is low, but even a low demand elasticity can result in a noticeable difference of the market performance [18].
Given all of the above, this paper applies auction theory to model the generators' optimal bidding strategy based on the uncertain elastic demand and to explore the Nash equilibrium under clearing pricing rules, for which initial estimations are non-essential.In UCP auction, all participants submit their bids to Independent System Operators (ISOs) simultaneously and independently according to their demand information and expected profit.The lower price participants are assigned with the demand quantity first and the Market Clearing Prices (MCPs) are the highest prices that produce demand.Due to the information asymmetry, the bidding process is a non-cooperative oligopoly game with incomplete information.Anticipated MCPs, transaction cost, cost distribution of opponents (public information) and own true marginal cost (private information), are all considered in our model.With efforts to introduce transmission cost into the bidding strategy, normalized bidding price is applied innovatively.It ensures that the generators providing a large quantity are more likely to win the auction when bidding prices are equal.Results show that generators would exert market power to bid higher than their marginal cost in order to maximize expected profits.The optimal bidding price is the true marginal cost plus the winning probability, which is computed from bidding quantity, transmission cost and market demand distribution.Our work contributes game theoretic models to the auction theory literature and generates novel insights for generators seeking profits.
The intended contributions of this paper are listed as follows: (1) to propose a simple and effective auction model that provides a systematic approach to building bidding strategies under the uncertain elastic demand.A unique analytical Nash equilibrium is obtained, which solves the time limit and initial estimations problem; (2) to provide a comparative analysis to assess the impact of the demand elasticity on the performance of the electricity market (UCP auction VS.complete competition VS. fixed demand auction), which is rare to conventional wisdoms; (3) to disclose the market power among generators under competitive conditions and prove that elasticity of the market demand is an effective way to restrain generators' market power.
The rest of this paper proceeds as follows: in Section 2, the model is established and the optimal bidding strategies are presented.In Section 3, we discuss how uncertain price-responsive demand influences the performance of the electricity market, and case studies are undertaken to verify the proposed model.The conclusions are given in Section 4.
Electricity Market and MCP
In an electricity market game, due to the information asymmetry that exists in the bidding process, such as opponents' marginal cost and opponents' bidding behavior, the bidding process can be described as an asymmetric information game of a divisible object. We suppose a model for generators i (i = 1, 2, . . ., m) with different marginal costs, which compete to sell homogenous goods to the market. In period t, the sequence of events in classical UCP is as follows [29,30]:
(1) The auctioneer releases market information according to market operation rules, including the demand information D_t(p, ε_t) and the bidding history of participant generators.
(2) Each generator simultaneously and independently submits a bidding price b_{i,t} at which it is willing to supply its maximum production up to quantity q_{i,t}.
(3) These bids are ranked in terms of the bidding price b_{i,t}.
(4) The lower-price generator is assigned the demand quantity first. If his quantity q_{i,t} cannot satisfy the demand, the higher-price bidders produce the residual demand. If the bidders submit equal bids, then they split the market equally.
(5) The MCP is the highest price that produces the demand. Generators would not participate in a bid if their bidding prices were higher than the MCP.
The bidding curves are composed of all generators' pairs of quantity-price (Figure 1). The day-ahead market demand function in period t is represented by D_t(p, ε_t), which is a function of the random shock ε_t and the market price p. It is assumed to satisfy the following standard assumptions: D_t(p, ε_t) strictly increases in ε_t, and strictly decreases and is concave in p. The demand shock ε_t is a random variable with a differentiable cumulative distribution function φ_t(ε_t) and a continuous density function ϕ_t(ε_t).
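To make the clearing rule above concrete, the following Python sketch implements the ranking, MCP determination and residual-demand allocation for a single period with a linear demand curve D(p) = v − αp + ε. The bid values, demand parameters and realized shock are illustrative assumptions, not data from the paper, and ties are ignored for simplicity.

```python
# Minimal sketch of uniform-clearing-price (UCP) dispatch for one period.
# All numbers below are illustrative assumptions.

def ucp_clear(bids, v, alpha, eps):
    """bids: list of (price, quantity); demand is D(p) = v - alpha*p + eps."""
    order = sorted(bids, key=lambda b: b[0])          # rank bids by price
    served, mcp = 0.0, None
    for price, qty in order:
        demand_here = v - alpha * price + eps
        if served >= demand_here:                     # this bid is not needed: it loses
            break
        served += min(qty, demand_here - served)
        mcp = price                                   # MCP = highest accepted price
    if mcp is None:
        return None, []
    remaining = v - alpha * mcp + eps                 # total quantity cleared at the MCP
    allocation = []
    for price, qty in order:
        q = min(qty, max(remaining, 0.0)) if price <= mcp else 0.0
        allocation.append((price, qty, q))            # marginal bidder only gets the residual
        remaining -= q
    return mcp, allocation

if __name__ == "__main__":
    bids = [(1.2, 1.0), (1.4, 1.0), (1.6, 1.0), (1.8, 1.0), (1.9, 1.0)]
    mcp, alloc = ucp_clear(bids, v=4.5, alpha=0.5, eps=0.0)
    print("MCP =", mcp)
    for price, offered, assigned in alloc:
        print(f"bid {price:.2f}: offered {offered:.1f}, assigned {assigned:.2f}")
```

With these illustrative numbers the fourth bid becomes the marginal one: it sets the MCP and only receives the residual demand left over after the lower-priced bids are dispatched.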
As shown in Figure 2, from the perspective of the generators, the decision-making process can be divided into five steps in the specific process of the UCP auction: Step 1: Information acquisition.The generators get related information from the auctioneer, including the demand information and the bidding history of participant generators.
Step 2: Price normalization.The generators normalize the bidding prices integrating the transaction cost.
Step 3: Winning probability.The generators calculate the winning probability based on private costs.
Step 4: Optimal strategy.The generators submit the optimal bids.
Step 5: Nash equilibrium. Market equilibrium is the intersection of the demand function and the bidding curves.
The generator i's true variable marginal cost c_{i,t} is considered as a constant, which is private information precisely known only to itself and independent across generators. A generator can estimate the others' marginal cost C by a probability distribution, described with the density function f(C) and the cumulative distribution function F(C). In practice, the cost can be estimated from manufacturers or from market reports [31].
With the progress of electricity market liberalization, the ISO is just a network provider who has withdrawn from power trading. The generators, on the other hand, need to pay wheeling costs and auxiliary service costs, which account for a large proportion of the whole electricity cost [28,32]. Therefore, in addition to the cost of generating electricity, the wheeling cost and auxiliary service cost are also indispensable costs to generators and are considered as the transmission cost in this paper. Generally, for a certain quantity of electricity, transaction costs will increase as the number of transactions increases. Therefore, a standardized bidding price is introduced into the mechanism to ensure that generators with a large supply quantity will be more likely to win the auction when the bidding prices are equal.
Suppose the transaction cost function is β(q), where β′(q) > 0, β″(q) < 0, and q is the bidding quantity. Therefore β(1) denotes the transaction cost of the generator when one unit quantity is traded.
If the generator i bids q_{i,t} units with bidding price b_{i,t} and wins the auction, his ex post profit is (p*_t − b_{i,t})q_{i,t} − β(q_{i,t}). Generator i can be viewed as q_{i,t} virtual generators who each bid one unit with the same virtual bidding price b̄_{i,t}, so each virtual generator's ex post profit is (p*_t − b̄_{i,t}) − β(1). Because the final profit is the same, according to the principle of equivalent profit, the relationship between the actual bidding price b_{i,t} and the virtual bidding price b̄_{i,t} is given by Equation (1), b̄_{i,t} = b_{i,t} + β(q_{i,t})/q_{i,t} − β(1). By the normalized bidding price, we convert m generators into N = Σ_{i=1}^{m} q_{i,t} virtual generators, each bidding 1 unit quantity to the auctioneer. So generator i with bid (b_{i,t}, q_{i,t}) is converted to q_{i,t} virtual generators whose bidding information is (b̄_{i,t}, 1). In the operation of the actual electricity market, a generator does not change its production schedule q_{i,t} frequently, because doing so leads to excessive operational inefficiencies for the generator [5]. Therefore, although the generation quantity affects the bid price, it is not a decision variable.
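As a quick illustration of this normalization, the sketch below converts an actual bid (b_{i,t}, q_{i,t}) into the equivalent per-unit virtual bid implied by the equal-profit argument, b̄ = b + β(q)/q − β(1). The concave transaction-cost function used here is an assumption for illustration only, not the one tabulated in the paper.

```python
import math

def beta(q):
    """Illustrative increasing, concave transaction cost (an assumption, not Table 1)."""
    return 0.1 * math.sqrt(q)

def virtual_price(b, q):
    """Per-unit bid giving each of the q one-unit 'virtual' generators the same profit as
    the actual bid (b, q): (p* - b)q - beta(q) = q[(p* - b_bar) - beta(1)]."""
    return b + beta(q) / q - beta(1)

for b, q in [(1.30, 1.0), (1.30, 4.0), (1.30, 9.0)]:
    print(f"actual bid {b:.2f} for {q:.0f} units -> virtual per-unit bid {virtual_price(b, q):.4f}")
```

With any concave β, the virtual price decreases in q, so a generator offering a larger quantity is ranked ahead when actual bidding prices are equal, as intended.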
As mentioned above from the UCP rules, if the virtual generator's bidding price b̄_{i,t} is lower than the MCP p*_t, he produces one unit as his bid quantity; if his bidding price b̄_{i,t} is equal to the MCP, he produces the residual demand R_{i,t}. However, the bidding prices of different virtual generators may be the same. In this case, if the bidding price b̄_{i,t} is below the MCP p*_t, the distribution rules do not change and all such virtual generators produce one unit as their bid quantity. If the bidding price b̄_{i,t} is equal to the MCP p*_t, the distribution rules are different: each generator with the same bidding price produces a residual demand equal to R_{i,t}/k, with k denoting the number of bidders with the same bidding price. Since we assume that the cost distribution of the generators follows a continuous distribution, the probability that generators submit the same bids (i.e., have the same private cost) is 0, that is, k = 1. The following work is to solve for the virtual optimal bidding price b̄_{i,t}.
Winning Probabilities
When all generators participate in a bidding game, there exist three types of generators: those winning in the margin, who bid the same as the MCP; those winning below, who bid below the MCP; and those losing the game, who bid above the MCP; the probabilities of the three outcomes are important for deriving the bidding strategy [20]. Assume that the probability of generator i winning the game by bidding below the MCP is R_{i,t}, that of winning in the margin is H_{i,t}, and that of losing the game is 1 − H_{i,t} − R_{i,t}. If generator i wins the game by bidding below the MCP, he will be assigned first, producing one unit as his bid quantity; if he wins in the margin, he produces the residual demand, which is smaller than one unit. The probability of generator i winning the game is not only related to his own strategic choice of b_{i,t}, but also to the market demand D_t(p*_t, ε_t) and the cost distribution of participants. In period t, given any bids b_{−i,t} = (b_{1,t}, ..., b_{i−1,t}, b_{i+1,t}, ..., b_{N,t}) of other generators, each generator i chooses its bid price b_{i,t} that achieves its ex post maximum expected profit with respect to period-t day-ahead demand. By doing so, the generator achieves its maximum expected profit by choosing b_{i,t} that satisfies the following for any realization of bid prices b_t and random shock ε_t. For example, given a random shock ε_t, if the generator i's expected profit by choosing price b_{i,t} is higher than that by choosing any other price set b_t, then b_{i,t} is an equilibrium. The formal definition of such an equilibrium is as follows.
By Equation ( 4), demand uncertainty is one of the major factors that put MCP at risk.In our analysis, we study an equilibrium in which the same-cost generators have the same strategies behavior.Given demand random ε t , MCP p t and other generators' strategies b −i,t , generator i's expected profit Equation ( 4) in period t is equivalent to:
The Optimal Bidding Strategy of the Generators
In this section, we begin our analysis by characterizing generators' equilibrium strategies.Recall Equation (5), to maximize Equation (5) for any ε t and b −i,t , generator i must choose a bid price b i,t inducing a MCP p * t that maximizes expected profit Equation ( 6): that is, the total demand is equal to the sum of all the bid quantities below the MCP.With such bid behaviors, generator i will achieve the maximum expected profit in period t if it observes the random shock ε t after its decision in period t.Then, the optimal bid price b i,t that maximizes excepted profit satisfies the following first-order condition: Note that dR i,t (b i,t )/db i,t < 0 and the fact that private cost c and random shock ε are independent of each other.Applied the formula of inverse function differentiation, let b i,t = B(c i,t ) to rewrite Equation ( 7) as: Generator i's optimal bidding strategy B(c i,t ) satisfies the ordinary differential equation Equation ( 8) in interval C, C and the boundary conditions B(C) = C, where the generators with the highest cost cannot bid higher than the price ceiling C. Therefore, Equation ( 8) has a unique solution.That is to say, a generator with marginal cost c i,t has one and only one optimal bidding strategy B(c i,t ), which maximizes its expected profit.So we obtain Proposition 1.
Proposition 1. There exists a unique Nash equilibrium B
) that satisfies Equation ( 4) and the optimal bidding price b * 1,t satisfies Equation (8).
Integrating Equation ( 8) from c i,t to C yields the following: We know the fact that the probability of winning in or below the margin is 0 for the bidder with highest cost C: Considering the boundary condition and collecting Equation ( 9) as canonical forms, we obtain the following formal result from Equations ( 9) and (10): B(c i,t ) represents the general bidding strategy given an estimate of the expected MCP p * t .This result shows that a bidder's optimal bid is determined by three components: its real marginal cost of production c i,t , make-up of the probability of winning below or in the margin R i,t (c) dc, and the gap between the marginal cost and the expected MCP [Ri,t(c)] 2 dc.According to Equation (11), the optimal bid price B(c i,t ) is related the expected MCP p * t .In practical, a generator who follows this strategy exposes himself to additional risks if the expected (ex-ante) winning price is very different from the actual (ex-post) winning price.In 2000, Hao [20]'s study demonstrated that this risk can be mitigated when all bidders acted as if they were in the margin, that is, with p t * = B(c i,t ), the expected profit to generator i is no worse off than that in the best situation in which the ex post winning price is accurately estimated.Recall the notation ε t from previous analysis, and to state Proposition 2, we need to introduce the following notation: ε t represents period-t day-ahead minimum demand shock D(p t = 0) that results in MCP equal to 0 when each generator's bid quantity is q i,t .Then, from Equations ( 11) and (12), we identify generator i's optimal bidding price in period t: Proposition 2. In any period t, generators' optimal strategies are as follows: (1) For t = 1, 2, . . .T, each generator i commits a production schedule S i,t = (b i,t , q i,t ), that satisfies Equations ( 1) and ( 13).(2) Period-t + 1 day-ahead demand shock ε t is realized and day-ahead MCP p * t is determined.Production and profits are fulfilled in period t + 1.
Proposition 2 is remarkably simple yet significant.It leads to the optimal bidding strategy of the generators at each period.Equation (13) shows that the optimal bidding price of the generator is equal to his cost plus a make-up of winning probability that is computed from the bidding quantity, transmission cost and demand distribution.In addition, unlike a fixed demand auction which allocates the same quantity to the generators who win the auction, elastic demand auction has a different way to allocate quantity.Specifically, for a generator whose bidding price is equal to the MCP, the ISOs only allocates the residual demand R i,t (c i,t , ε t ) to them, which gives them an incentive to lower their bidding price.Moreover, for the generators whose bidding price is below the MCP, the higher the MCP is, the higher the expected profits they have, while their bidding price will not affect the MCP.On the other hand, for generators whose bidding price is equal to the MCP, bidding price not only affects the MCP, but also affects his assigned quantity.Therefore, these generators need to find a balance between the bidding price and the allocated quantity.Compared to inelastic demand, uncertain elastic demand market has an incentive to reduce generators' bidding price (restrain the market power of generators), which can also be confirmed in the numerical examples.None of them has the motivation to deviate from the optimal strategic from Proposition 2.
Generators' Optimal Bidding Strategy with Uniform Cost Distribution
Numerical examples are presented to demonstrate applications and salient features of our results in the electricity market.For the ease of exposition, hereafter, we consider a linear demand function D(p, ε t ) = v t − αp + ε t where v t > 0 and α > 0 are constants.v t represents the demand scenario and α represents the demand elasticity.All problem parameters introduced in this section are general knowledge to all firms.Table 1 shows the transaction cost function β(q), with the higher trading quantity q causing the higher transaction cost β(q).Table 1.Transaction cost information [32].
Table 2 shows the results of a case where five generators participate in the bidding game based on the demand function D(p, ε_t) = 4.5 − 0.5p + ε_t. The private cost of each generator follows a uniform distribution on [1, 2] per MWh. We randomly select five numbers between [1, 2] to represent the five generators' true costs, each of which is private information precisely known only to the generator itself. The demand shock ε_t is a random variable with a uniform distribution on the interval [−1, 1]. Observing Table 2, the higher the private true cost c_{i,t}, the higher the optimal bidding price b*_{i,t}, which is similar to the inelastic-demand case. The optimal bidding price b*_{i,t} consists of the cost c_{i,t} and the probability of winning the game (H_{i,t}, R_{i,t}). Demand is one of the major factors that we consider to affect strategic behaviors in the electricity market. In practice, electricity demand is price-responsive; although the demand slope α is fixed and does not change frequently, the demand scenario v_t is seasonal and time-varying. So in this section, we study the influence of the demand scenario v_t on the bidding strategy. Similarly, we compute the optimal bidding strategies for each generator according to Proposition 2, and the results can be seen for different demand scenarios in Table 3 (v_t = 3.5 to 6.5). It can be seen from Table 3 that the optimal bid price increases when the demand scenario increases. From the microeconomic view, the increase in demand scenario will lead to a higher MCP and more clearing quantity. When the demand scenario varies from 4.5 to 5.5, the MCP increases from 1.37 to 1.5615 (if ε_t = 0), and the total clearing quantity increases from 3.1850 to 4.7192. From a macroeconomic perspective, because the demand information is released ahead of the bid auction, a high demand scenario increases the expectations of generators. In other words, if the demand curve moves to the right, the bidding curve will move up.
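The winning probabilities entering the optimal bid can also be estimated by simulation for this uniform-cost example. The sketch below is a Monte Carlo estimate in which, for illustration only, every bidder is assumed to use a strictly increasing bid function (so the ranking by bids coincides with the ranking by costs) and demand is evaluated at the cost level; the sample size and test costs are assumptions.

```python
import random

def winning_probs(ci, m=5, v=4.5, alpha=0.5, n_draws=20000, seed=0):
    """Monte Carlo estimate of R (winning strictly below the margin) and H (winning at the
    margin) for a generator with cost ci: rivals' costs are uniform on [1, 2], each bidder
    offers 1 unit, and demand is D(p) = v - alpha*p + eps with eps uniform on [-1, 1]."""
    rng = random.Random(seed)
    below = margin = 0
    for _ in range(n_draws):
        costs = sorted([ci] + [rng.uniform(1.0, 2.0) for _ in range(m - 1)])
        eps = rng.uniform(-1.0, 1.0)
        served = 0.0
        for c in costs:
            demand_here = v - alpha * c + eps      # demand evaluated at this bid level
            if served >= demand_here:              # bid not needed: the generator loses
                break
            take = min(1.0, demand_here - served)
            if c == ci:
                if take >= 1.0:
                    below += 1                     # fully dispatched: wins below the margin
                else:
                    margin += 1                    # partially dispatched: marginal winner
            served += take
    return below / n_draws, margin / n_draws

for ci in (1.1, 1.5, 1.9):
    R, H = winning_probs(ci)
    print(f"c = {ci:.1f}: R = {R:.3f}, H = {H:.3f}, lose = {1 - R - H:.3f}")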
Market Power to Derive Electricity Prices and Social Welfare
This section compares the social welfare under three market structures: UCP auction, complete competition and fixed demand auction. Suppose q^A_{i,t} is the quantity actually assigned to generator i in period t; according to the definition in microeconomics, the total social welfare U_t is defined as the sum of the generators' surplus U_{g,t} = Σ_i (p*_t − c_{i,t}) q^A_{i,t} and the consumer surplus U_{s,t}, and is given by Equation (14). Firstly, we assume a basic scenario of complete-information competition. In the complete competition market, each generator offers a bidding price B(c_{i,t})^{basic} to the auctioneer, and the MCP p*^{basic}_t is the highest price that produces the demand D(p, ε_t). The generators would not participate in a bid if their bids were higher than the MCP. In this case, there is no information asymmetry and all participants know the true cost of each generator. Therefore, each generator adopts the cost bidding strategy, that is, B(c_{i,t})^{basic} = c_{i,t}. According to the allocation rules, the MCP p*^{basic}_t can be calculated from the optimal strategy of the generators B(c_{i,t})^{basic} = c_{i,t} and the market demand condition D(p, ε_t). Then we obtain the consumer surplus U^{basic}_{s,t} (the blue dotted line in Figure 3a) and the generators' surplus U^{basic}_{g,t} (the blue solid line in Figure 3b) under complete-information competition by Equation (14). It can be seen intuitively that complete competition is a very beneficial structure for social welfare compared to the UCP auction. On the one hand, complete competition brings more consumer surplus, which increases as the demand scenario increases. On the other hand, although a loss of generators' surplus will be caused by complete competition, this loss is a drop in the bucket compared to the increase in consumer surplus. However, due to characteristics such as asymmetric information, transmission constraints and oligopoly structure, the electricity market behaves more like an oligopoly market, and complete competition is not appropriate in the electricity market. But this comparison shows clearly that even under competitive conditions (UCP auction), there exists market power among generators.
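To make the welfare comparison of Equation (14) concrete, the short sketch below computes the generators' surplus and the consumer surplus for a linear demand curve cleared at a given MCP; the dispatch list and parameter values are hypothetical numbers chosen only to illustrate the calculation.

```python
def surpluses(p_star, dispatch, v=5.0, alpha=0.5):
    """Generator and consumer surplus for linear demand D(p) = v - alpha*p cleared at p_star.
    dispatch: list of (marginal_cost, assigned_quantity)."""
    gen_surplus = sum((p_star - c) * q for c, q in dispatch)
    cleared = v - alpha * p_star                            # total quantity consumed at the MCP
    choke_price = v / alpha                                 # price at which demand falls to zero
    cons_surplus = 0.5 * cleared * (choke_price - p_star)   # triangle under the demand curve
    return gen_surplus, cons_surplus, gen_surplus + cons_surplus

# Hypothetical clearing outcome: the marginal generator takes the residual demand.
dispatch = [(1.05, 1.0), (1.28, 1.0), (1.41, 1.0), (1.63, 1.17)]
gs, cs, total = surpluses(p_star=1.66, dispatch=dispatch)
print(f"generator surplus = {gs:.3f}, consumer surplus = {cs:.3f}, social welfare = {total:.3f}")
```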
Then, we assume a fixed demand auction scenario (Hao's research [20]). In the fixed demand auction, demand is an inelastic fixed variable represented by D(p) = v_t. The only difference between the fixed demand auction and the UCP auction is the allocation to marginal winners: in the fixed demand auction, as long as a generator wins the auction, he always receives one unit of electricity allocation, that is, R_{i,t}(c_{i,t}) = 1, whereas in the UCP auction a generator who wins the auction in the margin receives less than one unit, that is, R_{i,t}(c_{i,t}) ≤ 1. By substituting R_{i,t}(c_{i,t}) = 1 into Proposition 2, we obtain B(c_{i,t})^{fixed} = c_{i,t} + ∫_{c_{i,t}}^{C} [H_{i,t}(c) + R_{i,t}(c)] dc, where B(c_{i,t})^{fixed} denotes the optimal bidding strategy of generator i under the fixed demand auction. Similarly, according to the allocation rules, the MCP of the fixed demand auction p*^{fixed}_t can be calculated from the optimal strategy of the generators B(c_{i,t})^{fixed} and the market demand condition D(p) = v_t. Then we can obtain the consumer surplus U^{fixed}_{s,t} (the red dotted line in Figure 3a) and the generators' surplus U^{fixed}_{g,t} (the red solid line in Figure 3b) under the fixed demand auction by Equation (14).
Then, we compare our UCP auction with Hao's fixed demand auction [20]. As shown in Figure 3, the impact of demand on social welfare is huge. Compared to fixed demand, there is more consumer surplus and more social welfare based on price-responsive demand, and this phenomenon is more evident as demand grows. For example, when the demand scenario v_t = 5, the generator surplus equals 0.9806 based on elastic demand and 3.113 based on fixed demand. Moreover, market price declines definitely increase consumer surplus: when v_t = 5, the customer surplus equals 18.1 based on elastic demand and 7.2771 based on fixed demand. Therefore, elasticity of demand is an effective means to restrain the market power of generators. This conclusion is similar to the results in Ruddell's research [26], which indicated that price-responsive demands efficiently exploit the available electricity resources.
Proposition 3. Even under competitive conditions (UCP auction), there exists market power among generators. But price-responsive market demand is an effective way to restrain generators' market power compared to inelastic market demand.
Conclusions
The openness of the electricity market results in generators facing fierce competition and frequent auctions, and consumers exhibiting higher price sensitivity. However, due to the surge of generators and the increasingly frequent auctions, market equilibrium is difficult to pursue through market simulations, and the results would be chaotic if the initial estimations were not proper. Besides, seasonality, time-fluctuation and price-responsiveness of day-ahead demand, the major factors that affect strategic behaviors, have received less attention. Given this, based on the uncertain price-responsive demand, an auction model is developed to analyze asymmetric companies' bidding strategies, in which initial estimations are not necessary. We derived the unique Nash equilibrium under the clearing pricing rule by introducing the normalized bidding price into the bidding strategy. In particular, we take into account the effect of demand on the generators' bidding behavior, and numerical examples are provided to show the applicability of the proposed approach.
Figure 1. The bidding curves in a uniform clearing price (UCP) electricity market.
Figure 2. The specific process of the UCP auction.
Suppose p*_t represents the MCP in period t; there are [D_t(p*_t, ε_t)] + 1 generators winning the game, where [D_t(p*_t, ε_t)] represents the integer part of D_t(p*_t, ε_t). If generator i wins the game in the margin, the generators whose bidding prices are lower than b_{i,t} will be assigned first, and generator i produces the residual demand at the MCP p*_t. In the formulas, H_{i,t}(B^{−1}(b_{i,t}), p*_t, ε_t) represents the probability of generator i winning in the margin, and describes the probability that [D_t(p*_t, ε_t)] − 1 generators' bidding prices are less than that of generator i; R_{i,t}(B^{−1}(b_{i,t}), p*_t, ε_t) represents the probability of generator i winning below the MCP, and describes the cumulative probability that at most [D(p*_t, ε_t)] − 2 generators' bidding prices are less than the price of generator i.
Table 2. Bidding results for a case study.
Table 3. Bidding results with different demand scenarios. | 2019-01-06T11:28:26.436Z | 2018-11-26T00:00:00.000 | {
"year": 2018,
"sha1": "2af879400d7cf12a511a8b0ff736640ad0cae9b5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/1/9/pdf?version=1545363584",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2af879400d7cf12a511a8b0ff736640ad0cae9b5",
"s2fieldsofstudy": [
"Engineering",
"Economics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
238198688 | pes2o/s2orc | v3-fos-license | Acoustic regular black hole in fluid and its similarity and diversity to a conformally related black hole
We address an interesting question in the present paper: whether acoustic gravity can be applied as a tool to the study of regular black holes. For this purpose, we construct a general acoustic regular black hole in a spherically symmetric fluid, where its regularity is verified from the perspective of finiteness of curvature invariants and completeness of geodesics. In particular, we find that the acoustic interval not only looks like a line element of a conformally related black hole, in which the fluid density can be regarded as a conformal factor, but also gives rise to a non-vanishing partition function which coincides with that of a conformally related black hole. As an application, we provide a specific acoustic regular black hole model, investigate its energy conditions and compute its quasinormal modes. We note that the strong energy condition of our model is violated completely outside the horizon of the model but remains valid in some regions inside the horizon, which may give a new insight into the relation between the regularity and the strong energy condition. Moreover, we analyze the oscillating and damping features of our model when it is perturbed.
Introduction
Since the Hawking radiation from black holes (BHs) was discovered [1], it has become one of the central subjects to study the quantum behaviors of BHs. However, this thermal radiation is too small to be directly detected by any conceivable experiments. When the Schwarzschild BH with one solar mass is taken as an example, its radiation temperature is approximately 6 × 10^{−8} K, while the cosmic background consists of 3 K microwave radiation. Therefore, the former is completely covered up by the latter. In other words, even if the thermal radiation is emitted, it will be drowned out by the background noise. This situation motivates researchers to shift from astrophysical phenomena to their analogues in laboratories on Earth, which was pioneered by Unruh [2] who proposed an acoustic analogy.
An acoustic black hole (ABH), being one of the realizations of analogue BHs, can be formed in laboratories on Earth when the velocity of moving fluid exceeds the local velocity of sound, where the horizon is located [3] at the junction of the supersonic and subsonic regions. Several attempts have been done in recent decades, including surface waves in Bose-Einstein condensates [4], water flows [5], optical systems [6], quantum many-body systems [7], and so on. For the early progress in analogue BHs, see, for instance, the review article [8] and the references therein. Recently, there have been many theoretical and experimental advances in various aspects of analogue gravity, such as in the Hawking radiation [9][10][11][12][13], the superradiation [14][15][16], the quasinormal modes (QNMs) [17], and the Lyapunov exponent [18], etc. Moreover, ABHs have been generalized [19] to curved spacetimes. In particular, the experimental advances reflect the applicability of analogue gravity.
Although the analogue gravity has been developed and regarded as a tool of gaining insight into general relativity, the first simulation of Schwarzschild and Reissner-Nordström BHs was not realized until 2021 [20]. Prior to this work, some analogue BH models, such as the draining bathtub model [3,21], may contain the necessary features that give rise to the astronomical phenomena, but can hardly have the direct counterparts in the universe. And the differences between astronomical black holes and their acoustic counterparts may appear distinctly in the desired phenomena in the earth laboratory. For instance, in the acoustic simulation of the Painlevé-Gullstrand spacetime [8], the astronomical metric differs from the acoustic one by a conformal factor. Thus, the study of the quasi-normal modes from the acoustic counterpart may not provide the full information of spectra for the Painlevé-Gullstrand geometry, because the conformal factor affects the quasi-normal modes except in the eikonal limit [22]. Moreover, it is widely known that singular black holes (SBHs) suffer [23] from the UV incompleteness at both classical and quantum levels because of the spacetime singularity. Many phenomenological models have been proposed for avoiding the singularity at the center of BHs, see, for instance, the review [24]. These nonsingular solutions of general relativity are called regular black holes (RBHs), being of finite curvature invariants on the entire manifold of spacetime. In fact, Bardeen proposed [25] the first RBH which was recognized [26] later on as a product created by nonlinear electrodynamics (NED). This model is currently dubbed as Bardeen black hole (BBH). The further developments of the BBH have been presented, see, e.g., Refs. [27][28][29].
Our aim in the current work is to construct RBHs in acoustic gravity named as acoustic RBHs (ARBHs) and to investigate their energy conditions and dynamic properties, such as QNMs [33].
Here we note that the energy conditions refer to the constraints on the matter generating the RBHs in the Universe, not on the fluid for simulation in a laboratory. Since we are dedicated to studying the RBHs with the aid of analogue gravity, we investigate whether the RBHs we construct in fluid are reasonable or not, that is, whether their counterparts in the Universe have the possibility of existence. As a by-product, we find that ARBHs have different characteristics from those of RBHs generated by NED; in particular, they should be classified into conformal gravity [34][35][36]. The seeming reason is that the acoustic interval looks like a line element of a conformally related black hole, where the fluid density can be regarded as a conformal factor, but the actual reason is that the acoustic interval leads to a non-vanishing partition function if it is interpreted in the context of a conformally invariant theory. In general relativity, the energy conditions give [37,38] constraints upon the energy-momentum tensor of matter fields, such as positivity of energy density and validity of causality. For instance, one can determine whether the matter field of RBHs created by NED is physically reasonable in terms of the dominant energy condition, and whether superradiance occurs by checking the weak energy condition, which is also associated with the second law of BH mechanics [39]. In the context of ARBHs, we define the analogue energy-momentum tensor T_{μν} and thus explore the corresponding energy conditions by supposing a linear relation between the analogue Einstein tensor and the energy-momentum tensor. We find that the energy conditions of ARBHs have novel properties when ARBHs are dealt with in the framework of conformal gravity.
As QNMs play an important role in the stability analysis of analogue BHs, see, for instance, an example of optical BHs [40], we focus on the QNMs of ARBHs by studying the propagation of scalar fields in the effective curved spacetime manifested as the acoustic disturbance. As shown in Ref. [2], the equation of motion for the acoustic disturbance is identical to the d'Alembertian equation of a massless scalar field propagating in a curved spacetime. We can thus compute the QNM frequencies of ARBHs by using the WKB method [41][42][43][44][45] as usual.
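Since the effective potential of the ARBH model is only derived later in the paper, the sketch below illustrates the first-order (Schutz-Will) WKB estimate on a stand-in Poeschl-Teller potential, whose quasinormal frequencies are known in closed form and therefore provide a check; the potential parameters are assumptions chosen for illustration.

```python
import math
import numpy as np

def wkb1_qnm(V_peak, V_peak_2nd, n):
    """First-order (Schutz-Will) WKB estimate: omega^2 = V0 - i(n+1/2)*sqrt(-2*V0'')."""
    omega2 = V_peak - 1j * (n + 0.5) * math.sqrt(-2.0 * V_peak_2nd)
    return complex(np.sqrt(omega2))

# Stand-in potential: Poeschl-Teller V(x) = V0/cosh^2(x/b), used only because its
# quasinormal modes are exact: omega = sqrt(V0 - 1/(4b^2)) - i(n+1/2)/b.
V0, b = 10.0, 1.0
x = np.linspace(-10.0, 10.0, 4001)
V = V0 / np.cosh(x / b) ** 2
i0 = int(np.argmax(V))                                   # peak of the effective potential
dx = x[1] - x[0]
V_pp = (V[i0 + 1] - 2.0 * V[i0] + V[i0 - 1]) / dx**2     # numerical 2nd derivative at the peak

for n in range(3):
    approx = wkb1_qnm(V[i0], V_pp, n)
    exact = math.sqrt(V0 - 1.0 / (4.0 * b * b)) - 1j * (n + 0.5) / b
    print(f"n={n}:  WKB omega = {approx:.4f},  exact omega = {exact:.4f}")
```

For the actual ARBH one would replace V by the model's effective potential expressed in the tortoise coordinate and, in practice, use higher-order WKB corrections [41-45] for better accuracy.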
This paper is organized as follows. We propose a general method to construct ARBHs in Sec. 2, where the regularity is verified from the perspective of finiteness of curvature invariants and completeness of geodesics. We then give one specific ARBH model in Sec. 3. In Sec. 4, based on the complete form of Euler's equation, we analyze the importance of an external-force term in the realization of the acoustic analogy. The energy conditions of the model are discussed and compared with those of the conformally related Schwarzschild black holes (CRSBHs) [35] in Sec. 5. In Sec. 6, we analyze the effective potential and calculate the QNMs for the ARBH model. Finally, we give our summary in Sec. 7. Appendices A and B include the detailed analyses of the energy conditions of CRSBHs and of the repulsive interaction of the specific ARBH model outside its event horizon. Throughout this paper, we adopt units with the speed of sound c = 1 and the sign convention (−, +, +, +).
Acoustic regular black hole in fluid
In this section, we construct a general ARBH in a spherically symmetric fluid. The fluid is assumed to be locally irrotational, barotropic, inviscid, and compressible. The acoustic interval then takes the form of Eq. (1) [8], which can be obtained by combining the equation of continuity, Eq. (2), and Euler's equation, Eq. (3), where ρ, v, and p are the density, velocity, and pressure of the fluid, respectively, and c ≡ (∂p/∂ρ)^{1/2} is the local speed of sound. In the following discussions c is normalized to unity,^1 and the density ρ and velocity v are supposed to be functions of the radial coordinate r only. In addition, the last term of Eq. (3) represents [3] an external driving force and ψ is the corresponding potential. This term does not affect [3] the wave equation of sound and the acoustic metric, but it is indispensable in the acoustic analogue of an astronomical black hole because ψ provides an external field for realizing the specific fluid, which will be explained in detail in Sec. 4. If we consider the spherically symmetric fluid with only a non-vanishing radial velocity, v_r ≠ 0, and perform a transformation of the time coordinate, Eq. (4), we can rewrite Eq. (1) in the form of Eq. (5), or write the metric g_{µν} explicitly as in Eq. (6). The density ρ plays the role of a conformal factor if g̃_{µν} describes a static spherically symmetric black hole. In the above specific setting, ρ and v_r are constrained by the relation of Eq. (7),^2 which can be derived by integrating Eq. (2) with respect to the radial coordinate, where A is an integration constant. (A compact reconstruction of these relations is sketched after the list below.) Note that ρ v_r is divergent at r = 0 in the manner of r^{-2}. This divergence appears at r = 0 in the following three cases:
• (i) ρ is divergent, while v_r is finite;^3
• (ii) ρ is finite, while v_r is divergent;
• (iii) Both ρ and v_r are divergent.
^1 In general, the local speed of sound depends mainly on the temperature of the fluid. Here the temperature of the fluid is constant, so it is usual to set c = 1. ^2 This represents the peculiarity of acoustic intervals which will be utilized to pick ARBHs out. ^3 Here "finite" includes zero and nonzero constants.
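Since Eqs. (1)-(7) are referred to but not reproduced above, the following is a compact sketch of the relations being used; it follows the standard Unruh-Visser form of the acoustic line element and the steady, spherically symmetric continuity equation, and is our reconstruction rather than a verbatim copy of the source's equations (the overall normalization of the conformal factor may differ by a constant).
\[
ds^2 \;=\; \frac{\rho}{c}\left[-\left(c^2-v_r^2\right)dt^2+\frac{c^2}{c^2-v_r^2}\,dr^2+r^2 d\Omega^2\right]
\;\xrightarrow{\;c\,=\,1\;}\;
\rho\left[-f\,dt^2+f^{-1}dr^2+r^2 d\Omega^2\right],
\qquad f(r)\equiv 1-v_r^2(r),
\]
\[
\partial_r\!\left(\rho\,v_r\,r^2\right)=0
\;\;\Longrightarrow\;\;
\rho\,v_r=\frac{A}{r^2}.
\]
Here t is the time coordinate obtained after the transformation that removes the off-diagonal dr dt term, the fluid density ρ multiplies the whole line element as a conformal factor, and A is the integration constant of the continuity equation; the r^{-2} divergence of ρ v_r quoted above follows immediately.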
Such a classification will help us construct ARBHs. In order to check whether g µν , see Eq. (6), together with Eq. (7) describes an ARBH or not, we have to investigate the finiteness of curvature invariants and completeness of geodesics at the center of this ARBH. Next, we discuss the two issues in two separate subsections.
Finiteness of curvature invariants
Using Eq. (6) and the definitions of the three curvature invariants, the Ricci scalar R ≡ g^{µν} R_{µν}, the contraction of two Ricci tensors R_2 ≡ R_{µν} R^{µν}, and the Kretschmann scalar K ≡ R_{µνρσ} R^{µνρσ}, we obtain Eqs. (8)-(10), where the prime denotes the derivative with respect to the radial coordinate. Now let us analyze whether the three curvature invariants are finite or not when r → 0 in the first case mentioned above. Substituting Eq. (7), i.e., ρ = A/(r^2 v_r), into Eqs. (8), (9), and (10), we can express explicitly the leading orders of the three curvature invariants, where v_0 ≡ lim_{r→0} v_r. They are obviously finite as r goes to zero. As to the asymptotic behavior of ρ at r → 0, we know from Eq. (7) that ρ(r) ∼ 1/r^{2+a} with a ≥ 0, where a > 0 corresponds to v_r going to zero in the manner of r^a and a = 0 corresponds to v_0 being a nonzero constant. Moreover, we have to require the asymptotic flatness of the metric (Eq. (6) associated with Eq. (7)) in the first case. Let us analyze the leading orders of v_r(r) and ρ(r). If v_r(r) → A/r^2 and ρ(r) → 1 when r → ∞, the asymptotic flatness is ensured. As a result, the models constructed in the first case can be regarded as candidates of ARBHs.^4 For the second case, in which ρ is finite while v_r is divergent at r = 0, we can judge, by following the same procedure as in the first case, that the three curvature invariants are divergent as r goes to zero. In fact, we have a shortcut to reach this conclusion. If we choose the asymptotic behaviors of ρ and v_r, for instance, to be ρ(r) ∼ 1 and v_r(r) ∼ A/r^2 as r → 0, respectively, the shape function of Eq. (6) tends to 1 − A^2/r^4, which definitely describes a singular spacetime. Thus, no ARBHs can be given in the second case.
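As a concrete check of this shortcut (our own illustration, not an equation from the source): for a metric of the form −f dt^2 + f^{-1}dr^2 + r^2 dΩ^2 with the limiting shape function f = 1 − A^2/r^4 and the conformal factor ignored (it stays finite in this case), the Kretschmann scalar is
\[
K=(f'')^2+\frac{4(f')^2}{r^2}+\frac{4(1-f)^2}{r^4}
=\frac{468\,A^4}{r^{12}}\;\longrightarrow\;\infty
\quad (r\to 0),
\]
so the second case indeed ends in a curvature singularity at the center.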
As to the third case where both ρ and v r are divergent as r → 0, we can easily determine from Eqs. (8)-(10) that no ARBHs can be constructed in this case, either.
In summary, Eq. (6) associated with Eq. (7) indeed describes an ARBH when the fluid density is divergent while the radial velocity is finite at r = 0, where the fluid density plays the role of a conformal factor, see footnote 4 for a detailed explanation.
Completeness of geodesics
To check the geodesic completeness of the metric Eq. (6), we start with the Lagrangian [34] of a test particle constrained in the equatorial orbit θ = π/2, Eq. (14), where the dot stands for the derivative with respect to the affine parameter τ. Since t and φ are cyclic coordinates, one has two integrals of motion, Eq. (15), where the energy E and angular momentum L are conserved quantities for a free radially infalling particle in static spacetimes. Then, replacing the velocities in Eq. (14) by Eq. (15), we obtain Eq. (16), where δ = 0 corresponds to null and δ = 1 to timelike geodesics, respectively. For simplicity, we consider the radial geodesic motion, which implies that the angular momentum vanishes, L = 0. Now we can write down the affine parameter by the integral of Eq. (17), where r_i and r_f represent the initial and final positions, respectively. For a null geodesic, δ = 0, the integrand of Eq. (17) can be written as Eq. (18). Since ρ diverges at r = 0, Eq. (18) implies that the affine parameter is also divergent. For a timelike geodesic, δ = 1, the integrand can be written as Eq. (19). From Eq. (16), we deduce that E^2 − fρ ≥ 0, which means that E goes to infinity if f > 0 inside the innermost horizon. That is to say, the test particle would need infinite energy to reach the center of ARBHs, so no particle can reach the center. Alternatively, considering that f < 0 inside horizons and E is finite but ρ goes to infinity when r → 0, the factor 1/√(E^2 − fρ) goes to zero more slowly than ρ grows; thus, the integrand is also divergent, i.e., the timelike geodesic is complete as well.
^4 We note that the density ρ can indeed be regarded as a conformal factor due to its asymptotic behaviors: ρ(r) ∼ 1/r^{2+a} at zero and ρ(r) ∼ 1 at infinity. Based on such asymptotic behaviors, one of the possible forms can be written with b ≡ (2+a)/4 and L ≡ A^{1/(4b)}; see, for instance, the conformal factors chosen in Refs. [22,36].
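Because Eqs. (16)-(19) are not displayed, here is a sketch of the radial (L = 0) reduction that the completeness argument relies on; it is our reconstruction for the conformal metric g_{µν} = ρ diag{−f, f^{-1}, r^2, r^2 sin^2 θ} and should be read as such.
\[
\rho^2\,\dot r^2=E^2-\delta\,f\rho
\quad\Longrightarrow\quad
\lambda=\int_{r_f}^{r_i}\frac{\rho\,dr}{\sqrt{E^2-\delta\,f\rho}}\,,
\]
so the null integrand (δ = 0) is ρ/E, which diverges like ρ as r → 0, while the timelike integrand (δ = 1) behaves as √(ρ/|f|) inside the horizon where f < 0, which diverges as well; in both cases the affine parameter needed to reach r = 0 is infinite. For orientation only, a density profile consistent with the asymptotics quoted in footnote 4 would be, e.g., ρ(r) = (1 + L^2/r^2)^{2b} with b = (2+a)/4 and L = A^{1/(4b)}; this is our illustrative choice and not necessarily the form omitted from the source.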
As a matter of fact, Eq. (16) describes a particle that is moving in a negative potential well but has vanishing total energy. Intuitively, this test particle cannot reach the center of ARBHs within finite "time" because V eff vanishes at r = 0.
In this section, we have proven that the Ricci scalar R, the contraction of two Ricci tensors R 2 , and the Kretschmann scalar K are finite at r = 0, and both the null and timelike geodesics are complete in the ARBH spacetimes, which means that the ARBHs we constructed have no spacetime singularity.
A specific model
A direct way to construct an RBH is to substitute a shape function into Eq. (6), in analogy with the case of Schwarzschild BHs; one can then determine ρ and the metric g_{µν} with the help of Eq. (7). Nevertheless, such an RBH lacks asymptotic flatness. Therefore, considering the asymptotic behaviors of the fluid density at zero and at infinity together with the constraint between the density and the radial velocity, we give the ARBH model of Eq. (20), where ρ_* is a constant with the dimension of density and the integration constant A has been introduced in Eq. (7). As explained in Refs. [35,36,46], L is a typical length scale of this model, such as the horizon radius or the Planck length, and N, a dimensionless constant, determines whether the scalar curvatures are regular at the center of this model. Further, we perform the transformation of Eq. (21) in Eq. (20), substitute the transformed Eq. (20) into the line element, Eq. (5), and then let the line element absorb ρ_*. In this way, we make the new line element look like Eq. (5) but associated with the dimensionless density and radial velocity^5 given in Eq. (22). We emphasize that the new line element is independent of the constant density ρ_* and the integration constant A, and depends only on the parameters L and N. Now we substitute Eqs. (5), (6), and (22) into Eqs. (8)-(10) and thus derive the leading orders of the curvature invariants near r = 0. We notice that the leading orders depend on N: when N ≤ 1/2 they take one set of forms, and when N > 1/2 another. From Eqs. (23)-(28), we can confirm that the curvature invariants are finite when N ≥ 1/2. To illustrate the finiteness of curvature invariants and completeness of geodesics for the specific model, we take two different cases, N = 1/2 and N = 1, where the former is critical while the latter is a sample of N > 1/2.
For the case of N = 1/2, there exists only one horizon, whose radius equals r_+ = √(1 − L^2); the existence of the horizon requires L^2 < 1. Eq. (22) reduces to Eq. (29). Correspondingly, the leading orders of the three curvature invariants near r = 0 can be written down explicitly, and they are obviously finite. As to the completeness of geodesics, for the null geodesics with δ = 0, substituting Eq. (29) into Eqs. (17) and (18), we obtain the affine parameter, which goes to infinity when the initial position is fixed and the final position goes to zero. Moreover, for the timelike geodesics with δ = 1, Eq. (17) cannot be expressed analytically because of the complicated integrand, but the expansion of the integrand near r = 0 shows that the affine parameter diverges when the final position goes to zero.
For the case of N = 1, the horizon radii are r_± = (1 − 2L^2)/2 ± √(1 − 4L^2)/2, where "+" denotes the outer horizon and "−" the inner horizon, and the existence of horizons gives the condition L^2 ≤ 1/4. Eq. (22) gives the density and radial velocity of Eq. (33), and thus the expansions of the curvature invariants near r = 0 can be written down explicitly. It is obvious that the curvature invariants converge at r = 0. For the completeness of the null geodesics with δ = 0, we derive the affine parameter, which is divergent when r_f → 0, i.e., particles moving along the radial geodesic can never reach the center within a finite affine parameter. For the completeness of the timelike geodesics with δ = 1, we give the expansion of the integrand of Eq. (17) near r = 0, which diverges at r = 0 as expected.
Now we illustrate the regularity of this specific ARBH model in four figures. We plot the graphs of shape function f (r) in Fig. 1 for the cases of N = 1/2 and N = 1. The three curvature invariants as a function of the radial coordinate are plotted in Fig. 2 for the case of N = 1/2 and in Fig. 3 for the case of N = 1 according to Eqs. (8)- (10) and Eqs. (29) and (33). Moreover, we plot the graph of the affine parameter of null geodesics as a function of the final position in Fig. 4 according to Eqs. (31) and (35).
Potentials of external driving force
Our approach to construct the acoustic metric in Sec. 3 is based on the following assumptions:
• The speed of sound is a position-independent constant and can be normalized to unity, c = 1;
• The fluid is irrotational, i.e., its vorticity w vanishes;
• The fluid is spherically symmetric, i.e., the velocity v has only the radial component v_r, and all physical quantities, such as ρ, v_r, etc., depend only on the radial coordinate r.
Therefore, if Euler's equation, Eq. (3), did not involve an external-force term, the above items would lead to a consistency problem when we try to establish the acoustic counterpart of a gravitational black hole, because the continuity equation and Euler's equation would then impose two separate constraints on the same functions. Thus, if there were no external-force term, −∂_r ψ, one would fix v_r (or ρ) via the continuity equation when ρ (or v_r) is given to mimic a gravitational metric, but such a treatment would probably contradict the Euler equation. In other words, we would actually have only one unknown variable, v_r (or ρ), but two dynamical equations, i.e., one redundant condition would appear. Nonetheless, this situation never happens when the external-force potential exists. Now we calculate the external potential for our ARBH model established in Sec. 3. The first integral of Euler's equation in Eq. (38) provides the potential, Eq. (39), where ψ_0 is an integration constant. Then, substituting Eq. (22) into Eq. (39), we arrive at an explicit expression for ψ(r), whose asymptotic behaviors at r → 0 and r → ∞ can be written down, respectively. In other words, the external force is asymptotic to −4N/r around the center and vanishes at infinity. It is obvious that the Euler equation of our ARBH model, Eq. (39), has a consistent asymptotic behavior when v_r is finite and ρ divergent as r → 0.
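A minimal sketch of the first integral referred to as Eq. (39), assuming the barotropic relation dp/dρ = c^2 = 1 (so that the specific enthalpy is ln ρ up to a constant); this is our reconstruction, and the integration constants may be arranged differently in the source:
\[
v_r\,\partial_r v_r=-\frac{1}{\rho}\,\partial_r p-\partial_r\psi
\quad\Longrightarrow\quad
\psi(r)=\psi_0-\ln\rho(r)-\tfrac{1}{2}\,v_r^2(r),
\qquad
-\partial_r\psi=\partial_r\ln\rho+v_r\,\partial_r v_r .
\]
If the density diverges near the center as ρ ∝ r^{-4N} while v_r stays finite, then −∂_r ψ ≈ −4N/r there, which reproduces the behavior quoted above (N = 1/2 corresponds to ρ ∼ r^{-2} and N = 1 to ρ ∼ r^{-4}); at infinity, where ρ → 1 and v_r → 0, the external force vanishes.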
Because the external-force term in Euler's equation does not affect [3] acoustic metrics, it has rarely drawn much attention [8]. As we have discussed above, however, this term suggests a way to realize the specific fluid when we study the acoustic analogue of an astronomical black hole, so it is critical.
Energy conditions
As is known, the energy conditions can be used to examine cosmological models and strong gravitational fields, and they give restrictions on the forms of the energy-momentum tensors of matter fields. In general, the energy conditions are classified [37] into four categories: the null energy condition (NEC), the weak energy condition (WEC), the strong energy condition (SEC), and the dominant energy condition (DEC).
Based on Refs. [35,38], we briefly explain the meanings of the four energy conditions. The NEC requires that both the energy density and the pressure cannot be negative when measured by an observer traversing a null curve, or, if one of them is negative, the other must be positive and its magnitude must be larger than the absolute value of the negative quantity. The WEC states that the energy density of any matter distribution measured by any observer traversing a timelike curve must be nonnegative. The SEC requires (T_{µν} − T g_{µν}/2) v^µ v^ν ≥ 0, where v^µ is a future-directed, normalized, timelike vector, T_{µν} is the energy-momentum tensor, and T = g^{µν} T_{µν}. The DEC states that the energy flow cannot be faster than the speed of light, i.e., it ensures causality. The energy-momentum tensor can be written as T^µ{}_ν ≡ g^{µα} T_{αν} = diag{−ρ_0, P_1, P_2, P_3}; see App. A for the derivation and discussion. Thus, the four energy conditions can be expressed in terms of the components of the energy-momentum tensor as in Eq. (43).
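For reference, expressed through ρ_0 and P_i these conditions take the standard textbook form [37,38] (the precise grouping used in the undisplayed Eq. (43) may differ slightly):
\[
\text{NEC:}\ \rho_0+P_i\ge 0;\qquad
\text{WEC:}\ \rho_0\ge 0,\ \ \rho_0+P_i\ge 0;
\]
\[
\text{SEC:}\ \rho_0+P_i\ge 0,\ \ \rho_0+\sum_{i=1}^{3}P_i\ge 0;\qquad
\text{DEC:}\ \rho_0\ge 0,\ \ \rho_0\ge |P_i|,\qquad i=1,2,3.
\]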
Energy conditions of our ARBH model
Let us investigate the various energy conditions for the ARBH model we just constructed. We suppose that the energy-momentum tensor is proportional to the Einstein tensor of the acoustic gravity, because our strategy is to investigate the physicality of a gravitational BH equivalent to our ARBH, and we therefore derive the four components of T^µ{}_ν. Using Eqs. (6) and (7) together with Eq. (29) for the case of N = 1/2 or Eq. (33) for the case of N = 1, we can verify the relation^6 P_2 = P_3, so there are only six independent inequalities in Eq. (43), which are listed below. ^6 In fact, this condition is valid for a general static and spherically symmetric BH.
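A minimal sketch of this step, assuming the conventional normalization of the proportionality (G = c = 1 units):
\[
T^{\mu}{}_{\nu}=\frac{1}{8\pi}\,G^{\mu}{}_{\nu}=\mathrm{diag}\{-\rho_0,\,P_1,\,P_2,\,P_3\}.
\]
With P_2 = P_3, the six independent quantities to be checked are presumably ρ_0, ρ_0 + P_1, ρ_0 + P_2, ρ_0 + P_1 + 2P_2, ρ_0 − |P_1| and ρ_0 − |P_2|, which together cover the four conditions listed above; the normalization 1/(8π) is our assumption and does not affect the signs of these quantities.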
For the case of N = 1/2, we compute the six independent quantities explicitly. The energy conditions require that these quantities be nonnegative. We plot the allowed regions on the r − L plane in Fig. 5.
Combining the six subfigures in Fig. 5 with the four energy conditions in Eq. (43), we can determine the domains in which the energy conditions are satisfied for the case of N = 1/2, which are plotted in Fig. 6. We can see from Fig. 6 that the SEC is completely violated in the entire parameter range and spacetime, L^2 < 1 and r ∈ [0, ∞). This is actually what we expected, because the spacetime with f = 1 − 1/r^4 is asymptotic to the metric of our ARBH model, see Eqs. (6) and (29), in the limit of r ≫ L, and such a spacetime has a repulsive interaction which breaks the SEC; see App. B for a detailed explanation. However, the situation of our ARBH model is more complicated than usual. We see in Fig. 5d that ρ_0 + Σ_{i=1}^{3} P_i ≥ 0 is satisfied in one region outside the horizon, i.e., based on Ref. [47], the ARBH produces an attractive interaction outside the horizon although the SEC is violated. The reason that makes the SEC invalid is that ρ_0 + P_1 ≥ 0 is violated outside the horizon, which is different from the situation in the usual BH models with f = 1 − 1/r^4. In addition, the NEC, WEC, and DEC are satisfied in a piece of the domain inside the horizon (also including the horizon as boundary) for the parameter range 0.8 < L ≤ 1.0.
For the case of N = 1, we similarly compute the six independent quantities; the energy conditions require that these quantities be nonnegative, and we plot the allowed regions on the r − L plane in Fig. 7. Similarly, the corresponding valid domains of the energy conditions, obtained by combining Fig. 7 with Eq. (43), are shown in Fig. 8 for the case of N = 1. We can see from Fig. 8 that the NEC and SEC are satisfied in two pieces of the domain for the parameter range 0 < L ≤ 1/2, where one is located inside the inner horizon and the other between the inner and outer horizons (also including the horizons as boundaries). It is worth emphasizing that the situation of the SEC in our model is a counterexample to the work [48], in which the SEC-breaking domain for a regular black hole with metric g_{µν} = diag{−f, f^{−1}, r^2, r^2 sin^2 θ} must be located inside the horizon. The reason is that our ARBH model does not satisfy the simple relation −g_{tt} g_{rr} = 1. Therefore, the situation of the SEC in our model becomes complicated. Moreover, the WEC and DEC are satisfied in only one piece of the domain between the inner and outer horizons (also including the two horizons as boundaries for the WEC and only the outer horizon as boundary for the DEC) for the parameter ranges 0 < L ≤ 1/2 (WEC) and 0 < L < 1/2 (DEC), respectively.
Besides the above discussions of energy conditions on the r − L plane, for our ARBH model depicted by Eq. (22), we further investigate its energy conditions by plotting the valid domains on the r − N plane in Fig. 9. The NEC, WEC and SEC are satisfied in two pieces of domains for the parameter range 1/2 < N ≤ 1 and L = 1/2, where one piece is located inside the inner horizon and the other between the inner and outer horizons (also including the horizons as boundaries). However, the DEC is satisfied in only one piece of domains between the inner and outer horizons (also including the outer horizon as boundary) for the parameter range 1/2 < N < 1 and L = 1/2. In particular, Fig. 9c shows that the SEC is completely violated in the entire spacetime r ∈ [0, ∞) in the vicinity of N = 1/2.
Energy conditions of conformally related Schwarzschild black holes
In Sec. 2 and Sec. 3, we have seen that our ARBH model can be regarded as a conformally related BH from the perspective of finiteness of curvature invariants and completeness of geodesics, where the density of the fluid acts as the scale factor. It is only a superficial reason that the line element of our ARBH model, Eq. (5) with Eq. (22), looks like that of a conformally related BH; the deeper reason is that the acoustic analogue leads to a non-vanishing partition function if it is interpreted in the context of a conformally invariant theory. Let us extend this discussion. If the Euclidean action Ĩ of our ARBH model were constructed [49] from the usual gravitational bulk term, where the ellipsis in the omitted expression represents the surface term and matter sectors, it would be divergent, since √−g is divergent at r = 0. As a result, all the thermodynamic variables computed by the path-integral method would be trivial, because the partition function Z = e^{−Ĩ} would vanish. Nevertheless, if we construct our ARBH model in the conformal theory [50], i.e., with an action in which ϕ is a massless scalar field and ∇_µ the covariant derivative, the situation will be improved, because the scalar field ϕ can absorb the divergence of the measure √−g based on the conformal symmetry.
Here we intend to emphasize that this analogue BH has its own specific properties in the energy conditions that are distinct from those of a conformally related BH. We shall take the CRSBHs as an example, analyze their energy conditions, and compare them with those of our ARBH model.
The scale factor of CRSBHs takes [22] a specific form (a typical choice consistent with the required asymptotics is sketched below), where Ñ and L̃ have the same meanings as those of N and L in Eq. (22); Ñ and L̃ are independent of each other, whereas L and N are related to each other due to the existence of horizons. We can verify that the regularity of CRSBHs requires Ñ ≥ 3/4. Following the same procedure as in the above subsection, we plot the valid domains of the energy conditions of CRSBHs^7 on the r − L̃ plane in Figs. 10 and 11 for the two cases of Ñ = 3/4 and Ñ = 1, respectively. We can see that the energy conditions are satisfied only outside the horizon of CRSBHs, which is completely different from the situation of our ARBH model in Figs. 6 and 8. We also notice that the valid domains in Figs. 10 and 11 are located in areas with a minimum value of L̃, and that they expand when L̃ increases. However, it is obvious that such an expansion of domains does not happen in our ARBH model, see Figs. 6 and 8, because L is constrained by the value of N. In particular, the NEC and SEC are satisfied at r = 0 for our ARBH model, see Figs. 8 and 9, which does not occur in the CRSBHs. This feature (the SEC is not violated at r = 0) implies that the interaction is attractive in the vicinity of r = 0 in our ARBH model, which presents the characteristic of this acoustic analogue.
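As the explicit scale factor is not reproduced above, we note for orientation that in works on conformally related black holes it is typically taken to be of the form below (our assumption, consistent with the asymptotics required here but not necessarily the exact expression of Ref. [22]):
\[
S(r)=\left(1+\frac{\tilde L^2}{r^2}\right)^{2\tilde N}
\;\longrightarrow\;
\begin{cases}
\left(\tilde L/r\right)^{4\tilde N}, & r\to 0,\\[4pt]
1, & r\to\infty,
\end{cases}
\]
i.e., the same type of asymptotic behavior that the fluid density ρ was required to have in the ARBH construction.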
In addition, we plot the valid domains of the energy conditions of CRSBHs on the r − Ñ plane in Fig. 12. When comparing it with Fig. 9, we find that the domains of the energy conditions of CRSBHs are located outside the horizon, while those of our ARBH model lie inside the outer horizon. This is the main difference between the ARBHs and CRSBHs in the energy conditions; the other differences are similar to those mentioned above between the r − L and r − L̃ graphs.
Quasinormal modes of acoustic regular black holes
In this section, we discuss the sound propagation in the spacetime of our ARBH model. As mentioned in the Introduction, the equation of motion for an acoustic disturbance is identical [2] to the d'Alembertian equation of a massless scalar field propagating in a curved spacetime. That is, the sound propagation in our ARBH spacetime manifests itself as the propagation of a massless scalar field in an effective curved spacetime, which is described by the Klein-Gordon equation. As a result, we can analyze the stability of our ARBH model by computing its QNMs in terms of the WKB method [41][42][43][44][45], where the 6th-order WKB method is adopted in order to balance the precision and the complexity of the numerical calculations. The Klein-Gordon equation for a massless scalar field Φ in a curved spacetime is Eq. (59), where Φ represents the disturbance to the background fluid, i.e., the potential function of the acoustic waves [3]. In order to separate the variables in Eq. (59), the function Φ can be chosen as Eq. (60), where Y_l^m(θ, φ) is the spherical harmonic function of degree l and order m, and l is also called the multipole number. Substituting Eq. (60) into Eq. (59), we get the Schrödinger-like equation, Eq. (61) [22], with the effective potential, Eq. (62), where Z ≡ r√ρ and r_* is the tortoise coordinate defined by dr_* = dr/f(r). For our ARBH model, substituting Eq. (29) for the case of N = 1/2 and Eq. (33) for the case of N = 1 into Eq. (62), we write down explicitly the effective potentials; for the case of N = 1 the result involves the combination l(l + 1)/r^2 + 2(L^{12} + 5L^{10}r^2 + 10L^8r^4 + 10L^6r^6 + L^4(5r^8 + r^4) + L^2r^6(r^4 − 5) + 2r^8)/(r^2(L^2 + r^2)^6). The QNMs solved from Eq. (61) together with the effective potential Eq. (62) can be cast in the complex form ω = Re ω + i Im ω, where the real part, Re ω, represents the oscillation of the perturbation, while the imaginary part, Im ω, characterizes the dissipation of the perturbation. We use the 6th-order WKB method to provide numerical solutions. It should be noted that the WKB method requires that the effective potential V(r) have a single maximum outside the horizon and that the multipole number l be larger than the overtone number, which is taken to be zero for the fundamental mode of the scalar field perturbation [44]. We can see from Fig. 13 that our ARBH model meets this requirement.
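Since Eqs. (59)-(62) are only referenced, the following sketch shows the standard reduction for the conformal metric ds^2 = ρ(−f dt^2 + f^{-1}dr^2 + r^2 dΩ^2); it is our reconstruction, consistent with the definitions Z ≡ r√ρ and dr_* = dr/f quoted above:
\[
\frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\partial_\nu\Phi\right)=0,
\qquad
\Phi=e^{-i\omega t}\,Y_{l}^{m}(\theta,\varphi)\,\frac{u(r)}{Z(r)},
\]
\[
\frac{d^2u}{dr_*^2}+\left[\omega^2-V(r)\right]u=0,
\qquad
V(r)=f(r)\left[\frac{l(l+1)}{r^2}+\frac{1}{Z}\frac{d}{dr}\!\left(f\,\frac{dZ}{dr}\right)\right].
\]
Note that the conformal factor enters only through Z = r√ρ, which is why the potential depends on the density profile and not merely on the shape function f.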
The QNMs satisfy [45] the standard resonance formula of the 6th-order WKB method (see the sketch below), where V_0 is the maximum of the effective potential V(r) and V_0'' denotes its second derivative with respect to the tortoise coordinate at the maximum. The left and right diagrams of Fig. 14 show the change of Re ω and of −Im ω with respect to l for the cases of N = 1/2 and N = 1, respectively. We note that the real parts of the two cases have similar behaviors, and so do the negative imaginary parts. In the left diagram, Re ω depends on l linearly, and the slope is approximately 0.66 and 0.73 for the cases of N = 1/2 and N = 1, respectively. We deduce that the oscillating frequency of the case N = 1/2 is smaller than that of the case N = 1 for a fixed l, and that the difference of oscillating frequency between the two cases becomes large when l increases. In the right diagram, −Im ω has a peak at l = 2, where the peak is approximately 0.63 for the case of N = 1/2 and 0.56 for the case of N = 1; in particular, −Im ω approaches a constant when l ≥ 5, which equals 0.61 and 0.55 for the cases of N = 1/2 and N = 1, respectively. We deduce that the damping time (inversely proportional to −Im ω) of the former case is smaller than that of the latter, and that there exists a minimum damping time at l = 2 for the two cases. We further conclude that our ARBH model is more stable in the case of N = 1 than in the case of N = 1/2 for a fixed l, where the minimum damping time at the peak corresponds to the state with the least stability, and that the stability decreases quickly when l goes from one to two, increases slowly when l goes from two to five, and finally remains unchanged when l ≥ 5 for the two cases.
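For completeness, the WKB resonance condition referred to above reads, in its standard 6th-order form [41-45] (our transcription of the textbook formula, since the source's equation is not displayed):
\[
\frac{i\left(\omega^2-V_0\right)}{\sqrt{-2\,V_0''}}-\sum_{k=2}^{6}\Lambda_k=n+\frac{1}{2},
\]
where V_0 is the maximum of V(r), V_0'' is its second derivative with respect to the tortoise coordinate at the maximum, Λ_k are the higher-order WKB correction terms, and n is the overtone number (n = 0 for the fundamental modes considered here).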
In Fig. 15, we draw the results of the QNMs depending on the characteristic parameter L, where l = 3 and n = 0 are set. For the two cases of N = 1/2 and N = 1, the real parts increase while the negative imaginary parts decrease when L increases. For a fixed L, the real part of the case N = 1/2 is smaller than that of the case N = 1, which shows that the oscillating frequency of the former is smaller than that of the latter after our ARBH model is perturbed; when L becomes large, the difference of oscillating frequency between the two cases becomes large. However, the negative imaginary part of the case N = 1/2 is larger than that of the case N = 1 for a fixed L, which shows that the damping time of the former is smaller than that of the latter; when L becomes large, the difference of damping time between the two cases also becomes large. In addition, our ARBH model is stable after it is perturbed because the imaginary part is negative; it is more stable in the case of N = 1 than in the case of N = 1/2 for a fixed L, and, on the other hand, it is more stable for a larger L in both cases. Figure 15: QNMs with respect to L, where l = 3 and n = 0 are set. The left diagram represents the real parts of ω with respect to L for the cases of N = 1/2 and N = 1, respectively; the right diagram represents the negative imaginary parts of ω with respect to L for the cases of N = 1/2 and N = 1, respectively.
We also calculate the QNMs of CRSBHs and compare them with those of the ARBH. In Fig. 16, we draw the results with respect to the multipole number l for the cases of Ñ = 3/4 and Ñ = 1, respectively. In the left diagram of Fig. 16, we find that the real parts of the QNMs increase when l increases, which is similar to the behavior of the ARBH. The right diagram of Fig. 16 shows that the negative imaginary parts decrease monotonically when l increases. However, we note that the negative imaginary parts of the ARBH oscillate when l increases and reach their maximum at l = 2; see Fig. 14 for the details.
At last, we investigate the QNMs of CRSBHs with respect to L̃ and compare them with those of the ARBH. We plot Fig. 17 for the two cases of Ñ = 3/4 and Ñ = 1, where the real parts decrease while the negative imaginary parts increase when L̃ increases. For a fixed L̃, the real part of the case Ñ = 3/4 is larger than that of the case Ñ = 1, which shows that the oscillating frequency for the former is larger than that for the latter after a CRSBH is perturbed; when L̃ becomes large, the difference of oscillating frequency between the two cases also becomes large. On the other hand, the negative imaginary part of the case Ñ = 3/4 is smaller than that of the case Ñ = 1 for a fixed L̃, which shows that the damping time for the former is larger than that for the latter; when L̃ becomes large, the difference of damping time between the two cases also becomes large. Comparing Fig. 15 with Fig. 17, we find that the relative positions of the blue and orange curves are just opposite, and their changes with respect to L and L̃ are opposite, too.
Figure 16: QNMs of CRSBHs with respect to l, where L̃ = 0.45 and n = 0 are set. The left diagram represents the real parts of ω with respect to l for the cases of Ñ = 3/4 and Ñ = 1, respectively; the right diagram represents the negative imaginary parts of ω with respect to l for the cases of Ñ = 3/4 and Ñ = 1, respectively. Note that the curves of the two cases are almost overlapped.
Summary
In the present work, we construct a general ARBH model in a spherically symmetric fluid. Unlike the existing ABH model [21], whose fluid velocity diverges at r = 0, our model has a finite velocity but a divergent density, where the density plays the role of the scale factor of a conformally related BH. The fluid flow is realized with the aid of a certain external field, which may offer a possibility to produce ARBHs in the laboratory. Moreover, we give the valid domains of the various energy conditions. As we have shown in Fig. 9, the violated domains of the strong energy condition are located outside the horizon rather than inside the horizon, which may change our current knowledge of the relation between regularity and the strong energy condition. In addition, we compare our ARBH model with conformally related BHs in the aspect of energy conditions, and find the similarities and differences between the two types of BHs.
In order to study ARBHs experimentally, it is necessary to analyze the QNMs of ARBHs. Using the WKB method, we calculate the QNMs of our ARBH model characterized by Eq. (22) in the cases of N = 1/2 and N = 1. The results show that the imaginary parts of the QNMs are negative, which implies that our ARBH model is stable after it is perturbed. Moreover, the detailed features of the oscillating frequency and damping time are also given. In particular, we reveal the dependence of the stability on the characteristic length of the scale factor (the density of fluid), L, i.e., our ARBH model is more stable for a larger L. When N is larger, the oscillation is faster. In summary, we have shown that acoustic gravity can be employed as a means of studying the scalar perturbation of RBHs.
The simulation method we propose is suitable for a large class of RBHs and provides a basis for further research on Hawking radiation and superradiance. Meanwhile, there is plenty of room for improvement in our method if we strictly follow certain physical principles, such as maintaining the energy conditions, which will be reported soon in our next work.
In addition, our further considerations also focus on the divergence of the classical action in the ARBH model we construct. This issue may lead to a vanishing partition function. Since the metric of our ARBH model has a conformal structure, we try to deal with the issue in the framework of conformal gravity, where the divergence will be improved when a scalar field is introduced. This will be reported elsewhere.
A Energy conditions of CRSBHs
In this appendix, we reanalyze the energy conditions of CRSBHs which have been considered in Ref. [35]. Because the sign of the energy density is wrong in Ref. [35], all the energy conditions related to it have to be reconsidered.
A.1 The difference between T^µ{}_ν and the orthonormal-frame components
Let us start with the perfect fluid whose energy-momentum tensor takes the standard form, where g_{µν} U^µ U^ν = −1. In the rest frame, one can set U^µ = (1/√(−g_{00}), 0, 0, 0); thus the diagonalized form can be written down explicitly, namely, T^µ{}_ν = diag{−ρ_0, p, p, p}, where the 00 component of T^µ{}_ν equals minus the energy density and the trace of T^µ{}_ν equals Tr T^µ{}_ν = −ρ_0 + 3p.
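The expressions left undisplayed in this paragraph are the standard perfect-fluid formulas; for clarity we restate them (our reconstruction):
\[
T_{\mu\nu}=(\rho_0+p)\,U_\mu U_\nu+p\,g_{\mu\nu},
\qquad
U^\mu=\left(\frac{1}{\sqrt{-g_{00}}},\,0,\,0,\,0\right),
\]
\[
T^{\mu}{}_{\nu}=\mathrm{diag}\{-\rho_0,\;p,\;p,\;p\},
\qquad
\mathrm{Tr}\,T^{\mu}{}_{\nu}=-\rho_0+3p .
\]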
Alternatively, one can diagonalize T_{µν} by using orthonormal tetrads. If the metric is diagonal, g_{µν} = diag{g_{tt}, g_{rr}, g_{θθ}, g_{φφ}}, the tetrads can be chosen as e^a{}_µ = diag{√(−g_{tt}), √(g_{rr}), √(g_{θθ}), √(g_{φφ})}, so that the orthonormal-frame components read T_{âb̂} = diag{ρ_0, p, p, p}, whose 00 component is the (positive) energy density, in contrast with T^0{}_0 = −ρ_0.
A.2 The correct sign of the energy density of CRSBHs
For the CRSBHs, the metric is g_{µν} = S(r) diag{−f, f^{−1}, r^2, r^2 sin^2 θ}, where f = 1 − 2M/r is the Schwarzschild shape function and S(r) is the scale factor. The 00 component of T^µ{}_ν can be computed explicitly; the resulting energy density differs from Eq. (A.1) of Ref. [35] by a minus sign. Thus, all the related inequalities will be different. In order to analyze the SEC and DEC outside the event horizon, we analyze the model with the metric g_{µν} = diag{−(1 − r^n), (1 − r^n)^{−1}, r^2, r^2 sin^2 θ}, where n ∈ R. The SEC is then represented via ρ_0 and P_i (i = 1, 2, 3): ρ_0 + P_1 = 0, ρ_0 + P_2 = ρ_0 + P_3 = −(n − 2)(n + 1) r^{n−2}/(16π), and ρ_0 + Σ_i P_i = −n(n + 1) r^{n−2}/(8π). From Eqs. (81)-(83), we note that n must satisfy −1 ≤ n ≤ 0 in order to ensure that the SEC is satisfied. For the DEC, the binding inequality is ρ_0 − |P_2| = (n + 1)(2 − |n|) r^{n−2}/(16π) ≥ 0, which leads to −1 ≤ n ≤ 2 if the DEC is satisfied. For our ARBH model, the metric function is asymptotic to 1 − 1/r^4 at infinity, i.e., n = −4, which means that both the SEC and the DEC are violated outside the event horizon. From the point of view of Raychaudhuri's equation [47], when the expansion, rotation, and shear can be neglected, one obtains dξ/dτ = −R_{µν} v^µ v^ν = −8π (T_{µν} − T g_{µν}/2) v^µ v^ν, where ξ denotes the expansion of geodesics and τ the affine parameter. The violation of Eq. (83) implies dξ/dτ > 0, i.e., the gravity is repulsive outside the horizon for n = −4. | 2021-09-29T01:15:50.964Z | 2021-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "410e6cfd6e6505457a635cfd9bb39319f8ddbc09",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "410e6cfd6e6505457a635cfd9bb39319f8ddbc09",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
210908061 | pes2o/s2orc | v3-fos-license | Zeolite Synthesis Using Imidazolium Cations as Organic Structure ‐ Directing Agents
Featured Application: This mini-review presents a case study about the research performed on zeolite synthesis with imidazolium derivatives as OSDAs at the Molecular Sieves Laboratory (LABPEMOL, Federal University of Rio Grande do Norte - UFRN). To do so, we first describe a quick overview of imidazolium derivatives and what is being done in other laboratories, after which we summarize the main results obtained at LABPEMOL. Readers will find a bibliography to deepen their knowledge of synthesis and characterization techniques and to help design new projects involving the synthesis or application of the obtained materials based on the know-how reviewed here. Abstract: Imidazolium-derivative cations are very interesting molecules used as organic structure-directing agents (OSDAs) for zeolite synthesis, widening the possibilities of new materials and applications in this research area. In this review, the studies performed at LABPEMOL using this kind of compound are presented after a quick overview on imidazolium derivatives. The first zeolite synthesis results that started this research study were obtained with 1-butyl-3-methylimidazolium chloride. Then, the design of new OSDAs based on the imidazolium cation, such as 1,2,3-triethylimidazolium, 2-ethyl-1,3-dimethylimidazolium and 1,2,3-triethyl-4-methylimidazolium, is reported. Afterwards, the structure-direction effect caused by the introduction of heteroatoms with already-published imidazolium derivatives (for example, the Al 3+ insertion into zeolite frameworks with two different OSDAs and the silicoaluminophosphate (SAPO) synthesis using 2-ethyl-1,3,4-trimethylimidazolium cations) is discussed. Finally, we also present a quick overview of some achievements of other laboratories.
Introduction
The first synthetic zeolite obtained marked a milestone in the study of porous materials [1,2]. Since then, scientists have attempted to design quick and reliable routes for synthesizing analogues of the existing natural zeolitic topologies [3,4]. However, this scientific marathon did not end here, as another ongoing major effort has been made to find new structures [5][6][7]. The question this poses is how this challenge is faced. To answer this question, we need to go back to that first synthesis procedure performed with hydrothermal technology [3]. This procedure uses a mixture of silicon and aluminum sources that are dissolved in an aqueous medium (hydro, the solvent) to later undergo a high temperature treatment (thermal). These reagents were (and are) difficult to dissolve in water, which is the reason for alkali and alkaline-earth metal hydroxides addition, giving rise to the need for a mineralizing agent and basic media to synthesize zeolites. In addition, different combinations of these reagents were tested, and different zeolitic topologies were obtained [1,8]. At the same time, zeolitic scientists observed that changing the cation of the hydroxide in the same synthesis (e.g., replacing sodium hydroxide with potassium hydroxide) forced variations in the crystal parameters of the final material, even to the point of synthesizing a completely different topology. This behavior justified the incorporation of the concepts templates, structure-directing agents (SDAs), and pore-fillers, which were differentiated based on the type of interactions present in the structure [9].
Over time, scientists began to study the addition of organic compounds to the synthesis procedure [10][11][12][13][14]. However, what popularized these organic compounds was the synthesis with HF (hydrofluoric media) to replace the hydroxides. Zeolites such as pure silica sodalite, ZSM-5, silicalite-1, and silicalite-2 were obtained. Since then and until now, these compounds have been studied for synthesis applications.
A very interesting type of organic compound is ionic liquids. These molecules are characterized by a low melting temperature and low vapor pressure [15]. It is precisely these properties that justify their application for synthesizing zeolites. Ionic liquids can act as an SDA and solvent. Then, we can use ionic liquids in hydrothermal synthesis with water as the solvent and, also, in a new type of synthesis method, the ionothermal, in which the ionic liquid role is twofold, an SDA and a solvent [16,17]. This mini-review is focused on that first situation, but we recommend reading other reviews or book chapters, such as those from the Morris group, regarding the second type of synthesis [18,19].
Among the various ionic liquids there are the imidazolium derivatives, which are usually synthesized via the quaternization of one of the Ns of the corresponding imidazole derivative [20,21]. These compounds have a certain degree of hydrophobicity, which helps to explain the role they represent in synthesis, as for example described in the articles by Rojas et al. and Variani et al. [22,23]. Ionic liquids are also easy to use and possible to recycle. Ionic liquids possess diverse possible substituents and, as a consequence, wide synthetic possibilities [24][25][26]. In the first example, an ionic liquid was used to synthesize an extra-large pore zeolite by adjusting the supramolecular chemistry, possibly thanks to the use of imidazolium derivatives. Subsequently, the second example showed how an imidazolium derivative helped produce a high silica zeolite Y thanks to the interactions of the OSDA-framework. The last example described a fast ZSM-22 synthesis procedure. For these reasons, great interest has been generated.
The use of ionic liquids in the synthesis of zeolites began with the group of Prof. Dr. Morris, and they currently continue to carry out studies in this regard [27,28]. Ultimately, these materials led to the synthesis of more germanosilicates using imidazolium derivatives, as reported by Prof. Dr. Davis's group and Prof. Dr. Corma and collaborators [6,29,30], and in turn, these ionic liquids were studied by Prof. Dr. Morris' group to achieve a new synthesis route: ADOR (assembly-disassembly-organization-reassembly) [31,32]. Through this process, some of the hypothetical structures that were once called unfeasible have been synthesized, which was not possible through traditional methodologies [33].
Returning to the pure silica composition, we find that Prof. Dr. Camblor's group also excels in this type of synthesis, managing to stabilize the chiral topology STW, also thanks to the use of the hydrofluoric medium [7]. Afterwards, we will describe a study on introducing aluminum into this structure, which was performed at LABPEMOL [34]. Note that further studies on the germanosilicate STW zeolite were published by Prof. Dr. Camblor's group [35,36]. Furthermore, this topology led to a major breakthrough in zeolite synthesis through the enantiomeric resolution achieved by Prof. Dr. Davis's group [37].
Another study that has also helped to understand the performance of imidazolium derivatives (and other OSDAs) used in synthesis was that conducted by Prof. Dr. Pérez-Pariente's group.
Through computational experiments, this group was able to predict the dimeric OSDA aggregates formed in the course of zeolite synthesis, which help to direct the synthesis toward one topology or another [23,38,39]. We interject here to highlight the importance of computational studies in the field of zeolites [40]. Computational studies have been applied to try to predict and/or explain some of the behaviors found during synthesis and in applications. There are studies that model nucleation and template mechanisms [41,42], template interactions that occur when forming dimers [39], and adsorption isotherms [43]. It is worth noting that there are studies that predict new structures, such as those used to build the database of hypothetical structures, as mentioned previously [44]. Finally, the latest attempt to explain interzeolitic transformations and intergrowth has recently been published by Schwalbe-Koda et al. using graph theory [45]. In short, these types of studies are important to explain behaviors and help define new experiments.
There are several other groups working with imidazolium derivatives [46][47][48][49], but still, there is much more to do. Due to the results obtained with this kind of compound, Prof. Dr. Corma and collaborators proceeded to study P-containing [50][51][52] and As-containing [53] OSDAs and initiated new lines of study that have much to offer.
Thus, in this review, we will describe a case study about the establishment of a research line on using imidazolium derivatives for synthesizing zeolites at LABPEMOL. Without elaborating on the general aspects of these compounds, which was previously described in a general review from our laboratory [54], we will carry out a comparative description of our work to help the reader, whether they belong to a university or industry, to develop research on the synthesis with and/or application of these compounds.
LABPEMOL Case Study: First Synthesis Method Using Imidazolium Derivatives
The first attempt to synthesize zeolites with compounds derived from the imidazolium cation culminated in the publication of an article in 2009 in collaboration with the Porto Alegre (Brazil) group led by Dr. Roberto F. de Souza and Dr. Katia Bernando-Gusmão [55]. This article described the use of 1-butyl-3-methylimidazolium chloride (Figure 1) as the OSDA and basic media as the mineralizing agent. The ZSM-5 zeolite, belonging to the MFI topology and possessing an interesting spherical morphology (Figure 2), was synthesized in 3 days using a high Si/Al ratio (SAR, 50 or 100) and high temperatures (180 °C). However, a lower SAR (20) and temperature (150 °C) directed the structure toward a beta zeolite, corresponding to the *BEA topology (Figure 3). Both topologies feature three-dimensional channel systems and similar pore sizes (between 5.1 and 5.6 Å) [8]. However, the MFI topology, in addition to being orthorhombic and not tetragonal like the *BEA topology, has a higher framework density (FD; MFI = 18.4 T/1000 Å3, *BEA = 15.3 T/1000 Å3). Precisely, this high FD was favored by the higher temperature used and followed the same tendency observed when synthesizing without an OSDA, as published by Maldonado et al. [56]. In addition to this, when introducing more aluminum cations into the framework, more crystallographic distortion should occur due to the difference between the ionic radii of the aluminum and silicon cations [57]. Figure 3. *BEA topology represented with the program Mercury from the CIF, which was provided by the International Zeolite Association (IZA) [8,58]. The oxygen atoms are red, and the silicon/aluminum atoms are in blue.
With the same OSDA and fixing the temperature (160 °C) and the H2O/Si ratio (25), the synthesis studies continued using basic media and obtained the pure silica TON zeolite and the hetero-substituted Ti-TON [59]. The samples were synthesized both under static conditions and under rotation and were completely crystalline after 1 day of synthesis (Figure 4). This work showed for the first time that titanium cations were introduced into the TON framework in basic media, representing a breakthrough for LABPEMOL.
In comparison with the MFI structure of the previous publication, we observed a similar FD (18.1 T/1000 Å3) and the same crystallographic system (orthorhombic). However, the TON topology has a structure with a one-dimensional channel system. Most likely, some kind of relationship can be established with the number of OSDA molecules found per unit cell, but unfortunately the data presented in these two articles are not sufficient to make that comparison. Completing the experiments with this organic cation, Pergher's group decided to combine the knowledge acquired from the first synthesis attempts in hydrofluoric media with the history of LABPEMOL [60]. To do this, the organic cation had to be exchanged into its hydroxide form. Once this was achieved, the synthesis was performed at 160 °C, and the pure silica TON topology was obtained with a H2O/Si ratio of 14. However, when introducing Ti under the same synthetic conditions, only amorphous products were identified, even when using seeds. A zeolitic material, Ti-MFI, was only obtained by decreasing the H2O/Si ratio to 7. This result was explained with Villaescusa's rule, which states that concentrated gels lead to less dense zeolitic topologies [61]. Even so, it remains unknown whether Ti-TON would have been obtained in a hydrofluoric medium with the same ratio that was used in the basic medium (H2O/Si = 25).
As a novelty, this last work also incorporated the first study in cooperation with Dr. Luis Gómez-Hortigüela, showing that the most stable location of the OSDA positioned the aromatic ring at the channel intersections and the alkyl chains in the sinusoidal channels (interaction energy −328.2 kcal/mol per unit cell).
LABPEMOL Case Study: New OSDAs Based on the Imidazolium Cation
The latest work carried out with the OSDA 1-butyl-3-methylimidazolium opened a plethora of synthetic possibilities at LABPEMOL, which led to a change in course, now focusing on synthesizing zeolites in hydrofluoric media. The direction experiments were started using pure-silica zeolite conditions and concentrated gels, and the hydroxide forms of the OSDAs 1,2,3-triethylimidazolium, 2-ethyl-1,3-dimethylimidazolium, and 1,2,3-triethyl-4-methylimidazolium (Figure 5) were studied [23]. The STF topology was primarily obtained with all of the OSDAs, while the MFI topology was only identified when using the cation 1,2,3-triethylimidazolium (Figure 6). Dense tridymite-like phases were also found with the cation 1,2,3-triethyl-4-methylimidazolium. These results also followed Villaescusa's rule; however, in this case, the second variable was the OSDA (the first variable being the H2O/Si ratio). (Figure caption fragment [8,58]: the oxygen atoms are in red, and the silicon/aluminum atoms are in blue.)
Another difference that was observed was the number of OSDA molecules per unit cell. Regardless of the chosen cation, the STF topology featured 2 imidazolium cations per unit cell, while the MFI topology featured 4 imidazolium cations per unit cell. With these data, and again in collaboration with Dr. Luis Gómez-Hortigüela, a more comprehensive computational study was carried out. This study began by examining the different isolated cations to determine their most stable conformations. Then, the interactions of the different cations with the crystalline MFI and STF topologies were studied, regardless of whether or not that phase was obtained with all the OSDAs. An interesting result was how the symmetry required by each cation in the MFI structure varied, allowing some combinations to be discarded based on the incompatibility of some of the symmetries with this topology. This was not found when studying the cation-STF interactions, which explained the structure-direction strength of each OSDA based on their sizes.
Following the initial reasoning of the works with 1-butyl-3-methylimidazolium, the effect of titanium incorporation on the synthesis was studied with the OSDA 1,2,3-triethyl-4-methylimidazolium, which was chosen because it directs only toward the STF topology [62]. The resulting products also belonged to the same zeolitic topology, regardless of the chosen Si/Ti ratio. Nevertheless, we cannot conclude that Ti only directs toward STF, as the only H2O/Si ratio tested was 3.5. Further study is still needed.
LABPEMOL Case Study: Al Insertion into the Zeolitic Framework
Zeolitic aluminosilicates were studied again, but this time from a different perspective. The research works that followed focused on the introduction of Al 3+ cations into known structures synthesized with imidazolium derivatives in hydrofluoric media.
The tests initially started with the STW topology, particularly the HPM-1 zeolite [34]. The great interest generated by this chiral zeolite led us to perform a more advanced characterization through the ethanol dehydration reaction. The great advantage of this model reaction is that the catalysis products indicate the acidic and/or basic character of the zeolite used [2]. If the zeolite has acidic sites, ethylene and diethyl ether are obtained, whereas a basic material produces acetaldehyde. In our work, which introduced Al 3+ into HPM-1, we only observed products from acidic sites and a greater selectivity to diethyl ether (Figure 7). This work also marked the beginning of a collaboration with Dr. Enrique Rodríguez-Castellón, a specialist in X-ray photoelectron spectroscopy (XPS) and HPDEC MAS NMR. Thus, we managed to carry out some interesting studies on differentiating the SAR between the bulk (SARNMR) and the surface (SARXPS). The work describing the introduction of Al 3+ into the STF structure was where we most clearly appreciated the differences (Table 1): we noted that much of the tetrahedral aluminum was located in the outermost layer of the zeolitic crystals, and we did not observe large differences between the calcined and non-calcined samples in this regard.
LABPEMOL Case Study: SAPO Synthesis Using Imidazolium Derivatives
Until that moment at LABPEMOL, aluminosilicates, titanosilicates, and pure silica zeolites were studied in both basic and hydrofluoric media. As shown in the literature, there were some studies with imidazolium derivatives that gave interesting results in the synthesis of silicoaluminophosphates (SAPOs), and we performed a series of experiments in this regard [64]. As the OSDA, we chose the compound 2-ethyl-1,3,4-trimethylimidazolium hydroxide, and we chose hydrofluoric acid as the mineralizing agent. This combination already produced interesting results when a pure silica composition was selected, as the chiral zeolite STW was obtained [7,37]. However, as an SAPO, we obtained two different topologies: CHA and LTA. The SAPO-CHA possesses t-hpr tilings, among others, and SAPO-LTA possesses t-cub tilings, also among others. Both types of natural tilings are directed by fluoride anions. Apart from that fact, these two SAPO types have no other structural similarities. Then, to understand the results, we performed a statistical study, showing some results regarding the SAPO-CHA in Figure 8. In the Pareto chart, we observed that the temperature (T) was a decisive parameter for obtaining the SAPO-CHA and had the strongest effect when the water ratio (xH2O) was combined with the synthesis time (t). For the SAPO-LTA, we were unable to quantify the effects due to the small amount of material that was obtained.
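To illustrate how the effects behind such a Pareto chart can be estimated, the sketch below computes main and two-factor interaction effects for a two-level factorial design. It is a generic, minimal example: the factor labels T, xH2O and t, the coded levels, and the response values are hypothetical and are not the actual LABPEMOL data.

import itertools
import numpy as np

# Coded factor levels (-1 = low, +1 = high) for a hypothetical 2^3 design:
# columns are T (temperature), xH2O (water ratio) and t (synthesis time).
factors = ["T", "xH2O", "t"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs

# Hypothetical response for each run (e.g., relative SAPO-CHA crystallinity).
y = np.array([5.0, 12.0, 6.0, 15.0, 30.0, 70.0, 33.0, 80.0])

# Main effect of a factor = mean response at +1 minus mean response at -1;
# an interaction effect uses the product of the two coded columns instead.
effects = {}
for name, col in zip(factors, design.T):
    effects[name] = y[col == 1].mean() - y[col == -1].mean()
for (i, a), (j, b) in itertools.combinations(enumerate(factors), 2):
    col = design[:, i] * design[:, j]
    effects[a + "*" + b] = y[col == 1].mean() - y[col == -1].mean()

# Pareto-style ranking by absolute effect size.
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:10s} effect = {eff:+6.2f}")

Ranking the absolute effects in decreasing order is essentially what a Pareto chart displays; a significance threshold (e.g., from a t-test or Lenth's method) would normally be added before drawing conclusions such as the dominant role of temperature.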
Returning to the SAPO-CHA, an interesting fact was the presence of hexacoordinated aluminum in the structure (Figure 9), which was probably due to the interactions of the aluminum cations with the fluoride anions. Concluding this study, it was shown that there is a need to deepen the understanding of the LTA topology, including when the LTA contains phosphorus, and that the statistical studies used for the structure-direction analysis were very useful.
Conclusions and Prospects
Imidazolium cation derivatives are promising compounds both in the search for new zeolitic structures and in the synthesis of the already known ones. The ease of using and recovering imidazolium cation derivatives highlights their potential in zeolite synthesis research. In this review, we have shown several study possibilities that were carried out by establishing this research line at LABPEMOL. There is a need for systematization in order to study the products resulting from the different synthesis methods with imidazolium derivatives. The use of statistical methods, such as that performed with the SAPOs, or of computational studie (for example those used to study the new OSDAs), may help in this sense. Further studies of the zeolite chirality are needed to translate this knowledge into industry. It is also interesting to perform experiments in relation to the recovery of the OSDAs and the use of cheaper alternative silica and aluminum sources. Finally, new OSDAs and new synthetic methods are needed to achieve other hypothetical topologies that are still waiting to be synthesized. We hope that showing the history of the laboratory, in this regard, will help other research groups begin similar studies. | 2020-01-02T21:45:39.672Z | 2019-12-31T00:00:00.000 | {
"year": 2019,
"sha1": "7df7074d61299acce696d7709e379de9f578e836",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/1/303/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d342781e8ac37e8c86989c57b7ebdfbe043ba6d5",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
110031107 | pes2o/s2orc | v3-fos-license | THE DESIGN OF DATA DISASTER RECOVERY OF NATIONAL FUNDAMENTAL GEOGRAPHIC INFORMATION SYSTEM
: With the development of information technology, the data security of information systems is facing more and more challenges. The geographic information of surveying and mapping is a fundamental and strategic resource, which is applied in all areas of national economic, defence and social development. It is especially vital to national and social interests when such classified geographic information directly concerns Chinese sovereignty. Several urgent problems need to be resolved for surveying and mapping: how to handle mass data storage and backup, how to establish and improve a disaster backup system (especially after sudden natural disasters), and how to ensure that all parts of the information system can be rapidly restored and operate correctly. To overcome various disaster risks, protect data security and reduce the impact of disasters, the effective way is undoubtedly to analyze the features of data storage and management and the security requirements, and to ensure that the design of the data disaster recovery system is suitable for surveying and mapping. This article analyzes the features of fundamental geographic information data and the requirements of storage management, and presents a three-site disaster recovery plan for the DBMS based on widely used network, storage and backup, data replication, and remote application-switching technologies. Within the LAN, data are replicated synchronously between the database management servers and the local storage backup management system; simultaneously, data are replicated asynchronously from the local storage backup management system to the remote database management servers. The core of the system is recovering from a local disaster at the remote site, ensuring data security and business continuity for the local site. This article focuses on the following points: the background, the necessity of a disaster recovery system, the analysis of the data products, and the data disaster recovery plan. A feature of this plan is the use of hardware-based data hot backup and remote online disaster recovery support for the Oracle database system. The contribution of this paper is to summarize and analyze the common disaster recovery requirements of surveying and mapping business systems and, based on the actual situation of the industry, to design a disaster recovery solution for the fundamental GIS; conclusions about the key technologies of RTO and RPO are also given.
INTRODUCTION
With the rapid development of information technology, aerospace technology and network technology, China's surveying and mapping is seeing an unprecedented evolution. The all-digital mapping technology system, built on satellite positioning systems, RS, GIS and computer technology, has completely replaced the traditional analogue surveying and mapping technology system. With the rapid development of the informatized surveying and mapping technology system (Liu, 2012), the data security of geographic information systems is, at the same time, facing more and more challenges.
The geographic information of surveying and mapping is a fundamental and strategic resource, which is applied in all areas of national economic, defence and social development.
In particular, the classified geographic information directly concerns Chinese sovereignty and national and social interests. Once such data cannot be recovered, great harm will be caused (Zhai, Liu, 2008).
At present, the classified geographic information stored in the GIS databases includes data at 1:4,000,000, 1:1,000,000, 1:250,000 and 1:50,000 scales, the mapping of western China, island surveying, the geodetic datum and so on. Over more than 30 years, these data have accumulated gradually to form the geographic information that is the foundation of the surveying and mapping geographic information industry.
Several urgent problems need to be resolved in this field: mass data storage and backup, establishing and improving the disaster backup system, especially after a sudden natural calamity or accident, and ensuring that all parts of the information system can be rapidly restored and operate correctly. For overcoming various disaster risks (Xu, 2013), protecting the security of data and reducing the impact of a disaster, the effective way is undoubtedly to design and establish a data disaster recovery system.
NECESSITY
With the advent of the era of big data, data are increasingly important. According to IDC predictions, by 2020 the global amount of data will reach 35 ZB (Gong, 2012). Large amounts of structured, unstructured and semi-structured data will come to pervade every aspect of our lives. Subsequently, the issue of data protection is becoming increasingly prominent, and a variety of catastrophic events have given the industry a wake-up call.
The American "9.11" incident caused the collapse of the trade building, after the incident, the Deutsche Bank and Bank of New York had quite distinct outcome.Deutsche Bank business recovered rapidly because of establishing the center of remote disaster recovery.On the contrary, after a few months Bank of New York forced liquidation due to the loss of data.From "9.11" incident, Kobe earthquake in Japan, Southeast Asia Tsunami and 5.12 Wenchuan earthquake, previous disasters that constantly waken people the importance of disaster backup system (Wei ,2013).
As early as 2003, state authorities issued the "National Informatization Leading Group advice on strengthening information security work", which explicitly required not only fully considering the disaster recovery capabilities of important information systems, but also developing and continuously improving information security emergency response plans (Wei, 2013).
At present, the national fundamental GIS has not yet built an effective disaster recovery mechanism; once the system suffers an unpredictable incident, the software and hardware of the database management system will crash, leading to serious consequences and causing irreparable damage to the country. In order to protect the data security of fundamental geographic information and ensure the smoothness of dissemination services and emergency work, establishing a disaster recovery system is necessary.
ACHIEVEMENTS ANALYSIS
Currently, a variety of achievements total 562TB, including national geodetic surveying achievements, national fundamental geographic information data achievements, other thematic data, and highly current, frequently accessed RS achievements. Among them, the RS data reach 549TB. These data are used more frequently than those of other areas: they cover an area of 3.5 million square kilometres, with currency from 2007 to 2013, covering the majority of China's economically developed areas, populated areas, disaster-prone areas and other key areas. In addition, according to the geographical national conditions surveillance schedule, the various types of achievements, such as the geographical national conditions achievements and the basic-scale databases, will increase by an annual average of about 300TB over the next five years. About 120TB of these achievements per year, namely high-resolution imagery, grid and vector data for focal monitoring areas, will use online storage, and the remaining 180TB of data will use near-line storage. Therefore, the new online data achievements over the next five years will total 600TB. Based on mapping production, support for emergency mapping and the requirements of distribution services, over 1162TB (562TB + 600TB) of data outcomes in total will use online storage.
According to the actual demand, an online disaster recovery system is needed to protect the core fundamental GIS data achievements, including the achievements at 1:250,000 and 1:50,000 scales, island surveying, geographical national conditions and the geodetic datum. As of early 2013 these data amounted to about 20TB, and they are expected to increase to 40TB after five years. The above core data achievements are mainly stored and managed using an Oracle relational database.
Other data are mainly RS achievements that can readily be regenerated, and these use offline remote storage and recovery.
Design Principle
The design of the disaster recovery system sticks to the principles of economy and advancement: (1) Oriented to mature network technology, correctly handling the contradiction between the narrow bandwidth of the WAN private network and the large spatial data storage capacity.
(2) Oriented to advanced storage technology, designing an economical remote data transfer mode to meet the requirements of large volumes of spatial data and high I/O loads.
(3) Oriented to system integration technology, designing a scientific technology integration and a reasonable configuration of related hardware and software products.
System Architecture
The national fundamental geographic information database is constructed using layered datasets, and the spatial data warehouse needs to establish spatial indexes and computed attribute relations. For example, in the database storage of NGCC, organizing the core data into layered datasets produces on average about 4GB of data per day, which is taken herein as the basis for the analysis.
In order to ensure the integrity of fundamental geographic information data and the continuity of services, a hot standby site is established at a remote location, which can back up and recover data via the IP network. That is, local site data are backed up to the remote site through the IP network using a combination of synchronous and asynchronous replication; when a disaster occurs at the local site, data can be restored from the remote site, or the remote site can take over the service in place of the local site. The system architecture is shown in Fig. 1. RPO (Recovery Point Objective) mainly refers to the amount of data loss that the business system can tolerate. RTO (Recovery Time Objective) mainly refers to the maximum out-of-service time that the service can tolerate, or the minimum time required to recover the service after a disaster.
RPO and RTO are estimated as follows (a short sketch after item (2) below reproduces this arithmetic). (1) RPO is approximately 100MB-4GB. The system uses two steps to recover data: synchronous replication within the local site and asynchronous replication between the local and remote sites. The achievements need to be replicated in accordance with the offsite replication strategy after every 100MB of achievements is stored; combined with the secrecy WAN bandwidth of 20Mbps (an actual transfer rate of about 2MBps), then: 1) the single remote data replication time = 100MB/2MBps = 50 sec; 2) the single-day remote data replication time = 4GB/2MBps = 2000 sec. With remote data transmission based on the 20Mbps secrecy WAN bandwidth, 168.75GB (2MBps × 3600 × 24) of data can be transmitted in a single day, far more than the 4GB of data generated by construction in a single day. Therefore, the WAN is fully able to transfer a single day's stored data within the same day, the transmitted data do not form delays or accumulation, and the data lost due to a disaster will not exceed one day of stored data.
Based on the above analysis and calculation, the RPO can be estimated as 100MB-4GB.
(2) RTO is approximately 15-30 min. The system adopts CDP (continuous data protection) technology, and the incremental data are maintained at the remote site. Meanwhile, the data copy relationship between the local site and the remote site can be separated when a disaster happens at the local site. About 15-30 min are spent recovering the database from the remote site, which then takes over the service quickly. After the hardware and software systems of the local site return to normal, the local site data are recovered from the remote site.
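The bandwidth arithmetic above can be reproduced with a few lines of code. The following Python sketch is purely illustrative (it is not part of the original design document) and uses the figures stated in the text: a 100MB replication chunk, about 4GB of new data per day, and an effective WAN transfer rate of about 2MBps.

```python
# Illustrative check of the RPO bandwidth arithmetic stated above.
WAN_RATE_MB_PER_S = 2.0      # effective transfer rate of the 20 Mbps secrecy WAN, in MB/s
DAILY_GROWTH_GB = 4.0        # average daily growth of the core database
REPL_CHUNK_MB = 100.0        # remote replication is triggered after every 100 MB stored

chunk_time_s = REPL_CHUNK_MB / WAN_RATE_MB_PER_S                 # 50 s per 100 MB chunk
daily_repl_time_s = DAILY_GROWTH_GB * 1024 / WAN_RATE_MB_PER_S   # about 2000 s per day
daily_capacity_gb = WAN_RATE_MB_PER_S * 3600 * 24 / 1024         # 168.75 GB per day

print(chunk_time_s, daily_repl_time_s, daily_capacity_gb)
# The daily WAN capacity (168.75 GB) far exceeds the daily growth (4 GB), so replication
# does not accumulate a backlog and the data at risk stays between one 100 MB chunk and
# one day of stored data, i.e. the RPO of 100 MB - 4 GB derived above.
```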
System Composition
The national fundamental geographic information data disaster recovery system is mainly composed of the WAN, the local disaster recovery subsystem, the remote disaster recovery subsystem and the remote data replication strategy.
WAN:
The design rents a dedicated 20Mbps communication line between the local disaster recovery subsystem and the remote disaster recovery subsystem, using an SDH (Synchronous Digital Hierarchy) optical fibre line that supports the TCP/IP protocol; the transmission efficiency is about 90%, and the effective rate can reach about 1.8MBps (6.33GB per hour).
Subsystem Equipment Allocation:
The local disaster recovery subsystem is responsible for the daily management of the data achievements under the disaster recovery strategy. The remote disaster recovery subsystem is responsible for receiving the achievements, recovering the data achievements in case of a disaster, and providing an alternative to the local service in case of emergency.
The local disaster recovery subsystem is mainly composed of hardware and software. The hardware part mainly consists of 1 disk array, 1 storage virtualization device, 1 CDP device, 2 SAN (storage area network) switches and 1 dedicated remote backup server. The software part mainly consists of one set each of database management software, GIS software and spatial data engine management software.
The remote disaster recovery subsystem is mainly composed of hardware and software. The hardware part mainly consists of 1 data management server, 1 disk array, 1 CDP device, 2 SAN switches and 1 tape library. The software part mainly consists of one set each of database management software, GIS software and spatial data engine management software.
The storage virtualization equipment virtualizes the storage layer through the SAN and shields it from the hosts. Real-time data replication between the production data of the local site and the data of the local disaster recovery database system is realized through traffic mirroring. When the disaster recovery database system at the local site accepts new write I/O, the CDP equipment, using its data splitting feature, asynchronously replicates the updated data through its CRR (continuous remote replication) feature to the disaster recovery database system at the remote site.
Disaster Recovery Strategy:
With a combination of hardware and software, the data stored in the SAN are replicated to the remote site through the IP network; once a disaster occurs, data can be recovered from the local or remote site depending on the nature of the catastrophe, or the remote site can be enabled directly for data services.
The data disaster recovery system is oriented to the database management system (DBMS) in a three-site disaster recovery framework: synchronous data replication is performed between the local site production database (first site) and the local disaster recovery database (second site), and then asynchronous remote data replication is performed between the local disaster recovery system and the remote backup database (third site).
(1) Within Local Site Data Replication
For the existing data achievements in the local production database, the storage virtualization equipment is used to migrate the achievements from the existing storage equipment to the disaster recovery storage equipment.
For the newly generated data achievements in the local production database, the data replication relationship within the local site is established first; that is, within the SAN environment, the storage virtualization equipment establishes a traffic mirroring relationship between the production disk array volumes and the disk array volumes of the local disaster recovery subsystem, so that data updates are replicated synchronously. This data replication relationship is built on the SAN (16Gbps) network, enabling real-time replication of data increments.
(2) Remote Data Replication. After replication of the local site data achievements is completed, the local and remote disaster recovery subsystems establish a volume-group replication relation, through the data splitting and CRR features provided by the continuous data protection equipment, for asynchronous remote data replication.
Based on the CRR feature of the CDP equipment, point-in-time recovery can be provided for the RPO while achieving complete site disaster protection, meeting the RTO with zero data loss, and significantly reducing bandwidth requirements via bandwidth-reduction techniques.
When the local site breaks down, one can choose to recover data and applications from a continuous time point in the local disaster recovery subsystem, or from an important point in time in the remote disaster recovery subsystem; in extreme cases, the remote site database system and associated systems can be enabled to provide data services in place of the local site.
Key Technology
The national fundamental geographic information data disaster recovery system is built as application-level disaster recovery on the basis of remote storage backup of the data, using a hot-standby disaster recovery solution that combines hardware-based synchronous and asynchronous replication. The key technologies include storage virtualization, continuous data protection and CRR.
Storage virtualization technology achieves uninterrupted storage within the local site across different types of storage from different manufacturers; data movement is transparent to users, which simplifies data movement processes and improves their efficiency.
Restoration to any point in time based on the IP network, combined with the CDP and CRR technologies, achieves application-level disaster recovery between the local and remote sites. All data changes are recorded in the log and given a time tag, so data recovery can be rolled back to a consistent point in time.
When the data are rolled back, the application is also recovered consistently, which ensures an application-level disaster recovery implementation. The system also implements two-way replication between the local and remote sites, and allows each site to act as the recovery site.
CONCLUSION
This article analyses the features of fundamental geographic information data and the requirements of storage management, and designs a DBMS disaster recovery system plan. The plan employs mainstream technologies of the disaster recovery industry, with good compatibility, scalability, robustness and economy. The system ensures data security and the smoothness of business.
Figure 1 .
Figure 1. Data disaster recovery system structure
Design Index
The data disaster recovery system mainly guards against the non-availability of data and applications brought about by irresistible or unpredictable catastrophic events, and minimizes the losses caused. Two technical indicators, RTO and RPO, are used to measure the system.
Figure 2 .
Figure 2. Data disaster recovery system topology | 2018-12-07T12:25:59.698Z | 2014-04-23T00:00:00.000 | {
"year": 2014,
"sha1": "c6cce8943c54eb5c289dc1563701378b41076e81",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-4/353/2014/isprsarchives-XL-4-353-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c6cce8943c54eb5c289dc1563701378b41076e81",
"s2fieldsofstudy": [
"Computer Science",
"Geography",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
246297090 | pes2o/s2orc | v3-fos-license | Space-Occupying Intraventricular Vascular Lesion in Tuberous Sclerosis Complex
Intraventricular Vascular Lesion in Tuberous Sclerosis Complex To the Editor: We read your Neurologist case report “Hemorrhage into a subependymal giant cell astrocytoma in an adult with tuberous sclerosis” published in July 2021,1 and found it very interesting. In the article, Barbiero and colleagues presented a case of intratumoral hemorrhage of a subependymal giant cell astrocytoma (SEGA), which is a rare phenomenon, in an adult diagnosed with tuberous sclerosis complex (TSC). In fact, there is a similar case of TSC of our own whereby we are glad to discuss the disease further in this short letter. TSC is a rare condition that can affect virtually any organ in the body,2 first reported in 1862 by von Recklinghausen,3 tentative diagnostic criteria had been made and improved now and then based on the
To the Editor:
We read your Neurologist case report "Hemorrhage into a subependymal giant cell astrocytoma in an adult with tuberous sclerosis" published in July 2021, 1 and found it very interesting. In the article, Barbiero and colleagues presented a case of intratumoral hemorrhage of a subependymal giant cell astrocytoma (SEGA), which is a rare phenomenon, in an adult diagnosed with tuberous sclerosis complex (TSC). In fact, we have a similar case of TSC of our own, and we are glad to discuss the disease further in this short letter.
TSC is a rare condition that can affect virtually any organ in the body, 2 first reported in 1862 by von Recklinghausen. 3 Tentative diagnostic criteria have been made and improved over time based on the progression of our understanding of this peculiar disease. The Updated Diagnostic Criteria for TSC 2012 is the most recent and generally accepted scheme for TSC diagnosis to date. 2 Subependymal nodules (SENs) and SEGA are considered to be the major characteristics of the intracranial lesions of TSC, and both were explicitly made diagnostic for this rare genetic disease in these criteria. 2 Setting aside the correlation between SEN and SEGA for now, however, an interesting non-SEGA but SEGA-like intraventricular vascular lesion, assumed to be a vascular malformation, has never been reported, and it sits uneasily with the current understanding of TSC.
We would like to share a unique case of a tumor-like intraventricular vascular lesion, instead of a SEGA, in a TSC patient. In 2010, a 19-year-old boy desperately sought surgical treatment of a "brain tumor" for intermittent attacks of seizures over the preceding 11 years, which were marginally controlled by multiple medications yet remained repeatedly recurrent. Visually, he had fairly recognizable facial angiofibromas and ungual fibromas (Fig. 1A); meanwhile, computed tomography and magnetic resonance imaging revealed multiple bilateral renal cysts (Fig. 1B) as well as SENs in his head (Fig. 1C). These findings were fully in compliance with the 2012 Criteria, with the presence of at least 3 major (facial angiofibromas, ungual fibromas, and SENs) and 1 minor clinical feature (renal cysts). However, a space-occupying lesion near the left foramen of Monro led us to a diagnosis of SEGA, which seemed the only rational conclusion. Fresh preoperative magnetic resonance imaging at that time indicated heterogeneous enhancement within the "tumor" as well as prominent growth compared with 7 years earlier (Fig. 1C). A frontal transcortical tumorectomy was scheduled several days later, which went well with a complete resection of the lesion. No signs of epileptic relapse were found postoperatively. Unfortunately, however, the patient suffered from an infection of the central nervous system, which left us no choice but to apply sustained lumbar cistern drainage, which subsequently proved effective.
Surprisingly, the pathologic examination showed an unexpected result: the majority of the specimen consisted of vasculopathy rather than tumoral cells (Fig. 1D). Clotted blood and an abnormal vascular plexus were found at the site; the pathologic diagnosis was ultimately determined to be an intraventricular vascular lesion with hemorrhage, and the patient was discharged before long, with an optimistic expectation of a good recovery. The patient remains well at the time of writing, without any sign of recurrence of the intraventricular vascular lesion, after a follow-up of over 10 years.
We regret that genetic screening for TSC1/TSC2 mutations was not conducted to aid in the confirmation of TSC, but the clinical findings alone were sufficient for the diagnosis: this is a TSC patient. The question remains, however: is the mass in his head a SEGA?
In this case, we believe the pathologists have the final say on the nature of this "tumor", and when the facts go "wrong," it is time to challenge traditional ideas. Both SENs and SEGAs have been extensively reported and are frequently detected prenatally or at birth, 2 but usually before the age of 20 years. 4 Up to 80% of TSC patients have SENs, while SEGAs are less common (5% to 15%), and it is believed that SEGAs derive from SENs in many cases. 4 It is widely accepted that SEGAs typically arise from SENs, especially near the foramen of Monro. 2 Our patient had SENs, but that does not mean the mass "has to" be a SEGA, not necessarily. Thus, a non-SEGA intraventricular lesion with hemorrhage in a TSC patient should, we consider, add to the diversity of the central nervous system features of TSC. To say the least, hemorrhage within a SEGA is still exceedingly rare. 1 Our case is a rare phenomenon in a rare, complicated disease. It argues that the intracranial lesions of TSC may be diverse, not limited to SEN or SEGA, with mechanisms awaiting exploration. More attention should be paid to this matter. As neurosurgeons and neurologists, we might as well re-inspect the essence of TSC's central nervous system manifestations and our apprehension of its natural history.
"year": 2022,
"sha1": "f145edabff5e170819853729c20ae64ec935713d",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/theneurologist/Citation/9000/Space_occupying_Intraventricular_Vascular_Lesion.99942.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5051a07d48afd83d60f1b0b5ba2ee0450a2ba573",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52095355 | pes2o/s2orc | v3-fos-license | Learning behavioral context recognition with multi-stream temporal convolutional networks
Smart devices of everyday use (such as smartphones and wearables) are increasingly integrated with sensors that provide immense amounts of information about a person's daily life, such as behavior and context. The automatic and unobtrusive sensing of behavioral context can help develop solutions for assisted living, fitness tracking, sleep monitoring, and several other fields. Towards addressing this issue, we raise the question: can a machine learn to recognize a diverse set of contexts and activities in real life through joint learning from raw multi-modal signals (e.g. accelerometer, gyroscope and audio)? In this paper, we propose a multi-stream temporal convolutional network to address the problem of multi-label behavioral context recognition. A four-stream network architecture handles learning from each modality with a contextualization module which incorporates the extracted representations to infer a user's context. Our empirical evaluation suggests that a deep convolutional network trained end-to-end achieves an optimal recognition rate. Furthermore, the presented architecture can be extended to include similar sensors for performance improvements and handles missing modalities through multi-task learning without any manual feature engineering on a highly imbalanced and sparsely labeled dataset.
Introduction
The problem of context recognition is centered on inferring a person's environment, physical state, and the activity performed at any particular time. Specifically, an understanding of the user's current context requires determining where and with whom the person is, and in what type of activity the person is involved. Behavioral and activity analysis is an important and challenging task mainly because it is crucial for several applications, including smart homes (Rashidi and Cook 2009), assisted living (Lin et al. 2015;Rashidi and Mihailidis 2013), fitness tracking (Rabbi et al. 2015), sleep monitoring (Lin et al. 2012), user-adaptive services, social interaction (Lee et al. 2013) and in industry. In particular, an accurate recognition of human context can greatly benefit healthcare and wellbeing through automatic monitoring and supervision of patients with chronic diseases (Lara and Labrador 2013) such as hypertension, diabetes and dementia (Ordóñez and Roggen 2016). Furthermore, the gathered knowledge and extracted activity patterns can enable novel treatment design, adjustment of medications, better behavioral intervention and patient observation strategies (Lorincz et al. 2009).
In practice, for a context detection system to be effective in real life, unobtrusive monitoring is required. It is important not to distress a person in order to capture their realistic behaviors in a natural environment. The penetration of smart sensing devices (e.g. smartphones and wearables) that are integrated with sophisticated sensors in our daily lives provides a great opportunity to learn and infer about various aspects of a person's daily life. However, there is considerable variability in human behavior in real-world situations that can cause the system to fail if it is developed using data collected in a constrained environment. For instance, Miluzzo et al. show that the accuracy of activity classification differs based on the interaction with the phone, e.g. when in hand or carried in the bag. The various sensors embedded in smart devices convey information about different ambient facets, each with a distinct prospect. The variability issues of different patterns in phone usage, environments, and device types can be very well addressed (to improve the recognition capability of the system) through learning disentangled representations from a large-scale data source and fusing rich sensory modalities rather than separately utilizing each of them.
In the past, several studies have shown great improvement in sensor processing for basic activity recognition (Lara and Labrador 2013;Hoseini-Tabatabaei, Gluhak, and Tafazolli 2013). The majority of the earlier methods use shallow learning classifiers (such as Random Forest and Support Vector Machine) with hand-engineered features extracted from raw sensor readings, e.g. heuristically selected statistical or frequency measures (Figo et al. 2010). Likewise, many studies involve simulated controlled trials for data collection in lab environments that require users to wear extra sensors. Broadly, they also treat activity recognition as a multi-class classification problem, where a user's activity at a specific moment can be defined by one of the k defined classes. On the contrary, people are not generally engaged in just one activity in their day-to-day living, e.g. a person might surf the web while eating or talking to friends. These problems limit the applicability of these studies to detecting very few rudimentary activities and make it harder for the system to generalize to real-life settings. Nevertheless, to be successful in everyday scenarios, the context recognition module should support a diverse set of activities, varying device usage, and a wide range of environments. Importantly, it must not only learn discriminative representations directly from raw signals without any ad-hoc feature engineering, but also seamlessly combine the discovered explanatory factors in the milieu of diverse sensory modalities (Bengio, Courville, and Vincent 2013). In recent years, the fields of speech recognition, drug discovery, image segmentation and machine translation have been tremendously revolutionized thanks to the availability of massive labeled datasets and end-to-end deep representation learning (Bengio, Courville, and Vincent 2013). Similarly, the domain of human activity recognition has also started leveraging deep neural networks for automatic feature learning (Ordóñez and Roggen 2016;Radu et al. 2018;Yang et al. 2015), though commonly restricted to the detection of only elementary activities such as walking, sitting, and standing. There has not been the same progress in recognizing complex behavioral context in daily-life situations using devices of daily use. This can be partially attributed to the lack of a large labeled dataset, which is both expensive and time-consuming to accumulate in real-world settings. We believe that large-scale sensory data can significantly advance context recognition. This issue is very recently addressed in (Vaizman, Ellis, and Lanckriet 2017;Vaizman, Weibel, and Lanckriet 2018), which has open-sourced multi-modal data (see Figure 2) of activities in-the-wild. The authors provide a baseline system for sensor fusion and a unified model for multi-label classification. They trained logistic regression and fully connected neural networks on hand-crafted features that are extracted based on extensive domain knowledge. In this paper, we utilize this heterogeneous sensor data, collected over a week from sixty users, to learn rich representations in an end-to-end fashion for recognizing multi-label human behavioral context.
The task of learning detailed human context is challenging, especially from imbalanced and multi-label data. Unconstrained device usage, a natural environment, different routines, and authentic behaviors are likely to result in a joint training dataset from several users with significant class skew (Vaizman, Weibel, and Lanckriet 2018) and missing labels. Another challenge with learning from multi-modal signals is developing an architecture that feasibly combines them, as in diverse environments a certain sensor might perform better than others. For instance, if a person is watching television with the phone lying on the table, the sound modality may dominate in the network compared with the accelerometer. We address the former issue with an instance weighting scheme, the same as (Vaizman, Weibel, and Lanckriet 2018), and the latter through a unified architecture that can efficiently fuse representations in multiple ways.
We present a deep temporal convolutional neural network (CNN) that learns directly from various modalities through a multi-stream architecture (accelerometer, gyroscope, sound and phone state networks). Here, a separate network facilitates learning from each modality and a contextualization module incorporates all the available information to determine the user's context (see Figure 1). In our experiments, we show that deep multi-modal representations learned through our network, without any sophisticated preprocessing or manual feature extraction, achieve state-of-the-art performance.
The primary contribution of this paper is in showing how to leverage an ample amount of raw sensory data to learn deep cross-modal representations for multi-label behavioral context. Although the methods in the paper are standard, their application on a large-scale imbalanced and sparsely labeled smartphone dataset is unique. The proposed network architecture achieves sensitivity and specificity scores of 0.767 and 0.733, respectively, averaged over 51 labels and 5-fold cross-validation. The rest of the paper describes our technique and experiments in detail. First, we review the related work on activity recognition. Then we present our multi-stream temporal convolutional network, architectural modifications for handling missing sensors, the proposed training procedure and implementation details. Next, the description of the dataset, evaluation protocol and experimental results are described, followed by the conclusions.
Related Work
Human activity recognition has been extensively studied in simulated and controlled environments. It is concerned with classifying sensor measurements into existing activity categories. The earlier techniques are predominantly based on applying shallow learning algorithms to manually extracted features (e.g. statistical and spectral attributes) (Figo et al. 2010). Although there are unsupervised (Bhattacharya et al. 2014;Plötz, Hammerla, and Olivier 2011) and supervised (Yang et al. 2015;Ordóñez and Roggen 2016;Ronao and Cho 2016;Zeng et al. 2014) deep learning methods applied for automatic feature extraction to detect activities, these approaches are fairly limited by the amount of labeled data (of many sensing modalities) from the real world. Furthermore, they do not fully address the issue of multi-label context recognition: a user state is described by only one class or label, which is not true for the activities humans perform in real life. Moreover, only recently has the exploration begun into jointly learning and fusing multiple modalities for ubiquitous sensing through deep networks (Radu et al. 2018;Vaizman, Weibel, and Lanckriet 2018). The works cited here are by no means an exhaustive list, but provide recent representative advancements made in utilizing deep neural networks for activity recognition. We recommend interested readers refer to (Rashidi and Mihailidis 2013;Shoaib et al. 2015) for an extensive survey of former approaches.
A systematic analysis of several deep neural architectures for activity recognition is provided by Hammerla, Halloran, and Ploetz. The suitability of various models trained only on raw accelerometer signals is investigated for different activity classification tasks. On diverse benchmark datasets, CNNs and long short-term memory networks are found to outperform hand-crafted features by a significant margin. Likewise, Alsheikh et al. proposed an approach combining pre-training and fine-tuning of deep belief networks for sequential activity recognition. They extracted spectrograms from a triaxial accelerometer and found them to be helpful for capturing variations in the input. Similarly, Jiang and Yin used 2D activity images extracted from accelerometer signals as CNN input. The importance of unsupervised training of models in feature learning and optimization is highlighted in (Bhattacharya et al. 2014) using a combination of a sparse-coding framework and semi-supervised learning. Likewise, Yang et al. developed a multi-channel CNN model to replace heuristic hand-crafted features. Their analysis showed CNNs work well compared to traditional (shallow) learning algorithms on several datasets. Audio sensing is also employed in unconstrained acoustic environments through applying fully connected neural networks (Lane, Georgiev, and Qendro 2015).
Recently, Radu et al. used deep networks for multi-modal activity recognition and compared them with traditional learning algorithms on various recognition tasks. Likewise, numerous other studies also positively utilize deep learning for detection of basic activities (Ordóñez and Roggen 2016;Ronao and Cho 2016;Zeng et al. 2014).
We differentiate ourselves from the existing approaches through utilizing a deep multi-stream CNN (with depthwise separable convolutions) on a large and diverse context detection dataset. Specifically, we build on previous work by Vaizman, Weibel, and Lanckriet that only employed hand-engineered features for training linear and shallow neural networks. In contrast, our general-purpose approach allows us to train a deeper network that can not only automatically discover hidden latent factors, but also seamlessly combine them to achieve an end-to-end learning system without requiring domain expertise. Moreover, through taking advantage of multi-task learning (Caruana 1997) we develop an architecture that can robustly handle missing sensors.
Learning Multi-Modal Networks
We design a deep convolutional neural network to address the problem of behavioral context recognition through learning representations from raw sensory inputs. To deal with cross-modality signals i.e. accelerometer (Acc), gyroscope (Gyro), audio (MFCC/Aud), and phone state (PS), we use a multi-stream architecture. The network comprises five main modules as demonstrated in Figure 3. This section describes each component, presents a strategy to modify the proposed architecture to handle missing sensors and provides the implementation details.
Modality Specific Networks
We present a deep multi-modal convolutional architecture for learning context representations. We propose to use a series of depthwise-separable convolutions (DPS-Conv) (Chollet 2017) for processing different components (or channels) of raw signals. In general, CNNs are also found to be well suited for processing 1D sequences due to their ability to learn translation-invariant features, scale separation, and localization of filters across time and space (Bai, Kolter, and Koltun 2018). DPS-Conv consists of two operations, i.e. a depthwise convolution and a pointwise (or 1 x 1) convolution. Specifically, the first function (depthwise convolution) performs a convolution independently over each input channel, and it is followed by the second operation of 1 x 1 convolution that projects the channels estimated by the former onto a distinct channel space to produce the desired number of output filters (Kaiser, Gomez, and Chollet 2017). The intuition of this formulation falls in line with the classical procedures utilized by domain experts to extract several features from each signal component independently (e.g. the x, y and z constituents of an accelerometer), but the pointwise convolution goes one step further and tries to learn unified factors that may capture relationships among the independent elements. Moreover, separable convolutions make efficient use of parameters as opposed to their classical counterpart, and this property has made them a very promising candidate for contemporary architectures that run on smart devices with limited computing and energy capabilities (Sandler et al. 2018;Zhang et al. 2017). Formally, in the case of a 1D input sequence x of length L with M channels, the aforementioned operation can be formulated as follows (Kaiser, Gomez, and Chollet 2017):
DepthwiseConv(w_d, x)[i] = Σ_{j=1..k} w_d[j] ⊙ x[i + j − 1],  SepConv(w_p, w_d, x)[i] = PointwiseConv(w_p, DepthwiseConv(w_d, x)[i]),
where ⊙ is the element-wise product, x[i : j] represents a segment of the complete sequence with adjacent columns from i to j, and w represents a filter with a receptive field size of k.
Figure 3: End-to-end multi-modal and multi-label context recognition: We propose a deep temporal convolutional architecture for multi-label behavioral context recognition. A separate network learns representations (features) from each modality using depthwise-separable convolutions and contextualizes this information through shared layers to infer the user context.
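As an illustration of the operation formulated above, the following NumPy sketch (not the authors' code) applies one depthwise filter per input channel over a sliding window and then mixes the per-channel outputs with a pointwise (1 x 1) convolution; the toy shapes mirror the accelerometer input described below.

```python
import numpy as np

def depthwise_separable_conv1d(x, w_depth, w_point):
    """x: (L, M) input sequence; w_depth: (k, M), one temporal filter per channel;
    w_point: (M, F) pointwise weights producing F output filters."""
    L, M = x.shape
    k = w_depth.shape[0]
    out_len = L - k + 1
    depth = np.empty((out_len, M))
    for i in range(out_len):
        # element-wise product of each channel's filter with the segment x[i : i+k],
        # summed over the receptive field (the depthwise convolution)
        depth[i] = np.sum(w_depth * x[i:i + k], axis=0)
    # pointwise (1 x 1) convolution mixes the per-channel outputs into F filters
    return depth @ w_point

x = np.random.randn(800, 3)                     # e.g. one tri-axial accelerometer window
y = depthwise_separable_conv1d(x, np.random.randn(64, 3), np.random.randn(3, 32))
print(y.shape)                                  # (737, 32)
```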
The proposed network takes four different signals as input, each with its own independent, disjoint pathway in the earlier layers of the network. Towards the end, they are merged into shared layers that are common across all modalities, as described in the next subsection. This network configuration has the benefit of not just extracting modality-specific (and channel-specific) features, but it can also feasibly extract mutual representations through the shared layers. Each of the presented Acc and Gyro networks consists of 2 temporal convolution layers which act as feature extractors over raw signals of dimensions 800 x 3. The convolution layers have kernel sizes of 64 and 32 with a stride of 2, and the layers have 32 and 64 filters, respectively. We use rectified linear activation in all the layers and apply depth-wise L2-regularization with a rate of 0.0001. The audio network takes mel frequency cepstral coefficients (see Section Dataset and Modalities) of size 420 x 13 as input, and it has a similar architecture except for the kernel size, which is set to 8 and 6 in the first and second layers, respectively. Likewise, the discrete attributes indicating PS are fed into a single-layer fully-connected (FC) network with 64 units, and an L1-penalty is used on the weights with a rate of 0.0001. Furthermore, we explore different mechanisms to get a fixed-dimension vector from each modality that can be fed into the shared network. Specifically, we use: a) global max pooling (GMP), b) global average pooling (GAP), c) an FC layer, and d) passing the representations directly to the shared network without any transformation.
Shared Network (Contextualization)
Given the concepts extracted from each modality, the shared network generates a modality-agnostic representation. To achieve this, we fuse the output of the earlier networks either through concatenation or by applying a standard convolution (only for Acc, Gyro and Aud). We then feed the output into 2 FC layers having 2048 and 1024 hidden units, respectively. As earlier, we use rectified linear non-linearity and L1-regularization with a weight-decay coefficient of 0.0001. The final output layer contains 51 units (one for each label) with sigmoid activation. Figure 3 visualizes the sharing of the network layers, where earlier layers are modality-specific but downstream layers become more general.
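A minimal Keras sketch of this configuration is given below, assuming the layer sizes stated in the text (two separable temporal convolutions per inertial/audio stream, a 64-unit FC layer for phone state, global max pooling, concatenation, and 2048/1024-unit shared FC layers with a 51-unit sigmoid output). The number of phone-state attributes and some minor details (e.g. exact regularizer placement) are assumptions rather than specifics from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

N_PS = 34  # number of discrete phone-state attributes (placeholder; not stated in the text)

def conv_stream(input_shape, kernels, name):
    """Two depthwise-separable temporal conv layers followed by global max pooling."""
    inp = layers.Input(shape=input_shape, name=f"{name}_in")
    x = layers.SeparableConv1D(32, kernels[0], strides=2, activation="relu",
                               depthwise_regularizer=regularizers.l2(1e-4))(inp)
    x = layers.SeparableConv1D(64, kernels[1], strides=2, activation="relu",
                               depthwise_regularizer=regularizers.l2(1e-4))(x)
    return inp, layers.GlobalMaxPooling1D()(x)

acc_in, acc_feat = conv_stream((800, 3), (64, 32), "acc")   # accelerometer stream
gyr_in, gyr_feat = conv_stream((800, 3), (64, 32), "gyr")   # gyroscope stream
aud_in, aud_feat = conv_stream((420, 13), (8, 6), "aud")    # MFCC/audio stream

ps_in = layers.Input(shape=(N_PS,), name="ps_in")           # phone-state attributes
ps_feat = layers.Dense(64, activation="relu",
                       kernel_regularizer=regularizers.l1(1e-4))(ps_in)

# contextualization: concatenate modality features and pass through shared FC layers
z = layers.Concatenate()([acc_feat, gyr_feat, aud_feat, ps_feat])
z = layers.Dropout(0.2)(layers.Dense(2048, activation="relu",
                                     kernel_regularizer=regularizers.l1(1e-4))(z))
z = layers.Dropout(0.2)(layers.Dense(1024, activation="relu",
                                     kernel_regularizer=regularizers.l1(1e-4))(z))
outputs = layers.Dense(51, activation="sigmoid")(z)         # one sigmoid unit per context label

model = Model([acc_in, gyr_in, aud_in, ps_in], outputs)
```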
Missing Sensors
In a real-life setting, a context recognition system may encounter missing modalities, which can limit its inference capability. To make the model robust against such situations, we develop a multi-task network (Caruana 1997), where learning from each sensor is posed as a task. The initial configuration of the model is the same as before, but an additional layer (of 128 units for Acc, Gyro, and MFCC/Aud, and 64 units for PS) with a separate loss function is added after only a single shared layer of 1024 hidden units. Figure 4 provides a high-level overview of the architecture. We employ joint training (with a learning rate of 0.0003) on all the modalities by aggregating the cost functions of each model to obtain a total loss. This architectural configuration not only allows learning independent and shared factors but also enables inference even when any of the sensors is missing. It does so through averaging (which can be weighted) over the probabilities produced by the individual networks.
Figure 4: Multi-task variant of the network with additional task (modality-specific) layers and a separate loss function for each modality. It is able to recognize the user context even if only one sensor is producing data and the others are unavailable.
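Continuing from the sketch above (reusing its modality inputs and pooled features), the following is a hedged sketch of the multi-task variant; the exact wiring of the heads to the shared layer is an assumption based on the description, since Figure 4 is not reproduced here.

```python
# Multi-task variant: one shared 1024-unit layer, then a small task layer and its own
# sigmoid output per modality; the four losses are summed during joint training.
fused = layers.Concatenate()([acc_feat, gyr_feat, aud_feat, ps_feat])
shared_mt = layers.Dense(1024, activation="relu")(fused)

def task_head(units, name):
    h = layers.Dense(units, activation="relu", name=f"{name}_task")(shared_mt)
    return layers.Dense(51, activation="sigmoid", name=f"{name}_out")(h)

heads = [task_head(128, "acc"), task_head(128, "gyr"),
         task_head(128, "aud"), task_head(64, "ps")]
mt_model = Model([acc_in, gyr_in, aud_in, ps_in], heads)
mt_model.compile(optimizer=tf.keras.optimizers.Adam(3e-4),
                 loss=["binary_crossentropy"] * 4)   # total loss = sum of the per-task losses
# At inference with missing sensors, the unavailable inputs can be filled with zeros and the
# final prediction taken as the (optionally weighted) average of the remaining heads' outputs.
```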
Implementation and Training Details
The networks are implemented in Tensorflow (Abadi et al. 2016) and the models are learned from scratch, initializing the weights with the Xavier technique (Glorot and Bengio 2010). Dropout (Srivastava et al. 2014) is applied on the hidden layers with a probability of 0.2. We use the Adam optimizer with a learning rate of 0.0001 (unless mentioned otherwise) and a batch size of 100. We optimize the model weights for a fixed number of iterations (i.e. 15000) with mini-batch stochastic gradient descent and backpropagation using the instance-weighted cross-entropy objective function:
J_c = (1/N) Σ_{i=1..N} Σ_{c=1..C} Ψ[i, c] · L_ce(y[i, c], ŷ[i, c]),
where L_ce is the binary cross-entropy loss, and Ψ is an instance-weighting matrix of size N x C (i.e. number of training examples and total labels, respectively). The instance weights in Ψ are assigned by inverse class frequency. Likewise, the entries for the missing labels are set to zero, to impose no contribution to the overall cost from such examples.
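A minimal TensorFlow sketch of this objective is given below: per-label binary cross-entropy scaled by a weight matrix Ψ built from inverse class frequencies, with zeros for missing labels. The exact normalization of the weights is an assumption, as the text only states that they are assigned by inverse class frequency.

```python
import tensorflow as tf

def build_psi(y, observed):
    """y: (N, C) binary labels; observed: (N, C) mask, 1 where the label is known."""
    pos = tf.reduce_sum(y * observed, axis=0)               # positives per label
    neg = tf.reduce_sum((1.0 - y) * observed, axis=0)       # negatives per label
    n = pos + neg
    w_pos = tf.math.divide_no_nan(n, 2.0 * pos)             # inverse frequency of the positive class
    w_neg = tf.math.divide_no_nan(n, 2.0 * neg)             # inverse frequency of the negative class
    return (y * w_pos + (1.0 - y) * w_neg) * observed       # zero weight for missing labels

def weighted_cross_entropy(y_true, y_pred, psi, eps=1e-7):
    """Instance-weighted binary cross-entropy J_c over an (N, C) batch."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    ce = -(y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_mean(tf.reduce_sum(psi * ce, axis=-1))
```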
Experimental Results
We conduct several experiments to analyze the capability of the proposed method. First, we provide a brief description of the utilized dataset and signals. Second, we describe the evaluation approach and metrics used to determine the model's performance on a multi-label and imbalanced dataset. Finally, we discuss our empirical observations, effect of different modalities' representation, comparison of various procedures to learn shared factors and visualization of the internal representation.
Dataset and Modalities
We choose to learn discriminative representations directly from raw Acc, Gyro, Aud/MFCC and PS attributes from a smartphone because of their wide adoptability and ubiquity. For this purpose, we chose to leverage the ExtraSensory Dataset (Vaizman, Ellis, and Lanckriet 2017) since it is collected in a natural environment from users' personal devices. The experimental setup was not scripted; data collection was performed while participants were busy with their daily routines, in order to capture varied activities and context combinations under in-the-wild conditions. This data source contains over 300,000 multi-labeled instances (with classes such as 'outside', 'at a restaurant', 'with friends' from a total of 51 labels) from sixty users. The complete data collection protocol is described in (Vaizman, Ellis, and Lanckriet 2017).
Here, we provide a high-level overview of the signals that we used in this study. The samples are collected for a 20-second duration every minute from the tri-axis Acc and Gyro at a sampling frequency of 40Hz, and mel frequency cepstral coefficients (MFCCs) over 46 msec frames are extracted from Aud recorded at 22,050Hz. Likewise, several binary phone state features are also collected, such as those specifying the time of day, battery level, ringer mode and Wi-Fi connection. A few randomly selected samples of these signals are illustrated in Figure 2. We seek to process raw sensory values without manual feature engineering. Thus, the only pre-processing we applied is to transform variable-length inputs to an identical temporal length. For this purpose, the MFCCs of environmental audio are repeated (along the time dimension) to get equal-size inputs; this is reasonable for ambient soundscapes as we are not particularly interested in inferring a specific sound event. Similarly, the Acc and Gyro samples of varying sizes are zero-padded, and instances where the MFCC length is shorter than twenty are discarded. Furthermore, we treat Acc, Gyro and Aud as m-channel inputs (3, 3, and 13 channels, respectively) as it allows us to efficiently learn independent factors from every sensor axis, thus maximally utilizing the large-scale dataset.
Evaluation and Metrics
Our models are evaluated with five-fold cross-validation using the same divisions of sixty users as (Vaizman, Weibel, and Lanckriet 2018), where the training and test folds contain 48 and 12 users, respectively. For hyper-parameter optimization, we use nested cross-validation (Cawley and Talbot 2010), randomly dividing the training-fold data into training and validation sets with a ratio of 80-20. After hyper-parameter selection, we train our models on the complete dataset of the training folds (individually, each time from scratch) and calculate metrics on the testing folds. Furthermore, as mentioned earlier, the considered dataset is highly imbalanced with sparse labels. In this case, simply calculating naive accuracy would be misleading, as it does not take underrepresented classes into account. Similarly, precision and f1-score are also very likely to be affected by the class skew due to the involvement of true positives in the denominator. Hence, we adopt a metric named balanced accuracy (BA) (Brodersen et al. 2010), as used in (Vaizman, Weibel, and Lanckriet 2018), which incorporates both recall (or true positive rate) and true negative rate: BA = (Sensitivity + Specificity) / 2. BA can be interpreted as the average accuracy achieved on either class (positive or negative regarding binary classification). It stays identical to traditional accuracy if a model performs equally well on each class, but drops to random chance (i.e. 50%) if a classifier performs poorly on a class with few instances (Brodersen et al. 2010). We calculate BA for each label independently and average the scores afterwards to get a trustworthy measure of the model's overall performance.
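The metric can be computed per label and then averaged, as sketched below (a straightforward illustration; the handling of labels with no positive or negative test instances is an assumption).

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, observed):
    """y_true, y_pred: (N, C) binary arrays; observed: (N, C) mask of known labels."""
    scores = []
    for c in range(y_true.shape[1]):
        m = observed[:, c].astype(bool)
        t, p = y_true[m, c], y_pred[m, c]
        tp = np.sum((t == 1) & (p == 1)); fn = np.sum((t == 1) & (p == 0))
        tn = np.sum((t == 0) & (p == 0)); fp = np.sum((t == 0) & (p == 1))
        sens = tp / (tp + fn) if (tp + fn) else np.nan      # recall / true positive rate
        spec = tn / (tn + fp) if (tn + fp) else np.nan      # true negative rate
        scores.append((sens + spec) / 2.0)                  # per-label balanced accuracy
    return np.nanmean(scores)                               # average BA over all labels
```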
Results and Analysis
Analysis of Fusing Multi-Modal Representations: We quantify the effect of different procedures for getting a fixed-dimension feature vector from each modality-specific network and examine their fusion through different configurations of the shared network. It is important to note that we keep the entire network configuration the same; only the layers under consideration are changed. Table 1 provides the averaged (metric) scores over 51 contextual labels and 5 folds as a result of applying global (max and average) pooling, using an FC layer, or simply feeding the extracted representations to the shared network for further processing. For the latter, we explore learning a mutual representation from Acc, Gyr, and Aud/MFCC through an additional standard convolution layer and compare its performance with directly using flattened representations. Our experiments suggest that global max pooling (GMP) over each modality's features outperforms the other utilized techniques, achieving a BA of 0.750 with a sensitivity rate of 0.767. We believe the reason is that GMP is capable of picking up high-level shift-invariant features, which are the most discriminative among others. Figure 5 presents per-label metrics for this network on all the 51 labels in the dataset. Specifically, we notice that the majority of the labels have BA scores in the range of 70%-80%.
Comparison of Convolution Variants:
We evaluate the complete multi-stream model through replacing only the DPS-Conv layers with standard convolution (Std-Conv) in the modality-specific networks. We did not observe major performance differences between the two models, as shown in Table 2. Nevertheless, a model with DPS-Conv should be preferred because of its lower computational cost compared with Std-Conv (Sandler et al. 2018). Quantifying Modality Influence: To examine the effect of different combinations of sensors (or features learned from them) on the recognition capability of the model, we experimented with training several networks with modified architectures. Specifically, in this case the model only consisted of layers that are relevant to the signals under consideration, e.g. for evaluating models with only Acc, Aud, and PS, we removed the Gyro network entirely and then trained it end-to-end from scratch. Table 3 shows the evaluation results that highlight the importance of joint learning and fusion of multiple modalities to improve the detection rate. The multi-task model for handling missing sensors was evaluated similarly (Table 3). However, this flexibility comes at the price of a slightly lower BA, but it makes the model capable of operating in the face of unavailable sensors. As described earlier, the loss uses instance weights to handle class imbalance. To test the network's dependence on the cost-sensitive loss function (J_c), we examined the performance of a model that is trained without it. As expected, the overall BA score drastically drops to random chance (see Figure 6), with worse performance on positive samples in comparison with the negative ones. Likewise, we also trained a model without any sort of regularization, i.e. removing dropout, L1 and L2 penalties from the network. The average recall rate on the held-out testing folds dropped to 0.58, which can be an indication of overfitting the training set. Hence, incorporating both instance weighting (IW) and regularization improved performance significantly in learning from this imbalanced dataset. However, further work will be necessary to investigate other techniques for managing (sparse) rare labels, such as oversampling and data augmentation in the case of multi-labeled instances.
Visualization: In order to illustrate the semantic relevance of the learned features, we applied t-SNE (van der Maaten and Hinton 2008) to project high-dimensional data to 2D embedding. We take the output of the last FC layer (see Figure 3) from the shared network by feeding a limited (but randomly selected) subset of the dataset to extract the embeddings. Further, as the data under consideration is multilabeled, we identified sets of mutually-exclusive labels (e.g. Indoors vs. Outside) that can be used to color code the data points to visually identify meaningful clusters. Figure 7 provides a visualization for various sets of labels suggesting the network is able to disentangle possible factors of variation that may distinguish a class from the rest in large-scale sensory data. Furthermore, to get better insights in the diversity of the extracted features from each modality, in Figure 8, we visualize the feature maps produced by the first layer of the DPS-Conv layer of modal-specific networks.
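A minimal sketch of this visualization step is shown below, reusing the trained model from the earlier sketch; the subset arrays (acc_sub, gyr_sub, aud_sub, ps_sub) are placeholders for a randomly selected portion of the data.

```python
from sklearn.manifold import TSNE
from tensorflow.keras import Model

# build a sub-model that outputs the activations feeding the final sigmoid layer
feature_extractor = Model(model.inputs, model.layers[-2].output)
features = feature_extractor.predict([acc_sub, gyr_sub, aud_sub, ps_sub])

# project the high-dimensional activations to a 2-D embedding
emb = TSNE(n_components=2, perplexity=30).fit_transform(features)
# 'emb' can then be scatter-plotted and color-coded by mutually exclusive label sets
# (e.g. Indoors vs. Outside) to inspect whether meaningful clusters emerge.
```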
Conclusions
In this work, we tackled the problem of multi-label behavioral context recognition with deep multi-modal convolutional neural networks. We propose to train an end-to-end model for jointly learning from the low-level sensory data (accelerometer, gyroscope, audio and phone state) of smart devices collected in-the-wild. Our empirical results demonstrated various strategies for feasibly fusing representations learned from different modalities and quantified their contribution to the predictive performance. We also showed that the instance-weighted cross-entropy loss (as also leveraged in (Vaizman, Weibel, and Lanckriet 2018)) and regularization schemes enable the model to generalize well on a highly imbalanced (sparsely labeled) dataset. Furthermore, we presented a slight modification of the proposed network's architecture to handle missing sensors, potentially taking advantage of multi-task learning. We believe the proposed methodology is generic enough to be applied to other related problems of learning from multivariate time series. Additionally, potential directions for future work involve developing techniques to handle imbalanced multi-label data, optimal sensor selection to reduce computation and battery consumption, and incorporating other analogous sensors to further improve the detection rate.
"year": 2018,
"sha1": "6fbc39289f724546e44c95c459acdd03f2c2b969",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "71035c1353ba2a7dee2cdb6102deb00de9681d2c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
257242510 | pes2o/s2orc | v3-fos-license | Quantitative Analysis of Oat (Avena sativa L.) and Pea (Pisum sativum L.) Saponins in Plant-Based Food Products by Hydrophilic Interaction Liquid Chromatography Coupled with Mass Spectrometry
This work presents the sample extraction methods for solid and liquid sample matrices for simultaneous quantification of oat (Avena sativa L.) and pea (Pisum sativum L.) saponins: avenacoside A, avenacoside B, 26-desglucoavenacoside A, and saponin B and 2,3-dihydro-2,5-dihydroxy-6-methyl-4H-pyran-4-one (DDMP) saponin, respectively. The targeted saponins were identified and quantified using a hydrophilic interaction liquid chromatography with mass spectrometric detection (HILIC-MS) method. The simple and high-throughput extraction procedure was developed for solid oat- and pea-based food samples. In addition, a very simple extraction procedure for liquid samples, without the need to use lyophilisation, was also implemented. Oat seed flour (U-13C-labelled) and soyasaponin Ba were used as internal standards for avenacoside A and saponin B, respectively. Other saponins were relatively quantified based on avenacoside A and saponin B standard responses. The developed method was tested and successfully validated using oat and pea flours, protein concentrates and isolates, as well as their mixtures, and plant-based drinks. With this method, the saponins from oat- and pea-based products were separated and quantified simultaneously within 6 min. The use of respective internal standards derived from U-13C-labelled oat and soyasaponin Ba ensured high accuracy and precision of the proposed method.
Introduction
The demand for sustainable protein sources in food production is continuously growing [1,2]. Oat (Avena sativa L.) and pea (Pisum sativum L.) proteins in the form of concentrates or isolates can act as an alternative to animal proteins due to their potential ability to provide desirable technological properties in plant-based meat and dairy substitutes [3,4]. Pea protein is an insufficient source of methionine but, on the other hand, has a high content of the essential amino acid lysine and of the branched-chain amino acids leucine, isoleucine, and valine [4]. In contrast to pea, oat contains enough methionine but a scarce amount of lysine [3]. Blending oat and pea proteins in products is one way to achieve a complete essential amino acid profile [5], and such products are already available on the market. However, one of the main obstacles in the application of plant-based proteins in food production is their bitter and astringent off-taste [6][7][8]. It has been suggested that saponins might be the main cause of this sensation [9][10][11][12][13][14], influencing consumer acceptance.
Saponins are a diverse group of secondary defence metabolites widely spread in plant species [15]. Saponins investigated in this study are amphiphilic molecules, with polar water-soluble sugar moieties attached to a nonpolar, water-insoluble steroid or triterpene core [16]. Oats, as the only cereals capable of accumulating saponins, contain the bisdesmosidic steroidal saponins avenacoside A and B, and monodesmosidic 26-desglucoavenacoside A in their leaves and grains (Figure 1) [9,10,12,13,17]. Saponin B and 2,3-dihydro-2,5-dihydroxy-6-methyl-4H-pyran-4-one (DDMP) saponin are monodesmosidic triterpenoid saponins found in peas [18,19]. Besides being taste-active bitter compounds, saponins have also been reported as antinutrients. As such, they may affect nutrient absorption by inhibiting metabolic and digestive enzymes [20] and by binding to minerals such as zinc and iron [21]. High concentrations of saponins in the diet may lead to a hypocholesterolemic effect [22], hypoglycemia [23], inefficient protein digestion, reduced vitamin and mineral uptake in the gut, and the development of a leaky gut [24]. Despite the reported negative nutritional impact, some studies have also shown positive cholesterol-lowering [25] and anticancerogenic [26] effects of saponins.
Figure 1. Chemical structures of saponins: avenacoside A, avenacoside B, and 26-desglucoavenacoside A from oat, saponin B and DDMP from pea, and soyasaponin Ba (used as internal standard [IS]).
The analysis of saponins could be performed using a wide range of classical methods such as gravimetry [15,27], hemolysis [28], bioassays [29], and spectrophotometry [30]. In addition, different saponins could be separated and analysed using chromatographic methods, e.g., thin-layer chromatography [31,32], gas chromatography [33], and high-performance liquid chromatography (HPLC) [19]. The detection of the saponin class compounds could be carried out using the simplest optical detection methods [34,35], but these methods usually lack the selectivity and sensitivity of more advanced analytical techniques such as mass spectrometry [9,17,36–38]. Saponin extraction and pre- and post-extraction sample clean-up before the LC analysis [9,17,19,39] are required to obtain a clean extract that would minimise matrix effects in mass spectrometric measurements. However, these sample preparation procedures are time-consuming and unsuitable for routine analysis of large numbers of samples. This creates the need for an improved, efficient, sensitive, more selective, and reproducible extraction method for saponins prior to analysis. The use of liquid chromatography coupled with mass spectrometry (LC-MS) allows more precise and selective determination of the contents of different types of saponins in various plant species: oat [9,13,17], pea [19,32], and soya [39,40]. Although the amounts of saponins have been quantified mainly from the seeds or husks of numerous oat and pea varieties [9,19,32], there is a lack of data concerning the concentrations of saponins in processed food ingredients and the half- and end-products produced therefrom. To the best of our knowledge, there is no versatile method for the determination of saponins derived from different plant species in various food matrices.
The objective of this study was to develop simple sample extraction methods for solid and liquid plant-based food sample matrices for the selective and quantitative determination of five oat and pea saponins: avenacoside A, avenacoside B, 26-desglucoavenacoside A, saponin B, and DDMP saponin, using hydrophilic interaction liquid chromatography with mass spectrometric detection (HILIC-MS). To our knowledge, there are no reports on simultaneous HILIC analysis of the above-mentioned saponins in solid and liquid samples containing concurrent oat and pea ingredients.
Food Samples
Yellow pea flour, whole-grain oat flour, and oat and pea drinks were purchased from a local supermarket. Pea protein isolate (Bang & Bonsomer Estonia OÜ, Tallinn, Estonia), pea protein concentrate (Aloja-Starkelsen Ltd., Limbažu novads, Latvia), and oat protein concentrate (Lantmännen, Stockholm, Sweden) were obtained from producers. The composition and nutritional information available on the product label of these products is available in Supplementary Table S1. Untreated and extruded blends of pea protein isolate, oat protein concentrate, and pea protein concentrate (52:28:20, w/w) were produced in-house by following a previously published protocol [41].
Extraction Method for Solid Samples and for Liquid Samples
Sample extraction methods 1A, 1B, 2A, 2B, and 2C, which were tested during solid sample extraction method development, are described in the supplementary information.
Solid sample extraction (method 2D) was performed according to Heng et al. [19] with some modifications. Powdered non-defatted solid sample (100 mg) was weighed into a 10 mL volumetric flask (n = 3), filled with aqueous EtOH (70%, v/v), mixed thoroughly, and ultrasonicated for 30 min (without additional heating). After ultrasonication, samples were centrifuged (14,000× g for 10 min at 10 °C) to remove insoluble matter. The supernatant (500 µL) was passed through PLD+ columns by applying positive pressure to remove proteins and phospholipids. The obtained filtrate was diluted to obtain an aqueous MeCN (50%, v/v) solution. The diluted filtrate (100 µL) was transferred to the LC-MS vials, mixed with 50 µL soyasaponin Ba working solution and 50 µL U-13C-oat extract working solution, and injected into the LC-MS.
A homogeneous liquid sample was weighed (0.25 g) into a 5 mL volumetric flask (n = 3), filled with ultrapure water, and mixed thoroughly. Diluted sample solutions were centrifuged (14,000× g for 15 min at 10 °C) to remove insoluble matter. Sample supernatant (200 µL) and 800 µL MeCN were transferred into the next tube, mixed thoroughly, and centrifuged (14,000× g for 15 min at 10 °C) to remove precipitated proteins. The supernatant (500 µL) was passed through PLD+ columns. The obtained filtrate (300 µL) was transferred into a clear tube and diluted with 180 µL ultrapure water to obtain an aqueous MeCN solution (50%, v/v). The diluted sample filtrate was combined with internal standard solutions as described for solid samples and injected into the LC-MS.
Preparation of Standard Solutions
The stock solution of avenacoside A (500 mg/L) was prepared in ultrapure water and the aliquots were stored at −80 °C. The stock solution of saponin B (500 mg/L) was prepared in aqueous EtOH (60%, v/v) and aliquots were stored at −80 °C. The internal standard stock solution of soyasaponin Ba (100 mg/L) was prepared in MeOH.
The stock solution of U-13C-oat seed flour extract containing 13C51-avenacoside A was prepared using the previously described solid sample extraction method 2D with some modifications. U-13C-oat seed flour (150 mg) was weighed into a 50 mL volumetric flask, filled with EtOH (70%, v/v), and mixed thoroughly. The flask was ultrasonicated for 30 min (without additional heating) and the obtained solution was centrifuged (17,000× g for 10 min at 10 °C) to remove insoluble matter. The supernatant was passed through PLD+ columns using a vacuum manifold. The cleaned extract was aliquoted and stored at −80 °C.
The internal standard working solutions were prepared freshly before the analysis. The working solution of the internal standard soyasaponin Ba was prepared by diluting the stock solution in aqueous MeCN (50%, v/v). The U-13C-oat extract working solution was prepared by diluting the stock solution two-fold with neat MeCN.
Liquid Chromatography Mass Spectrometry
Samples were analysed using a Waters UPLC® system (Waters Corporation, Milford, MA, USA) coupled with a Waters Quattro Premier XE Mass Spectrometer equipped with a ZSpray™ Source and controlled by Waters software. The analytes were ionised under negative electrospray ionisation (ESI-) and optimised source conditions. The source temperature was set to 120 °C, high-purity nitrogen was fed into the source at 25 L/h (cone) and 600 L/h (desolvation), and the desolvation gas was heated to 350 °C. The capillary voltage was set to −1.5 kV, cone voltage to 80 V, and extractor
Calibration and Quantification
The working solution was prepared by diluting standard stock solutions 100 times with MeCN:H2O:EtOH solution (50:36:14, v/v). Internal standards, soyasaponin Ba and U-13C-oat extract, were added before injection to the autosampler vial, and their concentrations in the vial were set at 0.75 mg/L and 0.3 mg/L, respectively. Calibration curve standard solutions (100 µL) were mixed with internal standard working solutions (50 µL U-13C-oat extract working solution and 50 µL soyasaponin Ba working solution). Calibration curves were built for avenacoside A (0.01–2.44 mg/L) and saponin B (0.01–2.48 mg/L) using eight-point measurements of serially diluted standards, which were run in triplicate. The regression was obtained by fitting the points to a linear equation. The external standard calibration curves were built by correlating the concentrations of the external standards to the response factors, which were calculated according to Equation (1):
response factor (RF) = (area of analyte)/(area of internal standard) (1)
As only the avenacoside A standard was commercially available, the other analytes of interest (avenacoside B and 26-desglucoavenacoside A) were quantified relatively using the avenacoside A calibration curve. Avenacoside B and 26-desglucoavenacoside A results are presented in avenacoside A equivalents. Avenacosides were quantified using isotopically labelled 13C-avenacoside A as an internal standard. As DDMP saponin could not be sourced commercially, its quantification was based on the saponin B standard curve, and the results are given in saponin B equivalents. Both were quantified using soyasaponin Ba as an internal standard.
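As an illustration of the internal-standard calibration described above, the sketch below (Python, with hypothetical peak areas and concentrations rather than the authors' actual data or software) computes response factors according to Equation (1), fits a linear calibration curve, and back-calculates the concentration of an unknown sample from its response factor.

```python
import numpy as np

def response_factor(area_analyte: float, area_internal_standard: float) -> float:
    """Equation (1): RF = area of analyte / area of internal standard."""
    return area_analyte / area_internal_standard

# Hypothetical eight-point calibration for avenacoside A (mg/L) with
# illustrative peak areas; real values come from the LC-MS runs.
conc = np.array([0.01, 0.02, 0.05, 0.15, 0.3, 0.6, 1.2, 2.44])
area_std = np.array([310.0, 640.0, 1560.0, 4700.0, 9350.0, 18800.0, 37500.0, 76000.0])
area_is = np.full_like(area_std, 25000.0)   # 13C-avenacoside A internal standard areas

rf = area_std / area_is                      # response factors of calibration standards
slope, intercept = np.polyfit(conc, rf, 1)   # linear regression: RF = slope * conc + intercept

# Back-calculate an unknown sample from its measured areas.
rf_sample = response_factor(12000.0, 24500.0)
conc_sample = (rf_sample - intercept) / slope
print(f"slope = {slope:.3f}, intercept = {intercept:.4f}, sample ≈ {conc_sample:.2f} mg/L")
```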
Validation of the Method
The following parameters were assessed during method validation: linearity, limit of detection (LOD), limit of quantification (LOQ), precision, specificity, sample extraction recoveries, and matrix effect (ME). Developed extraction methods for solid and liquid samples were validated separately. Oat protein concentrate and pea protein isolate were used to validate the solid sample extraction method. Saponin determination in liquid samples was validated using oat and pea drinks.
The linear range and linearity were evaluated via repeated measurements of standard solutions of avenacoside A and saponin B consisting of 8 individual points obtained from serial dilution of stock solutions. For the calculation of LOD and LOQ values for avenacoside A and saponin B compounds, the standard deviation (SD), obtained by analysing the peak areas of the lowest standard concentration point, was multiplied by three or ten, respectively [42].
To determine the intra-day precision of the instrumental method, oat protein concentrate and pea protein isolate extracts containing all analytes and internal standards were injected six times, and for inter-day precision, sample extracts were studied across three independent days to confirm the stability of the retention times and peak areas. The precision of the extraction methods was determined by repeatability (intra-day) and intermediate precision (inter-day). Repeatability was assessed by performing six repeated analyses of the samples on the same day, while the intermediate precision of the method was assessed using samples that were analysed on three different days over two months under the same experimental conditions. The total recoveries for avenacoside A and saponin B were evaluated by spiking the solid and liquid samples with a known amount of avenacoside A and saponin B at three different concentration levels. For estimation of solid sample extraction method recovery, oat protein concentrate and pea protein isolate (100 mg) were weighed into a 10 mL volumetric flask (n = 3). Aliquots of avenacoside A and saponin B standard solutions (10 mL) at three different concentrations were prepared in aqueous EtOH (70%, v/v) separately. These solutions were added to oat protein concentrate and pea protein isolate, mixed thoroughly, and subjected to the solid sample extraction method as described above. The recoveries of avenacoside A and saponin B in oat and pea liquid samples were determined by cross-matrix spiking of both sample matrices. For estimation of liquid sample extraction method recovery, separate standard stock solutions of avenacoside A and saponin B were prepared (200 mg/L). These solutions were added in different volumes to 0.25 g of liquid sample (oat and pea drink) (n = 3) weighed into a 5 mL volumetric flask, mixed thoroughly, and subjected to the liquid sample extraction method as described above. The total recovery was calculated using Equation (2) [43]:
total recovery (%) = (C_spiked / (C_unspiked + C_spike)) × 100% (2)
where C_spiked is the amount of saponin determined in the spiked sample, C_unspiked is the amount of saponin in the unspiked sample, and C_spike is the amount of saponin spiked at the three different concentration levels. ME, as one of the most problematic issues in LC-MS, was evaluated for all four sample matrices (oat protein concentrate, pea protein isolate, and the oat and pea plant-based drinks) by post-extraction sample spiking with calibration curve standard solutions, then constructing a calibration curve based on response factors and spiked standard concentrations, and comparing the matrix-matched calibration curve slope with the calibration curve slope in solvent (Equation (3)) [42]:
ME (%) = (slope_matrix-matched / slope_solvent) × 100% (3)
Statistical analysis was carried out using Excel® (Microsoft® 365 for enterprise). The results are presented as mean ± SD or relative standard deviation (RSD).
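A minimal sketch (Python, with hypothetical numbers; the symbols follow Equations (2) and (3) above) of how the spike recovery and the matrix effect could be computed from the measured values:

```python
def total_recovery(c_spiked_measured: float, c_unspiked: float, c_spike: float) -> float:
    """Equation (2): recovery (%) determined from a spiked sample."""
    return c_spiked_measured / (c_unspiked + c_spike) * 100.0

def matrix_effect(slope_matrix_matched: float, slope_solvent: float) -> float:
    """Equation (3): ME (%) from matrix-matched vs. solvent calibration slopes."""
    return slope_matrix_matched / slope_solvent * 100.0

# Hypothetical example: a sample containing 0.40 mg/L of analyte spiked with 0.50 mg/L.
print(total_recovery(c_spiked_measured=0.86, c_unspiked=0.40, c_spike=0.50))  # ~95.6 %
# Hypothetical slopes of the matrix-matched and solvent calibration curves.
print(matrix_effect(slope_matrix_matched=0.98, slope_solvent=0.93))           # ~105.4 %
```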
Development of Liquid Chromatography Method
The HPLC method was developed and assessed by analysing external standards and compounds available in oat and pea sample matrices. During development of the liquid chromatography method, two types of stationary phase chemistry were tested (C18 and HILIC) as well as different column dimensions. The best separation performance in terms of time of analysis, selectivity, and efficiency was achieved by the BEH Amide column (1.0 × 50 mm, 1.7 µm). Based on the literature [9,19] and scan-type experiments on oat flour and pea flour sample extracts, m/z values for SIR channels were chosen for the detection and relative quantification of the targeted compounds for which no standards were available in these sample matrices. Avenacoside B and 26-desglucoavenacoside A were found to be present in the oat sample matrix in addition to avenacoside A. DDMP saponin also occurred in the pea sample matrix besides saponin B. MRM experiments were conducted during method development, but we found that the MRM approach did not bring any additional selectivity and significantly reduced sensitivity, as consistent fragments were not produced. An example chromatogram obtained by injecting the oat and pea flour extracts is shown in Supplementary Figure S1.
Development of Sample Extraction Methods
Two previously published extraction methods (avenacosides in grain and husks of oats [9] and saponins in peas [19]) were the starting points for the development of a method for simultaneous saponin extraction from oat and pea matrices. As both extraction methods were time-consuming, a more efficient sample preparation was proposed for saponin quantification. All samples were analysed using LC-MS method described in the Materials and Methods section. Table 1 shows the main steps of extraction methods and saponin extraction yields obtained by reference methods (1A and 2A) and modified methods (1B, 2B, 2C, and 2D). To demonstrate the efficiency of the optimized methods, oat protein concentrate and pea protein isolate were analysed in duplicate.
Since both reference methods [9,19] started by fat elimination, defatted oat protein concentrate (fat 18.9%) and pea protein isolate (fat 4.7%) were extracted using methods 1A, 1B, 2A, and 2B. The oat protein concentrate extracted using method 1B gave 37% higher avenacoside A concentration compared to method 1A, and method 2B resulted in 50% higher yield than method 2A. Overall, the highest avenacoside A content in oat protein concentrate was achieved using extraction method 2B. Using method 1B, the pea protein isolate gave two times higher saponin B yield than using extraction method 1A, and method 2B gave a 76% higher yield than method 2A. Thus, the highest saponin B amount from pea protein isolate was extracted using method 2B. Although both improved methods 1B and 2B gave similar saponin yields in analysed matrices, it was decided to proceed with more process-efficient method 2B, as method 1B utilizing two-step methanol reflux extraction is very time-consuming.
The necessity for fat removal before saponin extraction from the matrix was determined. For this, saponins from four samples (oat flour, oat protein concentrate, pea flour, and pea protein isolate) were extracted using extraction methods 2B and 2C, and lastly, the extracts were filtered through different filtering devices (molecular weight cut-off filters with different membrane pore sizes (3, 10, 30, and 50 kDa), a 0.2 µm syringe filter, and ISOLUTE® PLD+ Protein and Phospholipid Removal columns) before the LC-MS analysis. The results of this experiment are shown in Supplementary Figure S2. No significant differences in avenacoside A, avenacoside B, saponin B, and DDMP saponin content were determined between Soxhlet-defatted and non-defatted oat and pea matrices. On the other hand, different molecular cut-off sizes had a significant impact on the recovery of saponins. The 3 kDa and 10 kDa cut-off filters showed inferior performance irrespective of the sample matrix and saponin type determined. The maximum recovery of analytes in the samples was achieved using 50 kDa and, in some cases, 30 kDa cut-off devices. In all sample matrices except oat protein concentrate, the application of PLD+ columns and syringe filters gave even better results than 30 kDa or 50 kDa cut-off filters. Although the PLD+ and 0.2 µm syringe filters gave quite similar analyte recovery, the application of PLD+ columns resulted in clearer MS chromatograms with a minimum number of interfering peaks in the chromatogram baseline. Moreover, filtering through the PLD+ column enables an easy transition of the procedure to a high-throughput workflow in the case of using 96-well PLD+ plates. The ISOLUTE® PLD+ proprietary multifunctional sorbent phase is optimised to selectively retain proteins and phospholipids [44]. The results indicated that pre-extraction fat removal is not necessary before saponin extraction and could be omitted, and that the application of PLD+ columns is the best solution for post-extraction clean-up of sample extracts. This resulted in a modified method 2C (described in Table 1). The influence of ultrasonic power on the saponin extraction yields was also investigated. Saponins from oat protein concentrate, pea protein isolate, and oat and pea flours were extracted using methods 2C and 2D (results are shown in Supplementary Table S2). The results showed that ultrasonication did not have a statistically significant effect on saponin yield but, considering the extraction time, the application of ultrasonication is preferable. It should be noted that the heating taking place during sonication had no effect on the analytes. During this experiment, the ultrasonic bath heated itself from ambient temperature (23 °C) to 40 °C in 30 min. Previous research has shown that the exposure of DDMP saponin to a temperature higher than 40 °C has a profound effect on its degradation into saponin B [18]. However, in another study, it was reported that pure DDMP saponin in methanolic solution started to decrease in concentration when heated at 65 °C [45].
Based on the obtained results and considering the extraction time and yield, method 2D was utilized for analysis and validation of all solid samples.
Liquid food samples were analysed without the need to use lyophilisation before the sample extraction. The sample preparation method was based only on the application of ISOLUTE ® PLD+ cartridges for sample extract purification before LC-MS analysis, previously chosen as the most efficient for cleaning the extracts of the solid samples.
Validation of the Method
When the chromatographic methods and sample extraction methods were developed, validation was performed to evaluate the linear range, LODs and LOQs, precision, recoveries, and matrix effect of the proposed method. The linearity of response and other calibration parameters for avenacoside A and saponin B are presented in Table 2. Linearity for these two saponin standards was obtained in the concentration range of 0.01–2.5 mg/L. The LOQs were estimated from the lowest point of the calibration curve, being 0.015 mg/L for avenacoside A and 0.014 mg/L for saponin B. The obtained LOQ results were lower than or in accordance with previous research [9,13,17,39]. After linearity was found to be acceptable for avenacoside A and saponin B, the repeatability of the method was appraised. Repeatability of retention times and peak areas was studied first with six replicate injections of oat protein concentrate and pea protein isolate extract. Table 3 shows the repeatability of retention times, peak areas, and the precision of the solid and liquid sample extraction methods. RSDs of peak areas for all saponins did not exceed 6%. Intra- and inter-day RSDs were at a similar level, indicating that the methods are reproducible to an acceptable extent for the routine analysis of oat and pea products. Intra-day and inter-day RSDs were determined by extracting oat protein concentrate, pea protein isolate, and plant-based drinks on different days. The RSD of the intra-day precision ranged from 6 to 13% and of the inter-day precision from 7 to 11% in powdered oat and pea samples. For oat and pea plant-based drinks, the intra-day precision ranged from 3 to 12% and the inter-day precision from 7 to 16%. The precision for DDMP saponin in the pea drink could not be evaluated despite multiple measurements (DDMP saponin content in this sample was <LOQ).
Table 3. Repeatability of retention times (RT) and peak areas of saponins, and precision of the whole method.
The recoveries were determined in oat protein concentrate and pea protein isolate powder by spiking the oat matrix with avenacoside A and the pea matrix with saponin B. The recovery of analytes in the case of the liquid sample extraction method was investigated separately. Table 4 shows the recovery results of powdered and liquid samples. The recoveries of avenacoside A and saponin B ranged from 90 to 115% and from 82 to 100% in oat protein concentrate and pea protein isolate, respectively. In the oat drink, the recovery of avenacoside A ranged from 96 to 113% and of saponin B from 98 to 113%. In the pea drink matrix, the recoveries of avenacoside A and saponin B were from 94 to 106% and from 89 to 98%, respectively. According to validation guidelines, the acceptable recovery range for this method should be 80 to 110% [46]. Thus, the mean values of the obtained recoveries were acceptable for both matrices. The recovery results obtained with the current procedure were similar to those reported for previously proposed methods [9,13]. The oat protein concentrate ME on avenacoside A was 100%, and the pea protein isolate ME on saponin B was 110%. Avenacoside A and saponin B ME were 107% and 105% in the oat drink and 105% and 102% in the pea drink, respectively. All measured ME were in the optimal range between 90 and 110% [47].
The stock solution of U-13 C-oat seed flour extract was analysed for purity. The unlabelled avenacosides were not detected; thus, isotopically labelled avenacoside A was regarded as fully labelled. The working solution of 13 C-oat flour was added into the LC-MS vial before the analysis to assess the quantity of analytes and take into account ME. Moreover, recovery experiments confirmed that the method could be used even with internal standards added post-extraction.
Overall, the method has demonstrated acceptable validation performance in terms of recovery, sensitivity, specificity, and precision, and could be characterised as robust and effective and could potentially be applied in a high-throughput environment. Thus, the developed sample extraction method and the LC-MS method are suitable tools for the analysis of oat and pea saponins in different matrices, e.g., flours, protein concentrates and isolates, mixed matrices, and liquid plant-based drinks.
Determined Concentrations of Saponins in Food Ingredients, Half- and End-Products
High sensitivity and reproducibility as well as very short analysis time make the developed method suitable for routine quality analysis of oat- and pea-based food ingredients. In whole-grain oat flour, the contents of avenacoside A, avenacoside B, and 26-desglucoavenacoside A were 23.4 ± 2.9 mg/100 g, 14.0 ± 1.5 mg/100 g, and below LOQ, respectively. According to previous research, the concentrations of avenacosides and their ratios are different and depend largely on the variety of oats [9]. According to the latter study, the average avenacoside A content in oat grain in 16 analysed varieties was 36 ± 8 mg/100 g, avenacoside B content was 30 ± 4 mg/100 g, and 26-desglucoavenacoside A was 2.4 ± 0.8 mg/100 g [9]. Indeed, the contents of avenacoside A differed up to two-fold depending on the variety, and the ratios of avenacoside A to avenacoside B varied from 0.9 to 1.7 [9]. According to Günther-Jordanland et al. (2020), avenacoside A and avenacoside B contents in oat flour have been reported to be 24.6 mg/100 g and 21.9 mg/100 g, respectively [13]. Thus, the concentration of avenacosides in the whole-grain oat flour determined in the present study is in good correspondence with the results reported before [9,13]. In oat protein concentrate (53% protein; Table S1), avenacoside A content was 42.3 ± 3.0 mg/100 g, avenacoside B was 33.8 ± 0.7 mg/100 g, and 26-desglucoavenacoside A was 5.1 ± 0.2 mg/100 g. According to the specification (Table S1), this product was manufactured from oat bran. Previous research has shown that the average content of avenacoside A and avenacoside B in three analysed oat bran products was 26 ± 7 mg/100 g and 8 ± 2 mg/100 g, respectively [17], which is similar to the concentrations determined in the whole-grain flour in the current study. Thus, the increased content of avenacosides in oat protein concentrate should be ascribed to the partial concentration of the oat saponins together with the protein fraction during the production process of oat protein concentrate. In the oat drink, avenacoside A content was 4.6 ± 0.1 mg/100 g, avenacoside B was 2.7 ± 0.2 mg/100 g, and 26-desglucoavenacoside A was below LOQ. As it was a commercial liquid product with low dry matter content, it showed an apparently lower content of the measured saponins. Nevertheless, according to the specification (Table S1), the product contains only 1% of protein and the oat base is the only protein source in the oat drink. In this respect, considering the oat drink and, e.g., the whole-grain oat flour (12.5% of protein), the ratio of avenacosides to protein is much higher in the oat drink. One can suppose considerable migration of saponins into the liquid phase when soaking the oats during the initial step of oat drink manufacture.
In pea flour (17.9% protein; Table S1), saponin B content was 6.2 ± 0.4 mg/100 g and relatively quantified DDMP saponin content was 61.1 ± 2.0 mg/100 g. In fact, our findings are inconsistent with the results of Reim and Rohn (2015), who analysed saponin B and DDMP saponin contents in hulls and peas of six different pea varieties using the HPTLC method [32]. They reported that saponin content in peeled peas was 10 to 40 mg/100 g of saponin B and 0 to 20 mg/100 g of DDMP saponin depending on pea variety [32]. Nonetheless, the present findings of high DDMP content in pea flour are comparable with the results reported in [19]: the DDMP saponin content varied from 70 to 150 mg/100 g DM, whereas saponin B varied from 0 to 40 mg/100 g DM [19]. Our results confirm that DDMP saponin is the predominant naturally occurring saponin present in pea. The high level of DDMP saponin in pea flour was observed in the current study most likely because the flour had not been thermally treated and the DDMP saponin had not been converted into saponin B. In pea protein concentrate (46.9% protein; Table S1), the saponin B content was 80.3 ± 1.6 mg/100 g and DDMP content was 107.6 ± 4.1 mg/100 g. Saponins are found in the cotyledons and are often associated with the protein bodies of legumes [4]. Therefore, saponin accumulation in pea concentrate produced by dry milling and air classification is evident [4], which is in accordance with the at least two-fold higher levels of saponins in pea protein concentrate compared to pea flour determined in our study. In pea protein isolate (75% protein; Table S1), saponin B content was 243.8 ± 6.2 mg/100 g and DDMP content was 10.8 ± 0.7 mg/100 g. These results show that protein wet extraction and isoelectric precipitation, likely performed to obtain the protein isolate, degrade the unstable DDMP saponin naturally occurring in peas into saponin B. In the pea drink, saponin B content was 3.5 ± 0.2 mg/100 g and DDMP saponin was below LOQ. According to the product specification (Table S1), it contains 2% of protein, and the only protein source is pea. Although the exact production process of the pea drink is unknown, taking into account the content of saponin B per 1 g of pea drink protein (1.75 mg), the probable pea protein source should contain at least 175 mg of saponins (sum of saponin B and DDMP saponin, as DDMP saponin is converted into saponin B during drink pasteurization) per 100 g of pure pea protein.
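The protein-normalised estimate quoted above can be reproduced with simple arithmetic; the sketch below (Python, using only the values reported in this paragraph and the stated assumption that DDMP saponin is fully converted to saponin B during pasteurization) illustrates the back-calculation.

```python
# Values reported for the commercial pea drink (per 100 g of drink).
saponin_b = 3.5   # mg of saponin B per 100 g of drink
protein = 2.0     # g of protein per 100 g of drink (2% protein, pea as the only source)

# Saponin B per gram of pea protein in the drink.
saponin_per_g_protein = saponin_b / protein            # = 1.75 mg/g protein

# Scaled to 100 g of pure pea protein, assuming all DDMP saponin had already
# been converted to saponin B during pasteurization.
saponin_per_100g_protein = saponin_per_g_protein * 100  # = 175 mg/100 g protein
print(saponin_per_g_protein, saponin_per_100g_protein)
```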
To test the applicability of the developed method for simultaneous determination of oat and pea saponins from one matrix, the blend of pea protein isolate, oat protein concentrate, and pea protein concentrate was used. In addition, part of the mixture was extruded according to a previously published protocol [41]. Results show that the avenacoside A, avenacoside B, 26-desglucoavenacoside A, saponin B, and DDMP saponin contents in the blend were 13.5 ± 1.0 mg/100 g, 10.9 ± 0.3 mg/100 g, 1.3 ± 0.3 mg/100 g, 123.9 ± 6.2 mg/100 g, and 27.1 ± 3.5 mg/100 g, respectively. Considering that this blend was composed of 52% pea protein isolate, 28% oat protein concentrate, and 20% pea protein concentrate, which were also analysed separately, the recoveries of avenacoside A, avenacoside B, 26-desglucoavenacoside A, saponin B, and DDMP saponin were 114%, 115%, 90%, 95%, and 100%, respectively. In the extruded blend, avenacoside B and 26-desglucoavenacoside A content did not change significantly, avenacoside A content decreased by 21%, and saponin B content increased from 123.9 to 132.9 mg/100 g, which could potentially happen due to DDMP saponin conversion into saponin B during extrusion cooking.
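The blend recoveries quoted above can be read as the measured blend content divided by the content expected from the mass-weighted sum of the separately analysed components. A minimal sketch (Python) of that calculation for avenacoside A, using the component contents reported earlier in this section and the assumption that the pea ingredients contribute no avenacoside A:

```python
# Mass fractions of the blend components (w/w).
fractions = {
    "pea protein isolate": 0.52,
    "oat protein concentrate": 0.28,
    "pea protein concentrate": 0.20,
}

# Avenacoside A content of each separately analysed component (mg/100 g);
# only the oat protein concentrate contributes avenacoside A (42.3 mg/100 g).
avenacoside_a = {
    "pea protein isolate": 0.0,
    "oat protein concentrate": 42.3,
    "pea protein concentrate": 0.0,
}

expected = sum(fractions[c] * avenacoside_a[c] for c in fractions)  # ≈ 11.8 mg/100 g
measured = 13.5                                                     # mg/100 g found in the blend
recovery = measured / expected * 100                                # ≈ 114 %
print(f"expected ≈ {expected:.1f} mg/100 g, recovery ≈ {recovery:.0f} %")
```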
Conclusions
In conclusion, the HILIC-MS-based method for oat and pea matrices, with a relatively simple extraction procedure for solid and liquid samples, allowing the simultaneous quantification of avenacoside A and saponin B and the relative quantification of avenacoside B, 26-desglucoavenacoside A, and DDMP saponin, was employed for analysis of saponins in various food ingredients and products. Oat protein concentrate, pea protein isolate, and oat- and pea-based drinks were chosen for development and validation of the sample extraction methods. The optimised HILIC-MS method was able to absolutely quantify avenacoside A and saponin B in the matrices; the other compounds were quantified relatively against these available standards. The validation of the improved methods for both sample types (solid and liquid) showed acceptable linear range, LODs and LOQs, precisions, recoveries, and MEs. Generally, inter-day precision was below 20%. The accuracy and precision of quantification were achieved by using the labelled internal standard (13C-avenacoside A) obtained from U-13C-labelled oat flour and soyasaponin Ba as internal standards. The content of saponins was measured in different plant-based oat and pea products (ingredients, half- and end-products). This method could potentially be extended to other plant-based sample matrices, and the absolute quantification of all analytes could be achieved if the missing saponin standards become commercially available.
Supplementary Materials:
The following supporting information can be downloaded at https:// www.mdpi.com/article/10.3390/foods12050991/s1. Description of solid sample extraction methods (1A, 1B, 2A, 2B, and 2C) used during the method development; Table S1: Nutritional information of analysed products; Figure S1: LC-MS chromatograms of oat and pea flours (SIR and ESI-). In oat flour: avenacoside A, avenacoside B, 26-desglucoavenacoside A, and internal standard 13 C-avenacoside A. In pea flour: saponin B, DDMP saponin, and internal standard soyasaponin Ba; Figure S2: Saponin yield in (a) oat and (b) pea matrices. The effect of sample clean-up: the pre-extraction of fat and six post-extraction filtration possibilities. The results of avenacoside B are presented in equivalents of avenacoside A mg/g and DDMP saponin in equivalents of saponin B mg/g; Table S2: The effectiveness of ultrasonic bath extraction compared to reference extraction conditions using the tube rotator (extraction yield 100%). Funding: The financial support for this research was provided by the European Union European Regional Development Fund (ERDF) and Estonian Research Council via projects RESTA16 and RESTA17.
Data Availability Statement: Data is contained within the article or supplementary material. | 2023-03-01T16:01:27.155Z | 2023-02-26T00:00:00.000 | {
"year": 2023,
"sha1": "9dcb941e5021f704106af96b096e19e143975cdd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/12/5/991/pdf?version=1677461725",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "062f2d64f6d5a9791488ffa4dd04ec1e0d13ffb4",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240090773 | pes2o/s2orc | v3-fos-license | Antimicrobial Susceptibility and Detection of Virulence-Associated Genes in Escherichia coli Strains Isolated from Commercial Broilers
The aim of this study was to investigate the presence of iron-uptake and virulence genes, antibiotic resistance profiles, and phylogenetic relatedness in 115 Escherichia coli (E. coli) strains isolated from broilers in Slovakia and to determine their potential threat to human health. The most frequent phylogroups were B1 (37%) and A (21%), and 33.9% strains were included in pathogenic groups. The commonly observed iron-uptake genes were feoB (94%), sitA (83%), and iutA (58%). Protectins (iss, kpsMTII) were identified in 30% of samples. Four percent of B2-associated broilers carried the papC (P fimbria) gene connected with upper urinary tract infection. The dominant resistance was to tetracycline (49%), ampicillin (66%), ampicillin + sulbactam (27%), ciprofloxacin (61%), and trimethoprim + sulfonamide (34%); moreover, sporadically occurring resistance to cephalosporins, aminoglycosides, fluoroquinolones, and polypeptide colistin was observed. Genotypic analysis of resistance revealed the presence of blaCTX-M-1 and blaCTX-M-2 in two isolates from broilers. Commercial broilers can be reservoirs of virulent and resistant genes as well as E. coli causing (extra-)intestinal infections, which can be a potential threat to humans via direct contact and food.
Introduction
The presence of potentially pathogenic E. coli strains in bird faeces may pose some risks to other animals and humans. In birds, E. coli isolates containing virulence factors are designated as avian pathogenic E. coli (APEC), which cause avian colibacillosis, and together with uropathogenic E. coli (UPEC) and E. coli causing meningitis in the newborn (NMEC), they belong among the extra-intestinal E. coli (ExPEC) strains. Since all ExPEC strains present the same phylogenetic origin and a notable degree of overlap in serogroups, sequence types (STs), and virulence-associated genes, several studies have suggested their zoonotic potential [1,2].
Another health concern is the presence of antibiotic-resistant bacteria, an emerging global problem in human and veterinary medicine and one of the main challenges for the twenty-first century. Bacteria can become resistant to antibiotics as a result of efficient horizontal gene-transfer mechanisms through mobile genetic elements such as plasmids, transposons, and integrons [3], the latter of which are able to integrate or excise gene cassettes in their structures.
Consequently, the presence of such pathogenic drug-resistant bacteria poses a public health threat. Humans can be infected due to non-compliance with hygiene procedures, such as not washing hands with soap after manual handling of poultry and not wearing protective clothing and shoes on farms (it is possible to transfer faeces on shoes and contaminate the domestic environment). The other possibility is foodborne infection, because of contamination of meat during slaughtering or of eggs during laying and subsequent insufficient heat treatment of these contaminated products [4].
Numerous studies on phylogenetic groups, virulence-associated genes, and antimicrobial resistance in APEC isolates from farm poultry derived from various colibacillosis have been conducted in many countries, to prevent drug-resistant pathogenic strains reaching consumer foodstuffs [2]. These pathogenic strains result generally from commensals by the acquisition of infectious capacity through horizontal transfer of virulence genes [5]. In order to better understand the risks of emergence of antibiotic-resistant pathogens from healthy animal reservoirs, it is necessary to know the presence and prevalence of virulence factors and antibiotic resistance in faecal commensal E. coli. In this context, the aim of our study was to analyse the distribution of phylogenetic groups and the occurrence of virulence factors and antimicrobial resistance in faecal E. coli strains isolated from healthy broilers kept on commercial farms in Slovakia.
Results
A total of 115 E. coli strains found in asymptomatic broilers from five commercial poultry farms were included in this study.
Genes encoding protectins (iss, kpsMTII) were identified in an average of 30% of broiler samples (iss in 34 and kpsMTII in 35 isolates). The presence of a gene encoding P fimbriae (papC), which is strongly connected with upper urinary tract infections, was confirmed in 4% of B2-associated E. coli.
Discussion
In our study, we found six virulence-associated genes of ExPEC in isolates from the intestinal microbiota of broilers, namely, tsh (9.6%), papC (4.3%), cvaC (21.7%), iss (29.6%), kpsMTII (30.4%), and ibeA (3.5%). The percentages indicate that the most numerous virulence factors were protectins. These protectins are typical for avian pathogenic E. coli (APEC), and they also contribute to the survival and proliferation of microorganisms in the host [9]. Wang et al. [10] analysed the presence of the invasion of brain endothelium protein A gene in isolates from ducks with colibacillosis, but it was not found in healthy birds. Our results showed the presence of the ibeA gene in only 3.5% of strains from healthy (asymptomatic) chickens, belonging to groups B2 and D. The recent study by Meena et al. [11] reported the genes characteristic of ExPEC (iroN, cvaC, and kpsMTII) more often in groups B2 and D, while the papC gene was distributed among all phylogroups. Our results show iroN mainly in B1; kpsMTII in D; and cvaC and papC equally in broiler B2 and D groups. The strains containing genes for capsule synthesis and P fimbriae were classified into the ExPEC pathotype. The production of adhesins (fimbriae/pili) is one of the virulence factors of UPEC, enabling colonization, adherence, and induction of the host inflammatory response [12]. P fimbriae, coded by the papC gene and localized at the cell surface, are associated with upper urinary tract infections by binding to the endothelium of the kidney vascular system. Colonization of the urinary tract by UPEC carrying the papC gene can lead to pyelonephritis [13]. We found the papC gene in a total of five strains from broilers, of which three were UPEC (group B2), one was APEC (group D), and the remaining strain was ExPEC (unknown group).
E. coli strains colonizing human and animal guts can acquire genes coding virulence factors from pathogenic strains and can cause intestinal or extra-intestinal infections in various organ systems (specifically, urinary tract infections, pneumonia, meningitis, and sepsis). Köhler and Dobrindt [14] characterized virulence factors typical for extra-intestinal pathogenic E. coli (ExPEC), such as various adhesins (e.g., S/F1C and P fimbriae), toxins (e.g., cytotoxic necrotizing factor 1), factors for eliminating the host defense system (e.g., increased serum survival, colicin V, and capsule synthesis), and for nutrition acquisition (e.g., siderophores and their receptors). These virulence factors are often carried on mobile genetic elements such as pathogenity islands (PAI), which help to transport genes between commensals and pathogens. Iron availability modulates the gut microbiota composition [15]. Bacterial iron recovery is thus also one of the virulence factors available to bacteria. One method of getting iron from the environment is through the production of specific iron chelators, called siderophores, and their receptors on cell surfaces [16]. Tu et al. [17] investigated the influence of iron-uptake genes fyuA and irp2 on APEC pathogenesis. Deletion of these genes from the genome led to reduced transcription of virulence genes and their ability to adhere to cells. In general, pathogenicity was more reduced through the absence of fyuA than in strains with the deletion of irp2. These authors summarized the hypothesis about the cooperation of irp2 and fyuA in APEC pathogenicity, which can be the reason for the presence of irp2 gene in all isolates containing gene fyuA analysed in the present study. The iron acquisition systems used in low-iron conditions are encoded by sitA, iroN, and iutA genes. More than half of the strains from broilers and goshawks possessed these genes in the research by Handrová and Kmet' [18]. Our results point to the presence of all three genes in E. coli strains isolated from broilers. The Feo system is able to transport Fe 2+ , which occurs in anaerobic conditions and at low pH and which is necessary for bacteria living in such environments. Proteins FeoB and FeoA, which are components of the Feo system, participated in bacterial virulence [19]. The feoB together with sitA were the most common genes identified by us (on average, around 90%) in broilers.
A large diversity of antimicrobials is used to raise poultry in most countries, and most of them are considered to be essential in human medicine [21]. The study conducted by Joosten et al. [22] reported that aminopenicillins (ampicillin and amoxicilin), fluoro-quinolones, and tetracycline were the most frequently used antimicrobials in broilers in nine European countries, and only 3 to 26% of drugs were of the "highest priority critically important for human health" group of drugs (including colistin, quinolones, cephalosporins, and macrolides). This roughly reflects the situation in our farms, where the occurrence of resistance was as follows: the highest resistance was to ampicillin followed by ciprofloxacin, tetracycline, and ampicillin + sulbactam. The resistance to cephalosporins, aminoglycosides, and polymyxins was detected at less than 6% of isolates.
Johar et al. [23] analysed resistance of the APEC strain in healthy and non-healthy chickens. The resistance to ampicillin, cephalothin, ciprofloxacin, tetracycline, and fosfomycin was higher than 75% in both bird groups. Isolates from sick birds showed resistance to cefuroxime, ceftriaxone (4.4%), piperacillin-tazobactam (1.5%), and colistin (33.3%). Resistance to cephalosporins, namely, cefuroxime, cefotaxime, and ceftazidime, was found in 5.88%, 4.9%, and 2.94% of birds, respectively and to piperacillin-tazobactam (1.96%) in healthy broilers in our study. Of broilers, 2.94% demonstrated phenotypic resistance to colistin, but the value of MIC 90 was 1.0 mg/L, i.e., less than what is listed in EUCAST clinical breakpoints (2.0 mg/L). The resistance of E. coli from broilers in EU member states was analysed and evaluated by the European Food Safety Authority (EFSA) and the European Centre for Disease Prevention and Control (ECDC). Their results showed that the average of resistance to cephalosporins was 3%, with 8.5% in Spain, 3% in France and Poland, and 1% in Germany [24]. In another study from Italy, the susceptibility of E. coli from various poultry sources was investigated. The results confirmed the resistance to cephalosporins of the first (cefazolin-16%), second (cefoxitin-2%), and third (ceftazidime-2%) generation [25]. The results of these studies are consistent with our findings regarding cephalosporins in broilers (cefuroxime-5.88%, cefotaxime-4.9%, and ceftazidime-2.94%). Higher percentages detected by us represent resistance to ampicillin (65.69% of broilers), ampicillin-sulbactam (26.47%), tetracycline (49.02%), and trimethoprimsulphonamide (33.66%). Tetracycline is commonly used in poultry treatment, and this is the main reason for the high resistance to this antibiotic [26]. E. coli isolates from southwestern Nigeria were resistant to tetracycline in 81% of samples, which was the highest level found in this study [27]. In our findings, we detected resistance to tetracyclines as the third highest resistance in broilers (49.02%). E. coli isolated from broilers reared on farms in the Netherlands were resistant to ciprofloxacin in 50% of cases, in comparison to turkeys (45%) and laying hens (0%) [28]. Reduced sensitivity to ciprofloxacin was confirmed in our study too, namely, in 60.78% of broilers. Resistance to aminoglycosides was confirmed in commercial poultry in Nepal, where seven strains isolated from laying hens were resistant to gentamicin. On the other hand, broilers had not developed this resistance. Resistance to amikacin was not detected in any of the samples [29]. The E. coli from broilers in our study showed resistance to gentamicin in 2.94% and to tobramycin in 1.96% of samples. Two chicken isolates (1.96%) were detected as susceptible with increased exposure to amikacin.
Genotypic resistance was analysed only in some strains according to the results of our interpretive reading.
Cefotaximases confer resistance against all β-lactam antibiotics apart from cephamycins and carbapenems. Furthermore, CTX-M-producing E. coli strains are often resistant to other families of antibiotics such as quinolones, aminoglycosides, or cotrimoxazole. This remains a major emerging health concern because the choice of effective antimicrobial drugs is limited [30]. We found similar results: CTX-M-producing E. coli were confirmed in two broilers. One of them was positive for the presence of the gene blaCTX-M-1 and the second for blaCTX-M-2. One strain had a gene encoding resistance to aminoglycosides; both of them were phenotypically resistant to ciprofloxacin and trimethoprim-sulfonamide. In the Enterobacterales, there is often a connection between plasmid-mediated quinolone resistance and extended-spectrum β-lactamase (ESBL) mechanisms [31]. The presence of aac(6′)-Ib-cr was confirmed in both broiler strains with an identified mechanism of resistance. The acquisition of resistance leads to the formation of multi-drug-resistant strains and to the limitation of the effects of antimicrobial therapy [32].
Reduced sensitivity to carbapenems was not confirmed in any of our isolates. This is positive information for human medicine, because of their use as a last resort in treating bacterial infections, together with polymyxin colistin as the last choice in the treatment of human infections caused by carbapenem-resistant enterobacteria. The development of resistance to these antibiotics is therefore a global threat to the treatment of bacterial infections in humans and animals [33]. Cepas and Soto [34] described the relationship between growing resistance and a reduced amount of virulence factors. One possible hypothesis is that the process of acquisition of antimicrobial resistance is connected with deletion of virulence-associated genes from DNA regions. Several studies have focused on investigating UPEC strains and their susceptibility to quinolones and virulence. The strains resistant to quinolones did not possess virulence factors such as aerobactin and Pfimbriae [35]. Furthermore, E. coli strains producing haemolysins were more susceptible to tetracycline, nalidixic acid, cefotaxime, and cotrimoxazol than isolates without haemolysin production [36]. High resistance and low virulence were observed in non-B2 strains, and, in contrast, the B2 group had wide virulence factor capacity in isolates in a study carried out in French hospitals [37]. Our E. coli isolates of broiler origin presented higher average numbers of virulence factors, specifically 7.9, 7.6, and 4.1 in groups F, B2, and D, respectively. Less virulent were strains from groups A and B1, which accounted for approximately 3.0 factors. Despite the fact that most of the virulence genes were found in the pathogenic phylogroups, their susceptibility was higher compared to the commensal ones. In contrast, more than half (52.6%) of the resistant strains were groups A and B1, but they were poor in terms of virulence. This can be a reason for these strains' survival and growth in an antimicrobial environment, whereas their ability to survive in normal conditions can be limited.
Samples, Bacterial Isolation, and Identification
A total of 115 E. coli strains isolated from cloacal swabs of asymptomatic broilers from commercial farms were analysed in this study. The samples were collected from five commercial broiler farms situated in regions of northern and western Slovakia. All chickens were 12-30 days of age and reared on large-capacity farms with litter-floor rearing. The broiler breed examined in our study was Ross 308. Samples were inoculated in buffered peptone water (Oxoid, Basingstoke, UK) at 37 °C for 12 h and then cultured on MacConkey (Oxoid, Basingstoke, UK) and Uriselect agar (Bio-Rad Laboratories, Hercules, CA, USA) under the same temperature conditions overnight. Individual colonies were identified as E. coli using a matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) biotyper (Bruker Daltonics, Leipzig, Germany), according to the method described by Bessède et al. [38].
DNA Extraction, Clermont's Phylogenetic Typing, and Detection of Iron Uptake and Virulence-Associated Genes
Genomic DNA used for polymerase chain reaction (PCR) analysis was extracted from overnight culture by means of the boiling DNA extraction method. Briefly, samples were centrifuged at 12,000× g for 15 min, the supernatant was discarded, and the pellets were washed with filtered distilled water. Then, the pellets were re-suspended in 200 µL of filtered distilled water, subjected to boiling at 100 °C in a heat block for 10 min, cooled on ice for 10 min, and centrifuged for 2 min at 12,000× g, and the supernatant was used for PCR. The isolated DNA was quantitatively and qualitatively analysed with a Nanodrop 2000c Spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA) and stored at −20 °C.
The isolates were subjected to phylogenetic typing using the quadruplex phylogroup assignment method [39] for detection of the eight E. coli phylogroups. The protocols are based on amplification of chuA, yjaA, arpA, and TspE4.C2 DNA fragments and additional testing for specific genes in the E (arpAgpE) and C (trpAgpC) groups. All detected genes, their primer sequences, PCR product sizes, annealing temperatures, and the relevant references used in PCR are listed in Table 4.
Table 4. Target genes and primer sequences used in the PCRs performed in this study to determine the E. coli phylogroups, iron uptake system, virulence factors, and antimicrobial resistance.
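To make the assignment logic concrete, the sketch below (Python) encodes a simplified reading of the quadruplex decision scheme: the presence or absence of arpA, chuA, yjaA, and TspE4.C2 resolves most isolates directly, while ambiguous profiles are flagged for the additional E-specific (arpAgpE) and C-specific (trpAgpC) screens. The mapping is an illustrative condensation, not a substitute for the full assignment table in [39].

```python
def quadruplex_phylogroup(arpA: bool, chuA: bool, yjaA: bool, tspE4: bool) -> str:
    """Simplified quadruplex phylogroup assignment; ambiguous profiles are
    flagged for the additional singleplex screens rather than resolved here."""
    if arpA and not chuA:
        if not yjaA:
            return "B1" if tspE4 else "A"
        # arpA+ chuA- yjaA+ profiles need the C-specific screen.
        return "A or C (confirm with trpAgpC singleplex)"
    if chuA and not arpA:
        # chuA+ arpA- profiles: B2 if yjaA or TspE4.C2 is present, otherwise F.
        return "B2" if (yjaA or tspE4) else "F"
    if arpA and chuA:
        # arpA+ chuA+ profiles need the E-specific screen to separate D from E.
        return "D or E (confirm with arpAgpE singleplex)"
    return "unknown (resolve against the full assignment table in [39])"

# Example: an isolate positive only for arpA and TspE4.C2 is assigned to B1.
print(quadruplex_phylogroup(arpA=True, chuA=False, yjaA=False, tspE4=True))
```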
Conclusions
In conclusion, our results indicate that broilers reared for human consumption can be considered potential reservoirs and carriers of ExPEC strains containing high numbers of virulence-associated genes, with the possibility of causing infections in both humans and animals. Some of these isolates were resistant to antimicrobials used in human treatment, such as cephalosporins and fluoroquinolones (presence of E. coli with CTX-M, plasmid-mediated quinolone resistance qnrS, and aac(6′)-Ib-cr), and 6.1% of broiler-associated isolates showed multi-drug resistance. In addition, the high prevalence of mobile elements (Int1 and Tn3) may allow gene dissemination. Based on these findings, we conclude that commercial broilers can be a potential threat to other animals and humans, because of the potential for meat and other products to become contaminated due to non-compliance with hygiene practices in farming or food handling.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. | 2021-10-29T15:18:41.043Z | 2021-10-26T00:00:00.000 | {
"year": 2021,
"sha1": "45cc1282addf374ea30de7306cee481f96e03dc0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/10/11/1303/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5e917998517d806cf27260d8edb48bb1a3dced3c",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
219988240 | pes2o/s2orc | v3-fos-license | Effectiveness of a video lesson for the correct use in an emergency of the automated external defibrillator (AED)
Background and aim of the work: Every year around 275 thousand people in Europe and 420 thousand in the United States are affected by sudden cardiac arrest. Early electrical defibrillation before the arrival of emergency services can improve survival. Training the population to use the AED is essential. The training method currently in use is the BLSD course, which limits training to a population cohort and may not be enough to meet the requirements of the proposed Law no. 1839/2019. This study aims to verify the effectiveness of an online course that illustrates the practical use of the AED to a population of laypeople. Methods: An observational study was conducted to compare a lay population that watched a video spot with a cohort of people who had participated in BLSD Category A courses. The performances of the two groups were measured immediately after the course and 6 months later. Results: Overall, the video lesson produced positive results. Six months later the skills were partially retained. The cohort that followed the video lesson showed significant deterioration in the ability to correctly position the pads and in safety. Conclusions: Although it should be improved through significant reinforcement, the video spot represents a valid alternative training method for spreading public-access defibrillation and could facilitate the culture of defibrillation required by the new Italian law proposal. (www.actabiomedica.it)
Background
An estimated 17.9 million people died from cardiovascular diseases in 2016, representing 31% of all global deaths. Of these deaths, 85% are due to heart attack and stroke (WHO, 2020) (1). In the same year, 221,914 such deaths were recorded in Italy (ISTAT, 2019) (2).
Sudden cardiac arrest affects about 420 thousand people in the United States and 275 thousand in Europe annually (3). Gräsner et al. (4) calculated an incidence rate in Europe of 84 cardiac arrests per 100 thousand inhabitants. Analysis of the primary heart rhythms shows that a quarter of the victims have initial ventricular fibrillation, which tends to evolve into asystole before out-of-hospital rescuers arrive; this is because, 5-8 minutes after the event, the heart rhythm is often no longer shockable (5). The presence of an automated external defibrillator (AED) at the time of collapse could improve the prognosis by restoring a cardiac rhythm compatible with effective circulation in up to 76% of subjects affected by ventricular fibrillation (6).
Recent studies have shown that the use of the AED by unqualified people or with minimal training is safe and effective. Therefore, if a spectator can use an automatic defibrillator, this practice is recommended (7,8).
In the last ten years, there has been an explosion of public-access defibrillation (PAD) programs; in Italy, too, the legislator has mandated by decree the presence of AEDs in places with large numbers of visitors (9).
Yet despite the literature, the recommendations, the regulations, and the increase in the number of AEDs in public places, the proportion of patients in cardiac arrest who are defibrillated before the rescue vehicle arrives remains very low (10). Brooks et al. (11) and Smith et al. (12) have shown that, when faced with a collapsed, unresponsive person, only 1.7-2.1% of bystanders look for and use an AED before the ambulance arrives.
Since 2001, a specific Italian law has permitted the use of the semi-automatic defibrillator by non-healthcare personnel who have received specific training in cardiopulmonary resuscitation (13). This rule does not seem to facilitate the use of the AED: in countries where the law allows all citizens to use the AED, use is about 15-20% higher (8).
However, the Italian law, by requiring the presence of a defibrillator in sports centers, produced an important first result. In 2015, 123 arrests were reported in sports facilities, with a return of spontaneous circulation in 62% of the victims after use of the AED. This shows that the rule works, even if a device that can be used only by specially trained people is not enough to reduce the incidence in the general population (14). For this reason, we have made efforts to identify potential obstacles related to the use of AEDs by the lay public. These obstacles turned out to be multifactorial: among the causes is certainly concern about legal liability (4-38%); other relevant obstacles are lack of knowledge about how the device works, not feeling comfortable using it, and fear of causing harm to the victim (12).
A public access defibrillation program (PAD) (15) to be successful requires three requisites: firstly, there must be enough devices in the community; secondly, their position must be known or easily identifiable in an emergency, thirdly the public must have sufficient knowledge and confidence to use them. All these requirements must be met for a bystander to provide effective defibrillation. However, generally, national systems have so far focused on the first of these requirements, therefore greater use of public AEDs will not be achieved until the population has acquired greater knowledge and confidence with them (11).
The current classroom teaching methods for acquiring sufficient knowledge and confidence in the use of the AED cannot easily be delivered to the whole community and can present significant logistical, temporal, and financial barriers, especially when large groups must be trained. To cope with these difficulties, the scientific community has proposed alternative training methods for non-experts.
Although not yet validated, these new methods are supported by good results (16). Remarkably, structured courses delivered through video lessons and the web have proven effective even in the physical absence of an instructor (17,20). Recent studies conducted on school-aged students between 14 and 19 years old have shown good efficacy of video-based training; this population seems to be particularly receptive to learning the use of the AED through this teaching method (21).
In Italy, a bill proposed at the end of July 2019, called "Progetto Life", provides that an AED may be used even without specific training; however, rescuers must follow the instructions given by the health professionals of the 112 emergency center (22).
Aim
To assess the level of learning in the use of the AED achieved through viewing a video spot followed by practice on a static simulator.
Research questions
1. In remote training courses, does the lesson through video spots compared to traditional BLSD courses provide the necessary skills for using the defibrillator?
2. Which skills acquired through the video lesson and the BLSD courses are retained by the learner months after the end of training?
Study design and participants
Prospective and retrospective observational studies were conducted in 2019. The research involved first-year students of the Degree Course in Nursing at the University of Bologna and a cohort of workers from metalworking companies in northern Italy. Our convenience sample comprised 91 laypeople (44 students and 47 workers) who had never attended CPR or early defibrillation courses and had no clinical knowledge of the subject. The sample was divided into two groups according to the training method: the students attended the video lesson, while the workers were trained with the IRC©-certified BLSD course.
The video spot production was carried out by medical instructors and nurses experienced in cardiopulmonary resuscitation and with IRC certification. The video creation took inspiration from three videos found on the web: "How to Use a Defibrillator (AED) -First Aid Training" by St John Ambulance, "Emergency CPR / AED Emergency Response Update" and "HeartStart Field Defibrillator (AED) Video of training "by Philips.
1ª Phase of the study
The prospective study took place in this phase. The students were divided into groups with an instructor-to-learner ratio of approximately 1:6. The learners were shown the 4-minute video explaining the use of the AED twice, 5 minutes apart. The video instructions included six essential steps: 1. the 112/118 call; 2. switching on the AED; 3. how to uncover the victim's chest; 4. the positioning of the defibrillator pads; 5. how to comply with the rules for using the AED safely; 6. the delivery of the shock if indicated.
After watching the videos, the students were asked to perform the procedure immediately on the manikin under the supervision of the instructors, who then evaluated the performance using a specific form that summarized the video steps. As regards the retrospective survey, 47 workers were randomly selected from the database of BLSD courses carried out in metalworking companies in a province of northern Italy, and their performances were obtained from the forms. The BLSD courses had been held a few months earlier by the same instructors enrolled for the prospective study with the video lesson (Figure 1).
Figure 1. Phase 1 of the study
2ª Phase of the study
After 6 months, the first-year nursing students were recalled. Of these, 34 responded to the invitation, 10 fewer than in the first phase. Without viewing any video or repeating the instructors' instructions, they were asked to repeat the procedure for using the AED on the manikin.
After 6 months, the 47 randomized workers were also asked to use the AED on a manikin without any revision of the IRC © procedures.
The performance of the two groups was assessed by the same instructors, with the same tool used in the first phase.
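As a minimal sketch of how a six-item evaluation form of this kind can be turned into the per-learner performance percentages reported in the Results, the Python fragment below scores each learner's checklist and summarizes the group. The field names and example records are hypothetical and only illustrate the scoring logic; they are not the authors' actual form or data.

```python
# Hedged sketch: scoring a six-step AED checklist (1 = performed correctly, 0 = not).
# The steps mirror the video instructions: call 112/118, switch on the AED,
# uncover the chest, position the pads, follow safety rules, deliver the shock.
from statistics import mean, stdev

STEPS = ["call_112_118", "switch_on_aed", "uncover_chest",
         "position_pads", "safety_rules", "deliver_shock"]

def performance_pct(checklist: dict) -> float:
    """Percentage of the six steps performed correctly by one learner."""
    return 100.0 * sum(checklist[s] for s in STEPS) / len(STEPS)

# Hypothetical records for three learners (illustrative only).
learners = [
    {"call_112_118": 1, "switch_on_aed": 1, "uncover_chest": 1,
     "position_pads": 1, "safety_rules": 1, "deliver_shock": 1},
    {"call_112_118": 0, "switch_on_aed": 1, "uncover_chest": 1,
     "position_pads": 1, "safety_rules": 1, "deliver_shock": 1},
    {"call_112_118": 1, "switch_on_aed": 1, "uncover_chest": 1,
     "position_pads": 0, "safety_rules": 0, "deliver_shock": 1},
]

scores = [performance_pct(learner) for learner in learners]
print(f"mean = {mean(scores):.2f}%, sd = {stdev(scores):.2f}%")
```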
Ethical considerations
All participants in the study gave their written consent after learning the purpose and method of conducting the research. The anonymity and confidentiality of the participants were guaranteed. The research was authorized by the Bioethics Committee of Bologna University and by the Coordinator of the Degree Course in Nursing in Faenza, Italy.
Statistical analyses
The data were collected in Microsoft Excel; the statistical analysis was conducted with SPSS software, version 26. Descriptive statistics (mean, standard deviation, frequency, percentages) were calculated. Significance was assessed with Pearson's chi-square test.
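For readers who wish to reproduce this kind of between-group comparison outside SPSS, the sketch below runs a Pearson's chi-square test on a 2×2 contingency table in Python. The counts correspond to the skill-5 retention result reported in the Results (30 of 47 workers vs. 1 of 34 students); the library choice, variable names, and the default Yates continuity correction are our own illustrative assumptions, not part of the original analysis.

```python
# Hedged sketch: Pearson's chi-square test for one skill at the 6-month follow-up.
# Counts correspond to "follow instructions safely": 30 of 47 workers (BLSD course)
# vs. 1 of 34 students (video lesson) performed the skill correctly.
from scipy.stats import chi2_contingency

# Rows: training method; columns: [skill performed, skill not performed]
table = [
    [30, 47 - 30],  # BLSD-trained workers
    [1, 34 - 1],    # video-lesson students
]

# Note: scipy applies Yates' continuity correction by default for 2x2 tables,
# so the statistic may differ slightly from an uncorrected SPSS output.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```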
Results
Of the nursing students evaluated immediately after the video lesson, 90.9% (n = 40) were female, with an average age of 21.32 ± 5.59 years; all had a high school diploma and no further degrees.
Of the workers who obtained the BLSD certificate, 95.7% (n = 45) were male, with an average age of 38.51 ± 11.69 years; all had a high school diploma. Overall, the results of the video lesson were positive. Compared with the performance required by the BLSD course, which demands knowledge of all the steps (100%), the video lesson achieved an average performance percentage of 89.77% ± 15.75. As regards the six individual skills taught by the video: 79.5% called 112 or 118, 95.5% correctly switched on the AED, 97.7% correctly uncovered the patient's chest, 81.8% attached the pads in the correct position, 88.8% followed the safety instructions, and 95.5% delivered the shock when required (Graphic 1).
After six months, the decline in average performance was significant for both training modalities (p < .0001). The BLSD group sustained a performance of 81.21 ± 15.39, an average loss of 18.79 points with respect to the certification phase. The video-lesson group maintained 63.73 ± 15.05, a loss of 26.04 points. However, both students and workers maintained positive performances in skills 1, 2, and 3, ranging from 64.7% to 94.1%. Skill 4 worsened for both groups, with an average value of 48.1%, while for skills 5 and 6 there were significant differences between groups. Workers trained with the BLSD performed the "follow instructions safely" skill correctly in 63.8% (n = 30) of cases, against 2.9% (n = 1) of the student group. In skill 6, although there was a significant difference between the video lesson and the BLSD, performances remained above 88% (Table 1).
The difference in performance between the video lesson and the BLSD course is also highlighted by a significant loss of performance in the video-lesson group (skills 4 and 5) between the assessment immediately after viewing the videos (T0) and the follow-up verification after 6 months (T1) (Table 2).
Graphic 2 shows that, after 6 months, the skills acquired through the video lesson were generally well retained; however, the curve turns markedly negative for skills 4 and 5, with losses of 28.9% and 85.7%, respectively.
Discussion
Graphic 1. Distribution of students' performance in the first phase
In line with what has been highlighted in the literature (17)(18)(19)(20), the study shows that a short video lasting a few minutes, which simply illustrates the fundamental steps of the use of a semiautomatic defibrillator and uses easy-to-understand language, can be a valid training method for a non-professional (lay) public. The video allows the desired message to reach many people, especially the younger population (21).
The use of the AED requires specific training, and simulation is a tool highly appreciated by learners (23)(24)(25); it is also the method that makes it possible to prevent accidents and promote safety (26). Simulating the use of the AED through a video lesson could be a valid strategy in view of a possible future application of legislative proposal no. 1839 (2019) (22); distance learning could guarantee the minimum skills needed to use the AED effectively (7,8). The study, however, noted a certain difficulty for the students in maintaining their performance. After 6 months, statistically significant differences emerged, especially as regards the safety of the scene, which was ensured significantly less often by the students than by the BLSD sample.
However, even the workers trained through the BLSD course have shown a decrease in performance in the application of adhesive pads on the chest and in the safe execution of the procedure. The skills gaps months after defibrillation courses are confirmed in the literature. Educational reinforcements following certification are recommended to maintain the standards acquired during training (27,28).
Conclusions
Unlike the BLSD courses that require the physical presence of the students and a certain economic and organizational commitment both in the certification courses and in the retraining, the video lesson can be carried out remotely, at low prices, easily accessible to a large population. A video spot of this kind, related to the law currently under approval, could empower the population. The statements of the video should be passionate and highly engaging on the emotional side, so the main concepts could thus be better retained in the population that receives them (29).
As exemplified by this study, videos need training reinforcement; a simple solution could be to administer them as spots of the Ministry of Health, using both traditional and social information channels; so they could represent a valid tool to spread the culture of defibrillation and encourage citizens to use it effectively, as required by the new Italian law proposal and literature.
The main limitations of the study are the size and characteristics of the sample. The research considers a limited sample of two cohorts that are not entirely equivalent in age, gender, and professional background: an average age of 20 years, predominantly female, university students for the video lesson; age < 40 years, predominantly male, workers for the BLSD courses. To confirm our results, further studies with larger samples are needed, including subjects randomly assigned to cohorts of comparable age, professional background, and personal characteristics.
na, for having authorized the study within the course and the group of instructors at the IRC centre AUSL Romagna -Forlì.
"year": 2020,
"sha1": "c788c8e7fc4a151497fedcd0c7053d9644b36a66",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3f35dea4bf7a9ae0c6c8b745dd07f6d6b7f1752e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218680109 | pes2o/s2orc | v3-fos-license | COVID-19 in patients with HIV
COVID-19 in patients with HIV
We read with interest the report by Blanco and colleagues 1 of five people living with HIV who were admitted to a Barcelona hospital with COVID-19. We believe that caution is required before drawing conclusions on the outcome of COVID-19 in this population.
Evidence is evolving that protease inhibitors developed for the treatment of HIV, both lopinavir and darunavir boosted by ritonavir or cobicistat, are not efficacious against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in vivo. [2][3][4] Therefore, antiretroviral combinations should not be changed in an attempt to treat SARS-CoV-2 infection, because neither drug combination is a first-line choice in most guidelines for HIV and changing treatment could lead to increased rates of adverse events. Antiretroviral treatments such as non-nucleoside reverse transcriptase inhibitors and integrase inhibitors have better tolerability than boosted protease inhibitors. 5 Moreover, three of the five cases described by Blanco and colleagues were initiated or switched to an antiretroviral combination containing a pharmacokinetic booster, thereby introducing a substantial risk of significant drug-drug interactions. 5 New antiviral drugs active against COVID-19 are being developed, and interactions of such drugs with antiretrovirals can be seen frequently. For example, remdesivir might interact with carbamazepine and other drug metabolism inducers, and no data are available on potential interactions with nucleoside analogues used in antiretroviral combinations. 5 Caution is needed when interpreting the incidence of COVID-19 in people living with HIV compared with the HIV-negative population. The numbers reported by Blanco and colleagues are small and patients attended only one hospital, so the sample is subject to bias. The authors do not report on the proportion of patients with COVID-19 who were tested for HIV infection. Without universal HIV testing, it is not possible to calculate the incidence of the two viral infections occurring in the same individual simultaneously.
The statement that only 1% of people admitted with COVID-19 to one hospital in Barcelona had HIV can be misinterpreted and falsely reassuring, particularly while we still do not entirely understand which populations should be protected from COVID-19 by social interventions, such as shielding, self-isolation, and frequent testing. In the UK, large cohort studies are being done to investigate the true rate of infection, clinical characteristics, and outcomes of COVID-19 in people with HIV.
Challenges in understanding the true frequency of COVID-19 in people with HIV include the overall limited testing that has happened so far, particularly for patients not needing hospitalisation, the admission of patients in hospitals external to where the individual might access their HIV care, and the fact that people with HIV might be more vigilant at shielding and self-isolation because of the propagation of fears of higher acquisition rates and a poorer outcome of SARS-CoV-2 infection in people living with HIV.
Finally, appropriately powered and designed studies are needed to draw conclusions on the effect of COVID-19 in people with chronic diseases, including HIV infection. HIV infection is itself characterised by various clinical scenarios, ranging from viral suppression and good quality of life to HIV-associated comorbidities or virological failure with or without immunosuppression. 5 RJ received grants from ViiV and MSD and personal fees from Gilead. MN has received payment as a speaker, travel grants, and research grants from MSD, AbbVie, Gilead, ViiV, Hetero, and Mylan. DA has acted as an adviser to Gilead and ViiV and received support to attend scientific meetings from Gilead. MBo has acted as a speaker or adviser to, has been an investigator for, or has received grants to her institution from Gilead, ViiV, Janssen, BMS, Teva, Cipla, Mylan, and MSD. MBr declares no competing interests.
"year": 2020,
"sha1": "b53141116013b45c507dba1764acdf48f4307c79",
"oa_license": null,
"oa_url": "http://www.thelancet.com/article/S2352301820301399/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "dee0ca141b2bcdd5da87c5d7ffdd915e44a98944",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235401178 | pes2o/s2orc | v3-fos-license | Fatal myopericarditis complicated with coronary vein perforation under the triple antithrombotic therapy: a case report
Abstract Background Acute pericarditis generally follows a mild clinical course and is rarely fatal. Coronary vein involvement is rarely reported. Case summary We report an autopsy case of cardiac tamponade from idiopathic myopericarditis due to coronary venous perforation under triple antithrombotic therapy. A 69-year-old man was admitted to our hospital with abnormal findings on electrocardiography, bloody pericardial effusion, and mild elevation of troponin I. Oral anti-inflammatories were started and the patient followed a benign course. However, on hospital Day 5, while on a combination of two antiplatelet drugs and an anticoagulant drug, he suddenly suffered cardiogenic shock with pulseless electric activity due to cardiac tamponade. He died despite intensive medical treatment. Autopsy revealed cardiac tamponade caused by perforation of the coronary venous wall. To the best of our knowledge, this is the first description of fatal myopericarditis as a complication of coronary venous perforation. Discussion The aetiology and mechanism remain unknown; however, we should be alert to this rare complication in patients with acute myopericarditis and bloody effusion under triple antithrombotic therapy.
Introduction
Acute pericarditis generally follows a mild clinical course and is rarely fatal. 1 Herein, a severe case of acute myopericarditis is reported, presenting with bloody pericardial effusion leading to uncontrollable cardiac dysfunction and death. Autopsy revealed perforation of the coronary vein due to acute inflammation. Although the underlying mechanisms remain unclear, this appears to represent the first description of fatal myopericarditis complicated by coronary vein perforation.
Learning points
• Coronary vein perforation is a rare complication of myopericarditis whilst on antithrombotic therapy, presenting as a haemorrhagic pericardial effusion.
Case presentation
A 69-year-old man presented to our outpatient department complaining of anterior chest discomfort, which had intermittently continued for a week. Because the symptom was improving and blood testing and electrocardiography (ECG) showed no abnormalities, he returned home, and visited a doctor again 1 week later. He had a history of successful percutaneous coronary stenting for angina pectoris (10 years ago), surgical resection of rectal cancer (6 years ago), and lung metastatic cancer (5 years ago). Medications included low-dose aspirin (100 mg/day), cilostazol (200 mg/day), valsartan (80 mg/day), and cilnidipine (10 mg/day). He visited our outpatient department without any symptoms during the preceding week. However, auscultation revealed pericardial friction rub and ECG showed ST-elevation in I, II, III, aVF, and V2 to V6 leads (Figure 1A). Laboratory evaluation revealed elevated C-reactive protein (CRP) level of 2.97 mg/dL (normal, ≤0.30 mg/dL) and troponin I level of 1995 pg/mL (normal, ≤26.2 pg/mL). Creatine phosphokinase (CPK) was within normal limits. On admission, the patient showed no symptoms with a blood pressure of 133/84 mmHg, temperature of 35.0°C, heart rate of 68 b.p.m., respiratory rate of 15 breaths/min, and oxygen saturation of 98% in room air.
Transthoracic echocardiography showed circumferential mild pericardial effusion without any findings of cardiac tamponade including swinging heart and diastolic collapse of right atria and ventricle (Video 1). Coronary angiography revealed no significant stenosis. Computed tomography (CT) also revealed mild pericardial effusion with attenuation of 60 Hounsfield units (HU), indicating blood constituent ( Figure 1B). Acute pericarditis and pericardial effusion with minor injury in cardiac muscle were diagnosed. Treatment including aspirin (750 mg every 8 h) and colchicine (0.5 mg/day) was started according to pericardial disease management guidelines. 2 On hospital Day 3, paroxysmal atrial fibrillation (AF) lasting 20 min was recognized, although the patient was asymptomatic. A regular dose (15 mg/day) of rivaroxaban was added to his medications.
On hospital Day 5, laboratory evaluation revealed that CRP and troponin I were reduced, at 0.44 mg/dL and 1718 pg/mL, respectively. However, the volume of pericardial effusion was unchanged on echocardiography. The patient abruptly complained of severe chest discomfort and cardiogenic shock developed, accompanied by systolic blood pressure of 50-59 mmHg and a heart rate of 155 b.p.m. of AF. He suffered cardiogenic shock with pulseless electric activity due to cardiac tamponade despite synchronized cardioversion. Urgent pericardiocentesis was performed and confirmed haemorrhagic effusion. Percutaneous cardiopulmonary support was introduced immediately, and intra-aortic balloon pumping was also started, because return of spontaneous circulation was not obtained.
Bloody drainage and cardiac dysfunction were continued. Despite extensive medical treatment including blood transfusion, heart rate, and blood pressure gradually decreased without any ventricular arrhythmia and the patient finally died 12 h after onset of cardiogenic shock.
Autopsy revealed a drainage tube placed in the posterior side of the pericardial cavity and 690 mL of blood retained there. Culture and polymerase chain reaction yielded negative results for bacterial infection. The heart weighed 520 g and exhibited many petechiae on the fatty tissue of the inferoposterior wall. In addition, blood oozing was identified on manual compression ( Figure 1C, Video 2). Because the oozing site led to the coronary sinus, perforation of the coronary vein was identified. Microscopically, the perforated site was confirmed in the coronary venous wall as expected (Figures 1D-G). Lymphocytic infiltration ( Figure 1H), macrophages containing haemosiderin ( Figure 1I) and mild fibrosis were detected in pericardial and myocardial tissues only around the perforated site. The coronary vein was considered to have been perforated a few days earlier based on the presence of fibrosis.
Discussion
The two major indicators of poor prognosis in acute pericarditis are reportedly the existence of cardiac tamponade and the failure of antiinflammatory drugs. 3 Indeed, the present case was accompanied by cardiac tamponade and failure of aspirin and colchicine treatment. It is important for us to be more careful when encountering such patients with acute pericarditis.
Computed tomography on admission revealed mild pericardial effusion with 60 HU. Hounsfield units >30 in pericardial effusion has been reported to have a sensitivity of 100% and specificity of 70% for visceral or vascular rupture. 4 Haemorrhagic pericardial effusion is caused by several diseases including cardiac rupture due to myocardial infarction or trauma, impending aortic rupture, iatrogenic injury of coronary artery and veins, which were excluded by the clinical history and the histological examination.
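As a hedged illustration of how the cited sensitivity and specificity figures for the ">30 HU" criterion could be derived from a validation table, the short sketch below computes both quantities from hypothetical counts chosen only to reproduce the reported 100%/70% values; it is not data from the cited study (4).

```python
# Hedged sketch: deriving sensitivity/specificity of the ">30 HU" rule for
# haemorrhagic effusion from a 2x2 validation table. The counts below are
# hypothetical placeholders, not data from the cited report.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Example: 20 rupture cases all above 30 HU (no false negatives), and 70 of
# 100 non-rupture effusions below 30 HU -> sensitivity 100%, specificity 70%.
sens, spec = sens_spec(tp=20, fn=0, tn=70, fp=30)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```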
In the present case, the cause of death was identified as cardiac tamponade and dysfunction due to coronary venous perforation complicating myopericarditis.
Pericarditis and myocarditis share common aetiologies and overlap in daily clinical practice. According to European Society of Cardiology Task Force consensus, our patient met the criteria for pericarditis plus mild elevation of markers of myocardial damage without focal or diffuse impairment of left ventricular function on echocardiography, and we diagnosed 'myopericarditis'. 2 Damage to the myocardium was limited to within the surface area, with no elevation of CPK recognized during the clinical course and autopsy ruled out active myocarditis. Negative results from culture excluded the possibility of purulent pericarditis.
The use of rivaroxaban in addition to aspirin and cilostazol might have exacerbated damage to the coronary vein wall due to myopericarditis. However, no other common bleeding events such as gastrointestinal and/or subcutaneous bleeding were observed. It remains unclear whether we should or should not start anticoagulant drugs when paroxysmal AF is observed in patients with pericarditis and pericardial effusion with high HU in CT images.
Viral infections are among the most common causes of myopericarditis. Other mechanisms causing myopericarditis include connective tissue diseases, radiation-induced, drug-induced, or vaccineassociated diseases. 2 Any preceding viral illness was not reported by the patient and blood tests showed no evidence of connective tissue diseases.
The patient had undergone low anterior resection due to rectal cancer 6 years earlier, thoracoscopic resection of metastases in the left lung inferior lobe followed by chemotherapy including bevacizumab (a vascular endothelial growth factor inhibitor) and radiofrequency ablation therapy 3 years earlier. Therefore, it is impossible to rule out the possibility completely that chemotherapy and radiofrequency ablation therapy had induced chronic inflammation in the pericardium or surface of the myocardium. Although coronary veins can be involved by pericardial metastatic cancer, 5 the tumour cells were not identified histologically and cytologically. The thickening of the pericardium, which is an important sign of chronic radiationinduced pericarditis, 6 was not recognized on chest CT image 3 months before the admission and not in autopsy specimen. Therefore, it is unlikely that coronary vein perforation was associated with chronic inflammatory response due to metastasis or radiation.
Coronary venous perforation is an extremely rare entity, and only a few cases have been reported as a complication of catheterization and percutaneous coronary intervention. 7 Perforation of the coronary vein was estimated to have been induced by both the inflammation of myopericarditis and the combined use of rivaroxaban, aspirin, and cilostazol.
In conclusion, our case presented with fatal myopericarditis (pericarditis plus minor cardiac muscle injury) complicated by coronary vein perforation. The aetiology and mechanisms underlying extensive focal damage to superficial tissue around the coronary vein remain unclear. It is important to take this rare complication into consideration when we see patients with pericarditis and uncontrollable cardiac tamponade and dysfunction due to bloody effusion under the triple antithrombotic therapy.
Lead author biography
Akihiro Takasaki is a cardiologist at Mie University Hospital, Mie, Japan. He is currently undertaking a PhD with the Mie University, exploring familial hypercholesterolaemia using the Acute Coronary Registry data in Mie prefecture.
Supplementary material
Supplementary material is available at European Heart Journal -Case Reports online.
Slide sets: A fully edited slide set detailing this case and suitable for local presentation is available online as Supplementary data.
Consent:
The authors confirm that written consent for submission and publication of this case report including images and associated text has been obtained from the patient's next of kin in line with COPE guidance. | 2021-06-12T05:17:08.168Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "af680462f8b0905e27c8d84750dcf58b889c39ee",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ehjcr/article-pdf/5/4/ytab098/37337433/ytab098.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "af680462f8b0905e27c8d84750dcf58b889c39ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267210023 | pes2o/s2orc | v3-fos-license | Navigating the Landscape of Intestinal Regeneration: A Spotlight on Quiescence Regulation and Fetal Reprogramming
Tissue-specific adult stem cells are pivotal in maintaining tissue homeostasis, especially in the rapidly renewing intestinal epithelium. At the heart of this process are leucine-rich repeat-containing G protein-coupled receptor 5-expressing crypt base columnar cells (CBCs) that differentiate into various intestinal epithelial cells. However, while these CBCs are vital for tissue turnover, they are vulnerable to cytotoxic agents. Recent advances indicate that alternative stem cell sources drive the epithelial regeneration post-injury. Techniques like lineage tracing and single-cell RNA sequencing, combined with in vitro organoid systems, highlight the remarkable cellular adaptability of the intestinal epithelium during repair. These regenerative responses are mediated by the reactivation of conserved stem cells, predominantly quiescent stem cells and revival stem cells. With focus on these cells, this review unpacks underlying mechanisms governing intestinal regeneration and explores their potential clinical applications.
Introduction
Tissue homeostasis is maintained throughout an organism's lifespan by adult stem cells, whose self-renewal and differentiation processes are intricately regulated. The steady-state turnover rate of the intestinal epithelium is remarkably higher than that of other tissues, and leucine-rich repeat-containing G protein-coupled receptor 5 (LGR5)+ crypt base columnar cells (CBCs) have been identified as bona fide intestinal stem cells (ISCs) that replenish the gut epithelial lining (1,2). In the mouse small intestine, crypt-bottom-residing CBCs give rise to transit-amplifying (TA) cells that migrate toward the luminal side, where they stochastically differentiate into either absorptive or secretory progenitors, leading to the generation of defined intestinal epithelial cells (IECs) including enterocytes, Paneth cells, goblet cells, and enteroendocrine cells (3). Since these matured cells at the tips of the apical part keep shedding into the lumen via anoikis, a detachment-associated apoptotic death, the fine balance between the self-renewal potential and the differentiation capacity of actively cycling CBCs is pivotal for intestinal integrity. Specifically, the stem cell niche, composed of neighboring Paneth and stromal cells as well as the surrounding microenvironment, provides CBCs with a range of molecules that modulate key signaling involved in stem cell fate, including the Wnt/R-spondin, Notch, bone morphogenetic protein (BMP), transforming growth factor (TGF)β, and epidermal growth factor (EGF) pathways (4,5). Furthermore, along the crypt-TA zone-villus architecture axis, various cell-to-cell interactions between epithelial and non-epithelial cells, extracellular matrix (ECM)-associated features (composition and stiffness), and even microbiota and their metabolites play crucial roles in regulating intestinal homeostasis (6)(7)(8). Building on this knowledge, Sato et al. (9) reported that crypt bottom regions bearing ISCs and Paneth cells of the mouse small intestine can be effectively cultured in vitro with the supplementation of stromal niche factors and the ECM analog Matrigel, leading to the emergence of self-organized three-dimensional (3D) epithelial structures termed intestinal organoids. Currently, organoid culture technologies have been well established for various conditions, such as origin (small intestine vs. colon), cell type (embryonic stem cells vs. adult ISCs), and species (humans and other animals), to support the expansion and differentiation of conventional LGR5+ ISCs in vitro.
To date, emerging evidence suggests that the regenerative responses upon injury in the intestinal epithelium are distinct from its homeostatic processes. Exposure to cytotoxic agents, including chemicals and inflammatory cytokines, disrupts crypt structure and abolishes LGR5+ ISCs in the intestine. CBCs are highly vulnerable to DNA damage, mainly due to their active cell cycle with enhanced Wnt activity (10,11), and Olfm4, one of the representative markers for CBCs, is rapidly down-regulated upon irradiation (10,12). Moreover, a mouse model with diphtheria toxin-induced genetic ablation of LGR5+ stem cells showed no significant defect in intestinal homeostasis (13). Given the effective repair of the intestinal epithelium despite the absence of LGR5+ ISCs, and their subsequent resurgence implying the restoration of homeostasis, it is evident that alternative stem cell pools, rather than CBCs, drive the intestinal regenerative process. Lineage tracing of IECs in transgenic animal models and single-cell RNA sequencing-based identification of novel cell populations, together with an enhanced understanding of cell-cell communications and mechanistic validation studies using the organoid system (14,15), collectively highlight the pronounced cellular plasticity of the damaged intestinal epithelium during wound healing and regeneration. The core regenerative mechanisms can be broadly divided into two: (1) reactivation of LGR5− or LGR5-low conserved stem cells and (2) dedifferentiation of precursors and differentiated IECs into stem cell-like cells. In this review, we predominantly focus on the former category, particularly label-retaining, quiescent stem cells and revival stem cells (RSCs), highlighting their key characteristics and regulatory mechanisms with a brief summary of representative findings based on animal models and human studies. In addition, this review will introduce preliminary research on efforts to induce these regeneration-specific ISC populations for potential clinical applications.
Quiescent Stem Cells: Label-Retaining Cells at the +4 Position
CBCs continuously proliferate and undergo asymmetric division to differentiate, leading to the formation of crypt-villus ribbons in clonal labeling experiments (16,17). In contrast, Potten et al. (18) reported a subset of cells in the crypt that resided at the +4 position above the Paneth cells and demonstrated long-term retention of DNA labels. Compared with CBCs, these slow-cycling cells maintain a dormant state under homeostatic conditions, while they reactivate and exhibit both self-renewal and multipotency to regenerate the intestinal epithelium in the absence of CBCs, implying that label-retaining cells (LRCs) represent a 'reserved' stem cell population (19). Based on further lineage tracing results, several candidates have been proposed as LRC markers, including B cell-specific Moloney murine leukemia virus integration site 1 (Bmi1), HOP homeobox (Hopx), SRY-box transcription factor 9 (SOX9), leucine-rich repeats and immunoglobulin-like domains 1 (Lrig1), mouse telomerase reverse transcriptase (mTert), and mex-3 RNA binding family member A (Table 1) (20)(21)(22)(23)(24)(25); however, the molecular characteristics of reported LRCs might differ from each other and warrant detailed investigation. In terms of marker expression, for instance, Bmi1 and Hopx, initially identified as markers for LRCs, are not exclusively expressed in +4 cells but are also found in a subpopulation of CBCs and even in differentiated cells (26)(27)(28). In the intestinal epithelium of Sox9-eGFP mice, crypt cells could be categorized into four populations based on Sox9 expression level (29). Notably, Sox9-high cells that demonstrate label-retaining capacity and radioresistance exhibited gene signatures resembling both +4 cells and secretory progenitors. Roth et al. (30) identified Mmp7- and Defa1-expressing long-lived Paneth cells as the LRC population using an in vivo pulse-chase experiment with H2B-GFP mice for nuclear labeling. Meanwhile, a fluorescence-based reporter assay revealed that Lgr5-high cells exhibited substantial levels of transcripts associated with proposed quiescent cell markers, including Bmi1, Hopx, Lrig1, and mTert, at levels comparable to Hopx- or Bmi1-expressing cells (26,31). Collectively, LRCs are perceived as a slow-cycling, heterogeneous population predominantly located around the +4 position of the crypt base. They display a range of markers that overlap with those of CBCs and precursor cells, particularly those of the secretory lineage.
Table 1 abbreviations: Bmi1, B cell-specific Moloney murine leukemia virus integration site 1; Mex3a, mex-3 RNA binding family member A; Sox9, SRY-box transcription factor 9; Lrig1, leucine-rich repeats and immunoglobulin-like domains 1; Hopx, HOP homeobox; mTert, mouse telomerase reverse transcriptase; Prox1, prospero homeobox 1; NeuroD1, neurogenic differentiation 1; Tph1, tryptophan hydroxylase 1; Y, yes; N, no; ND, not determined; 5-FU, 5-fluorouracil; Lgr5, leucine-rich repeat-containing G protein-coupled receptor 5.
Despite the ongoing disputes regarding their detailed characteristics, a major criterion for identifying quiescent stem cells is their capacity to reactivate and lead the regeneration process upon injury. Interestingly, they also participate in maintaining homeostasis. Lineage-tracing experiments under homeostatic conditions have shown that Bmi1- or Hopx-expressing cells can generate clonal strips covering the crypt-villus axis, as CBCs do (20,24,32). Sangiorgi and Capecchi (20) verified the impact of Bmi1+ cell loss by tamoxifen injection into Bmi1Cre-ERT; Rosa26 DTA mice and found that ablation of Bmi1-expressing cells in the intestine resulted in marked weight loss and acute death of the mice. In the histological analysis, crypt structures throughout the intestinal tract were remarkably disorganized and damaged but were gradually replaced by intact ones over approximately a year.
In addition, several works suggest that secretory precursor cells are reserved as LRCs under homeostatic conditions. A long-term pulse-chase study of crypt cells using cytochrome p450 1a1-H2B-YFP mice revealed a novel LRC population that expresses the secretory precursor markers ChgA and ChgB instead of conventional +4 cell markers such as Lrig1 and Hopx (33). Li et al. (34) further demonstrated that LRCs marked by H2B-GFP differ from Hopx-CreERT2-expressing quiescent stem cells, although both exhibit lower Wnt signaling activity than CBCs. H2B-GFP labeling conducted over various periods revealed two LRC populations: short-term LRCs with enteroendocrine cell-like features could generate organoids in vitro, while long-term LRCs exhibited Paneth cell traits but lacked organoid-forming capacity. Of note, both 'secretory' LRC populations are distinct from Hopx-positive cells, which demonstrate a profound regenerative potential against radiation damage. On the contrary, the emergence of GFP-labeled cells at the +4 position in Bmi1-GFP mice is dependent on the expression of Atoh1 (35), one of the crucial transcription factors in secretory lineage specification. Indeed, RNA sequencing and trajectory analysis indicated that Bmi1-GFP cells correlated with an enteroendocrine cell signature and expressed 'multi-capable' enteroendocrine cell markers including prospero homeobox 1 (Prox1) (32,36). Similar to Bmi1-expressing cells, Prox1+ cells were capable of expansion and reconstruction of the crypt-villus structure of homeostatic and irradiated intestinal epithelium. The LRC-like properties of the enteroendocrine lineage have also been confirmed in CreERT2 mice targeting neurogenic differentiation 1 (NeuroD1), a basic helix-loop-helix transcription factor essential for enteroendocrine cell differentiation, or tryptophan hydroxylase 1 (Tph1), a marker gene for enterochromaffin cells (37). In this work, NeuroD1+ cells co-expressing ChgA or Tph1 persisted at the +4 position for several weeks and occasionally generated differentiated cells or reverted back to CBCs under homeostatic conditions. The authors further showed that a majority of these cells were also labeled by Hopx, a representative marker for +4 cells, and that Hopx+/Tph1+ cells displayed superior organoid-forming capacity compared with Hopx−/Tph1+ or Hopx+/Tph1− cells.
Meanwhile, Higa et al. (38) indicated that the cyclin-dependent kinase inhibitor p57 can label the quiescent ISC populations at the +4 position, which also express the classical marker Bmi1. p57 deletion driven by the villin gene promoter reduced the 5-ethynyl-2'-deoxyuridine (EdU)-retention capability of Bmi1+ cells, implying its role in the maintenance of dormant stem cells. According to the single-cell sequencing results, p57-high cells did not express proliferation-associated genes but were enriched for enteroendocrine cell markers, such as Rfx6 and ChgB, together with LRC markers. These p57+ cells could function as an alternative ISC population for post-injury epithelial regeneration in 5-fluorouracil (5-FU)-treated mice. Shifting the focus to a correlation between LRCs and goblet cells, a recent study indicated that Krüppel-like factor 4 (Klf4), typically expressed in differentiated IECs, is also observed in some Bmi1-expressing crypt cells that co-express the goblet cell marker Muc2 (39). Cell-type-specific deletion of Klf4 resulted in increased proliferation of Bmi1+ cells but reduced the number of goblet cells in the villi. Intestinal organoids derived from single Bmi1-YFP+ cells generated Klf4+/Muc2+ differentiated cells, while those from Klf4-deleted Bmi1-YFP+ cells demonstrated enhanced proliferation without Muc2+ cell differentiation. Therefore, the differentiation potential towards the goblet cell lineage, as well as the proliferation capacity of Bmi1-expressing LRCs, might be regulated by Klf4 during homeostasis.
While accumulated evidence suggests a role for LRCs in maintaining homeostasis, certain studies contend that, in the absence of damage, LRCs do not contribute to epithelial cell turnover as conventional ISCs do (30,33), partially due to their insufficient endogenous Wnt signaling activity (38). Moreover, when Lgr5-expressing CBCs were genetically ablated in conjunction with intense radiation injury (over 6 Gy), the recovery of the intestinal epithelium was significantly impeded (40). This challenges the notions of the radioresistance of +4 cells and the dispensability of Lgr5+ cells in intestinal regeneration. Hence, to understand the complex properties of the LRC populations, further studies are required on their reactions to diverse stimuli and the underlying mechanisms that drive regeneration. In this regard, a recent work by Chaves-Pérez et al. (41) explored the protective mechanism of LRCs in overcoming radiation-mediated gastrointestinal syndrome (GIS). The authors observed that the endogenous levels of the unconventional prefoldin RPB5 interactor (URI), a chaperone essential for maintaining DNA integrity following damage (42), were altered in the crypt following abdominal irradiation (8 to 14 Gy); URI levels decreased post-irradiation but returned to the normal range during the recovery phase, implying its role in regeneration. Indeed, villin+ cell-specific overexpression of URI in knock-in mice could protect mice from GIS, while deletion of URI resulted in intestinal failure resembling GIS even under normal conditions. Interestingly, URI expression was found predominantly in the upper crypt, especially in Lgr5-low cells. Further genetic tracing of URI-expressing cells in URI-YFP mice indicated that YFP labeled radiotolerant, slow-cycling cells in the crypt, and these URI+ LRCs exhibited a repopulating capacity in homeostatic as well as irradiated epithelium. It was noted that reduced URI expression in LRCs prompted cell cycle reactivation and proliferation, leading to increased radiosensitivity and apoptosis. Mechanistically, URI in LRCs functions as a gatekeeper for Wnt/β-catenin signaling, a crucial pathway for the self-renewal of CBCs, by suppressing the nuclear translocation of β-catenin via direct binding. These findings underscore the pivotal role of URI in preserving the defining attributes of LRCs, namely quiescence and radioresistance.
Another study by Chen et al. (43) proposes an experimental approach to enhance the regenerative potential of Lgr5− LRCs against GIS. To control the cytotoxic impact of reactive oxygen species (ROS) during injury, the authors overexpressed one of the ROS scavengers, TP53-induced glycolysis and apoptosis regulator (TIGAR) (44), in a stem cell-specific manner. Interestingly, TIGAR upregulation in Bmi1-expressing cells, but not in Lgr5+ cells, significantly improved the survival of irradiated mice. Subsequent lineage tracing of TIGAR-overexpressing Bmi1+ cells showed that the proliferation of these cells was enhanced both in vivo and in vitro (confirmed by organoid culture). Transient overexpression of TIGAR via adenovirus in organoids also promoted growth in Bmi1+ cell-derived organoids following irradiation, whereas organoids from Lgr5+ cells were not affected by the TIGAR expression level. This beneficial impact of TIGAR was mediated by activating protein 1 (AP-1) activity, considering that administration of a specific inhibitor of AP-1, 3-PA, could reverse the enhanced recovery of Bmi1-specific TIGAR-overexpressing mice. These studies offer novel insights into how various innate stress-regulating molecules can modulate the reactivation and regenerative response of LRCs.
The great advances in genetic engineering technologies have enabled the development of various mouse models to study cell type-specific behaviors, thereby facilitating in-depth investigations into the characteristics of both Lgr5+ ISCs and LRCs; however, research regarding human stem cells has been constrained by experimental limitations. To overcome this challenge, Ishikawa et al. (45) conducted elegant work exploring the dynamics of human colonic stem cells (hCoSCs) during homeostasis as well as injury using hCoSC-implanted mouse models, generated as previously reported (46). Single-cell sequencing of human colonic epithelium revealed both LGR5+ stem cell and +4 cell-like secretory precursor populations. Notably, the LGR5+ stem cell subpopulation in humans lacks proliferative markers, in contrast to that of mice. Visualization of LGR5 mRNA and Ki67 protein showed that Ki67−, slow-cycling LGR5+ cells were localized mainly at the crypt bottom. Based on the observation that p27, a negative regulator of the cell cycle, readily labels these non-dividing LGR5+ CoSCs, genetically engineered human colonoids bearing both LGR5 and p27 reporters were generated. In in vitro organoid culture, most of the LGR5+/p27+ cells were in G0, while LGR5+/p27− cells expanded rapidly to differentiate. Within the orthotopically transplanted human colonoids, LGR5+/p27+ cells were localized at the de novo crypt bottom, mirroring their position in human colonic tissue, and exhibited LRC-like behavior in the in vivo EdU label retention assay. Moreover, these cells were tolerant of chemotherapeutic 5-FU treatment; indeed, 5-FU treatment ablated murine LGR5+ CBCs but enriched p27-expressing cells among human LGR5+ cells, and progeny of LGR5+/p27+ cells generated strips in the 5-FU-treated injured lesion. Another intriguing point is the damage-induced dedifferentiation of mature IECs in the presence of LGR5+/p27+ hCoSCs, suggesting that LGR5+ cells drive the regeneration process in the human intestine. This study supports previous findings that slow-cycling, quiescent stem cells contribute to the intestinal regeneration process. However, by revisiting the role of LGR5+ ISCs in this context, it also underscores the need for further research into pioneering specific markers for quiescent stem cells and understanding the mechanisms that govern their state of quiescence and cell cycle dynamics, especially in light of interspecies variations.
Revival Stem Cells: Injury-Induced Intestinal Stem Cells Undergoing a Fetal-Like Transition
Beyond the canonical niche cues, such as Wnt and BMP signaling cascades, the Hippo-Yes-associated protein (YAP) signaling axis plays a pivotal role in the guidance of intestinal physiology from development to homeostasis (47). Accumulating evidence also suggests that the activation of the YAP signaling pathway orchestrates cell fate remodeling, crucial for complete tissue regeneration. YAP activation, characterized by nuclear translocation of YAP, is apparent in damaged intestinal epithelium from both mouse models and human tissues (12,(48)(49)(50), and suppression of YAP activity either with genetic ablation or inhibitory chemicals hinders the recovery of injured epithelium (12,49). Regarding this phenomenon, Gregorieff et al. (12) demonstrated that rapid but transient activation of YAP signaling occurs in IECs in response to whole-body irradiation. YAP activation, in conjunction with EGF receptor signaling, led to a decline in canonical Wnt signaling, subsequently suppressing the expression of the CBC markers Lgr5 and Olfm4 and Paneth cell differentiation in the crypt region.
To achieve an in-depth understanding of the regeneration-oriented cellular adaptation process regulated by YAP signaling, single-cell RNA sequencing of the repairing mouse intestine was performed to classify the radiation injury-affected IEC populations and identify damage-responsive subsets (51). Transcriptional profiling of single cells showed profound differences in the IEC populations between normal and irradiated epithelium, including a loss of Lgr5+ CBCs and a regional shift of mature cells to less-differentiated states, as previously described. In the regenerative tissue, the authors identified a distinct and novel cell population characterized by the exclusive expression of DNA damage response and cell survival markers, including Clusterin (Clu) and Ly6a/Sca-1. As this subset of cells did not exhibit the previously proposed ISC markers associated with either +4 cells or CBCs, they were designated "revival stem cells" or RSCs, with Clu as their defining marker. Using BAC-Clu-GFP transgenic mice, which marked endogenous Clu-expressing cells, the authors revealed that Clu-GFP+ cells constituted a rare (less than 0.1%) population in the crypt, distinct from Lgr5- or Olfm4-expressing CBCs. Using a TAM-induced Clu-tdTomato mouse-based cell labeling technique, the authors proposed that Clu+ cells were able to contribute to homeostatic epithelial turnover, as evidenced by crypt-villus ribbon formation. Importantly, ribbon emergence correlated with the contribution of Clu+ progeny to Lgr5+ CBCs, as well as to differentiated lineages. In addition, the YAP1 gene signature was markedly enriched in the RSC population, and the emergence of Clu+ cells in the intestinal epithelium is regulated in a YAP1 activity-dependent manner. Moreover, RSCs exhibited the capability to reconstruct the injured crypt and compensate for the loss of residual CBCs, thereby replenishing the crypt-villus structure during epithelial damage; indeed, in 'Cluless' mice (intestine-specific Clu-deleted models generated by tamoxifen treatment of Clu DTA mice), exposure to irradiation or administration of the colitogen dextran sodium sulfate (DSS) significantly reduced the length of the intestine and the number of crypts owing to impaired regenerative capacity compared with controls. This is the first study to propose that RSCs, driven by YAP signaling, represent a slow-cycling, novel subset of ISCs essential for intestinal regeneration.
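For readers who want to see how such damage-responsive subsets are typically flagged in single-cell RNA sequencing data, the sketch below scores a revival stem cell-like signature (e.g., Clu, Ly6a, Anxa1) across unsupervised clusters using the scanpy toolkit. The input file, gene list, and parameter choices are illustrative assumptions and do not reproduce the pipeline used in the cited study (51).

```python
# Hedged sketch: flagging a putative revival stem cell (RSC)-like cluster in
# scRNA-seq data of regenerating intestinal epithelium (illustrative only).
import scanpy as sc

adata = sc.read_h5ad("irradiated_epithelium.h5ad")  # hypothetical input file

# Standard preprocessing: library-size normalization and log transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)  # flags genes for PCA

# Dimensionality reduction, neighborhood graph, and Leiden clustering.
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=0.5)  # requires the leidenalg package

# Score an RSC-like signature; the gene list is an assumption drawn from the
# markers discussed in the text (Clu, Ly6a/Sca-1, Anxa1), not a validated panel.
rsc_genes = ["Clu", "Ly6a", "Anxa1"]
sc.tl.score_genes(adata, gene_list=rsc_genes, score_name="rsc_score")

# Report the mean signature score per cluster; a rare, high-scoring cluster is
# a candidate RSC population for downstream validation (e.g., lineage tracing).
print(adata.obs.groupby("leiden")["rsc_score"].mean().sort_values(ascending=False))
```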
Of note, both prolonged activation and inactivation of YAP impair the growth of intestinal organoids; single-cell-focused imaging and genomic analysis further indicated that YAP1 activity must exhibit spatial and temporal fluctuations between cells to drive Notch-DLL1 interaction-dependent symmetry breaking during normal organoid generation (52). These findings may also shed light on how the dynamic transition between regenerative (symmetric) and homeostatic (asymmetric) states is regulated in the intestinal epithelium. The involvement of niche cellular components in this context was investigated in 2020 by Roulis et al. (53), who focused on the characterization of mesenchymal niche cells supporting epithelial homeostasis and turnover. Specifically, rare pericryptal fibroblasts expressing cyclooxygenase-2 (Cox-2), encoded by Ptgs2, termed RPPFs, mediated adenoma formation in APC min mice, implying their potential role in boosting the stemness of ISCs. Co-culture of organoids with RPPFs or prostaglandin E2 (PGE2), one of the Cox-2-generated prostaglandins, led to a transformation into spheroid-like structures. Interestingly, this budding-free, cyst-like morphology was a major characteristic of immature organoids derived from embryonic-period intestine (54). These organoids consisted of undifferentiated, Wnt signaling-independent Lgr5-low cells and displayed a primitive fetal gene signature, including Sca-1 and YAP target genes such as Clu, Ctgf, Ankrd1, and Anxa10 (48,54). It has also been reported that epithelial transcriptomes have to be reprogrammed into fetal colonic epithelial profiles via YAP activation to initiate the regenerative process (48). Similarly, RPPF-derived PGE2 resulted in Sca-1+ RSC mobilization in the damaged epithelium by inducing YAP nuclear translocation. Furthermore, as observed in a YAP-deficient context, deletion of the PGE2 receptor Ptger4 in the epithelium significantly perturbed the normal repair response against irradiation, validating the PGE2-Ptger4-YAP signaling axis between the intestinal epithelium and the mesenchymal niche in RSC induction and regeneration. In addition, TGFβ1 also exerts pleiotropic roles in these regenerative circumstances (55). Following irradiation-induced damage, the level of TGFβ1 noticeably surged around the injury site due to the action of damage-responsive macrophages. Surprisingly, the addition of TGFβ1 to organoid culture led to a fetal-like conversion characterized by an RSC gene signature. Utilizing a transwell-based intestinal organoid-stromal cell co-culture system, it was revealed that TGFβ1 could activate stromal cells, enhancing their ability to induce the RSC population in organoids. Mechanistically, TGFβ1 led to an increase in chromatin accessibility at transcription factor motifs for Sox9, which is involved in the YAP pathway. These findings provide a detailed depiction of how non-epithelial cells respond to intestinal epithelial injuries and their roles in contributing to regeneration, in terms of RSC induction.
Ectopic induction of the RSC population can be driven by an immune reaction against infection. As a murine patho-symbiont helminth in the intestine, Heligmosomoides polygyrus disrupts the epithelial barrier upon chronic infection (56), leading to a transformation of infected crypts into a granuloma-associated crypt (GAC) phenotype (57). Notably, GACs no longer expressed Lgr5 and Olfm4, suggesting the depletion of CBCs. Instead, infection with H. polygyrus upregulated the interferon (IFN) signaling-associated gene signature in GACs, with the highest transcription of the Ly6a gene. An increase in IFN-γ, produced by stimulated T lymphocytes, augmented Ly6a + cells within the crypt while suppressing Lgr5 expression, even in the absence of infection. In addition, GACs underwent fetal-like reprogramming, and GAC-derived organoids displayed hyper-proliferative spheroid formation (54), suggesting that H. polygyrus infection and the subsequent epithelial injury enhance the regenerative property of the crypt region, marked by the emergence of RSCs. A similar study by Karo-Atar et al. (58) further elucidated that conditioned media from H. polygyrus bakeri (Hpb-CM) can mediate Clu + RSC induction in organoids, partially in a Hippo-YAP axis-dependent manner; however, the authors questioned the role of IFN-γ in supporting the RSC-inducing effect of Hpb-CM, as the addition of IFN-γ to Hpb-CM-treated organoids reduced the Clu + RSC populations despite an increment in Ly6a transcription. It was also found that the acquisition of the RSC phenotype following H. polygyrus infection was accompanied by reduced differentiation of secretory lineages, goblet cells and tuft cells, which play a crucial role in defending against helminth invasion. Moreover, type 2 immune response effectors, interleukin (IL)-4 and IL-13, could counteract the impact of Hpb-CM on RSC regulation. Overall, this study presents a new perspective on the interactions between infectious injury factors and the subsequent immune reaction in regulating RSC induction and epithelial reprogramming.
Strategies to Optimize the Induction of Regenerative Intestinal Stem Cells: A Novel Approach for Intestinal Restoration
Although the intestinal epithelium is one of the most potent regenerative tissues in the body, transient and/or permanent defects in its repair capacity often lead to detrimental outcomes, as observed in chemotherapy-induced diarrhea (CID) and radiotherapy-mediated GIS (59,60). Given that Lgr5 + CBCs are rapidly depleted following injury, therapeutic strategies targeting alternative ISCs are imperative to enhance the endogenous regenerative potential. In this context, reactivation of quiescent stem cells with cell cycle regulators could be a promising therapeutic approach. Palbociclib, a U.S. Food and Drug Administration-approved CDK4/6 inhibitor for treating breast cancer, has been reported to govern ISC fate, offering protection against chemotherapy-induced damage and promoting the recovery of the intestinal epithelium in CID models (60). By blocking cell cycle progression, Palbociclib reduced the proliferation of Lgr5-expressing cells and protected them from 5-FU toxicity both in vitro and in vivo. Surprisingly, co-treatment with 5-FU and Palbociclib activated the expansion of Bmi1 + quiescent cells at the +4 position, implying an independent mechanism of action for Palbociclib based on cell cycle status.
Enhancement of RSC induction offers beneficial effects on intestinal regeneration. A recently reported novel combination comprising eight components (8C) was designed to mimic the authentic regenerative state of intestinal organoids (61). Compared to the original culture media, 8C led to the generation of hyper-proliferative organoids enriched for an injury-associated regenerative phenotype. Within the composition, the epigenetic modulators valproic acid (VPA) and EPZ6438 play critical roles in reprogramming IECs into RSC-like cells expressing Sca-1, Anxa1, Reg3b, and Clu. It is noted that combined administration of VPA (an HDAC inhibitor) and EPZ6438 (an EZH2 inhibitor) facilitated intestinal epithelial healing with enhanced crypt reconstitution upon irradiation. Mechanistically, these two chemicals led to a global alteration in epigenetic modification patterns in injured IECs, especially inducing a downregulation of H3K27me3 at promoter regions of YAP target genes. Furthermore, a potent YAP pathway inhibitor, verteporfin, significantly reverted the phenotype of VPA/EPZ6438-treated organoids, bringing them back to a normal state without regenerative signatures. Thus, YAP signaling might underlie the RSC-inducing capacity of VPA/EPZ6438. Meanwhile, to unravel the governing mechanisms that orchestrate intestinal homeostasis and regeneration, the Liberali group classified single-cell-derived organoids by phenotype into seven clusters following treatment with a chemical library (62). Using an image-based organoid screening platform combined with a hierarchical interaction score, the authors revealed that, among various signaling pathways, the retinoid X receptor (RXR) pathway controls the transition between homeostatic and regenerative states of IECs in conjunction with YAP signaling. Treatment of organoids with an RXR agonist (all-trans retinoic acid, atRA) or inhibitor (RXRi) altered cytoplasmic/nuclear YAP1 localization and subsequent symmetry breaking. Specifically, RXRi-treated organoids acquired regenerative transcriptome patterns associated with YAP target genes and a fetal-like regenerative state, while organoids exposed to atRA underwent rapid maturation and primarily consisted of enterocytes. Finally, in vivo administration of RXRi significantly alleviated the pathologic symptoms of a radiation-mediated colitis model, suggesting RXR antagonists as potent therapeutics for intestinal regeneration.
Transplantation of intestinal organoids can be beneficial for reconstituting the damaged intestinal epithelium. Given the positive role of TGFβ1 in RSC induction, Chen et al. (55) compared the therapeutic impact of TGFβ1-stimulated organoids with that of naïve organoids in repairing DSS-mediated injury. After transplantation via the rectal route, TGFβ1-treated organoids led to an improvement in the histological and pathological features of colitis in comparison to control organoids. The engraftment capacity of TGFβ1-treated organoids was also superior to that of their counterparts in a co-transplantation assay, emphasizing the usefulness of TGFβ1 pretreatment in organoid-based cell therapy.
In addition to organoids, adult stem cells can be applied to treat intestinal disorders. Specifically, mesenchymal stem cells (MSCs) serve as a primary source for stem cell therapy owing to their immunomodulatory functions and tissue-regenerative capacity. Based on previous works that underscored the therapeutic benefits of naïve MSCs in colitis models (63)(64)(65), several approaches have been explored to enhance their therapeutic efficacy. In 2021, we utilized MSC spheroids (MSC3D) for this purpose, given that 3D-cultured cells exhibit a potent secretory capacity compared to two-dimensional (2D)-grown counterparts (MSC2D) (66). After confirming the superior therapeutic impact of MSC3D compared to MSC2D in alleviating the DSS-induced colitis model, we further investigated the phenotypic changes of IECs induced by MSCs with an organoid assay. Interestingly, organoids cultured with either MSCs or their conditioned media (CM) underwent dramatic morphological transitions, shifting from a budding shape to a cystic spheroid, one of the featured phenotypes indicative of fetal-like organoids enriched with RSCs. Indeed, these transformed organoids contained more abundant Ly6a + cells than control organoids, and this observation was especially evident in organoids exposed to MSC3D or MSC3D-CM. Given the suspected involvement of paracrine factors in this phenomenon, we focused on PGE2, one of the major therapeutic mediators from MSCs (64) as well as a potent inducer of the RSC population in the intestinal epithelium (53), and measured its concentration in MSC-CM. It was found that MSC3D could produce a much higher level of PGE2 in comparison to MSC2D. In addition, blockage of PGE2 action with chemical inhibitors successfully diminished the RSC-inducing impact of MSCs on organoids. Therefore, this study elucidates the role of MSCs as a source of the RSC-inducing factor PGE2, illustrating how MSCs directly contribute to intestinal epithelial regeneration.
Conclusion and Future Perspectives
In summary, the intestinal epithelium contains a diverse and dynamic set of 'potential' or 'reserve' stem cells that can compensate for the loss of homeostatic ISCs. Specifically, the recently introduced concept of RSCs, along with their regulatory and inductive mechanisms, enriches our understanding of intestinal regeneration (Fig. 1); however, from both experimental and technical perspectives, numerous pitfalls exist in understanding the complexities of intestinal plasticity. Genetic differences in the reporter alleles used for lineage labeling often lead to misinterpretation of cell fate tracing results for injury-specific ISC candidates (34). The myriad methodological approaches used to induce epithelial injury (i.e., irradiation, chemotherapeutics, colitogens and infectious agents), each with their distinct etiology, also pose challenges to an integrated understanding of the subsequent regenerative responses (67). Finally, the pronounced cellular plasticity of IECs during the continuous regenerative cascade complicates the classification and characterization of the individual stem cell populations that drive intestinal repair. For instance, the characteristics of reported injury-responsive ISCs often manifest overlapping features, especially in terms of marker expression (Table 1). In most cases, +4 cells tend to be positive for secretory lineage markers (36). When quiescent p57 + /Bmi1 + subsets reactivate upon injury, they undergo spatiotemporal reprogramming and start to express RSC-associated fetal markers (38). On the contrary, it has been reported that the representative RSC markers, Clu and Ly6a, are differentially regulated among the RSC population (57,58). Collectively, advanced lineage tracing strategies with specificity and refined genetic ablation techniques are needed for a comprehensive assessment of candidate markers, aiding the identification of reliable core markers for each regenerative ISC population.
Although YAP-dependent fetal-like reprogramming is crucial for crypt RSC-driven regeneration, it has also been identified as playing an essential role in villus injury scenarios. A poly(I:C)-mediated damage model in mice that mimics acute viral gastroenteritis demonstrates villus-specific loss with severe atrophy without any disturbance in the crypt region (68). During regeneration of the villus structure, a specialized cell type referred to as atrophy-induced villus epithelial cells (aVECs) emerged to cover the atrophic villus lesion before the restoration of barrier junctions. Interestingly, aVECs lost their mature enterocyte markers but gained the fetal markers Clu and Msln and expressed nuclear YAP protein, indicating that adaptive epithelial differentiation into reparative cells resembles a major feature of the fetal conversion process suggested previously (69). Another crucial consideration is the potential correlation between the reactivation of quiescent ISCs/RSCs and tumorigenesis. Indeed, uncontrolled activation of the YAP pathway is regarded as an oncogenic driver (47), and several studies have reported the possible contribution of quiescent ISCs/RSCs to tumor initiation (12,20,53). Therefore, a fine balance is required in the therapeutic induction of quiescent ISCs/RSCs to avoid adverse effects, and further research is needed on the upstream regulators that ensure YAP-mediated regenerative responses transition back to a homeostatic state instead of advancing toward tumorigenesis. Moreover, such work offers insights for the establishment of innovative therapeutic strategies to enhance the intrinsic repair potential of the intestine.
Table 1 .
Overview of key studies on quiescent stem cells | 2024-01-26T06:17:34.864Z | 2024-01-25T00:00:00.000 | {
"year": 2024,
"sha1": "8068cc2acd4096c990ddcaca41f92e02fee2e798",
"oa_license": "CCBYNC",
"oa_url": "https://www.ijstemcell.com/journal/download_pdf.php?doi=10.15283/ijsc23176",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c87faf2662634b7962839977384cde0ec6e3710",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271159730 | pes2o/s2orc | v3-fos-license | Comparison of high risk factors (hot food, hot beverage, alcohol, tobacco, and diet) of esophageal cancer
Abstract Background: Esophageal cancer (EC) is one of the most common malignant tumors with a poor prognosis and is identified as one of the leading causes of cancer death in the world. Many studies have reported that the incidence of EC is closely related to the intake of alcohol, hot food, and hot beverages, as well as smoking and diet. However, there is a lack of studies on the quantitative analysis of these risk factors for EC. If solid quantitative evidence on these risk factors is provided for the prevention of EC, the prevalence of EC can be effectively reduced. We will conduct a systematic review and meta-analysis of high risk factors for EC in order to provide reliable evidence for the prevention of EC. Methods and analysis: We will search PubMed (Medline), the Cochrane Central Register of Controlled Trials, Embase, and Google Scholar for related studies published without language restrictions before December 1, 2019. Two review authors will search and assess relevant studies independently. Trials that used case-control, cross-sectional, or cohort designs, as well as randomized controlled trials (RCTs) and quasi-RCTs, will be included. We will perform subgroup analysis by sex, age, ethnicity, and region. Results: The results of this study will be published in a peer-reviewed journal. Conclusion: We will perform a systematic review and meta-analysis of high risk factors for EC in order to provide reliable evidence for the prevention of EC. However, because of the characteristics of the disease and interventions, large-sample trials that meet the inclusion criteria of this study may be insufficient. We will consider including some high-quality small-sample related trials, which may lead to high heterogeneity and affect the reliability of the results.
Introduction
Esophageal cancer (EC) is one of the most common malignant tumors with a poor prognosis and is identified as one of the leading causes of cancer death in the world. [1,2] Although EC is rare in most western countries, its incidence varies widely around the world and is relatively high in Asia, southern and eastern Africa, and northwestern France, where the incidence of EC exceeds 100 per 100,000 persons per year. [3] This malignant tumor mainly comprises 2 histological types, esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC), with different etiology and pathological characteristics, among which ESCC is the main one. [4] The risk factors for EC follow a geographic pattern. In western countries with a low incidence of EC, smoking and drinking are the main risk factors for the occurrence of EC, [5][6][7][8][9][10] while in regions with a high incidence, the consumption of hot food and beverages and a diet low in fruit and vegetables are closely related to the occurrence of EC. [11][12][13][14] The correlation between these high risk factors and the incidence of EC is of great clinical significance. For a malignant tumor with poor prognosis, these risk factors can be changed and easily eliminated, with little impact on patients, thus effectively reducing the incidence of EC. We will conduct a systematic review and meta-analysis of high risk factors for EC in order to provide reliable evidence for the prevention of EC.
Objective
We will evaluate the high risk factors (hot food, hot beverage, alcohol, tobacco, and diet) of EC among cases and controls.
Methods
This protocol is conducted according to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) statement. [15] We will report the results of this systematic review and meta-analysis adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. [16] This protocol has been registered in the PROSPERO network (registration number: CRD42019124789).
3.1. Eligibility criteria 3.1.1. Types of studies. Trials that used case-control, cross-sectional, or cohort designs, as well as randomized controlled trials (RCTs) and quasi-RCTs, published or unpublished, will be included, provided they have been completed and compared the correlation between these risk factors (hot food, hot beverage, alcohol, tobacco, and diet) and the incidence of EC.
3.1.2. Types of participants. The participants will be adults with histologically or cytologically confirmed EC. There will be no restrictions on sex, ethnicity, economic status, and education.
Types of exposure factors.
According to the types of exposure factors for patients with EC, the studies included will be divided into the following categories.
Studies examined the consumption of hot food and beverage among EC cases and controls. Studies examined the consumption of alcohol among EC cases and controls. Studies examined the tobacco among EC cases and controls. Studies examined the diet among EC cases and controls.
3.1.4. Types of outcome measures. The outcome will be correlation between high risk factors and the incidence of EC.
Information sources
We will search PubMed (Medline), the Cochrane Central Register of Controlled Trials, Embase, and Google Scholar for related studies published before December 1, 2019, without language restrictions.
Search strategy
We will use relevant keywords or subject terms adhering to Medical Subject Heading (MeSH) terms to search for eligible studies in the electronic databases mentioned above, without language restrictions. The PubMed search strategies are shown in Table 1.
Data collection and analysis
We will utilize the measures described in the Cochrane Handbook for Systematic Reviews of Interventions to pool the evidence. [17] 3.4.1. Study selection. Two reviewers (TCC, ZMS) will independently screen the title and abstract of every retrieved record and identify whether the trials meet the inclusion criteria as designed and described in this protocol. Two authors (TCC, ZMS) will, in duplicate and independently, screen the full text of all potentially eligible studies to exclude irrelevant studies or determine eligibility. The two reviewers will list all the included studies and document the primary reasons for exclusion of studies that do not conform to the inclusion criteria. Disagreements between the two authors will be resolved by discussion with the third author (JBL) and, if necessary, by consulting the fourth author (MQK). We will show the selection process in detail in the PRISMA flow chart.
3.4.2.
Data extraction and management. The 2 authors (TCC, ZMS) will extract the following data independently from the studies included.
Study characteristics and methodology: the first author, publication date, country or region, study design, periods of data collection, follow-up duration, total duration of study, and withdrawals, etc. Participant characteristics: sex, age, tumor stage, pathology diagnosis, ethnicity, performance status, pathologic tumor size, and inclusion criteria, etc. Interventions (exposure factors): hot food and drink intake (temperature scale), alcohol consumption, tobacco intake, dietary habits (food composition), etc. Other data: overall survival, 5-year survival, median survival, 95% confidence intervals (CIs), quality of life, adverse events, etc. We will record all the data extracted in a pre-designed table and consult the first author of the study by e-mail before determining eligibility if the reported data are unclear or missing.
Assessment of risk of bias in included studies
Two authors (TCC, ZMS) will independently use the Cochrane Handbook for Systematic Reviews of Interventions to assess the risk of bias of each included study based on the following domains: random sequence generation (selection bias); allocation concealment (selection bias); blinding of participants and personnel (performance bias); blinding of outcome assessment (detection bias); incomplete outcome data (attrition bias); selective outcome reporting (reporting bias); other bias. [15] Each domain will be assessed as high, low, or unclear risk of bias. The results and details of the assessment will be reported in a risk of bias graph.
Data analysis
The data will be synthesised using Review Manager 5.3 software. We will conduct a meta-analysis only if the data gathered from the included trials are judged to be similar enough to ensure a meaningful result. The chi-squared test and the I 2 statistic will be used to assess statistical heterogeneity among the included trials in matched-pairs comparisons for the standard meta-analysis. The random effects model will be applied to analyze the data if there is substantial heterogeneity (P < .1 or I 2 statistic >50%), in which case the trials will be regarded as obviously heterogeneous. Otherwise, we will utilize the fixed effects model to analyze the data. The Mantel-Haenszel method will be adopted to pool the binary data, and the results will be reported as relative risks (RRs) with 95% confidence intervals (CIs). The continuous data will be pooled by the inverse variance method, and the results will be shown as standardized mean differences (SMDs) with 95% CIs.
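As a rough illustration of the pooling logic described above, the following Python sketch computes Cochran's Q and the I 2 statistic and then switches between a fixed-effect and a DerSimonian-Laird random-effects model using the P < .1 / I 2 > 50% threshold stated in this protocol. It is only a sketch under simplifying assumptions: it pools log relative risks with generic inverse-variance weights rather than the Mantel-Haenszel method named above, and the function name and the three example studies are hypothetical.

```python
import numpy as np
from scipy import stats

def pool_log_rr(log_rr, se):
    """Illustrative inverse-variance pooling of per-study log(RR) estimates."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)             # Cochran's Q
    df = len(log_rr) - 1
    p_het = stats.chi2.sf(q, df)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    if p_het < 0.1 or i2 > 50:                        # protocol's heterogeneity threshold
        c = np.sum(w) - np.sum(w**2) / np.sum(w)      # DerSimonian-Laird tau^2
        tau2 = max(0.0, (q - df) / c)
        w = 1.0 / (se**2 + tau2)                      # random-effects weights
    est = np.sum(w * log_rr) / np.sum(w)
    half_width = 1.96 * np.sqrt(1.0 / np.sum(w))
    return np.exp(est), (np.exp(est - half_width), np.exp(est + half_width)), i2

# Hypothetical log(RR) values and standard errors from three studies
print(pool_log_rr([0.41, 0.63, 0.22], [0.15, 0.20, 0.12]))
```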
3.6.1. Subgroup analysis. If there is high heterogeneity and the data are sufficient, subgroup analysis will be conducted to search for potential causes of heterogeneity. Subgroup analysis will be performed by ethnicity, history of smoking, tumor stage, and type of operation.
3.6.2. Sensitivity analysis. Sensitivity analysis will be conducted to assess the reliability and robustness of the pooled results by eliminating trials with a high risk of bias.
Publication bias
If there are ≥10 trials included, we will construct a funnel plot and use the Egger test to assess publication bias. If reporting bias is suspected, we will consult the study author to get more information. If publication bias does exist, we will apply the trim and fill method to analyze publication bias in the trials. [18]
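Along the same lines, a minimal sketch of the Egger regression test is shown below, assuming at least 10 included studies as stated above. It regresses the standardized effect on precision and tests whether the intercept differs from zero; the data are simulated purely for illustration, and the trim and fill step itself is not implemented here.

```python
import numpy as np
from scipy import stats

def egger_test(effects, se):
    """Illustrative Egger regression test for funnel-plot asymmetry."""
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    res = stats.linregress(1.0 / se, effects / se)        # precision vs. standardized effect
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), len(effects) - 2)          # two-sided p-value for the intercept
    return res.intercept, p

# Hypothetical effect estimates (log RR) and standard errors from ten studies
rng = np.random.default_rng(0)
se = rng.uniform(0.1, 0.4, 10)
effects = 0.3 + rng.normal(0.0, se)
print(egger_test(effects, se))
```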
Evidence evaluation
We will evaluate all the evidence according to the criteria of GRADE (imprecision, study limitations, publication bias, consistency of effect, and indirectness bias). The quality of all evidence will be evaluated as 4 levels (high, moderate, low, and very low). [19]
Discussion
EC is one of the most common malignant tumors with a poor prognosis and identified as one of the leading causes of cancer death in the world. This malignant tumor mainly consists of 2 main histological types, esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC), with different etiology and pathological characteristics, among which ESCC is the main one.
In western countries with a low incidence of EC, smoking and drinking are the main risk factors for the occurrence of EC, while in regions with a high incidence, the consumption of hot food and beverages and a diet low in fruit and vegetables are closely related to the occurrence of EC. The correlation between these high risk factors and the incidence of EC is of great clinical significance. For a malignant tumor with poor prognosis, these risk factors can be changed and easily eliminated, with little impact on patients, thus effectively reducing the incidence of EC.
We will conduct a systematic review and meta-analysis of high risk factors for EC in order to provide reliable evidence for the prevention of EC. We will try to include some small sample studies. Although our team has experience in carrying out a systematic review and meta-analysis, there may be high heterogeneity and low reliability of evidence, which is the limitation of this study. | 2019-04-28T13:03:20.913Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "f80eb2631c50ee293a6800a52c22de20923923df",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000015176",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f80eb2631c50ee293a6800a52c22de20923923df",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209356830 | pes2o/s2orc | v3-fos-license | Solutions of the Generalized Abel’s Integral Equations of the Second Kind with Variable Coefficients
Applying Babenko’s approach, we construct solutions for the generalized Abel’s integral equations of the second kind with variable coefficients on R and Rn, and show their convergence and stability in the spaces of Lebesgue integrable functions, with several illustrative examples.
Introduction
In 1823, Abel studied a physical problem regarding the relationship between kinetic and potential energies for falling bodies and constructed the integral equation [1][2][3][4] g(x) = ∫_c^x (x − t)^{-1/2} u(t) dt, c > 0, where g(x) is given and u(x) is unknown. Later on, he worked on a more general integral equation with a power-law kernel, which is called Abel's integral equation of the first kind. Abel's integral equation of the second kind additionally contains the unknown function outside the integral, multiplied by a constant λ. Abel's integral equations are related to a wide range of physical problems, such as heat transfer [5], nonlinear diffusion [6], the propagation of nonlinear waves [7], and applications in the theory of neutron transport and traffic theory. There are many studies [8][9][10][11][12][13][14] on Abel's integral equations, including their variants and generalizations [15,16]. In 1930, Tamarkin investigated integrable solutions of Abel's integral equations under certain conditions by several integral operators [17]. Sumner [18] studied Abel's integral equations using the convolutional transform. Minerbo and Levy [19] found a numerical solution of Abel's integral equation by orthogonal polynomials. In 1985, Hatcher [20] worked on a nonlinear Hilbert problem of power type, solved in closed form by representing a sectionally holomorphic function by means of an integral with power kernel, and transformed the problem to one of solving a generalized Abel's integral equation. Using a modification of Mikusinski operational calculus, Gorenflo and Luchko [21] obtained an explicit solution of the generalized Abel's integral equation of the second kind, in terms of the Mittag-Leffler function of several variables.
where λ i is a constant for i = 1, 2, • • • , m, and I µ is the Riemann-Liouville fractional integral of order µ ∈ R + with initial point zero [22]. Lubich [10] constructed the numerical solution for the following Abel's integral equation of the second kind based on fractional powers of linear multistep methods, where x ∈ [0, T] and α > 0. The case α = 1/2 is encountered in a variety of problems in physics and chemistry [23]. Pskhu [24] considered the following generalized Abel's integral equation with constant coefficients, where α k ≥ 0 and x ∈ (0, a), and constructed an explicit solution based on the Wright function and convolution. Li et al. [25][26][27] recently studied Abel's integral Equation (1) for any arbitrary α ∈ R in the generalized sense based on fractional calculus of distributions, inverse convolutional operators and Babenko's approach [28]. They obtained several new and interesting results that cannot be realized in the classical sense or by the Laplace transform. Many applied problems from physical science lead to integral equations which can be converted to the form of Abel's integral equations for analytic or distributional solutions in the case where classical ones do not exist [15,27]. Letting and a > 0, we consider the generalized Abel's integral equation of the second kind with variable coefficients, where x ∈ (0, a), a i (x) is Lebesgue integrable and bounded on (0, a), g(x) is a given function in L(0, a) and u(x) is the unknown function. Clearly, Equation (2) turns out to be Equation (3) if n = 1 and a 1 (x) = a 1 (constant). Equation (3) is the classical Abel's integral equation of the second kind, with the solution given by Hille and Tamarkin [29], where Following a similar approach, we also establish a convergent and stable solution for the generalized Abel's integral equation on R n with variable coefficients, where I α k is the partial Riemann-Liouville fractional integral of order α ∈ R + with respect to x k , with initial point 0,
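Since the displayed definition did not survive extraction in the paragraph above, the following LaTeX fragment restates the standard Riemann-Liouville fractional integral with initial point zero, which is the operator the text refers to; the normalization shown is the usual one, though the original paper's notation may differ in minor details.

```latex
% Standard Riemann-Liouville fractional integral of order \mu > 0 with
% initial point zero (the convention referred to in the text above).
\[
  \bigl(I^{\mu} u\bigr)(x) \;=\; \frac{1}{\Gamma(\mu)} \int_{0}^{x} (x-t)^{\mu-1}\, u(t)\, \mathrm{d}t,
  \qquad x > 0 .
\]
```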
The Main Results
Theorem 1. Let x ∈ (0, a), a i (x) be Lebesgue integrable and bounded on (0, a) for i = 1, 2, • • • , n, and g(x) be a given function in L(0, a). Then the generalized Abel's integral equation of the second kind with variable coefficients has the following convergent and stable solution in L(0, a) where which implies, by Babenko's approach (treating the operator like a variable), that Let f be the usual norm of f ∈ L(0, a), given by Then, we have from [30] where This implies that .
Since a i (x) is bounded over (0, a), there exists M > 0 such that sup x∈(0,a) Then, On the other hand, since there exists m i ≥ m/n for some i by noting that m 1 + m 2 + • • • + m n = m, and the factor Γ(m j α n + 1) ≥ 1/2 for j = i. Hence, , and Furthermore, the solution is stable from the last inequality. This completes the proof of Theorem 1.
Illustrative Examples
Let α and β be arbitrary real numbers. Then it follows from [31] Example 1. Assume α > 0. Then Abel's integral equation with a variable coefficient u(x) − x α I 2.5 u(x) = x, x ∈ (0, a) has the following stable solution Clearly, where m ≥ 1.
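To give a numerical feel for Example 1, the sketch below iterates the map u ↦ g + x α I 2.5 u, whose fixed point is the solution constructed above, using a simple trapezoidal discretization of the Riemann-Liouville integral. It is only an illustrative check under assumptions made here: the choice α = 1, the interval (0, 1), the grid resolution, and the number of iterations are all arbitrary and are not taken from the paper.

```python
import numpy as np
from math import gamma

def rl_integral(u, x, mu):
    """Riemann-Liouville integral I^mu u on the grid x (initial point 0), trapezoidal rule.
    For mu = 2.5 the kernel (x - t)**(mu - 1) is non-singular, so this simple rule suffices."""
    out = np.zeros_like(x)
    for i, xi in enumerate(x):
        t = x[: i + 1]
        out[i] = np.trapz((xi - t) ** (mu - 1) * u[: i + 1], t) / gamma(mu)
    return out

# Example 1 with the hypothetical choice alpha = 1: u(x) - x * I^{2.5} u(x) = x on (0, 1)
alpha, mu = 1.0, 2.5
x = np.linspace(0.0, 1.0, 400)
g = x.copy()
u = g.copy()
for _ in range(50):                     # successive approximations (Babenko-style series)
    u = g + x**alpha * rl_integral(u, x, mu)

print(u[-1])                            # approximate value of the solution at x = 1
```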
Example 2. Let a > 0. Then Abel's integral equation u(x) − xI 0.5 u(x) − x 0.5 Iu(x) = x −0.5 , x ∈ (0, a) has the following stable solution in L(0, a), where Clearly, where C k is defined as above. Furthermore, where B m,k is defined above.
Remark 1.
As far as we know, the solution for the generalized Abel's integral equation with variable coefficients over the interval (0, a) is obtained here for the first time. However, this approach seems unworkable if the interval is unbounded, as the Riemann-Liouville fractional integral operator is then unbounded. In the proof and computations of the above examples, we should point out that the convolution operations are carried out prior to the functional multiplications, according to our approach.
Assuming that ω i > 0 for all i = 1, 2, • • • , n, and Ω = (0, ω 1 ) × (0, ω 2 ) × • • • × (0, ω n ), we can derive the following theorem by a similar procedure. Theorem 2. Let α k ≥ 0 for k = 1, 2, • • • , n and there is at least one α i > 0 for some 1 ≤ i ≤ n. Then the generalized Abel's integral equation of the second kind with variable coefficients on R n for a given function g has the following convergent and stable solution in L(Ω), where a k (x) is Lebesgue integrable and bounded on Ω. It remains to show that the above is convergent and stable in L(Ω). Let f be the usual norm of f ∈ L(Ω), given by Then, it follows from [30] for where This implies for α k > 0 that where In particular for α k = 0, Therefore, , where S = max{ω α 1 1 , • • • , ω α n n }. | 2019-12-12T10:50:11.282Z | 2019-12-05T00:00:00.000 | {
"year": 2019,
"sha1": "62e55d754d0da4ff09ddc57c135204e8b0352c58",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1680/8/4/137/pdf?version=1575767051",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bbc8a3af10e370419149c4eefcf046ad80df01c8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
53283720 | pes2o/s2orc | v3-fos-license | Context-Aware Mobile Applications: Taxonomy of factors for building approaches
The fusion of sensing mechanisms inside mobile devices (e.g.: GPS, accelerometers) has driven the growth of context-aware mobile applications. Currently, there are building approaches for these kinds of applications, but they do not have the flexibility, for example, to derive applications combining different location sensing mechanisms. In this paper, we present a first proposal of a taxonomy of factors that could be considered by context-aware mobile application building approaches, in order to provide variability in the kinds of derived applications. The aim is to generate a discussion that can contribute to the unification of aspects that should be addressed by these building approaches. To complement the taxonomy, we present the analysis of an interview that was conducted with regard to developers who use (or could use) these building approaches. We hope this will enrich the discussion in relation to these kinds of approaches.
INTRODUCTION
In recent years, technological advances have allowed the emergence of a varied range of mobile applications. The incorporation of sensors in mobile devices (for example, GPS and accelerometers) has driven the growth of context-aware mobile applications [1]. While these applications have been studied for several years [2], [3], only in recent years have technological advances enabled these applications to become useful for people (for example, applications that use GPS to provide information or services).
The concept of context has not only been explored by different areas of Computer Science [4] (e.g.: Artificial Intelligence, Home Automation, etc.), but it has also been addressed by different authors from different perspectives [1]. Some of these perspectives are specifically oriented, for example, offering modeling solutions, while others are building approaches for these kinds of applications [1]. That is, context-aware mobile applications are being studied at different levels of abstraction. However, there is still no consensus as to how to classify the contexts [5] or what characteristics should be considered relevant in their development [6]. Therefore, there is currently no unified solution for such applications as mentioned in [1] and [6]. This aspect is currently an open area of research. Moreover, an important issue is how usable these applications are [7].
Development of context-aware mobile applications is a complex task [6], so, developers look for building approaches to speed up this task. There are some approaches aimed at non-expert users [8] and others that require more technical knowledge [9] such as modeling features. These approaches address certain specific aspects of context-aware mobile applications, for example, only GPS is provided as a sensing mechanism. So, the kind of application that could be derived is limited. This restriction is generally related to the available APIs that these approaches use.
Even though indoor location systems have been explored in recent years [10], [11], [12], there is not yet an overall unified solution for these systems, as GPS is for outdoor spaces. Therefore, this is a limitation when providing these kinds of sensing mechanisms as part of building approaches.
Building approaches speed up creation but they usually lose sight of the concepts already studied with regard to these applications [6]. From this arises the motivation of this paper, to be able to contribute to the unification of factors related to context, in particular, those that should be considered by building approaches so that they are more flexible when generating applications of this kind.
The goal of the paper is to present a first proposal of a taxonomy of factors that could be considered by context-aware mobile application building approaches. The aim is to generate a discussion that can contribute to the unification of factors that should be addressed by these building approaches.
The taxonomy of factors proposed is oriented to provide, from the approaches' perspective, a wide range of variability [13] in the kinds of generated applications; for example, applications that could use different kinds of location sensing. For this, a bibliographic analysis has been carried out to identify those factors that are relevant in relation to giving flexibility to building approaches.
To complement the taxonomy, we present the analysis of an interview that was conducted with regard to developers who use (or could use) these building approaches. We hope this will enrich the discussion in relation to these approaches. This paper is structured as follows. Section 2 describes related work. In Section 3, the taxonomy of factors is presented. The interview carried out in relation to building approaches is analyzed in Section 4. Section 5 generates a discussion space in relation to the theme. The conclusions and future work are described in Section 6.
II. RELATED WORKS
The topic of context has been studied for several years now, as is mentioned in [2] and [3]. Each of the authors who has approached it has done so from a specific perspective, or has focused on specific characteristics [1]. Several proposals of frameworks or platforms for context-aware mobile applications are described in [2], [3] and [7]. In this section, the related works are considered in terms of the goal of the presented paper. In [6] it is emphasized that, in relation to the context, it is required to consider its level of relevance, the way contexts are combined, as well as precision and accuracy in relation to the sensed value; for example, the precision and accuracy of the GPS, in order to determine how to interpret the sensed value. Moreover, each indoor location sensing mechanism has its own accuracy [10], [11], [12]. So, according to this, each is more appropriate for a specific kind of application; for example, BLE (Bluetooth Low Energy) is better for indoor navigation and tracking [11]. Therefore, this information must be configured in some way; however, this is not generally contemplated by building approaches [14].
There is not yet a unified consensus with regard to the categorization of contexts. In [5], 36 context models are analyzed, and five classifications shared by these models are identified: social context, location, time, physical context and user context. On the other hand, the authors analyzed context features that are considered by context-aware mobile applications, and they identified 10,498 context elements. When the authors of [5] tried to match these identified context elements with the categorization mentioned, they indicated that more than half did not correspond to any category. In this way, it can be appreciated that there is a gap between applications and approaches (in this case, modeling approaches).
In [1] and [7], the authors specify that contexts can have a passive configuration, which the user performs manually, or an active configuration, in which the application configures itself based on user behavior (this requires a continuous monitoring and learning mechanism). In addition, it is mentioned that sensing mechanisms can be passive, that is, they require the intervention of the user (for example, reading QR codes), or the execution can be active, that is, applications contain an automatic sensing mechanism (for example, GPS). Both configuration and execution are of vital importance in the case of building approaches, where it would be ideal for users (of these approaches) to determine whether they are active or passive. Moreover, some indoor location sensing mechanisms require additional infrastructure, for example, a database of signals, to work properly [10].
There are currently a number of building approaches for context-aware mobile applications. For example, the App Inventor [8] is an "online" program, which allows users to create Android applications without having any technical knowledge. The generated applications can include only GPS as a location sensing mechanism; that is, users can only create applications for outdoor spaces. The App Inventor allows configuring the precision and accuracy of the location sensor (in this case, GPS). However, it is not possible to combine the accelerometer sensor with the orientation sensor, in that the configuration reacts to these three sensors separately. In this case, the App Inventor only focuses on some contexts of the device and the user's location.
On the other hand, WebRatio Mobile [9] allows users to create context-aware mobile web applications. WebRatio Mobile is oriented to expert users, who should have knowledge of databases and hypermedia design. The generated applications are packaged in PhoneGap, allowing them to be used on both Android and iOS platforms. Users can define contexts related to Device, Network Connectivity and Position. In this case, the location setting is also limited to GPS, allowing only the generation of applications for outdoor spaces. Precision and accuracy related to GPS can be defined.
It can be observed that these approaches ([8] and [9]) are limited; in particular, they focus only on building applications for outdoor spaces, where certain contexts are also considered, most of them related to the available APIs. Moreover, neither [8] nor [9] addresses the categorization of contexts identified by [5]. For example, they do not consider any aspects with regard to social context or time.
For indoor spaces, there is a wide variety of indoor location sensing mechanisms [10], [11], [12]. However, there is currently no overall unified solution for these sensing mechanisms (as is the case of GPS for outdoor spaces). This hinders the provision of building approaches for indoor applications.
Consequently, it is expected that this paper will be able to contribute to the future discussion with regard to having more flexible building approaches, for example, generating context-aware mobile applications that combine different location sensing mechanisms.
III. TAXONOMY OF FACTORS PROPOSED
This section presents a first proposal of a taxonomy of factors that should be considered by context-aware mobile application building approaches. With this taxonomy, it is expected that the building approaches will have a wide range of variability [13] in the kinds of generated applications. To determine these factors, a bibliographic analysis was carried out, highlighting especially the following works detailed in Section 2: [1], [5], [6] and [7]; these works were used as references for the specification of the taxonomy proposed. Figure 1 shows the taxonomy of factors proposed for this paper, with six factors being identified: relevance, combination, precision and accuracy, categorization, type of configuration and type of execution. Below, each factor identified in Figure 1 is detailed: Relevance: approaches should be able to indicate the relevance of each of the defined contexts. This would allow, for example, generating reduced versions of applications with the most relevant contexts. For now, approaches such as [8] and [9] do not indicate relevance explicitly, but everything defined is relevant.
Combination: approaches should allow combining contexts. This implies not only identifying which contexts are combined, but also how they behave when they are considered together. For example, some indoor location sensing mechanisms could be combined in order to work more precisely [10].
Precision and Accuracy: approaches should specify precision and accuracy for each physical sensor (e.g. GPS), as is the case in [8] and [9]. Each indoor location sensing mechanism has its own configuration [10], [11].
Categorization: approaches should have a way to categorize contexts. For example, contexts of users, the environment or mobile objects need to be handled in completely different ways (using different sensing mechanisms), so categorization will help improve how each of them is derived inside applications.
Type of configuration: approaches should allow the identification of which type of configuration will be used, for example, passive or active, as detailed in [1].
According to the type of configuration, approaches should allow more detailed settings to be provided. For example, for a passive configuration it should be possible to define how users (of the generated application) will be able to make this configuration; suppose a data form with the possible options to set is defined.
Type of execution: approaches should allow identification of the type of execution associated with each context, for example, passive or active, as detailed in [1]. For each of these possible types, approaches should provide support to set it. For example, in the case of passive location execution (that is, users intervening in some way), approaches should allow specifying how this is addressed. Let us suppose that the user has to read a QR code to determine his/her location; it should be indicated (from the approach perspective) how the generated application behaves, for example, when the QR code read is incorrect.
Thus, a taxonomy of factors proposed is detailed. Note that this is a first proposal and in the future, we should explore the feasibility of addressing these factors specifically by building approaches for context-aware mobile applications.
Up to now, these factors have been identified so that they can be addressed by the building approaches, in order to have greater variability [13] in terms of the derived applications. In particular, for indoor location sensing mechanisms, the factors should be analyzed in more detail to provide more guidance to building approaches, because each mechanism has its own accuracy, or some mechanisms could be combined better with others [10]. A sketch of how these factors could be represented is shown below.
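To make the taxonomy more tangible, the following Python sketch shows one possible way a building approach could represent the six factors as a configuration data structure. All class names, fields, and example values are hypothetical illustrations introduced here; they are not part of any existing tool such as the App Inventor or WebRatio Mobile.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Categorization(Enum):            # factor: categorization
    USER = "user"
    ENVIRONMENT = "environment"
    MOBILE_OBJECT = "mobile_object"

class Mode(Enum):                      # factors: type of configuration / type of execution
    PASSIVE = "passive"                # requires user intervention (e.g. scanning a QR code)
    ACTIVE = "active"                  # automatic (e.g. GPS readings)

@dataclass
class ContextFactor:
    name: str
    category: Categorization
    relevance: int                     # factor: relevance (e.g. 1 = low ... 5 = high)
    configuration: Mode                # factor: type of configuration
    execution: Mode                    # factor: type of execution
    precision_m: Optional[float] = None   # factor: precision and accuracy (sensor-dependent)
    accuracy_m: Optional[float] = None
    combined_with: List[str] = field(default_factory=list)  # factor: combination

# Hypothetical example: an indoor location context based on BLE beacons
ble_location = ContextFactor(
    name="indoor_location_ble",
    category=Categorization.USER,
    relevance=5,
    configuration=Mode.PASSIVE,        # the user selects the building/floor manually
    execution=Mode.ACTIVE,             # beacon signals are sensed automatically
    precision_m=2.0,
    accuracy_m=3.0,
    combined_with=["compass_heading"],
)
print(ble_location)
```

A derivation engine in a building approach could read a list of such ContextFactor entries and, for instance, omit low-relevance contexts when generating a reduced version of the application.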
IV. DEVELOPMENT PERSPECTIVE ANALYZED
This section presents the analysis of an interview that was conducted regarding developers who use (or could use) these building approaches. The goal of this interview was to provide a discussion with information complementary to the taxonomy presented in this paper. That is, a building approach should not only address the factors of the taxonomy proposed, but also the requirements of the developers who use these approaches [7].
The interview involved six university professors with experience in the area of mobile application development. In addition, all the interviewees had taken the course "Mobile Computing" given at the Faculty of Informatics of the National University of La Plata (Argentina) in 2014. In this course, advanced concepts were offered in relation to the design of context-aware mobile applications. That is, the interviewees had basic knowledge in relation to the context concept.
Below are listed the two most relevant questions that were presented to the interviewees, and that are related to this paper: Question 1: What building approaches do you know for the development of context-aware mobile applications?
Question 2: What do you expect from a context-aware mobile application building approach?
Question 1 was open; the interviewees could answer with the name of any approach they knew. The result of this question can be observed in Figure 2 where two approaches known to some interviewees are App Inventor [8] and WebRatio [9] (both described briefly in Section 2). Note that some of the interviewees were not aware of any approach.
In the case of Question 2, it was closed. The interviewees could choose between the following two options in a non-exclusive way: "generation of applications, without knowledge of specific platforms" or "generate applications that encode low-level code for each specific platform". Figure 3 shows the results for this question. It can be appreciated that 62.7% preferred to generate applications without having specific knowledge of the platforms.
Although other questions were asked, they do not provide significant results in terms of this paper. From Question 2, it can be observed that there is a need to have approaches that do not require prior knowledge, but which also offer experts the possibility of making code configurations at a low level. For the latter, a building approach could use the taxonomy of factors proposed to achieve greater flexibility in the generated applications. These were initial interviews; we acknowledge that, with so few people and only two questions, no significant conclusions can be extracted. Therefore, other user profiles should be interviewed (e.g. teachers [15]), in order to understand what a building approach requires for each specific domain.
V. DISCUSSION
In this section, a discussion is presented in relation to the theme of building approaches for context-aware mobile applications, and what is proposed in this paper.
The factors of relevance, combination, and precision and accuracy are oriented towards building approaches for expert users, for example as detailed in [9]. This is because a non-expert user may be unaware of the meaning of, for example, accuracy. In the case of the categorization of contexts, depending on how it is presented by the approach, it could be understood by non-expert users; for example, whether it is the context of the user or of the environment in which they are moving.
Existing approaches (whether oriented to expert users or not, for example [8] and [9]) generally generate applications for outdoor spaces, due to having access to GPS APIs. With the advancement of new mechanisms relating to indoor location ([10], [11] and [12]), it is expected that, in the near future, the range of applications that could be generated will expand. For this, it is going to be essential to be able to incorporate the factors of the taxonomy proposed in this paper.
What users/developers will be able to generate depends on the flexibility provided by building approaches. That is why we believe that the taxonomy of factors proposed could contribute to improving the flexibility of approaches. This in turn will contribute to meeting the expectations of users with regard to such approaches, as mentioned in Section 4.
VI. CONCLUSION AND FUTURE WORK
In this paper, a first proposal is presented of a taxonomy of the factors that could be considered by building approaches for context-aware mobile applications, in particular when it comes to achieving greater variability in the kinds of generated applications. It is expected that this paper will contribute to the discussion in relation to this topic, especially how to move towards a unified solution that can address the building of context-aware mobile applications.
An interview was also presented reflecting the expectations of expert users in relation to building approaches. Some of them expect that building approaches allow setting low-level details in order to have more control over the generated applications. In this case, approaches would need to be enriched with more low-level settings according to, for example, how each platform manages every sensing mechanism.
We are working on defining a building approach for context-aware mobile applications based on the taxonomy of factors proposed, allowing developers to generate applications not only for outdoor spaces, but also for indoor spaces. As a result of this, the taxonomy proposed could be enriched with new factors that may arise.
As future work, we will explore how to enrich our taxonomy with other related existing works that explore complementary aspects of context-aware mobile applications. For example, in [16] the authors present a taxonomy related to context-aware computing and dynamic software variability management. | 2018-11-13T14:13:09.739Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "3502fef76e4a4d10d35f5d2c98be6383086deb44",
"oa_license": "CCBYNCSA",
"oa_url": "http://sedici.unlp.edu.ar/bitstream/handle/10915/80752/Documento_completo.pdf?sequence=1",
"oa_status": "GREEN",
"pdf_src": "IEEE",
"pdf_hash": "01283800ddcd22b1bca7476f3645cc7b9b2edbbb",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54217803 | pes2o/s2orc | v3-fos-license | The Lorentz Anomaly via Operator Product Expansion
The emergence of a critical dimension is one of the most striking features of string theory. One way to obtain it is by demanding closure of the Lorentz algebra in the light-cone gauge quantisation, as discovered for bosonic strings more than forty years ago. We give a detailed derivation of this classical result based on the operator product expansion on the Lorentzian world-sheet.
Introduction
More than forty years ago, 26 was noted as a critical dimension for the dual-resonance models that preceded string theory [1,2,3]. One way of obtaining the critical dimension has been to show that in light-cone gauge quantisation the longitudinal Lorentz-operators M i− , i = 1, . . . , d > 1, only commute in D = d + 2 = 26 space-time dimensions [4]. In this computation the generators M i− are normal-ordered infinite sums cubic in the oscillator modes of the quantised string. 1 Whereas the quantisation of the string is well understood, much less is known for general M -dimensional extended objects; however, in [6] it was noted that, as a consequence of Lorentz invariance, a dynamical symmetry exists -which might give a way to algebraically determine the spectrum if one can understand this symmetry in the quantum theory. Classically, these higher-dimensional objects can be described -similarly to the string -in the light-cone gauge, but the corresponding world-volume theories are not free as in the case of strings. Therefore one does not have an expansion in terms of harmonic oscillators that would make it possible to quantise the theory directly. On the other hand, one can still use field theory techniques like operator-product expansions (OPE) in the computations.
These considerations motivated us to rederive the critical dimension of bosonic string theory in the light-cone gauge quantisation by only using the operator product expansion on the Lorentzian world-sheet. As the world-sheet theory is free, the OPEs are simple and completely equivalent to the harmonic oscillator commutators of the corresponding modes (so that it is guaranteed that the result is the same as in the oscillator approach). The computation, however, turned out to be surprisingly tedious and subtle (it involves a careful treatment of composite and non-local operators). We decided to write it up and present it in this note, in the hope that the approach might be useful for higher dimensional extended objects (note also [7]), -as well as an alternative derivation of the critical dimension (see also [8] where yet another derivation of the Lorentz anomaly was presented). We should add that, similar to the oscillator computation, it seems that one can not pinpoint any particular step of the computation where the anomaly arises; it rather appears as a result of an interplay of several anomalous terms that arise due to the regularisations needed to define the composite operators. Note however that, because we work on the (Lorentzian) cylinder, there is no need to artificially introduce a normal-ordering constant in the computation; it is already set to the right value by using the most natural definition of composite operators.
Let us describe the computation in a language that can also be used for higher-dimensional extended objects (see [9] for some naive heuristic considerations). The string is parameterised by a map from the Lorentzian cylinder (time coordinate t and angular variable ϕ) to a flat Minkowski space. In the light-cone gauge the degrees of freedom are carried by the transversal fields x and their conjugate momenta p, as well as by the zero mode ζ 0 of the coordinate ζ = x − and by its conjugate variable η.
Classically the longitudinal generators of the Lorentz algebra are given by Here is the classical Hamiltonian density (corresponding to p − ), and is the reconstructed x − coordinate of the string that follows from ζ ′ = 2π so that the last term in (1.3) could be dropped, and ζ can be rewritten as That the M i− Poisson-commute (provided (1.4) holds) is a particular case of a result of Goldstone [10], who for arbitrary dimension M of the extended object solved for ζ in terms of x and p and some Green's function G (ρ is a density satisfying ρ d M ϕ = 1) and then showed that classically, for all M , the generators of the inhomogeneous Lorentz group can be consistently realised on the (η, ζ 0 , x(ϕ), p(ϕ)) phase-space constrained by the consistency of (1.6) (which for M = 1 and ρ = 1 2π simply becomes (1.4)). In the string case considered here, and the reconstructed x − coordinate is which reduces to the expression (1.5) given above. The paper is organised as follows. In section 2 we explain the OPE techniques that are needed to get to a quantum definition of the Lorentz operators. We then derive the commutation relations of all basic fields in section 3. Finally, we compute the crucial commutator [M i− , M j− ] in section 4. The three appendices contain some technical parts of the computation.
Operator product expansion and composite operators
The definition of the Lorentz generators involves products of fields which we have to define properly in the quantum theory. In a free theory this can be done by using an oscillator expansion of the free fields and then defining normal-ordered products by moving annihilation operators to the right of creation operators. Alternatively, we can use the operator product expansion (OPE) of the fields to define composite operators by subtracting the singular part of the OPE. This leads to an equivalent description for free fields, but it can in principle also be used in more general situations where the usual normal ordering prescription in terms of annihilation and creation operators is not possible.
In the case at hand, the x i are massless free fields on the two-dimensional cylinder, and their OPE reads (no summation over i) Then the OPE of the fields x ′ i is given by with the distribution S sing in two variables φ̄ and ϕ given by where P denotes the principal value. To define the operator "x ′ i (ϕ)x ′ i (ϕ)" we use point-splitting, so we evaluate (2.2) for φ̄ = ϕ − ǫ and determine the singular piece, Note that away from ϕ = φ̄, S sing is a regular function, and we can replace φ̄ = ϕ − ǫ. This singular piece is then subtracted to define the product of x ′ i with itself, (2.5) Up to an additive constant this is equivalent to the normal ordering prescription using oscillators. Similarly we have and This then leads to the quantum definition of H, In the definition of ζ we also meet the product of p i and x ′ i . Their operator product expansion only has singularities of contact type, We therefore get a well-defined composite operator just by point-splitting, and we can define the quantum version of ζ as When we define the Lorentz generators M i− we also encounter the product of H and x i as well as the product of ζ and p i , which we have to regularise to obtain well-defined expressions. Let us start with the product of H and x i . The singularities in the operator product expansion follow via Wick's theorem from the individual contractions of x i and the x ′ j appearing inside H, Therefore we can define the quantum product of x i and H by (2.14) A little more work is needed to define the product of p i and ζ, because ζ is defined as a non-local expression in the fields. The possible singularities come from the contact singularity between p i and x ′ j inside ζ, and from the singularity between p i and the p j inside ζ. The contact singularity is avoided if we consider the symmetrised product p i ζ + ζp i , and we find The possible singularity in the symmetrised product of ζ and p i is then The possible singular part is therefore a total derivative in ϕ, which means that it does not matter in the expression for M i− , which involves an integration over ϕ.
Similarly, also the singular part of the product of x i and H is a total derivative (see (2.13)), which vanishes upon integration. Therefore the quantum definition of M i− using symmetrised products and point-splitting is given by This is our starting point for analysing the commutator of M i− and M j− .
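Since the displayed equations of this section did not survive the text extraction, the point-splitting prescription described above may be summarised schematically as follows; the precise prefactors and the explicit form of S sing are those of the original equations (2.3)-(2.5) and are not reproduced exactly here.

\begin{equation*}
  \big(x'_i x'_i\big)(\varphi) \;:=\; \lim_{\epsilon \to 0^{+}} \Big[\, x'_i(\varphi-\epsilon)\, x'_i(\varphi) \;-\; S_{\rm sing}(\varphi-\epsilon,\varphi) \,\Big] ,
\end{equation*}

and analogously for the other composite operators (p i p i , x i H, and the symmetrised p i ζ + ζ p i ) that enter the quantum definitions of H, ζ and M i− .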
Basic commutation relations
To compute the commutators of the Lorentz algebra generators we need to determine the commutators of the fields, which follow from the canonical commutation relations of x i and p j , and of η and ζ 0 . We first list the results, and present the derivation subsequently. The commutators are The derivation is straightforward: where in the second step we integrated by parts, computed the commutator [x i (ϕ), p(ψ) · x ′ (ψ − ǫ)] and took the limit ǫ → 0.
• (3.5): The derivation of the commutator of H and ζ is straightforward, but slightly more involved, where we used that Note that the commutator of H and ζ contains a term that depends on the number D of space-time dimensions.
The crucial commutator
We now want to analyse the commutator of the generators M i− and M j− , which we defined in (2.20). It can be decomposed as Here we used the fact that x i (ϕ + ǫ) and H(ϕ) commute for ǫ > 0 (see (3.7)). There are four types of contributions: the commutator of the terms of the form xH, the two mixed commutators of xH and ζp, and the commutator of the terms of the form ζp.
At the end we want to analyse the behaviour when ǫ, δ and ǭ, δ̄ go to zero. It is not guaranteed that this limit exists, and indeed we will see that e.g. the commutator of the terms of the form xH alone is singular when the regularisation parameters go to zero; this singularity will go away when we combine all contributions to the commutator of M i− and M j− . On the other hand we expect that we can take one set of parameters to zero without getting a singularity in the different contributions: because the individual entries entering the commutator are regularised and do not show any singularity in ǫ, δ or in ǭ, δ̄, the only way a new singularity can appear is through terms that become singular when both types of parameters go to zero (like (ǫ + ǭ) −1 ).
Our strategy will therefore be to always take the limit ǭ, δ̄ to zero first, and then consider the limit when ǫ and δ go to zero.
We start by analysing the commutator of the terms xH in detail. We then present the results for the remaining commutators, and evaluate the total expression.
Commutators of the form [x i H, x j H]
We want to analyse by using the commutation relations that we worked out in section 3. In a first step we obtain We now consider the behaviour when ǭ goes to zero, and we find where in the first summand we shifted the integration variable by ǫ. As expected there is no singularity when ǭ is taken to zero. We now want to analyse the possible singularities in ǫ. The singularities between normal-ordered expressions arise from singularities between the constituents, we have e.g.
OPE normal ordering is in general not associative, and we want to define the normal-ordering of several operators in a right-nested way, In the case at hand we have (see appendix C) so that we find The remaining singularity in ǫ will be cancelled by the contribution C xH,pζ ij that we discuss in the following.
Commutators of the form [x i H, p j ζ]
We now turn to the analysis of the contribution Evaluating the commutator we find (for i = j) (4.14) We expand this expression first in δ̄, and we obtain In the third and fourth line we have subtracted and added the singular piece of p j (φ̄)H(ϕ). In the third line, there is therefore no singularity coming from the operator part when ϕ and φ̄ are close together, and we can just set δ̄ to 0. The fourth line can be evaluated by writing S sing as a derivative and then using partial integration, Inserting this result into (4.15) we obtain We observe that the expression contains no singularity in δ̄. Now we expand in ǫ, The remaining singularity cancels the terms that we found in C xH,xH ij (see (4.12)), so that we find
Remaining commutators and final result
The remaining commutators can be computed analogously to the computations we displayed above, which is done in the appendices A and B, and the results are given in (A.5) and (B.10). Their sum does not contain any singularity, and one obtains Combining now all contributions we find for the quantum commutator the final result which vanishes if D = 26. The dimension-dependent term (linear in D − 2) came from the commutator of H and ζ, the other anomalous terms came from all commutators that occur in the computation. As already remarked in the introduction, we did not introduce a normal-ordering constant to define H. In principle such a shift (H → H + const · η) could be considered (then one would derive from demanding a vanishing commutator [M i− , M j− ] that this constant is zero), but the definition of H via OPE normal ordering on the cylinder appears to be most natural. | 2014-12-21T21:39:12.000Z | 2014-12-21T00:00:00.000 | {
"year": 2014,
"sha1": "cf04fad8c242370b73825745ac3c1e01e5b3eb74",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1412.6838",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3f66edf3528b8e42d1e4be12fd5a88c27123804a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235238400 | pes2o/s2orc | v3-fos-license | Hypouricemia in type 2 diabetes mellitus without nephropathy: A case control study
Introduction: Some previous studies, including our own recent study, have shown low serum uric acid (UA) in diabetic patients compared with nondiabetics, and it has been suggested that the low serum UA levels in diabetics are probably due to the uricosuric effect of urinary glucose. The present study was conducted to gain insight into the pathophysiology of low serum UA levels in diabetics. Materials and Methods: Fasting blood glucose (FBG), post-lunch blood glucose (PLBG), serum UA, and 24 hr urinary UA excretion were estimated in Type 2 diabetics without nephropathy (cases) and in nondiabetic inpatients (controls) admitted to various departments of KIMS hospital. The comparison of serum UA and 24 hr urinary UA excretion between cases and controls, and the correlations between 24 hr urinary UA excretion, FBG, and serum UA in cases, were tested using SPSS version 19. Results: Mean serum UA was lower in diabetics than in nondiabetics, and this difference was significant, whereas 24 hr urinary UA excretion was significantly higher in diabetics than in nondiabetics. In diabetics there was a significant negative association between FBG and serum UA, a positive association between 24 hr urinary UA and FBG, and a nonsignificant negative association of 24 hr urinary UA excretion with serum UA. Summary and Conclusions: At high concentrations of FBG there is an increase in 24 hr urinary UA excretion, providing objective evidence for the hypothesis that low UA levels in diabetics are probably due to inhibition of UA reabsorption in the proximal convoluted tubule of the kidney by glucose.
Introduction
Several studies have shown that serum uric acid (UA) levels are associated with an increased risk of insulin resistance, 1 chronic kidney disease, 2 hypertension, 3 cardiovascular disease, 4 and peripheral arterial disease. 5 Studies have shown conflicting results regarding serum UA levels in diabetic patients. Some studies have shown high serum UA levels in diabetics [6][7][8][9][10][11] and some low serum UA levels in diabetics. [11][12][13][14]16 To explore serum UA levels in diabetics, we had earlier conducted a study comparing serum UA levels in diabetic patients and nondiabetics and correlating fasting blood glucose (FBG) with serum uric acid (UA). 17 In that study we showed that mean UA was lower in diabetics whose FBG was > 126 mg/dl than in nondiabetics whose FBG was < 100 mg/dl, and that this difference was statistically significant. FBG was positively correlated with serum UA in nondiabetics and negatively correlated in diabetics. The conclusion of that study was that at high concentrations of FBG there is a decrease in serum UA level, probably due to inhibition of UA reabsorption in the proximal convoluted tubule of the kidney by urinary glucose. There are very few studies which objectively prove that low serum UA levels in diabetics are due to the uricosuric effect of glucose in urine. This study was therefore carried out to analyze the pathophysiology of low serum UA levels in diabetic patients, i.e., whether the low serum UA levels in diabetics are probably due to inhibition of UA reabsorption in the proximal convoluted tubule of the kidney by glucose, or whether alternate metabolic abnormalities result in low serum UA levels in diabetics compared with nondiabetics. The objectives of the present study were to estimate and compare serum UA and 24 hr urinary UA excretion in Type 2 diabetics (FBG > 126 mg/dl) and nondiabetics (FBG < 110 mg/dl), and to assess the correlations between 24 hr urinary UA excretion, FBG, and serum UA levels in diabetics.
Materials and Methods
The study was conducted at Kamineni Institute of Medical Sciences, Narketpally, Nalgonda district, Telangana, India. Institutional ethics committee clearance and valid informed consent from the subjects were obtained. Subjects for the study were screened from inpatients admitted to the various departments.
Inclusion criteria: Cases: 50 Type 2 diabetic male subjects, age group 45-60 years, FBG > 126 mg/dl, BMI between 25 and 30. Controls: 50 male nondiabetic subjects, age group 45-60 years, FBG < 100 mg/dl, BMI between 25 and 30.
Exclusion criteria: Subjects with a history of smoking, alcoholism, hypertension, or hyperlipidemia.
Patients with diseases that can cause altered uric acid levels, such as obesity and renal disease, were also excluded.
Diabetic male subjects on insulin treatment were likewise excluded. A detailed history was taken from each subject using a standard proforma, recording name, age, sex, occupation, history of diabetes, hypertension, other comorbid conditions, and treatment history. The subjects' height and weight were recorded.
Sample collection: Serum: 5 ml of venous blood was drawn after an overnight fast into a sterile disposable syringe under aseptic conditions. Samples were centrifuged at 3000 rpm for 5 min. Plasma and serum were separated within two hours of blood collection. Care was taken to prevent hemolysis of the samples; lipemic and icteric samples were discarded. The following parameters were estimated on a BS380 autoanalyzer: fasting blood glucose by the GOD-POD method, serum uric acid by the uricase method, urea by the Berthelot reaction, creatinine by Jaffe's kinetic method, triacylglycerol by the GPO-POD ESPAS method, total cholesterol by the CHOD-PAP colorimetric method, LDL by the direct enzymatic method, and HDL by the cholesterol oxidase method. Urine: All patients provided 24 hr urine samples. Samples were semi-quantitatively (i.e., by dip stick) analyzed for glucose, ketone bodies, blood, and proteins. The dip stick scale for glucose spanned from "negative", "trace", "1+", and "2+" to "3+". The following parameters were estimated in the 24 hr urine sample: uric acid by the uricase method, creatinine by Jaffe's kinetic method, and albumin by the immunoturbidimetric method.
Statistical Analysis
The statistical analysis was performed using SPSS software, version 19.0. The descriptive results are expressed as mean ± S.D.; the significance of differences between the case and control groups was assessed using the unpaired Student's t-test. The p values are expressed along with the mean values and S.D., and a p value < 0.05 was considered statistically significant. Pearson's correlation coefficient r was used to assess the correlation between different parameters in the groups analyzed. The results are presented in the form of tables. The 24 hr excretion of UA was higher in cases than in controls, and this difference was statistically significant. The 24 hr excretion of creatinine and albumin was normal in both cases and controls and was comparable between the groups. There was a significant positive correlation between FBG and PLBG in both diabetics and nondiabetics. There was a statistically significant negative association between FBG and serum UA, and a statistically nonsignificant positive association between urinary UA and FBG. There was a statistically nonsignificant negative association between urinary UA and serum UA.
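For illustration, the comparison and correlation described above can be reproduced with standard statistical software; the following minimal Python sketch (using scipy) shows the two procedures on purely hypothetical numbers, not the study data.

import numpy as np
from scipy import stats

# Hypothetical serum uric acid values (mg/dl); the real study groups had n = 50 each.
ua_cases = np.array([3.9, 4.2, 3.5, 4.0, 3.8])        # Type 2 diabetics
ua_controls = np.array([5.1, 4.8, 5.4, 5.0, 4.7])      # nondiabetics

# Unpaired (independent-samples) Student's t-test between cases and controls.
t_stat, p_value = stats.ttest_ind(ua_cases, ua_controls)
print(f"cases mean +/- SD: {ua_cases.mean():.2f} +/- {ua_cases.std(ddof=1):.2f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f} (significant if p < 0.05)")

# Pearson correlation between fasting blood glucose and serum UA within the cases.
fbg_cases = np.array([150.0, 180.0, 210.0, 165.0, 240.0])   # hypothetical FBG, mg/dl
r, p_r = stats.pearsonr(fbg_cases, ua_cases)
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")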
Discussion
In the present study, Type 2 diabetics without nephropathy (FBG > 126 mg/dl) showed a decrease in serum UA levels compared with nondiabetics (FBG < 110 mg/dl). The difference in mean serum UA levels between diabetics and nondiabetics was statistically significant (p < 0.05), as shown in Table 2. In Type 2 diabetics, FBG showed a significant negative correlation with serum UA, as shown in Table 5. These results are similar to our previous study and a few other studies. [12][13][14][15][16][17] To investigate the pathophysiology of low serum UA levels in Type 2 diabetic patients, we estimated the 24 hr urinary UA excretion rate in Type 2 diabetics without nephropathy and in nondiabetics. Previous studies have shown that hyperinsulinemia increases serum UA in diabetics by increasing the rate of UA synthesis through activation of the hexose phosphate shunt, which results in more production of purines than needed by the cell, and insulin may also stimulate the urate anion transporter in the proximal convoluted tubule, increasing the reabsorption of uric acid from the kidneys. We therefore excluded diabetics on insulin treatment from our study. 18,19 We found that the 24 hr urinary UA excretion rate was significantly higher in Type 2 diabetics whose FBG was > 126 mg/dl than in nondiabetics with normal FBG. These findings of significantly low serum UA levels and a significantly high urinary UA excretion rate in Type 2 diabetics without nephropathy, compared with individuals with normal FBG, suggest a uricosuric effect of glucose. This finding is in agreement with the study conducted by NS Neki and Himanshu Gupta, who showed low serum UA and a high urinary UA excretion rate in diabetic patients without nephropathy. 20 The study by Boner and Rieselbach showed that in normal individuals urinary UA excretion is positively associated with urinary glucose if serum glucose levels exceed the renal threshold, causing glycosuria. 21 Serum UA levels are determined by the reabsorption of UA in the PCT and the rate of excretion of UA by the kidneys. 22 After filtration, UA undergoes both reabsorption and secretion in the proximal convoluted tubules, and this process is mediated by a urate/anion exchanger and a voltage-sensitive urate channel. Reabsorption of UA in the PCT occurs via the urate transportsome, which includes the URAT1 transporter and two sodium-coupled monocarboxylate transporters (SMCT1 and SMCT2) situated in the apical membrane. 23 The SLC2A9 transporter, previously known as GLUT9, transports UA from the lumen of PCT cells into the blood. 24 This reabsorption of UA into the blood by the PCT is affected by several factors, such as inorganic and organic anions and several uricosuric agents, including glucose. As shown in Table 4, diabetic patients with higher FBG levels are recognized to have higher postprandial blood glucose levels, with greater glucose loading of the kidneys resulting in glucosuria. High levels of glucose excretion in diabetic patients result in a high UA excretion rate and low serum UA levels.
To explore the association between 24 hr urinary UA excretion, fasting blood glucose levels, and serum UA, a correlation study was done in Type 2 diabetics, as shown in Table 5. In this study we found that serum UA had a significant negative association with FBG and that there was a nonsignificant positive association between urinary UA and FBG. We also found no significant association of urinary UA with serum UA. The probable reasons for the weak, statistically nonsignificant association of urinary UA with FBG and serum UA are that FBG and serum UA are single-time estimates whereas urinary UA is a 24-hour estimate, and that the sample size is relatively small for extrapolating the findings to the general population. Though the sample size is small, this study is a positive step toward establishing the cause of the low levels of serum UA, an antioxidant, in diabetics.
Conclusion
At high concentrations of FBG there is an increase in 24 hr urinary UA excretion, providing objective evidence for the hypothesis that the low UA levels in Type 2 diabetics are probably due to inhibition of uric acid reabsorption in the proximal convoluted tubule of the kidney by glucose in the urine. | 2019-08-19T01:04:10.705Z | 2020-12-15T00:00:00.000 | {
"year": 2020,
"sha1": "42126b1c4fd32baf684d4d3b2632a6f40d02dad3",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ijcbr.in/journal-article-file/6796",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c17dcae801b04fec479286d76a471e06751691f6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
18675375 | pes2o/s2orc | v3-fos-license | Antioxidant Activity and Cytotoxicity Effect of Cocoa Beans Subjected to Different Processing Conditions in Human Lung Carcinoma Cells
Lung cancer is a common malignancy in men and the second leading cause of cancer-related mortality in men in the western world. Phenolic cocoa ingredients have a strong antioxidative activity and the potential to have a protective effect against cancer. In the present study, we have evaluated the influence of cocoa beans subjected to different processing conditions on cell viability and apoptosis of human lung cancer cells (A549). We measured the viability of lung cells treated with cocoa beans, unroasted slates (US), roasted slates (RS), unroasted well fermented (UWF) cocoa, and roasted well fermented (RWF) cocoa for 24 h. Using an MTT assay, we observed a decrease in the viability of A549 cells after treatment with cocoa bean extracts. Flow cytometer analysis revealed that cocoa beans increased the percentage of cells in sub-G1 phase and promoted up to twofold increase of apoptotic cells when compared to the control group. Taken together, the present study suggests that cocoa beans may have a protective effect against lung cancer.
Introduction
Lung cancer is a major health concern since it is one of the leading causes of death worldwide [1,2]. It is estimated that the annual incidence is nearly 1.23 million cases. In Brazil, it is the second most frequent type of cancer and has the highest mortality rate, due to late diagnosis and the aggressiveness of the tumor type. In 80-90% of cases, it is caused by smoking and exposure to pollutants. The existing therapeutic strategies for lung cancer include surgery, radiotherapy, chemotherapy, and physical therapy. The survival rate of non-small-cell lung cancer patients is less than 1% [3,4].
The proposals that cancer might be preventable, and that food and nutrition might influence the risk of cancer, were first made in the 19th and 20th centuries. Throughout recorded history, wise choices of food and drink, and of habitual behavior, have been recommended as a protective measure against cancer [5]. Researchers are still unsure about the role of diet in lung cancer. Bright yellow-orange beta-carotene is one of a number of carotenoids thought to have anticancer activity even greater than vitamin A. Other possible lung protectors are foods high in bioactive compounds such as vitamin C and other antioxidants present in fruits and vegetables. These nutrients may protect lung linings but cannot totally prevent damage [6].
Among the different bioactive compounds, phenolic compounds from fruits and vegetables have gained much attention over the years because of their antioxidant activity that indirectly reflects their potential effects on human health [7,8]. Some studies reported that phenolic compounds found in cocoa beans may present different properties such as antioxidant, anticarcinogenic, and antiradical activities [9][10][11]. Polyphenols are the main antioxidant-active constituents of cocoa. Flavanols and procyanidins have previously been identified as the active antioxidant agents of cocoa [12]. The polyphenol content of cocoa products depends on many factors, especially the cultivated variety and the postharvest handling that includes fermentation, drying, and roasting of the beans and nibs. There is evidence that fermentation and roasting of the beans tend to reduce their flavanol content [13,14]. The antioxidant activity of the phenolic compounds is primarily due to their redox properties that allow them to act as reducing agents, hydrogen donors, and scavengers of reactive oxygen species (ROS) and metal ions [8,[15][16][17]. The emergence of natural extracts with antioxidant properties may help reduce the current dependence on synthetic drugs.
Potential mechanisms for cancer prevention by bioactive compounds include prevention of DNA adduct formation, enhanced carcinogen elimination, inhibition of inflammatory processes, and a direct cytotoxic effect on tumour cells [18][19][20]. In line with efforts to balance the conservation of biodiversity and encourage the controlled exploitation of plant resources for economic gain, especially in biopharming, waste of valuable resources should be minimized [17]. The aim of this study was to evaluate and compare the antioxidant activity of cocoa beans classified as slate, roasted and unroasted, with that of well fermented beans submitted to the same processing, and their cytotoxic effects on the human lung carcinoma cell line (A549).
Samples and Extractions.
Samples of cocoa beans, unroasted slates (US), roasted slates (RS), unroasted well fermented (UWF) cocoa, and roasted well fermented (RWF) cocoa were harvested and preprocessed (fermented and dried) in the cocoa-producing region of Ilhéus (Bahia, Brazil). These samples were classified according to their fermentation status and donated by a company of the cocoa sector from the same region. At least three different lots of each cocoa class were mixed to form the samples used in this study. The roasting as well as the fine grinding of all beans was carried out at the State University of Feira de Santana (BA). All samples were kept at −5 °C and sent to the Functional Foods and Biotechnology Laboratory of the Federal University of Rio de Janeiro State (UNIRIO), where all analyses were conducted.
Extraction of Samples.
The samples of cocoa were extracted with 3 different extracting solutions: methanol (I), 50% methanol (II), and 50% methanol : 70% acetone (1 : 1) (III). For each extraction, 1.25 g of sample was weighed and suspended in 10 mL of extracting solution for 1 hour under stirring, protected from light. The crude extracts were filtered and made up to 25 mL with distilled water. For the cellular analyses, 1.25 g of each sample was weighed and suspended in saline solution (PBS) with 2% dimethyl sulfoxide (DMSO).
Total Phenolic Assay.
Total phenolic content of the extracts was determined according to the Folin-Ciocalteu method as described by Singleton and Rossi [21], with minor modifications. Aliquots of 0.5 mL of the extracts were added to 2.5 mL of Folin-Ciocalteu reagent and 2.0 mL of 4% sodium carbonate solution, and the mixture was allowed to rest for 2 hours in the dark. Measurements were performed at 750 nm in triplicate, using a Turner 340 spectrophotometer. Gallic acid, in the concentration range of 0-100 mg/mL, was used to construct a calibration curve. The concentration of total phenolic compounds in the extract was expressed as gallic acid equivalents, which reflect the phenolic content as the amount of gallic acid in mg/100 g dry weight of the samples.
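The conversion of absorbance readings to gallic acid equivalents rests on the linear calibration curve; the sketch below illustrates this calculation in Python with purely hypothetical absorbances, extract volume, and sample mass.

import numpy as np

# Hypothetical calibration standards: gallic acid concentration vs absorbance at 750 nm.
std_conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # same units as used in the text
std_abs = np.array([0.02, 0.15, 0.29, 0.43, 0.58, 0.71])

slope, intercept = np.polyfit(std_conc, std_abs, 1)          # linear calibration A = a*c + b

def gallic_acid_equivalents(sample_abs, extract_volume_ml, sample_mass_g):
    """Convert a sample absorbance to gallic acid equivalents per 100 g dry sample (sketch)."""
    conc = (sample_abs - intercept) / slope                   # concentration in the read extract
    total = conc * extract_volume_ml                          # total gallic acid in the extract
    return total / sample_mass_g * 100.0                      # per 100 g dry weight

# Example: absorbance 0.35, 25 mL final extract volume, 1.25 g of cocoa sample (all hypothetical).
print(f"GAE = {gallic_acid_equivalents(0.35, 25.0, 1.25):.1f} per 100 g")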
Antioxidant Activity Analyses
2.4.1. DPPH Assay. Aliquots of 0.5 mL of the extracts were mixed with 2.5 mL of DPPH methanolic solution (0.06 mM) and allowed to react for 1 hour in the dark. Measurements were performed at 515 nm using a Turner 340 spectrophotometer. The analysis was performed in triplicate; the decline in the DPPH radical absorbance caused by the extracts was compared to a trolox standard. The results were expressed as μmol trolox equivalents/g dry basis [22].
Trolox Equivalent Antioxidant Capacity (ABTS/TEAC).
The ABTS •+ radical cation was prepared by mixing an ABTS stock solution (7 mM in water) with 2.45 mM potassium persulfate. This mixture was allowed to stand for 16 hours at room temperature until the reaction was completed and the absorbance was stable.
The antioxidant capacity assay was carried out by the improved ABTS/TEAC method as described by Rufino et al. [23]. The radical solution (2.5 mL) was added to the extracts or to the commercial antioxidant (trolox) and mixed thoroughly. Absorbance was recorded at 734 nm over 6 min. Aliquots of 5, 10, and 20 μL of the extracts were tested and their volume was completed to 0.5 mL with water. Results were expressed as μmol trolox/g dry basis.
Ferric Reducing Ability (FRAP).
The extracts were measured for antioxidant activity by FRAP according to Rufino et al. [24]. Aliquots of 2.7 mL of TPTZ reagent (ferric 2,4,6-tripyridyl-s-triazine) were mixed with 0.5 mL of sample extract (aliquots of 5, 10, and 20 μL). After 30 min at 37 °C, the absorbance was read at 595 nm. The antioxidant capacity (FRAP) was expressed as Fe 3+ equivalents (μmol Fe 3+ /g dry basis).
Stock flasks were grown to 70% confluence and subcultured routinely. Medium renewal was done 3 times weekly. For each experiment, cells were seeded at densities of 3.5 × 10 5 cells/cm 2 and 2 × 10 4 cells/cm 2 in 6-well plates and 96-well plates for cell cycle and cell proliferation analyses, respectively. After 24 h, the medium was removed and the cells were treated with increasing concentrations of cocoa nibs extract (100 to 10000 μg/mL) dissolved in DMEM. The controls, DMEM and DMEM + 2% DMSO, were included on each plate. The cells were then incubated for 48 hours.
Cell Viability.
Cell viability was monitored by MTT assay (Amresco, Solon, OH). MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) is a pale yellow substrate that is reduced by living cells to yield a dark blue formazan product. This requires active mitochondria, and even recently dead cells do not reduce significant amounts of MTT. Exponentially growing cells were adjusted to 2.0 × 10 4 cells/cm 2 with DMEM, plated in 96-well plates (Corning, Tewksbury, MA) at 200 μL/well, and incubated for 24 h according to the routine procedure. The cells were then incubated with cocoa nibs, unroasted slates (US), roasted slates (RS), unroasted well fermented (UWF) cocoa, and roasted well fermented (RWF) cocoa (5-10 mg/mL) for 48 h (6 wells for each sample). Each well was also incubated with MTT (10 μL/well; 5 mg/mL) for 4 h. Then 85 μL/well of the liquid was removed and 50 μL/well of sodium dodecyl sulfate was added to dissolve the solid residue. Finally, the absorbance was measured using a microplate reader (POLARIS, CELER5) at 570 nm. The cell proliferation inhibition rate (CPIR) was calculated using the following formula: CPIR = (1 − average value of experimental group/average value of control group) × 100%.
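A minimal Python sketch of the CPIR calculation from replicate MTT absorbances is given below; the well readings are hypothetical.

import numpy as np

def cpir(treated_abs, control_abs):
    """Cell proliferation inhibition rate (%) from MTT absorbance replicates."""
    return (1.0 - np.mean(treated_abs) / np.mean(control_abs)) * 100.0

# Hypothetical absorbances at 570 nm for 6 replicate wells each.
control = [0.82, 0.79, 0.85, 0.80, 0.83, 0.81]
treated = [0.51, 0.48, 0.55, 0.50, 0.47, 0.53]
print(f"CPIR = {cpir(treated, control):.1f} %")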
Cell Cycle Analysis.
Cells were rinsed briefly with calcium- and magnesium-free phosphate-buffered saline and detached with trypsin at room temperature. After centrifugation, the cells were washed twice with phosphate-buffered saline and resuspended in 500 μL of ice-cold Vindelov solution [25] containing 0.1% Triton X-100, 0.1% citrate buffer, 0.1 mg/mL RNase, and 50 mg/mL propidium iodide (Sigma Chemical Co., St. Louis, MO). After 15 min of incubation, the cell suspension was analysed for DNA content by flow cytometry using a FACSCalibur flow cytometer (Becton Dickinson, Mountain View, CA). The relative proportions of cells with DNA content indicative of apoptosis (<2n), G 0 /G 1 diploid (2n), S phase (>2n but <4n), and G 2 /M phase (4n) were obtained and analyzed using CellQuest and WinMDI 2.9. The percentage of the cell population at a particular phase was estimated with FlowJo software. The cell dissociation procedure does not affect fluorescence under the experimental conditions used in this study or in any other studies of which we are aware. Nuclei of viable cells were gated according to the FL-2W × FL2-A relation.
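For illustration only, the following Python sketch shows how per-nucleus DNA-content values could be binned into sub-G 1 , G 0 /G 1 , S, and G 2 /M fractions with simple fixed gates; the actual gating was performed in CellQuest/WinMDI/FlowJo, and the thresholds and data below are arbitrary assumptions.

import numpy as np

def phase_fractions(dna_content, g1_peak=1.0, tol=0.15):
    """Rough phase fractions (%) from DNA content normalised to the G0/G1 peak (illustrative gates)."""
    dna = np.asarray(dna_content)
    sub_g1 = dna < g1_peak * (1 - tol)
    g0_g1 = (dna >= g1_peak * (1 - tol)) & (dna <= g1_peak * (1 + tol))
    g2_m = dna >= 2 * g1_peak * (1 - tol)
    s = ~(sub_g1 | g0_g1 | g2_m)
    total = dna.size
    return {name: 100.0 * mask.sum() / total
            for name, mask in [("sub-G1", sub_g1), ("G0/G1", g0_g1), ("S", s), ("G2/M", g2_m)]}

# Hypothetical normalised DNA-content readings for a few nuclei.
print(phase_fractions([0.4, 0.95, 1.0, 1.05, 1.4, 1.9, 2.0]))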
Apoptosis Assay.
To measure the rate of apoptosis, the cells were subjected to staining with Annexin V conjugated to FITC (BD Pharmingen, San Diego, CA). The nonadherent cells were collected, and adherent cells were quickly washed
Phenolic Compounds and Antioxidant Activity of Cocoa Beans Samples.
The method that yielded higher phenolic extracts was the one using acetone, in all samples tested, as can be observed in Figure 1. Dreosti [26] reported that 60% of the total phenolics in raw cocoa beans are flavanol monomers (epicatechin and catechin) and procyanidin oligomers (dimer to decamer) [27]. Benayad et al. [28], Cheng et al. [29], and Boulekbache-Makhlouf et al. [30] have shown that the use of acetone, when compared to the use of other polar organic compounds, potentiated the extraction of flavonoids and flavonols from different plant materials. In addition, extraction of procyanidins from cocoa with acetone solutions has been successfully accomplished at least since 1999. The present work found amounts of phenolic substances (Table 1) compatible with the results displayed by Kadow et al. [31] for both raw and "fermentation-like" treated samples. Unprocessed cocoa beans usually present a high phenolic content of about 12-18% (dry weight) [32]. The processing of raw cocoa includes a number of stages and each stage in the processing alters cocoa's chemistry and composition [11]. When extractor III was considered, there were significant differences among all samples tested and slates showed higher phenolic content when compared to well fermented samples. Generally, it is expected that well fermented beans show lower phenolic contents, because low mass phenols are related to astringency and anthocyanins are related to the purple color of unfermented beans, both considered undesirable characteristics in chocolate [33,34].
Figures 2(a)-2(c) clearly show that the overall higher values were obtained for the DPPH method, followed by TEAC and FRAP, respectively. The slate samples showed higher antioxidant activity compared to well fermented beans or nibs. All these findings were in agreement with previous results as it was expected to obtain higher antioxidant activity in extracts with higher concentrations of phenolic compounds [14,[35][36][37].
The roasted slate (RS) samples showed a decrease in antioxidant activity after fermentation. Roasting is considered one of the processing steps of the cocoa nibs that leads to the loss of phenolic compounds and should cause a decrease in the antioxidant activity, as may be seen when comparing UWF and RWF samples (Figure 2). It is possible that, during roasting, while phenolic compounds are degraded, other antioxidant potential compounds are formed through the Maillard reaction, especially reductones and melanoidins [38,39]. The reason why this formation may have been significant in slates but not in well fermented beans is probably related to the different compositions of these two samples before the roasting process.
FRAP assay showed a decrease in antioxidant activity after roasting. According to Apak et al. [40], FRAP methodology is not capable of measuring the antioxidant activity of compounds in complex matrix, since it takes more time to perform the analysis of antioxidant function, and not all antioxidants have the specific ability to reduce iron [41].
Recovery of antioxidant compounds from plant materials is typically accomplished through different extraction techniques, taking into account their chemistry and uneven distribution in the plant matrix. These compounds tend to present different polarities as well as other variable characteristics. Thus, the solubility in a particular solvent is a unique feature of the phytochemicals to be taken into account. Methanol and solutions of 50% methanol in water are commonly applied solvents that efficiently extract phenolic compounds. Methanol and acetone are also suitable solvents for anthocyanin extraction from various raw materials [42][43][44], and acetone-water mixtures have been suggested to grant better extraction results of procyanidins and phenols when compared with other extractors [45].
It is now recognized that diet and nutrients play an important role in cancer development and progression, with many dietary components found to be associated with cancer risk. However, almost all the clinical intervention trials with isolated nutrients, such as vitamin A, vitamin E, vitamin C, and phenolic compound supplements, have failed to demonstrate protective effects against cancer. Due to the complexity of the cocoa matrix, it is very difficult to characterize all components and even to say which major component is responsible for the cytotoxic effect, given the synergistic and antagonistic effects involved.
Effect of Cocoa Extracts on Cell Viability.
The slate nibs were the supplement that caused the largest decrease in viability compared to the control (34.45%, Figure 3), while cells exposed to US had the highest percentage of viability, 78.07%, at a concentration of 10 mg/mL (Figure 3(b)). However, surprisingly, the crisp sample showed a greater potential reduction in cell viability (Figure 3(c)), with cell viabilities of 77.15% (5000 μg/mL) and 63.55% (10000 μg/mL).
Well fermented cocoa bean extract decreased the number of viable A549 cells within 48 hours. For the UWF sample, cell viability decreased from the concentration of 100 μg/mL onwards, by 45% compared with the control group (p < 0.05) (Figures 4(a) and 4(b)). The concentrations that caused the largest decrease in cell viability were 5000 and 10000 μg/mL, with reductions of 58.77% and 72.35%, respectively (Figure 3(b)). For the RWF sample, the reduction was smaller (Figure 3).
Cell culture studies constitute a useful tool to elucidate the molecular mechanisms of action of cocoa extracts and their polyphenolic compounds in different cancer cell lines. It has been shown that cocoa components induced a time-dependent regulation of survival/proliferation pathways in HepG2 liver cells [46]. Moreover, a cocoa procyanidin fraction inhibited TPA-induced neoplastic transformation of JB6P+ mouse epidermal cells, COX-2 expression, and phosphorylation of MEK and p90 ribosomal S6 kinase, and attenuated activator protein-1 (AP-1) and NF-κB stimulation [47].
The physiological impact of polyphenols depends on their absorption. However, it is important to bear in mind that the most common polyphenols in the diet are not necessarily the most bioavailable, since their structure plays an important role. Most native polyphenols in foods are in glycoside form (flavonols, flavones, flavanones, isoflavones, and anthocyanins), together with the less frequent oligomers (proanthocyanidins), which cannot be absorbed in the intestinal mucosa [48]. From this, we can see that the sample that most reduced the viability of the human lung carcinoma cells was again the sample containing the parent compounds of the modified crude cocoa, such as the well fermented sample, and also the sample that had not been roasted, since many bioactive compounds are lost during this processing step.
Effect of Cocoa Extracts on Cell Cycle Progression.
Uncontrolled cell proliferation is a characteristic of cancer [49], and extracts of cocoa beans have been shown to inhibit the proliferation of A549 cells. The previous antioxidant activity and MTT assays determined which samples had the greatest potential for use in the analyses of cell cycle and apoptosis. To probe the inhibition of cell growth mediated by the RS and UWF extracts, we examined the cell cycle by flow cytometry. The effects of the RS and UWF extracts on cell cycle progression in A549 cells are shown in Table 1. The results are expressed as mean ± standard error, and significant differences between cells treated with US and RS nibs extract (100-10000 μg/mL) were compared using Tukey's test (*p < 0.05; **p < 0.01).
After 48 h of growth, the control population in the sub-G 1 and G 0 /G 1 phases reached values between 1.37 and 6.65% and between 75.75% and 75.80%, respectively. Meanwhile, in treated cells the sub-G 1 population increased and the G 0 /G 1 population decreased in a dose-dependent manner in both treatments. The loss of the ability to regulate the cell cycle is characteristic of cancer cells and results in uncontrollable proliferation. Cell progression through the first gap phase of the cell cycle (G 1 ) is a step which is frequently disordered in cancer [50]. Treatments with RS resulted in the highest percentage of cells in the sub-G 1 (77.01%, 10 mg/mL) and G 0 /G 1 (30.65%, 5 mg/mL) phases. As the G 0 /G 1 phase population increased, the G 2 /M phase population decreased, whereas the S phase population showed no difference within 48 h. Treatment with UWF cocoa at a concentration of 10 mg/mL showed the highest value for sub-G 1 (50.27%), a reduction in the G 0 /G 1 and S phases, and an increase in the G 2 /M phase. Treatment with 5 mg/mL of UWF cocoa showed the smallest number of cells in the sub-G 1 and G 0 /G 1 phases and therefore a greater cell population in the G 2 /M phase (33.95%).
Cocoa-derived pentameric procyanidin (pentamer) caused a G 0 /G 1 cell cycle arrest in human breast cancer MDA MB-231, MDA MB-436, MDA MB-468, SKBR-3, and MCF-7 cells and in benzo(a)pyrene-immortalized 184A1N4 and 184B5 cells, whereas normal human mammary epithelial cells in primary culture and spontaneously immortalized MCF-10A cells were resistant [51]. Similarly, procyanidin-enriched extracts from cocoa caused growth inhibition with blockade of the cell cycle at the G 2 /M phase in human colonic Caco-2 cells [52], and EC induced S phase arrest in the cell cycle progression of LoVo colon cancer cells [53].
Our results indicate that cocoa nibs extracts were able to modify cell cycle. The deregulation in cell cycle control is a fundamental aspect in the development of cancer. Faults in the cell cycle regulation process can cause a greater proliferation of cancer cells. However, the reversal of this process leads to a delay in growth and induces cell death [54].
Apoptosis.
We examined the effect of RS and UWF cocoa on apoptotic death in A549 cells. Table 2 shows the percentages of viable, early apoptotic, late apoptotic, and nonapoptotic cells treated with 5 mg/mL and 10 mg/mL RS and UWF cocoa. Figure 4 shows the influence of the extracts of cocoa nibs on the rate of apoptosis. Cells treated with 5 mg/mL and 10 mg/mL RS and UWF cocoa for 48 hours showed a significant increase in the percentage of apoptotic cells compared with untreated cells (control).
[Figure caption fragment: effect of UWF, RWF, and RS cocoa nibs (panels (a)-(c)) after forty-eight hours on the viability of A549 cells (MTT assay); mean ± standard error; differences between cells treated with UWF and RWF nibs extract (100-10000 μg/mL) compared using Tukey's test (*p < 0.05; **p < 0.01).]
[Table caption fragment: results expressed as percentage of total cells; mean ± standard deviation; significant differences between untreated cells (CT) and cells treated with lycopene (5-10 μM) compared by one-way ANOVA with Tukey's post test (*p < 0.05; **p < 0.01).]
The concentrations used in this cellular model promoted a change in the cell cycle and induction of apoptosis at lower doses than the isolated compounds or cocoa products used in studies with human models [55][56][57]. This has been demonstrated in a study by Ottaviani et al. [57], where adult males were given 1.5 mg/kg of a concentrated cocoa solution, and two hours after ingestion many metabolites of cocoa flavonols were noted in the blood of the volunteers. An increase of early apoptosis was observed in A549 cells treated with 10 mg/mL RS, whereas an increase in late apoptosis was observed in cells treated with 10 mg/mL UWF cocoa for 48 h. A549 cells incubated with 5 mg/mL and 10 mg/mL RS for 48 hours showed a decrease in the population of viable cells and an increase of up to 4.3 times in the percentage of apoptotic cells compared with the control, in a dose-dependent manner (Table 2 and Figure 5).
[Figure caption fragment: quantitative effects of UWF cocoa and RS at 5 mg/mL and 10 mg/mL on A549 cells after exposure for 48 h; results expressed as mean ± SD; differences between untreated cells (CT) and cells treated with UWF cocoa and RS (5-10 mg/mL) compared by one-way ANOVA followed by Tukey's multiple comparison post hoc test; *p < 0.05, **p < 0.01.]
Apoptosis is characterized by a series of distinct changes in cell morphology, loss of cell attachment, cytoplasmic contraction, DNA fragmentation, and other biochemical changes, including the activation of caspases through extrinsic and/or intrinsic mitochondrial pathways [58]. Increased resistance to apoptosis is a hallmark of many tumor cells. The functional inhibition of specific antiapoptotic factors may provide a rational basis for the development of novel therapeutic strategies. Therefore, apoptotic deficiency is considered to be a major cause of therapeutic resistance of tumors, since many chemotherapeutic and radiotherapeutic agents act through the induction of apoptosis [59]. However, the apoptotic effect of the active ingredient of cocoa bean samples on A549 cells has not yet been studied in detail.
Conclusion
In conclusion, cocoa extract presented itself as a potent antioxidant agent, with an antioxidant capability influenced by the processing method and the extraction. Slate cocoa bean extract inhibited cell proliferation, arrested the cell cycle in different phases, and increased apoptosis in human lung carcinoma cells in a time- and dose-dependent manner. Our study has far-reaching health relevance, as cocoa could be projected as a functional food which, in addition to providing nutrition, would provide preventive therapeutic value against the development of cancer. | 2016-05-12T22:15:10.714Z | 2016-03-13T00:00:00.000 | {
"year": 2016,
"sha1": "d0b618d60ad0de5250e911e8d8c47a97b6220ebb",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/omcl/2016/7428515.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "692dfe4414bce716b6338b95b665e237456b645c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
11500058 | pes2o/s2orc | v3-fos-license | Interacting molecular motors: Efficiency and work fluctuations
We investigate the model of "reversible ratchet" with interacting particles, introduced by us earlier [Europhys. Lett. 84, 50009 (2008)]. We further clarify the effect of efficiency enhancement due to interaction and show that it is of energetic origin, rather than a consequence of reduced fluctuations. We also show complicated structures emerging in the interaction and density dependence of the current and response function. The fluctuation properties of the work and input energy indicate in detail the far-from-equilibrium nature of the dynamics.
Many models of molecular motors appeared in the literature since the pioneering work by Ajdari and Prost [23]. The basic mechanism is best elucidated in the models which rely on the ratchet mechanism [4,5,[24][25][26] and also bear the name Brownian motors. The basic idea can be viewed either as diffusive motion of a particle in spatially asymmetric time-dependent potential or as chemically driven transitions between a finite number of mechano-chemical states. The former view is more intuitive, but the latter is closer to reality and opens the perspective of fitting the underlying transition probabilities to experimental data.
More realistic models are rather built on Markov chains in the configuration space constructed as product of spatial and internal (chemical) coordinates [8,[27][28][29][30][31][32][33][34][35]. This approach resides perhaps on more solid experimental evidence, but the absence of explicit potential makes it very difficult to assess the energetic efficiency, the question of principal importance in this paper.
Indeed, one of the points of special interest here will be the question of the efficiency of molecular motors. Several measures of efficiency can be found in the literature. We shall use the classical thermodynamic definition η = W/E in , where W is the work performed and E in the energy supplied to the system from an external source. An alternative measure takes into account viscous resistance [36], thus reflecting the reality better, at the cost that the inequality η < 1 is not guaranteed automatically. Yet other methods of measuring the efficiency involve explicitly the consumption of chemical energy [37], or the magnitude of the stopping force [9]. Note, however, that the former work [37] explores interacting motors, and the mechanism of generating the non-zero current is related to spontaneous symmetry breaking, so it is principally different from the non-interacting case studied in [9]. Therefore, a direct comparison of the efficiency in these two cases is hardly possible. We are not aware of any work in which several measures of efficiency would be systematically compared on the same model.
The efficiency of canonical Brownian motors realized as either flashing or rocking ratchets was intensely studied [27][28][29][38][39][40][41][42][43]. It turns out that the energetic efficiency is rather low [5,41], while the experimental data on motor proteins, e. g. the kinesin [9,11], report high efficiency, sometimes even estimated to be close to 100 per cent. We are not in a position to judge the quantitative precision of these empirical estimates, although it can be suspected that the error margin is rather high. However, one is led to the natural conclusion that the usual ratchet mechanism with diffusion as the principal driving force is not an appropriate model for biological motors.
In idealized case we can distinguish between ratchet and power-stroke mechanisms for molecular motors [44], the latter relying rather on quasi-deterministic downhill motion in a free-energy landscape which evolves in time. Thus, the particles move as if trapped in a traveling potential wave. This idea was elaborated in a toy model of "reversible ratchet" [43,45,46], showing much higher efficiency, close to the biologically relevant figures. Of course, arbitrary combinations and mixtures of the ratchet and power-stroke mechanisms can be invented and indeed, they are believed to be found in reality, e. g. in the myosin V motor (see the review [35] and references therein). Nevertheless, it is useful to compare these two extremes. We should also note that high efficiency was characteristic of the models of either interacting [37] or non-interacting [47,48] motors, which do combine the ratchet and power-stroke mechanisms.
The second point we shall concentrate on in this work is the mutual repulsive interaction of molecular motors. In the cell, the steric (hard-core) repulsion of motor proteins significantly influences their behavior. For example, in gene transcription and translation large numbers of motor proteins move along the same track [15,16], forming the so-called "Christmas tree" structures. Thus, they show themselves as a physical realization of the well-studied asymmetric exclusion process, introduced first in the context of biopolymerization [49,50] and later solved exactly in great detail, using sophisticated techniques [22,[51][52][53]. The model was adapted for molecular motors, which may attach and detach with defined rates [54,55]. Later, this situation was studied theoretically for the cases of kinesin [56], ribosomes [57], and RNA polymerase [58], using the procedures developed in traffic models [59]. The influence of the geometry of the compartment in which the interacting motors diffuse after detachment from the track was studied e. g. in [60].
Interaction of motors brings about even more complicated collective effects. In the cell, kinesin and dynein molecules typically carry the cargo in groups [8,9,61], resulting in current reversals. Including explicitly the ratchet mechanism of driven diffusion of hard-rod particles leads to very intricate effects [62][63][64], if the particle size and the ratchet periodicity are incommensurate. The collective movement of coupled Brownian motors was studied [65][66][67][68] and in some cases the coupling was found to induce non-zero current and spontaneous oscillations even in mirror symmetric potential due to dynamical symmetry breaking [37,69]. In analogy with these works, the motion of a few rigidly bound motors was studied [70]. A special case of such interaction is the coordination of the two motor heads within single kinesin molecule, which leads also to non-trivial effects [34]. Finally, let us mention the interaction of the motors with the track, studied in "burnt-bridge" models, e. g. in Ref. [71].
In our previous paper [72] we introduced a model which is a modified version of the "reversible ratchet". The spatial coordinate is discretized, as e. g. in [26]. A tunable on-site repulsion between particles is introduced. We found in [72] that a not too strong interaction leads to an increase of the efficiency. This effect was reproduced qualitatively in analytical calculations based on a mean-field (MF) approximation. Quantitative agreement was reached in an improved MF treatment, developed in [73]. Here we investigate the model in depth by further numerical simulations. Especially, we elucidate the origins of the interaction-enhanced efficiency. We shall show that it stems from the energy balance rather than from suppression of fluctuations. At stronger interaction and/or higher density, current reversals and oscillations in the response function are found. We also calculate the distributions of the input energy and the performed work, which are far from Gaussian.
II. REVERSIBLE RATCHET WITH INTERACTING PARTICLES
Our model contains N particles occupying integer positions on a segment of length L, with periodic boundary conditions, and evolves in discrete time. The position of i-th particle at the instant τ is denoted x i (τ ). The particles move under the influence of a variable driving force with spatial period 3 and temporal period 4t. The potential of this force is at site x and time τ . The three independent values of the potential V a (τ ), a = 0, 1, 2 evolve in a four-stroke pattern, with V 0 (τ ) = 0 and the remaining two being delayed one with respect to the other by one quarter-period t. Thus, we prescribe We easily recognize the traveling-wave character of this potential, corresponding to the power-stroke mechanism of the molecular-motor movement. In all the rest of this paper, we fix the amplitude of the potential V = 1. The time dependence of the potential is illustrated in Fig. 1.
Besides the driving potential, there is also a uniform external force from the useful load F and, most importantly, the repulsive interaction from other particles. We suppose the interaction is on-site only and we tune its strength, in order to interpolate between the non-interacting and hard-core cases. The j-th particle feels the potential from all remaining ones. To formalize it, we denote n j (x, τ ) = Σ_{i=1}^{N} δ̄(i − j) δ(x − x i (τ )) the number of particles, except the j-th particle, at site x. (We use δ(a − b) for the Kronecker delta and δ̄(a − b) = 1 − δ(a − b).) Thus, the j-th particle moves in the potential
[Figure 2 caption: The width of the sample is L = 18, temperature T = 10, quarter-period t = 4, external load F = 0. Each panel corresponds to a different combination of two parameters, the number of particles N and the interaction strength g, whose values are indicated at the corresponding columns and rows. Dots denote the presence of exactly one particle at a given site and time; the other symbols denote the presence of more particles, namely two (•), three (△), four (⋄), five, and six (▽). The diagonal straight lines are guides for the eye, indicating the movement of the minima of the potential V (x, τ ).]
For g = 0 we recover the non-interacting case, while when g → 1 we approach the hard-core interaction of the exclusion process [22]. In contrast with the previous work [72], we use here different form of the interaction in order to see the limit of hard-core repulsion when g → 1.
Although it may cause some small difficulties when comparing the results of [72] with the present ones, the advantage lies in the possibility to see the transition from non-interacting case to hard-core repulsion on a finite interval g ∈ [0, 1].
The simulation algorithm proceeds as follows. At each integer time τ we instantly shift the potential according to (1). Then, we choose N times a particle at random and let it try to make a jump. Therefore, on average every particle makes one attempt per time unit, but the probability that a given particle actually performs k attempts approaches a Poisson distribution with unit mean, P (k) = 1/(e k!), when N is large. For small N there is a deviation from the Poisson distribution, which induces slight finite-size effects, but in [72] we showed that they can be neglected for N larger than about 100. Note that in each of these N attempts the external potential V (x, τ ) is the same, but the potential U j (x, τ ) felt by the particle j may change, because the configuration of particles n j (x, τ ) changes after each particle jump.
In one attempt, the particle is allowed to jump one site to the right or left. The probability of the jump x → y of the j-th particle is if |x − y| = 1 and zero if |x − y| > 1. For convenience, we define the temperature T so that β = 270/T .
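The update rule can be summarised in the following minimal Python sketch. Since the explicit forms of the driving potential and of the jump probability are given by equations not reproduced in this text, the four-stroke potential and the heat-bath (Glauber) acceptance rule used below are placeholders, and the on-site repulsion term g/(1 − g) per extra particle is an assumption consistent with the discussion of the hard-core limit later in the text.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters taken from the text: L = 18, quarter-period t = 4, V = 1, beta = 270/T.
L, N, g, F, T, t_quarter = 18, 9, 0.5, 0.0, 10.0, 4
beta = 270.0 / T

def driving_potential(x, tau):
    # Placeholder four-stroke traveling-wave potential with spatial period 3 (Eq. (1) is not
    # reproduced here): V_0 = 0, and the other two sublattice values are switched in a pattern
    # delayed by one quarter-period with respect to each other, to illustrate the traveling wave.
    V = 1.0
    stroke = (tau // t_quarter) % 4
    values = [0.0,
              -V if stroke in (0, 1) else 0.0,
              -V if stroke in (1, 2) else 0.0]
    return values[x % 3]

def potential_felt(x, tau, occupancy, own_site=None):
    # Driving wave + load + assumed on-site repulsion g/(1-g) per other particle on the site.
    n_other = occupancy[x % L]
    if own_site is not None and x % L == own_site:
        n_other -= 1                      # do not count the jumping particle itself
    return driving_potential(x, tau) + F * x + (g / (1.0 - g)) * n_other

positions = rng.integers(0, L, size=N)    # unbounded coordinates; site index = position mod L
occupancy = np.bincount(positions % L, minlength=L)

def sweep(tau):
    # One time unit: N random single-particle jump attempts (heat-bath acceptance assumed).
    for _ in range(N):
        j = rng.integers(N)
        x = positions[j]
        y = x + rng.choice([-1, 1])
        dU = potential_felt(y, tau, occupancy, own_site=x % L) \
             - potential_felt(x, tau, occupancy, own_site=x % L)
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dU)):
            occupancy[x % L] -= 1
            occupancy[y % L] += 1
            positions[j] = y

positions0 = positions.copy()
for tau in range(1, 101):
    sweep(tau)
print("mean displacement per particle:", (positions - positions0).mean())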
Let us now specify the main measurables. The simplest quantity of interest is the current, or rather its time average per particle, J = lim_{τ→∞} (τ N)^{-1} Σ_{τ′=1}^{τ} J(τ′). As we are interested in the energetics of the motor, we must define the energy input and the useful work done by the particle. The latter quantity, at time τ , is simply w(τ ) = F J(τ ). The energy pumped into the motor from outside relates to the change of the potential V a (τ ) while the particles stay immobile. Thus, the energy absorbed by the particle i between steps τ − 1 and τ is and the efficiency, accordingly, Later in this paper we shall investigate the distribution of the particle shift and also the joint distribution with the input energy where N is the appropriate normalization. Note that in both distributions there is implicit dependence on the time lag ∆τ.
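Given the definitions above (w(τ ) = F J(τ ) and η = W/E in ), the efficiency can be estimated from a recorded trajectory by simple bookkeeping; the sketch below uses purely illustrative numbers and hypothetical variable names, with the input energy assumed to be accumulated from the potential switches at frozen particle positions, as described in the text.

import numpy as np

def efficiency_estimate(step_displacements, step_input_energies, load_F):
    # eta = W / E_in, with W = F * (total displacement) and E_in the summed energy injected
    # by the potential switches; both records are hypothetical outputs of a run such as the
    # sketch above.
    work = load_F * np.sum(step_displacements)
    energy_in = np.sum(step_input_energies)
    return work / energy_in

# Purely illustrative numbers, not simulation output.
print(efficiency_estimate(np.array([1, 0, 1, 1]), np.array([2.0, 1.5, 2.0, 1.8]), load_F=0.5))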
III. ENHANCED EFFICIENCY
We show in Fig. 2 examples of typical evolutions of the particle configurations, for three densities (ρ = 1/2, ρ = 2/3, and ρ = 1) and three interaction strengths (g = 0, g = 0.5, and g = 0.9). We can see that without interaction, particles are bunched together and dragged by the traveling wave of the periodic potential. Interaction smears out this picture, suppresses the current and makes at the same time the local particle density more uniform.
The typical dependence of the current and efficiency on the interaction strength is shown in Fig. 3. At zero temperature, the dependence contains many steep steps with multiple maxima and minima. Therefore, for some values of the external load F the current changes sign several times when the interaction increases. For larger temperatures there are still visible traces of this complex dependence, although the singularities (sharp steps) are smeared out. We also observe that both the current and the efficiency approach zero for very strong repulsion (g → 1). We shall see later that this feature is specific to certain values of the particle density ρ = N/L, for example ρ = 1, which applies to Fig. 3. The generic feature is that for interaction above about g ≃ 0.6 the current and efficiency approach a constant value.
The most important finding, from the point of view of practical use of the motors, is the increase of the efficiency when the interaction is switched on but is not too strong. For zero temperature we observe multiple maxima of the efficiency, which transform into a unique maximum at higher temperatures. The effect of efficiency enhancement was investigated in detail in our previous work [72]. In this paper we return to the origin of this effect later, when we shall discuss the energy balance and work fluctuations.
In Fig. 4 we can see three sets of results for the current, differing in the density of particles. Different curves in one set correspond to different external loads F. All three cases (and also the data shown in Fig. 3) exhibit a minimum of the current, i.e., the smallest effective driving, at interactions somewhere around g ≃ 0.4 to g ≃ 0.5. In order to see what is special about this value of the interaction, we should note that the change of the potential due to the presence of a single particle, g/(1 − g), is equal to the amplitude of the traveling-wave potential, V = 1, just for g = 0.5. At this value of the interaction, one particle may block, or at least significantly hinder, the movement of the remaining particles.
We can see that for low density, ρ < 0.5, the asymptotic current for strongly interacting particles, g → 1, is positive at low load and at the same time is sensitive to the value of the load. On the other hand, for 0.5 < ρ < 1 the asymptotic current at zero load is negative, i.e., the interaction induces a current reversal. Contrary to the previous case, the asymptotic current seems to depend only extremely weakly on the load. The third panel shows again that the asymptotic current is zero for unit density, independently of the load.
Complementary information can be read from Fig. 5, showing the dependence of the current on the load. We can observe how the current decreases with the interaction over the full range of observed F. As a consequence, the stopping force, i.e., the value of F for which J = 0, also decreases with increasing interaction. It is also interesting to note the non-linear decrease of the current with the load. Thus, the response function, defined as the derivative dJ/dF, depends on F.
In Fig. 6 we can see how the zero-load response function depends on the density, in the regime of very strong but finite repulsion (g = 0.9). Globally, the response is stronger at higher temperature, which is due to the fact that at low temperature the movement of the particles is determined to a larger extent by the traveling wave, with less influence of the external load, provided the load is small. An interesting feature is the structure of the peaks and the minima seen in Fig. 6 at all temperatures. At integer values of the density the response approaches zero. The other minima are not so deep and are located at densities slightly above the values ρ = 1/2, ρ = 3/2, etc. Interestingly, the maxima are found at densities very close to the fractions ρ = 1/3, ρ = 2/3, ρ = 4/3, and ρ = 5/3.
As we already said, the response depends on the load, so besides χ_{F=0} we must distinguish at least one more response function, χ_{J=0}, defined at zero current. We can compare these two quantities in Fig. 7. The difference between χ_{F=0} and χ_{J=0} is especially marked for low interaction strength, while at about g ≃ 0.3 they come close to each other and at g ≃ 0.5 the two become nearly indistinguishable. The source of this behavior can be understood by noting that the dependence of the current on the external load is markedly non-linear, so that the derivatives at F = 0 and at J = 0 differ. Increasing the interaction, the non-linearity weakens and at g ≃ 0.5 we observe a nearly linear dependence, resulting in nearly equal values of the derivative at F = 0 and J = 0. Note that the density is ρ = 1 in Fig. 7 and both response functions approach zero when g → 1, in accordance with the results shown in Fig. 6.
It is interesting that the dependence on g exhibits several peaks. The last (and highest) one is located close to g = 1/2 and has nearly the same shape in both χ_{F=0} and χ_{J=0}. However, at lower g the peaks in the two response functions differ considerably. We already mentioned that the interaction g = 1/2 is special, as the change in potential due to the presence of a single particle just equals the amplitude of the periodic potential V(x). Also the second highest peak in χ_{F=0} seems to be located at a special value of the interaction strength, namely close to g = 1/3. We can also see a small peak close to g = 1/4. We believe these special values are due to special blocking configurations of particles, which enhance the sensitivity of the system to the presence of the external load. Indeed, g = 1/3 means that two particles on the same site contribute as much as the amplitude of V(x), while at g = 1/4 the same holds for three particles at a site.
To avoid confusion, we do not claim that the configurations of one, two, three, etc. particles are more (or less) frequent at certain values of g. What we claim is the following. These configurations occur from time to time. When they do occur, and if g takes one of the special values, they cause a large sensitivity to the value of the load. For other values of g, the sensitivity to the load is weaker, whatever configuration of particles occurs.
We also looked at the density dependence of the current at high density and strong interaction. The results are summarized in Fig. 8. For the strongest interaction investigated, g = 0.9, the curve J(ρ) has a very peculiar zig-zag shape, with zeros at ρ = m/2, maxima at ρ = m + 1/3, and minima at ρ = m − 1/3, for positive integer m. When the interaction is weakened, the sharp cusps become mild waves, until the structure of maxima and minima vanishes at about g = 0.5. For smaller g, the current is a monotonically decreasing function of density.
Note that the motor with hard-core repulsion undergoes a current reversal at a density within the interval ρ ∈ (0, 1). This is in sharp contrast with the asymmetric exclusion process, where the current is proportional to ρ(1 − ρ). The reason for this difference lies in the rather different way the particles are driven. In ASEP, there is a constant and homogeneous drift, only hindered by the exclusion principle. In our model, the driving originates from the time dependence of the potential; therefore, it is also space- and time-dependent. The orientation of the current depends on the precise timing of the potential changes at different places. The interaction changes the potential a particle feels, and the current is susceptible to the details of the potential, so there is no guarantee that the orientation of the current will be the same with interaction as it was without interaction. Indeed, the current reversal phenomenon demonstrates that the orientation of the current does change due to the interaction. Note also that the current reversal was observed (qualitatively correctly) also in the approximate mean-field calculation [73].
Some insight into the current reversal phenomenon can be gained from the statistics of forward and backward steps at different places and different times within the period. We define the measured weight P(∆x = ±1; x′, τ′) as the average number of particles which jump forward ("+" sign) and backward ("−" sign) from site x at time τ, where x′ = x mod 3 and τ′ = τ mod (4t). Note that it is not a probability, because it is not normalized to unity. We can see a typical example in Fig. 9. Without interaction, the particles alternately prefer to jump forward from sites x′ = 0, 1, and 2. The backward jumps are rare. This behavior is independent of the particle density by definition. If we add strong repulsion, g = 0.9, the picture differs substantially in the low and high density regimes. For ρ = 1/3 the statistics of forward jumps differs little from the non-interacting case, and the frequency of backward jumps is increased, but remains low. At half filling, ρ = 1/2, the particles jump alternately forward and backward, at different times, so that the total effect is zero current, as seen already in Fig. 8. When the density is further increased to ρ = 2/3, the statistics is nearly a mirror image of the case ρ = 1/3. The particles preferably jump backward at specific places and times, and the forward jumps are rare. For comparison, we show in the last panel of Fig. 9 how the statistics is influenced by a non-zero external load. The time dependence looks similar, but the weight of forward jumps is suppressed and the weight of backward ones is enhanced.
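The jump statistics of Fig. 9 can be accumulated during the simulation with a simple histogram, as in the sketch below; the only assumptions are the spatial period 3 and the time period 4t stated above.

```python
import numpy as np

t_quarter = 4
space_period, time_period = 3, 4 * t_quarter

# counts[0, x', tau'] holds forward (+1) jumps, counts[1, x', tau'] backward (-1) jumps
counts = np.zeros((2, space_period, time_period))

def record_jump(x, tau, dx):
    # Call for every accepted jump dx = +1 or -1 from site x at time tau.
    counts[0 if dx == 1 else 1, x % space_period, tau % time_period] += 1

def jump_weights(n_periods):
    # Average number of jumping particles per period, i.e. the measured
    # (unnormalized) weight P(dx = +/-1; x', tau').
    return counts / n_periods
```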
As the probability of the jumps reflects the local potential, and therefore the local instantaneous configuration of particles, through formula (3), the statistics of the jumps shown in Fig. 9 tells us what, on average, the local neighborhood of a particle looks like at positions x′ and times τ′. A change in the shape of the jump statistics reflects the reorganization of the local particle configurations due to the repulsive interaction. We can clearly see that the reorganization of the particles can be so dramatic that the current changes sign.
For comparison, we show also the statistics of jumps in the presence of non-zero external load. The suppression of positive and enhancement of negative peaks is visible, as expected.
A similar analysis can also clarify the argument stated before, that the peaks in the response function at special values of g are related to the enhanced sensitivity of certain configurations of particles to the external load. For example, for g = 0.5 such a sensitive situation occurs when a particle tries to hop to a site where there is already a single particle. To support this view we plot statistics similar to those in Fig. 9, but for the difference in the counts for the force F = 0.01 and the opposite force F = −0.01, on the condition that the site to which the particle is moving already contains exactly one other particle. We can write that quantity as Eq. (11), where x is the original position of the particle, x ± 1 the position after the move, and n_{x±1} the number of other particles at the site to which the particle is about to move. We plot an example of this statistics in Fig. 10. We compare the situation at interaction g = 0, g = 0.4, and g = 0.5. We can see that the case g = 0.5 is indeed special. The sensitivity to the external load is larger. Moreover, the difference in statistics has the same sign for almost all instants within the time period (positive for forward moves, negative for backward ones), while both for g = 0 and g = 0.4 there are positive as well as negative differences.
IV. WORK FLUCTUATIONS
To understand better the effect of enhanced efficiency due to interaction, we shall look at the energy balance and fluctuations. First, we compare the values of the current, the efficiency, and the average input energy ⟨E_in⟩ relative to their values at zero interaction, denoted J_0, η_0, and ⟨E_in⟩_0, respectively. More precisely, we plot in Fig. 11 the typical interaction dependence of the relative quantities ∆J = J/J_0 − 1, ∆η = η/η_0 − 1, and ∆E_in = ⟨E_in⟩/⟨E_in⟩_0 − 1. We can clearly see that the initial increase of efficiency for small g is accompanied by nearly no change in the current, while the input energy decreases. Therefore, the enhanced efficiency is due to the lower energy input, while the output (the work) remains nearly unchanged. When the interaction strength increases further, the current starts decreasing as well and so does the work, which is proportional to J. This effect finally outweighs the lower energy input and the efficiency decreases again. This is the source of the maximum in the efficiency at moderate values of the interaction.
We can get a bit more detailed information if we split the input energy into its positive and negative parts. Recall that, according to the definition (8), the input energy accumulated over the lag ∆τ is E_in = Σ_{τ'=τ+1}^{τ+∆τ} a_i(τ'). We separate the contributions from times when a_i(τ') is positive from those when it is negative. The former correspond to the shift of the potential V(x, τ) upward, the latter to its downward move. The splitting is written with the help of the Heaviside step function. With this definition we have E_in = E_in+ − E_in−. We then define the contributions from the positive and negative potential moves to the quantity ∆E_in as ∆E_in^+ = (⟨E_in+⟩ − ⟨E_in+⟩_0)/⟨E_in⟩_0 and ∆E_in^− = (⟨E_in−⟩_0 − ⟨E_in−⟩)/⟨E_in⟩_0, where, as above, the subscript 0 denotes the quantities computed at g = 0.
Hence ∆E_in = ∆E_in^+ + ∆E_in^−. We show the dependence of ∆E_in^± again in Fig. 11. We can see that both the positive and the negative parts contribute to the decrease of the input energy. The contribution of the positive part is larger in the most interesting region of moderate g, where the efficiency grows with interaction, while for larger g the decrease of the negative part becomes more important. This leads to the following explanation of the effect of increased efficiency.
At not too high temperature, the particles are chiefly driven by the traveling wave of the periodic potential. This is the power-stroke mechanism of the molecular motor. When the interaction is switched on, but remains small, the particles move in an effective potential which differs little from the original traveling wave. So, the current remains nearly the same, as testified in Fig. 11, while the input energy is lowered, as is also seen in Fig. 11. This lowering can be understood as follows. The repulsion affects the distribution of particles within the period of the potential V(x). The minima become shallower, therefore the particles are less concentrated at them. But it is the minimum of the potential which is shifted upward in the time evolution of the potential, so it is the particle at the minimum that acquires the energy from the source of the driving. Fewer particles at the minimum means less input of energy, more precisely a lowering of the positive part of the input energy. Conversely, the particles pushed off the instantaneous minima of the potential are found at the maxima of the potential. But these particles experience the lowering of the time-dependent potential, i.e., they return energy back to the external source, thereby lowering also the negative part of the input energy. These two effects, i.e., unchanged current and lowered energy input, are the explanation of the increased efficiency. Of course, more subtle effects are also at work here; in particular, the negative part of the energy input also contributes. More importantly, if the interaction is strong enough, it changes the potential the particles move in to such an extent that the current diminishes. At very small temperature, the current is sensitive to tiny changes in the shape of the potential, and small changes in the interaction strength can cause big jumps in the current. We have seen these jumps in Fig. 3.
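The positive/negative bookkeeping of the input energy described above can be summarized in a few lines; the sketch below assumes a recorded series of per-step absorbed energies a_i(τ′) for one particle and reproduces the relative quantities defined earlier.

```python
import numpy as np

def split_input_energy(a):
    # a: per-step absorbed energies a_i(tau') over the lag; E_in = E_in_plus - E_in_minus.
    a = np.asarray(a)
    e_plus = a[a > 0].sum()        # potential shifted upward under the particle
    e_minus = -a[a < 0].sum()      # energy returned to the external driving source
    return e_plus, e_minus

def relative_change(mean_value, mean_value_g0):
    # e.g. Delta E_in = <E_in>/<E_in>_0 - 1, with the g = 0 case as reference
    return mean_value / mean_value_g0 - 1.0
```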
In addition to the averages, we measured also the full joint distribution function of the particle shift and the input energy (8). Because the work performed by one particle is proportional to its shift, we have in fact the joint distribution of performed work and input energy. As a first piece of information we plot in Fig. 12 the average energy input at a fixed value of the particle shift, during the time interval ∆τ. We can observe the already discussed fact that the interaction decreases the energy input. Here we can see that it holds also for most values of the shift, i.e., the work performed by one single particle, separately. The probability distribution of the energy input at fixed shift is shown in Fig. 13. We can see that the shape is quite close to a Gaussian. This is far from being true for the distribution of the shift, which is proportional to the work performed by a single particle, as shown in Fig. 14. The distribution is skewed; when we compare the shifts shorter and longer than the most probable value, we find that the shorter ones are significantly more probable than the longer ones. This is due to the far-from-equilibrium character of the process. We can also see that the distribution is composed of two separate branches. The first one, with higher probability, corresponds to shifts which are multiples of 3, the period of the potential. The other shifts have significantly lower probability. In fact, it comes as no big surprise that after an integer number of time periods the particles like to be shifted by an integer number of spatial periods.
The most important finding, however, stems from the comparison of the distribution in the cases with and without interaction. In Fig. 14 we make this comparison for a set of parameters where we know that the interacting case exhibits higher efficiency. By analogy with equilibrium statistical physics one might be tempted to guess that higher efficiency is accompanied, or even caused, by milder fluctuations. The opposite holds in our model of the molecular motor. The fluctuations of the work performed by a single particle are higher in the interacting case. Therefore, we conclude that the increase of efficiency is not accompanied by a decrease of fluctuations. On the contrary, the study of the energy balance discussed above, together with the fact of increased fluctuations, shows that the enhancement of efficiency is purely an energy effect.
V. CONCLUSIONS
Interacting molecular motors moving in the power-stroke regime were modeled using a "reversible ratchet" model. Tunable on-site repulsive interaction leads to a host of intricate phenomena. The most important of them is the increase of energetic efficiency for small to moderate values of the interaction strength. We traced the origin of this effect to the energy balance of the process. When the interaction is increased from zero, the performed work remains practically unchanged, while the input energy decreases. At the same time, the fluctuations of the performed work increase. This implies that the enhanced efficiency does not originate from the suppression of fluctuations, contrary to the situation in equilibrium heat machines.
Moreover, we observed that at very low temperature the dependence of the current as well as the efficiency on the interaction strength is rather complex, composed of many upward and downward steps. Hence, the efficiency has several local maxima as a function of the interaction, rather than a single one. As for the current, for suitable values of the parameters we can observe a sequence of current reversals when we increase the interaction strength. This complicated behavior is due to the interplay between the size of the steps in the external periodic potential, in which a particle moves, and the size of the additional contributions to the potential from the interaction with other particles. However, this complicated dependence gradually disappears when the temperature increases. But the effect of current reversal due to interaction remains visible.
We also investigated the response function of the current with respect to the external load, both for zero load and for zero current. We showed that these two response functions differ substantially at zero or small interaction, but become identical when the interaction is large. We also revealed a structure with several peaks in both the density and the interaction dependence of the response function. A detailed study of the location of these maxima and minima showed that they correspond to specific fractional values of the density and the interaction. For example, the response is zero if the density is an integer and has maxima for densities equal to an integer number of thirds, except those values which are themselves integers. In the interaction dependence, the peaks were found close to interaction strengths equal to one half, one third, and one fourth. We speculate that these special values are due to the fact that in those cases just one, two, and three particles on the same site, respectively, contribute to the potential by a value exactly equal to the amplitude of the external periodic potential. Contrary to the complicated step structure in the current, the peak structure in the response function survives also at higher temperatures.
The probability distribution of the performed work and the input energy reveals that the interaction leads to an increase of fluctuations, as we already mentioned. But we can see more. First, the distribution of work is far from Gaussian. It is skewed so that a lower particle shift (i.e., work performed by an individual particle) relative to the maximum is more probable. This is a sign of the far-from-equilibrium regime of the transport in the molecular motor. On the other hand, the input energy is Gaussian-distributed when observed at fixed work.
There is also a very interesting principal question related to large-deviation properties of the fluctuation of the performed work. We made some simulations in this direction, which show that the work distribution, when properly rescaled, converges to a large-deviation function. In the last decade, there was a great surge of activity in the field of Fluctuation Theorems [74][75][76][77][78][79][80][81][82][83][84] but in our case the problem of applying these results lies in the choice of the proper quantity which would be both physically meaningful (or at least the physical meaning must not be enormously intricate) and satisfy the Fluctuation Theorem in some of the forms known so far. This question remains open.
Finally, we must also admit several drawbacks of our model, which can be lifted only by setting up a completely different scheme of particle movement. The first point is that the potential changes synchronously at all sites. This is unrealistic in biological motors, where each molecule has its internal "clock" telling in what phase of the chemical cycle the motor finds itself. It would be easily possible to simulate an asynchronous version of the algorithm. On the other hand, in technological applications the synchronicity of the potential changes may be built into the system. The second point concerns the tunable interaction used in our model. Motor proteins always interact by steric hard-core repulsion, and the effective weak repulsion may occur only as a projection of the real three-dimensional situation onto a one-dimensional effective model [85,86]. However, there is no principal obstacle to simulating the three-dimensional case directly, if only sufficient computer power is available. Another way out is to generalize the asymmetric exclusion process in such a way that the maximum number of particles one site may accommodate is not one, but two, or three, etc. Simulations in this direction are under way. | 2011-05-25T15:05:41.000Z | 2009-12-28T00:00:00.000 | {
"year": 2009,
"sha1": "e7c82ec534dea12a33354d9e63bea4551dd0c849",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1105.5057",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e7c82ec534dea12a33354d9e63bea4551dd0c849",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
225561942 | pes2o/s2orc | v3-fos-license | Deep Learning Application in Plant Stress Imaging: A Review
: Plant stress is one of the major issues that cause significant economic loss for growers. The labor-intensive conventional methods for identifying stressed plants constrain their applications. To address this issue, rapid methods are in urgent need. Developments of advanced sensing and machine learning techniques trigger revolutions for precision agriculture based on deep learning and big data. In this paper, we reviewed the latest deep learning approaches pertinent to the image analysis of crop stress diagnosis. We compiled the current sensor tools and deep learning principles involved in plant stress phenotyping. In addition, we reviewed a variety of deep learning applications/functions with plant stress imaging, including classification, object detection, and segmentation, which are closely intertwined. Furthermore, we summarized and discussed the current challenges and future development avenues in plant phenotyping.
Plant Stress and Sensors
Plant stress is one of the major threats to crops, causing significant reductions of crop yield and quality [1]. The detection and diagnosis of plant stress is urgently needed for rapid and robust application of precision agriculture in crop measurement. Presently, intensive studies focus on developing optical imaging methods for plant disease detection. Different from the conventional methods using visual scoring, optical imaging is able to measure changes caused by abiotic or biotic stressors in plant physiology rapidly and without contact. In general, common imaging technologies have been employed for detecting crop stress, including digital, fluorescence, thermography, LIDAR, multispectral, and hyperspectral imaging techniques [2]. The common optical sensors used for plant stress detection are shown in Figure 1: (a) [3]; (b) multispectral imaging sensor for maize water stress [4]; (c) fluorescence imaging sensor for chilling injury of tomato seedlings [5]; (d) thermal imaging sensor for potato water stress [6]; and (e) hyperspectral imaging sensor for apple water stress [7].
Digital imaging sensors acquire the visible range of wavelengths, i.e., RGB colored images with red, blue, and green channels to detect plant diseases. Such images provide physical attributes of the plants, such as canopy vigor, leaf color, leaf texture, size, and shape information [8]. Color and texture features are important for identifying the characteristic difference between healthy and symptomatic plants. Frequently used color features are RGB, LAB, YCBCR, and HSV spaces [9]. Additionally, contrast, homogeneity, dissimilarity, energy, and entropy features of images are descriptive facets of texture [10]. In other words, quantitative diagnosis features for identifying the symptomatic and healthy plants have been collected in these images.
Thermal imaging sensors obtain infrared radiation images ranging from 8 to 12 µm, which are often applied for predicting plant temperatures. Under infection, the temperature of infected plant tissues varies in relation to the impacts caused by pathogens. The temperature variation, on the other hand, appears as a counter-effect on the transpiration rate [11]. In other words, stress from the infection triggers stomatal closure in plants, resulting in both a decrease of the transpiration rate and an increase of leaf temperature. In turn, based on these alterations, thermal imaging sensors can identify infectious diseases. Each pixel of the thermal image represents the temperature value of the object, which is expressed in false color. In plant disease detection, the thermal sensor can be mounted on ground automated vehicles (GAV) and unmanned aerial vehicles (UAV).
Fluorescence imaging sensors are often utilized to identify variations in plant photosynthetic activity [12]. The differences between stressed and healthy leaves are expressed as differences in photosynthetic activity, which can be assessed via photosynthetic electron transport using a fluorescence imaging sensor with LED or laser illumination. Under normal conditions, 685 nm is the wavelength at which chlorophyll fluorescence is emitted from photosystem II (PSII). Stressed plants can change the patterns of chlorophyll fluorescence emission, which can be reflected and observed in fluorescence imaging [13].
Based on the number of spectral bands in optical sensing technologies, sensors containing 3–10 spectral bands are termed multispectral imaging sensors. Multispectral imaging sensors normally extract a few or a stack of images from the visible to near-infrared spectrum [14]. Plant stress often causes an increase in visible reflectance, with a decrease in chlorophyll and in the absorption of visible light. Additionally, reduced near-infrared (NIR) reflectance occurs due to changes in the leaf tissue. Thus, the most used band channels are green, red, red-edge, and NIR. Multispectral imaging sensors combined with drones have been applied broadly in remote sensing for plant disease detection [15], although this type of sensor is limited to a few spectral bands and sometimes cannot quantify the severity of diseased plants.
Despite many successful studies having been applied to crop stress detection using cheap passive imagery sensors, i.e., digital and near infrared (NIR), most of the applications require fast image processing and computational algorithms for image analysis. Among the image analysis techniques, supervised methods have been popular with training data being used to develop a system. Such methods include shape segmentation, feature extraction, and classifiers for stress diagnosis. In addition, machine learning algorithms search for the optimal decision boundary in the feature space with high dimensionality, which provides the basis for many available image analysis systems [16].
For improving image analysis systems, deep learning has played a key role. Deep neural networks have many layers which transform input images to outputs (i.e., healthy or stressed) while learning deep features. The most applied networks in crop image analysis are convolutional neural networks (CNNs). CNNs consist of dozens or hundreds of layers that process the images with convolution filters applied over small patches [17]. Despite such initial successes, CNNs could not gain momentum until the advances in core computing systems, and deep convolutional networks have since become the current focus. In agriculture, deep learning shows acceptable performance in terms of accuracy and efficiency based on large datasets. To build precise classifiers for improving plant disease diagnosis, the PlantVillage project (https://plantvillage.psu.edu/posts/6948-plantvillage-dataset-download) has made available a large number of images of healthy and diseased crops for free [18]. Combined with such big data, deep learning has been put forward as a promising future method in plant phenotyping [19]. For example, CNNs can effectively detect and diagnose plant diseases [20] and classify plant fruits in the field [21]. The promising results promote studies carrying out other phenotyping tasks using deep learning, such as leaf morphological classification [22]. Thus, we reviewed a broad range of references on the utilization of deep learning in image-based crop stress detection. Summarizing, with this paper we aim to:
1. State the principle of deep learning in the application for crop stress diagnosis based on images.
2. Search for the challenges of deep learning in crop stress imaging.
3. Highlight the future directions that could be helpful for circumventing the challenges in plant phenotyping tasks.
Machine Learning
Machine learning is a subset of artificial intelligence which is used to operate specific tasks by computer systems [23]. In general, it is split into supervised and unsupervised learning methods. Supervised learning methods are expressed with an input matrix of independent variables x and dependent variables y. The dependent variable y takes a few formats, varying with the problem to be solved. For classification issues, y is usually a scalar representing the category label, while under regression it is a vector containing continuous values [24]. For segmentation tasks, y is sometimes the ground-truth label image [25]. Supervised learning methods often aim to find the optimal model parameters that best predict the data, as measured by a loss function.
Unsupervised learning methods operate on data without dependent labels and aim to search for patterns (e.g., latent variables). Common unsupervised learning methods include principal component analysis (PCA), k-nearest neighbors clustering, and T-distributed stochastic neighbor embedding clustering [26]. Unsupervised training usually uses various loss functions, such as a reconstruction loss: the model must learn to reconstruct the input data from a representation of smaller dimension [27].
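As a minimal illustration of such a reconstruction objective (not tied to any specific study reviewed here), the sketch below compresses data to a lower-dimensional representation with PCA and measures the mean squared reconstruction error.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((200, 64))              # e.g. 200 flattened 8 x 8 image patches

pca = PCA(n_components=8)              # 8 latent variables
Z = pca.fit_transform(X)               # low-dimensional representation
X_hat = pca.inverse_transform(Z)       # reconstruction from the latent space

reconstruction_loss = np.mean((X - X_hat) ** 2)
print(f"mean squared reconstruction error: {reconstruction_loss:.4f}")
```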
Neural Network
A neural network is built to recognize patterns and provides the basis for most deep learning algorithms [28]. A neural network contains nodes that integrate input data with a set of coefficients, or weights, that amplify or dampen the input for learning the assigned tasks; a node computes an activation with parameters Θ = {W, β}, where W represents the weights and β represents the biases. The activation is normally passed through an elementwise nonlinear factor σ acting as a transfer function, as shown in Equation (1) [28]. Sigmoidal and hyperbolic tangent functions are the common transfer functions for neural networks. The multilayer perceptron (MLP) is the most popular traditional neural network, with a few such transformation layers [28], where W_L is a matrix containing rows w_k related to activation k in the output, and L is the final layer. The so-called hidden layers are the layers between the input and output layers. A neural network with many layers is often called a deep neural network (DNN), hence deep learning. The activation of the last layer is mapped to a distribution over the classes P(y|x; Θ) through a softmax function [28], where W_i^L is the weight vector connecting class i to the output node. A typical diagram of the MLP deep neural network is shown in Figure 2. Currently, stochastic gradient descent (SGD) is the most common method for fitting the parameters Θ on small portions of the dataset. With SGD, a small batch is employed in each gradient step, and maximum likelihood optimization is performed by minimizing the negative log-likelihood. This corresponds to the log loss for a binary classification task and the softmax loss for multiclass classification. A disadvantage of this method is that it usually does not directly optimize the quantity of interest [28].
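A minimal NumPy sketch of the forward pass just described is given below: a hidden layer with a hyperbolic tangent transfer function followed by a softmax output. The layer sizes and random parameters are purely illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                   # for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.random(10)                                     # input feature vector

# parameters Theta = {weights, biases} for a 10 -> 16 -> 3 network
W1, b1 = rng.normal(size=(16, 10)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

h = np.tanh(W1 @ x + b1)              # hidden activation with tanh transfer function
p = softmax(W2 @ h + b2)              # P(y | x; Theta) over three classes
print(p, p.sum())                     # probabilities sum to 1
```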
DNNs became popular in 2006, when they were trained layer by layer (pre-training) in an unsupervised manner, and the stacked network was then supervised and fine-tuned to obtain good performance. Such DNN architectures include the stacked autoencoder (SAE) and the deep belief network (DBN). However, such methods are often complex and need a great deal of engineering to obtain acceptable results [28,29]. Recently, end-to-end training has been conducted on popular architectures in a supervised manner, streamlining the training procedure. The common architectures are the CNN and the recurrent neural network (RNN) [30,31]. CNNs have been widely used for image analysis, and RNNs are becoming more and more popular.
Convolutional Neural Network
The main difference between the MLP and the CNN is reflected in two aspects. First, the weights of the CNN architecture are shared across the image as the network performs convolutions on the input image [32]. In this way, separate detectors do not need to be learned for the same object appearing at different locations in the image. As a result, the network responds consistently to translations of the input images. In addition, the number of parameters to be learned is reduced.
During CNN training, the input images are convolved with a set of K kernels W = {W_1, W_2, W_3, . . . , W_K} and biases β = {b_1, . . . , b_K} in the convolution layer, yielding a new feature map X_k. Such features are subjected to a nonlinear transformation σ, and this process is repeated for each convolutional layer l [32]. The second difference between the MLP and the CNN is the pooling layer. In such layers, the pixel values of a neighborhood are aggregated using a permutation-invariant function in the CNN. This may provide a certain amount of translation invariance [33]. Then, fully connected layers are usually added after the convolutional processing. Finally, the softmax function is applied to the activations in the last layer, resulting in a category assignment.
CNN Architecture
A CNN normally uses a 2D image as input, with a format of m × n × 3 (m × n × 1 for greyscale images), where m and n are the image height and width, respectively, and 3 is the number of image channels. The CNN architecture often contains a few different layers, including convolutional layers, pooling layers, and fully connected layers. The convolutional and pooling layers are the initial layers. A set of convolutional kernels (also called filters) is used in each layer to perform multiple transformations. The convolution operations extract the associated features from small slices of the full image. Each kernel is applied to an input slice and the output of each kernel is passed to non-linear processing units, making the network capable of learning abstractions and embedding non-linearity in the feature space [35]. The non-linear processing provides different patterns of activation for different responses, which helps learn the semantic differences over the full image. Then, subsampling is applied to the output of the non-linear processing, summarizing the results and making the representation insensitive to geometric deformation [36]. The CNN architecture has been applied to many tasks, including classification, segmentation, and object detection.
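The following PyTorch sketch strings these layers together (convolution, non-linearity, pooling, then a fully connected classifier with softmax) for an RGB input; the layer sizes, the 224 × 224 input resolution, and the two output classes are illustrative assumptions, not taken from any specific study cited here.

```python
import torch
import torch.nn as nn

class SmallStressCNN(nn.Module):
    # Illustrative CNN: two conv/ReLU/pool stages, then a fully connected classifier.
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, n_classes),   # assumes 224 x 224 input images
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallStressCNN(n_classes=2)
logits = model(torch.randn(1, 3, 224, 224))       # one m x n x 3 image, m = n = 224
probs = torch.softmax(logits, dim=1)              # e.g. healthy vs. stressed
```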
Classification Architectures
Among the pre-trained networks, AlexNet is commonly used for image classification; it is relatively simple, with five convolutional layers. The activation function of AlexNet is the hyperbolic tangent, which is the most common choice in CNNs [37]. Then, deeper pre-trained networks appeared, such as VGG19 with 19 deep layers, winning the ImageNet challenge of 2014 [38]. These deeper networks use smaller stacked kernels and have lower memory requirements during inference, which improves the performance on mobile computing devices, such as smartphones [39]. Later, in 2015, the ResNet architecture won the ImageNet challenge and was built from residual blocks. The residual blocks learn residual mappings for each layer, thereby providing effective training for deeper architectures. Szegedy et al. (2016) developed a 22-layer neural network referred to as GoogLeNet, which employed inception blocks [40]. The advantage of using the inception blocks is that they increase the efficiency of the training process while decreasing the number of parameters. The performance on ImageNet reached saturation after 2014, and crediting the better performance to the more complex architectures is biased. On the other hand, it is not necessary to perform plant stress detection with deeper networks, and simpler ones provide a lower memory footprint. Therefore, AlexNet or other relatively simple networks, such as VGG16, are still practical for crop stress images.
Segmentation Architectures
Segmentation is important in crop stress image analysis. Each pixel in the image can be classified by a CNN, with the pixel presented to the network as a patch extracted from its neighboring pixels [41]. The disadvantage of this method is that the input patches overlap, and the same convolutions are repeatedly calculated. Fortunately, the linear operators (convolution and dot product) can both be written as convolutions [42]. By rewriting the fully connected layers as convolutions, a CNN can take a larger input image than the one it was trained on and can generate a likelihood map instead of an output for a single pixel. Such a fully convolutional network can then be applied efficiently to the full input image.
Hardware and Software
The dramatic increase of deep learning applications can be attributed to the widespread development of GPUs [43]. GPU computing started when NVIDIA launched CUDA (Compute Unified Device Architecture) and AMD launched Stream. The GPU is a highly parallel computing engine which offers a great advantage compared with a central processing unit (CPU). The Open Computing Language (OpenCL) unifies different GPU general computing application programming interface (API) implementations and provides a framework that can be used to write programs that execute on heterogeneous platforms composed of a CPU and GPU. With such hardware, deep learning on the GPU is much faster than on the CPU [44].
Open source software packages also promote the development and application of deep learning. These software packages allow users to operate the computing at a high level without having to worry about efficient implementation. By far the most popular packages include:
Caffe, which offers C++ and Python interfaces, developed by graduate students at UC Berkeley AI Research.
TensorFlow, which provides C++ and Python interfaces, developed by the Google Brain team.
Theano, which provides a Python interface, developed by the MILA lab in Montreal.
PyTorch, which provides C++ and Python interfaces, developed by Facebook's AI Research lab.
Classification
Deep learning has been applied successfully in plant phenotyping combined with various sensors and specific tasks, including harvest crop counting, weed control, and crop stress detection [17,[45][46][47]. Regarding crop stress detection, depending on the specific task, the image analysis methods vary among classification, segmentation, and object detection, combined with various sensors (Figure 4). Image classification is one of the earliest areas where deep learning contributed significantly to the analysis of plant stress images. In crop stress image classification, one or more images are usually used as input data, and a diagnostic decision is used as output (e.g., healthy or diseased). In this case, each diagnosis is a sample, and the size of the dataset is usually smaller compared to computer vision (thousands or millions of samples). Therefore, for such applications, transfer learning is popular among researchers. Transfer learning essentially uses pre-trained networks to circumvent the need to train a deep network on large datasets. At present, two transfer learning methods are commonly applied: (1) the specific pre-trained network is directly applied as a feature extractor for the images, and (2) the specified pre-trained network is fine-tuned on the target images. Another benefit of the former strategy is that training a deep network is not necessary, making it easy to insert the extracted features into existing image analysis pipelines. However, it is still a challenge to find the best strategy. Barbedo (2019) used a CNN to classify individual lesions and spots on plant leaves instead of considering the entire leaf [45]. This made it possible to identify multiple diseases affecting the same leaf. The accuracy obtained using this method was, on average, 12% higher than that obtained using the original images, although proper symptom segmentation was still required manually, preventing full automation. In this paper, the author applied deep learning to detect the individual lesions and spots for 14 plant species. Specifically, the study used a pre-trained GoogLeNet CNN for training the models. The images were split into two groups addressing different objectives. The first group was aimed at image classification, to identify the origin of the observed symptom, while the second one was for object detection, which was to identify disease areas amidst healthy tissue and to determine whether subsequent classification should be conducted. The results showed that the accuracies obtained using this approach were, on average, 12% higher than those achieved using the original images. The accuracies were higher than 75% for all the considered conditions and numbers of detected diseases, while the author also noted that, under certain conditions, the resized input images required by the pre-trained neural network were not as advantageous as the original images. Other studies that applied deep learning to crop stress image classification are shown in Table 1.
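The two transfer-learning strategies mentioned above can be sketched as follows with torchvision; the number of target stress classes and the choice of VGG16 are hypothetical, and the weights argument may need to be adapted to the installed torchvision version (older versions use pretrained=True).

```python
import torch
import torch.nn as nn
from torchvision import models

n_stress_classes = 4                              # hypothetical number of target classes

# Strategy (2): fine-tune a pre-trained network on the target stress images
model = models.vgg16(weights="IMAGENET1K_V1")     # ImageNet pre-trained weights
for p in model.features.parameters():
    p.requires_grad = False                       # keep the convolutional features fixed
model.classifier[6] = nn.Linear(4096, n_stress_classes)   # replace the last layer

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)

# Strategy (1), feature extraction only: run images through model.features, flatten the
# result, and feed the vectors into an existing pipeline (e.g. an SVM classifier).
```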
Segmentation
Segmentation is used to identify the set of pixels or contours that make up the target object [70]. Segmentation is a common topic in papers applying deep learning to plant disease imaging. Various methods have been applied to segmentation, such as developing unique segmentation architectures based on CNNs and applying RNNs. The popular segmentation CNN architectures include U-Net and Mask R-CNN [71]. U-Net was first investigated for biomedical image segmentation [72] and was built upon the fully convolutional network (FCN). The idea is to supplement a contracting network with successive layers in which pooling operators are replaced by up-sampling operators. These successive layers learn to assemble a more precise output, increasing the resolution of the output. U-Net is symmetric, that is, it has the same number of up-sampling and down-sampling layers. The skip connections in U-Net use a concatenation operator between the up-sampling and down-sampling layers [73]. This connects the features in the contracting path and the expanding path, which means that the entire image can be processed by U-Net in a single forward pass to directly generate a segmentation map. In this way, U-Net considers the entire image, which makes it more advanced than the patch-based CNN. Furthermore, Çiçek et al. (2016) built a 3D U-Net segmentation network by replacing all 2D operations with their 3D counterparts [74]. Lin et al. (2019) applied a U-Net CNN to segment and detect powdery mildew-infected cucumber leaves obtained by an RGB sensor [46]. In this study, since the powdery mildew-infected pixels were far fewer than the non-infected pixels, the authors proposed a binary cross entropy loss function that magnified the loss value of the powdery mildew-infected pixels by 10 times. The results showed that the semantic segmentation CNN model achieved an average pixel accuracy of 96.08% for segmenting powdery mildew on cucumber leaf images. It remains challenging to apply such deep neural networks in field conditions. Different applications of deep learning to crop stress image segmentation are summarized in Table 2. R-CNN combines rectangular region proposals with CNN features. Generally, R-CNN includes a two-stage detection procedure. First, the algorithm detects subset regions of an image which may contain an object and extracts CNN features from the region proposals. Then the object in each region is classified. R-CNN requires a large amount of training of the deep neural network when there are 2000 or more region proposals per image that need to be classified. Meanwhile, there is no learning procedure at the first searching stage, as the selective search algorithm is fixed. As a result, it may lead to poor candidate region proposals being generated [80,81]. During R-CNN processing, the region proposals need to be cropped and resized, while the Faster R-CNN detector processes the entire image. Thus, Faster R-CNN can be applied for real-time object detection. Additionally, Faster R-CNN is the backbone of Mask R-CNN. Faster R-CNN includes two outputs, that is, a class label and a bounding-box offset. A third branch is added to Mask R-CNN upon the Faster R-CNN architecture, which outputs the object mask [71]. In addition, Mask R-CNN is one of the instance segmentation algorithms which produce a mask that uses color or grayscale values to identify pixels belonging to the same object.
In addition to feeding the feature map to the region proposal network and the classifier, Mask R-CNN uses the feature map to predict a binary mask for the object inside the bounding box.
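For the class-imbalance handling used in the U-Net study above, the essential idea of weighting the loss on the rare infected pixels roughly ten times more heavily can be expressed with the pos_weight argument of PyTorch's binary cross entropy loss, as in the hedged sketch below; the original authors' exact implementation may differ.

```python
import torch
import torch.nn as nn

# logits: raw per-pixel U-Net output; target: 1 = powdery mildew pixel, 0 = background
logits = torch.randn(1, 1, 128, 128, requires_grad=True)
target = (torch.rand(1, 1, 128, 128) < 0.05).float()      # infected pixels are rare

# magnify the loss contribution of positive (infected) pixels by a factor of 10
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))
loss = criterion(logits, target)
loss.backward()
```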
Object Detection
Object detection is a key part of imaging diagnosis and one of the most laborious tasks. Typically, the task involves locating and identifying objects throughout the image [82]. For a long time, a research goal of computer vision has been to detect objects automatically, improving detection accuracy and reducing labor. Object detection based on deep learning uses a CNN for pixel classification and then applies some post-processing to obtain object candidates [81][82][83]. Since image classification classifies each pixel in the image, which is basically equal to object classification, the CNN architectures for segmentation are similar to those for the classification task, while image label imbalance, hard negative detection, and efficient processing of image pixels, etc., still remain challenging issues to be addressed for object detection. Fuentes et al. (2017) applied Faster R-CNN with a VGG-16 detector to recognize tomato plant diseases and pests [55]. Diseases and pests could be identified using the bounding box and the score for each class shown on each infected leaf. That is, the detection method provides a practical solution for detecting the class and location of diseases in tomato plants. R-CNN and Faster R-CNN have been applied to object detection as well, using regions in the image to locate the object. Recently, the YOLO algorithm has often been applied for object detection, which uses a single convolutional network to predict the bounding boxes and classify such boxes [84]. The YOLO algorithm divides the image into an M × M grid, then m (m < M) bounding boxes are taken within each of the grid cells. The network yields a class probability for each bounding box. When bounding boxes have a class probability higher than a threshold value, they are selected and used for locating the objects in the image. A limitation of the YOLO network is that it sometimes cannot identify small objects in the images [84]. Singh et al. (2020) applied Faster R-CNN with an InceptionResnetV2 model and a MobileNet model on PlantVillage datasets to detect plant disease, which included 2598 images from 13 plants and over 17 diseases [85]. Other applications for object detection are summarized in Table 3.
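The grid-and-threshold logic of YOLO described above can be illustrated with a few lines of plain Python/NumPy, using random scores rather than a real detector:

```python
import numpy as np

rng = np.random.default_rng(0)
M, m, n_classes, threshold = 7, 2, 3, 0.8       # M x M grid, m boxes per cell

# box_scores[i, j, b, c]: class probability of class c for box b in grid cell (i, j)
box_scores = rng.random((M, M, m, n_classes))

# keep only boxes whose best class probability exceeds the threshold
for i, j, b in np.argwhere(box_scores.max(axis=-1) > threshold):
    c = box_scores[i, j, b].argmax()
    print(f"cell ({i},{j}), box {b}: class {c}, p = {box_scores[i, j, b, c]:.2f}")
```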
Unique Challenges in Plant Stress Based on Imagery
Noncontact plant stress detection has been conducted at different application scales, i.e., laboratory, ground-based, and UAV. Additionally, it has been operated with a variety of sensors, such as digital, thermal, multispectral, and hyperspectral imagers, with different numbers of spectral channels, from three to hundreds. Such sensors can monitor the size, shape, and structural features of crops based on the external views obtained from digital cameras. The digital sensors can be easily operated under natural light. Hyperspectral imaging sensors can obtain spectral signatures beyond the visible wavelength range, which reflect crop health conditions over a wide range of spectra, while most of the commercial hyperspectral imaging sensors currently work only in the laboratory under controlled light conditions. In addition, wind makes the crops move around. In general, image acquisition in the field is still challenging.
Further, crops are not static: their physiological properties change as they grow. Especially for crops under biotic stress, the fungi or viruses in the crops have great impacts on the physiological changes. It is difficult to detect the stress at an early, asymptomatic stage based on image analysis. Furthermore, for the application of deep learning-assisted image analysis, a lack of datasets is a major obstacle as well. At present, the available open source images are mainly from the PlantVillage dataset. Another significant challenge is that of ground-truth labelling, which is hugely laborious. Amazon SageMaker Ground Truth provides a service for managing labelling, including two features. One is annotation consolidation, which combines different people's annotation task results into one high-fidelity label. The second one is automated data labeling, which utilizes machine learning to label portions of the provided data automatically.
Moreover, to detect crop stress, classification and segmentation are often posed as binary tasks, i.e., healthy versus infected, or target infected area versus background. However, since these two categories can be highly heterogeneous, this is usually an oversimplification. For instance, the samples of the healthy class may consist mainly of completely healthy objects but also, rarely, include a few objects showing early stress. This can lead to classifiers that are able to exclude the healthy samples but cannot identify the few rare ones. The strategy for this case is to build a multiclass deep learning system by giving it detailed annotations of all possible classes. Meanwhile, the within-class variance of the images may reduce the sensitivity of the deep learning system. Furthermore, between-class variance from a dataset that does not generalize to every image, such as images of different disease severities, can produce a deep learning architecture that appears to train well in one particular experiment but obstructs the usefulness of its broad application to practical decision-making unless the nature of this dataset is precisely understood. Parameter optimization of the deep learning training models, i.e., batch size, learning rate, dropout rate, etc., is a remaining challenge as well. There is currently no exact method to achieve the best combination of hyperparameters, which is often found empirically, even though Bayesian optimization has been put forward.
Outlook
Deep learning has been applied successfully in plant stress (i.e., abiotic and biotic stress) detection, even though many challenges remain. Most of the papers we have reviewed are based on 2D images of symptomatic stages, for example digital and greyscale images. Such images can be handled by deep transfer learning architectures, such as AlexNet, VGG, and GoogLeNet, while such pre-trained transfer networks cannot be applied directly to 3D datasets, such as hyperspectral images, which are more sensitive for detecting early-infected plants. In the future, deep neural networks that can be used for 3D images should be a focus, and early detection of plant disease is pivotal to precision disease management, especially for diseases without a pesticide-based therapy. On the other hand, many tasks in plant stress detection are treated as classification, and such a strategy may not always be optimal since it often requires some post-processing, such as segmentation. Further, semi-supervised and unsupervised deep learning are worth exploring in the application of plant stress detection, though most studies are based on supervised approaches. The advantage of unsupervised methods is that the network training process can be operated without ground-truth labels. One unsupervised approach for detecting plant stress is generative adversarial networks (GANs) [90], while another common unsupervised approach, i.e., variational autoencoders (VAEs), has rarely been applied to crop disease diagnosis yet, to the best of our knowledge [91]. Further, deep learning has been applied for other objectives in agricultural imaging, e.g., crop load estimation and harvesting, while image reconstruction remains unexplored, especially for LiDAR point cloud data. In general, deep learning has provided promising results in plant stress detection, which could accelerate the development of precision agriculture with the extension of field applications.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-07-23T09:04:22.209Z | 2020-07-14T00:00:00.000 | {
"year": 2020,
"sha1": "ceaebf8515d5a58bf2cba026b1ed993127905f72",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/agriengineering/agriengineering-02-00029/article_deploy/agriengineering-02-00029.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "847946cb931c24c628b6e16ddbaa2fa1fa3ab9ee",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
A historical, scaled approach to climate change adaptation: the case of Vietnam
This article sheds light on how scaled, historical dynamics inform the framing of climate change adaptation programs. It looks particularly at the influence of domestic versus global rationalities in adaptation programs through a novel joint governance and political ecology framework. It does this in the setting of water management in Vietnam. Based on a historical view, semi-structured interviews and document and policy reviews, I examine historical water management in Vietnam and current water management programs identified as climate change adaptation. By analyzing how historical, scaled political rationalities inform the framing of current adaptation programs, I find that program formulation reflects domestic(ated) rationalities rather than novel global adaptation agendas. This suggests that universalizing accounts of climate change adaptation overlook the significance of situated, historical factors in the formulation of adaptation programs. Furthermore, the article illustrates how political rationalities can be rescaled, gaining traction within new institutional scales and the programs enacted from them. Finally, the article underlines the inherent power implications of the struggle to determine whose rationalities prevail in program formulation.
Introduction
Climate change adaptation programs are proliferating. However, the complex and deeply political nature of adaptation is often overlooked (Bassett and Fogelman 2013;Eriksen et al. 2015;Taylor 2015). Climate change adaptation encompasses inherent tensions: climate change is a global phenomenon with diverse local impacts. It intersects with policies and programs at and across various scales, and its impacts are informed by situated historical dynamics of environmental management, commercial investment, and socio-political relations (Adger et al. 2005;Taylor 2015: 17-18). Adaptation to climatic change as it is experienced through such mediating factors is inherently a question of governance -of managing people, land and resources (Taylor 2015: 18-19). Despite this, climate change adaptation efforts have evidenced a 'here and now' approach in which understanding how these situated historical dynamics influence adaptation has taken a back seat to the push from practitioners, political authorities, and academics to 'Adapt now!' (Leary et al. 2007). This is reflected by the relatively limited scholarly literature on the domestic politics of climate change adaptation (exceptions include Artur and Hilhorst 2012;Eriksen et al. 2015;Funder et al. 2018;Ojha et al. 2016;Smucker et al. 2015). This is in comparison to a greater focus on global climate policy, the mitigation of climate change rather than adapting to it (Bulkeley 2005;Hall and Persson 2017;Javeline 2014).
It is therefore increasingly vital to examine how diverse interests and influences shape adaptation efforts. In this article I focus specifically on the interplay between global and domestic interests in the formulation of adaptation programs. To do so, I develop a novel approach based on a joint governance and political ecology framework. On the governance side, I draw on the work of Rose and Miller (1992). They examine how political authorities problematize and manage a variety of topics and activities such as health, poverty or sexuality. They describe how programs of government -those programs implemented by political authorities -are tangible translations of political epistemologies and norms, which they call political rationalities. Studying climate change adaptation programs can therefore provide insight into which rationalities adaptation programs propagate and instantiate, and this has implications for adaptation broadly and in particular, program impacts. On the political ecology side, the article adopts a historical, scaled perspective and applies it to climate change adaptation rationalities and programs. Combining these two literatures allows for exploring scale in terms of governance, particularly, the (re)scaling of political rationalities and implications for governance programs. It does this in the context of water management in Vietnam.
Vietnam is projected to be one of the countries most affected by climate change, both in terms of extreme weather events (Kreft et al. 2016) and gradual climate change impacts (Wong et al. 2014). In Vietnam's Thua Thien Hue Province, or Hue Province, the geographic focus of this article, climate change is felt through sea level rise, changing precipitation patterns and swinging temperatures (CCAP 2012;Potsdam Institute 2013). Political authorities are implementing water management programs as adaptation, among a variety of other responses (Hue People's Committee 2007). At the same time, Vietnam has a long history of water management, which is so tightly intertwined with the country's governance, that it has been referred to as a 'hydraulic bureaucracy' (Evers and Benedikter 2009) and 'hydrocracy' (Benedikter 2014b). Vietnam therefore provides an excellent case for examining the influence of historical domestic dynamics versus novel global adaptation rationalities in the formulation of adaptation programs.
The following conceptual section expounds on the historical, scaled approach to examining governmental rationalities and programs. The article presents the examination of water management in Vietnam in two parts. First, a historical section outlines the development of water management in Vietnam through to its current dominance; to illustrate the analytical approach, it details the prevailing governmental rationalities and programs for each historical period. Secondly, I examine contemporary water management programs used as climate change adaptation in Hue Province. I consider current water management generally and present the specific program of Thao Long Dam, a salinity prevention dam whose complex history illustrates the importance of an extended temporal view of seemingly novel adaptation programs. The analysis draws on the robust literature on historical water management in Vietnam, review of documents and policies, and semi-structured interviews held with provincial, district, and commune officials and water management practitioners.
The article's findings suggest that programs of adaptation may well reflect domestic(ated) political rationalities rather than novel global agendas. This emerges from the more general conclusion that pre-existing environmental management dynamics strongly drive current climate change adaptation initiatives. These findings suggest that global climate change adaptation agendas may not easily influence program formulation unless they have been adopted at the scale of formulation. However, the analysis also demonstrates how rationalities can be rescaled, gaining traction within new scales and the programs enacted from them. Ultimately, the article underlines how those defining climate change adaptation interventions define others' lived realities. It joins the growing body of literature that highlights the inherently political nature of climate change adaptation.
The analytical approach
The approach of the article -to analyze how scaled, historical political rationalities inform adaptation programs -draws on both governance and political ecology literature, and intersects with water management and climate change literatures. Below, I highlight relevant literature in more detail and outline my approach.
Governance perspective
Political rationalities and programs are key aspects of Rose and Miller's (1992) account of 'problematics of government.' These authors build on the work of Michel Foucault (e.g. Foucault 1991) and approach governance through the exercise of power rather than state structures. In this approach, government is conceptualized as "the historically constituted matrix within which are articulated all those dreams, schemes, strategies and maneuvers of authorities that seek to shape the beliefs and conduct of others…by acting upon their will, their circumstances or their environment" (Rose and Miller 1992: 175). Within this, 'political rationalities' provide the conceptual and moral foundation for the activity of government, which I refer to as 'governance.' Political rationalities have linguistic, epistemological, and moral aspects, providing the language to describe the reality to be governed, conceptions of what is to be governed, and principles and goals of how and to what ends these things are to be governed. While political rationalities are not fixed or beyond contestation, some gain more support and prominence than others and are expressed tangibly through programs of government (Rose and Miller 1992). Rose and Miller describe this movement from abstract rationalities to concrete programs as one of translation, "both a movement from one space to another, and an expression of a particular concern in another modality" (Rose and Miller 1992: 181). In the setting of water management infrastructure, these could be exemplified by political rationalities of physical security from storms being translated into concrete programs of dyke construction.
The article's analysis of the relationship between political rationalities and programs of water management in Vietnam draws on the extensive literature on the country's historic water management (e.g. Biggs et al. 2009;Evers and Benedikter 2009;Fforde 2010;Fontenelle 2001). In addition, it builds on subnational data collection, with focus on the political rationalities evidenced in policies, official documents and interviews with sub-national bureaucrats. Sub-national bureaucrats, in this case at provincial, district and commune levels, play a central role in program formulation and implementation (Lipsky 1980), so their rationalities can be extremely important for the tangible programs implemented and outcomes of the same. These data sources are supplemented by analysis of Vietnamese climate change adaptation policies (e.g. Zink 2013).
Political ecology perspective
Political ecology as a discipline grew out of the need for an analytical approach combining environmental and political perspectives (Bryant 1992;Bryant and Bailey 1997). It draws on political economy and ecology, though contains diverse approaches within this broad framing. These approaches share a common understanding of environmental problems as the "manifestation of broader political and economic forces" (Bryant and Bailey 1997: 3), produced through the historic interplay of these forces across spatial scales. A political ecology approach to adaptation has been comprehensively elaborated by Marcus Taylor (2015), who illustrates the historical and diversely scaled political and economic forces shaping how climate change is experienced, understood and acted on. This work, as well as political ecology's theorizing on scale, offer important contributions to the article's analytical approach.
Since the 1990s, discussions of scale have increasingly challenged seemingly fixed labels of, for example, local, national and global. Scholars have illustrated how scales are socially and politically constructed as well as contested and reformed in what is called rescaling (see Brenner 2001: 591-592;Norman et al. 2012: 53-54). These critical discussions have extended through political ecology (Neumann 2009) and increasingly into water management literature (Mollinga 2008;Norman et al. 2012). This literature works, among other things, to clarify what the politics of scale means in water management (Lebel et al. 2005;Moss and Newig 2010;Norman et al. 2012); emancipate analyses of scale from the constructed fixity of river basins and watersheds (Feitelson and Fischhendler 2009;Molle 2009); and interrogate how the management of water resources contributes to rescaling (Budds and Hinojosa 2012;Johnson 2012;Norman and Bakker 2009). The article draws on this literature in its consideration of scalar influences in Vietnamese water management, specifically how differently-scaled management rationalities influence the types of programs instantiated. It applies political ecology's extensive theorizing on scale, specifically rescaling, to an adaptation context. Rescaling describes the "continuous reshuffling and reorganization of spatial scales" (Swyngedouw 2004: 33), a concept I use here to describe the movement of rationalities between institutional scales of governance (drawing, for example, on the rescaling of regulatory practices as described by Swyngedouw 2004: 37).
Within adaptation work, discussions of scale are growing (Adger et al. 2005). Adaptation has often been presented either as a large-scale phenomenon requiring global action and policy tools or as a local phenomenon predicated upon endogenous capacities and norms. The former takes its starting point in understandings of climate change itself as a shared global phenomenon, a culmination of global environmentalist discourses (Jasanoff 2004) and global climate change discourses more particularly (Miller 2004). The latter emphasizes the localized nature of changes to the climate, the role of local socio-economic dynamics in determining how climate changes are felt, and the centrality of local institutions and politics in determining adaptation outcomes (Agrawal 2010;Miller 2004). It is reflected in community-based adaptation (Ayers and Forsyth 2009) and discussions of 'autonomous adaptation', implicit in which is a focus on individuals and communities (Forsyth and Evans 2013). Generally, the two discourses have not intersected, and the role of other scales in adaptation initiatives has only begun to be explored more recently (Adger et al. 2005;Urwin and Jordan 2008). Overall, critical reflection on scale in adaptation remains limited. Existing literature tends to focus on adaptation at a particular scale (Cartwright et al. 2013;Jeffers 2011) rather than across scales (Adger et al. 2005). Attention to the latter is growing, particularly regarding cross-scalar governance of adaptation (see e.g. Amundsen et al. 2010;Juhola and Westerhoff 2011), yet these are often practice-oriented accounts. Critical consideration of how and to what extent rationalities move between scales -and implications for adaptation policy, programming and outcomes -remains lacking. The article aims to address this gap.
The joint approach
This article draws on the literature described above to facilitate a joint governance and political ecology approach. Specifically, integrating rescaling with the activity of government helps explain how political rationalities from different scales influence adaptation programming and how these are rescaled over time. This approach highlights how power, in the form of dominant political rationalities, is manifested in the process of adaptation programming. This is reflected in the politicized knowledge and discursive aspects of climate change adaptation. That is, in whose rationalities come to dominate or, through the lens of political ecology, the "manner in which power relations may be reflected in conflicting perceptions, discourses and knowledge claims about development and ecological processes" (Bryant 1998: 87). In formal adaptation initiatives, the application of particularly scaled language, epistemologies, and morals structures the lives of those targeted and colors adaptation efforts more broadly. This is not to attribute undue moral or epistemological fixity to rationalities emanating from certain scales, but to acknowledge 'regularities' (Rose and Miller 1992: 178). Regularities are evident, for instance, in normative global governance discourses within international development, which have often closely cleaved to neoliberal epistemologies and ideals (Rose 1999: 16). Such neoliberal regularities are also emerging within both academic (see Bassett and Fogelman 2013) and practitioner-oriented adaptation literature (see Taylor 2015: 83-84), with fundamental consequences for how adaptation is understood and what forms it takes.
In terms of scale, the article's approach allows for a consideration of whether concrete adaptation programs reflect globally or domestically scaled rationalities and the implications this has for adaptation. While local and global influences are not mutually exclusive in adaptation, the weighting of one over the other will necessarily impact the framing of climate change adaptation needs and solutions as well as outcomes and implications. This binary may seem a conceptual blunt object, ignoring the produced nature of scales and the diversity of rationalities within certain scales. However, the basic distinction underlines the point I aim to make, namely that the scale of the political rationalities framing adaptation programs matter. They entail moral, epistemological, and linguistic regularities with ramifications for both adaptation and governance more generally. This particularism challenges the 'here and now' of dominant climate change adaptation discourses, which divorces adaptation from "trajectories of socio-ecological change that have a longer time frame and whose causative forces stretch beyond the places in which they manifest themselves" (Taylor 2015: 64). The article therefore critiques the assumptions of prevalent understandings of climate change adaptation; that is, for instance, as a universally applicable framework of local vulnerability to an external shock, as laid out in authoritative IPCC reports (Field et al. 2014), or as a set of global solutions that should be cascaded down through fixed scales (Bulkeley 2005).
The article's joint approach also requires a historical perspective. This proceeds from the understanding that adaptation enters into and interacts with existing social, political and ecological relations, which are themselves the product of long, often contentious histories (see Taylor 2015: 122-142). In addition, the 'rescaling' of political rationalities entails a temporal as well as a scalar movement. This requires a historical approach, which is central to the governance and political ecology literatures I draw on. Rose and Miller describe government as a "historically constituted matrix" (1992: 175) and political ecologists recognize environmental change as precipitated by historical dynamics including social relations, trading practices, practices of subsistence or accumulation, and political marginalization and control (Bryant and Bailey 1997).
Water management historically
This section traces the scaled political rationalities of water management programs from the earliest Vietnamese civilization to the present hydraulic bureaucracy. The review proceeds through precolonial, colonial, and communist periods. The precolonial section illustrates the long history and political import of water management in Vietnam. The colonial outlines the implementation of colonial exploitation and modernist conceptions of development through water management, with far-reaching consequences for the built environment as well as social, economic and political relations. The communist details how global communist rationalities of high modernism were domesticated and how water management programs became a tool for state-building. Overall, the section illustrates both the power of political rationalities to shape the built environment through political programs and the rescaling of political rationalities over time.
The precolonial period
Precolonial water management unfolds in two distinct periods. The first is the early pre-colonial period, in which the Vietnamese civilization was located in the Red River Delta of what is now northern Vietnam. A distinct Vietnamese civilization has been identified by historians as emerging in the Red River Delta at around 2000 to 1400 BC (Cima 1987). At this time, rationalities of physical and food security translated into programs of dykes and irrigation. By the sixth century BC, "an important aspect of this culture […] was the tidal irrigation of rice fields through an elaborate system of canals and dikes. The fields were called Lac fields, and Lac, mentioned in Chinese annals, is the earliest recorded name for the Vietnamese people" (Cima 1987: 1). Wet rice cultivation necessitated some level of water management in the turbulent climate of northern Vietnam, where the annual rainy season brings powerful typhoons and huge volumes of water (Porter 1993: 1). Protective infrastructure was needed to prevent devastation of settlements, to enable wet rice agriculture, and to prevent the destruction and unrest that accompanied floods and resulting famines. Early water management thus coalesced around rationalities of securing physical and food security. It also formed a basis for political authority. The building and maintenance of water management infrastructure helped legitimize central political authorities and the structures of taxation such infrastructure required (Cima 1987). The water environment and management of the same were thus foundational aspects of life, political organization, and political rationalities in the Red River Delta.

The second period is that of southwards movement of the Vietnamese, when a rationality of territorial expansion was translated into programs of irrigation and canals. Around 1000 AD the Vietnamese people began an extended southward movement from the Red River Delta termed nam tien (Cima 1987). This movement brought with it the customs of wet rice production and associated irrigation schemes along the coastal plains of Vietnam (Hardy 2005: 4). Vietnamese settlers reached the Mekong Delta in the 17th century, which was at that time a sparsely-populated part of the Khmer empire (Biggs 2012). Here, water management again played a role in the extension of central authority as the Vietnamese emperors expanded their territorial control through strategic hydraulic works, particularly canals that aided in transport (such as the Bao Dinh canal) or supported settlement and territorial security (the Thoai Ha and Vinh Te canals) (Biggs 2012;Dieu 1999). The map of current-day Vietnam in Figure 1 below illustrates areas of interest in red -the Red River Delta in the north, the Mekong Delta in the south, and Hue Province located centrally along the coast. The red edging along the coastline illustrates the path of population movement during nam tien.
The significance of programs of water management and their governance for the Vietnamese state historically led to what Benedikter (2014) terms the 'hydraulic paradigm' as an organizing principle of society and state. In the precolonial period, this paradigm was driven by domestic political rationalities of security and territorial expansion translated into water management programs of dykes, irrigation, and canals.
The colonial period
The second half of the 1800s heralded the beginnings of colonial influence as French forces gained a foothold in the Mekong Delta. In this period, global colonial rationalities of exploitation and modernist development were translated into large-scale dredging programs. Exploitation and development first required control, which the French gained by expanding their military and transport capabilities in the Mekong Delta; they initially used established waterways and canals and later dredged new routes of transport and access (Biggs 2012). In 1879, French engineers began developing a plan for water management of the delta, ushering in ideals of environmental control through master planning (Brocheux and Hémery 2009). This reflected the modern conceptions of nature as an input to growth and progress (Uggla 2010) as well as the high modernist "aspiration to the administrative ordering of nature and society" (Scott 1998: 88). Between 1890 and 1910, over 165 Mm³ of soil was dredged as waterways were expanded, extended and constructed (Biggs et al. 2009); in comparison, the Panama Canal required dredging 210 Mm³ (Biggs 2012: 42-43). Dredges, "more than gunboats, locomotives, or machine guns…meant immediate ecological and social change" (Biggs 2012: 43). This modernist approach and the ramifications it would have on the water environment, modes of production, and administration entailed a new era in the relationship between nation and nature (Ehlert 2012).
The canals enabled both exploitation and purported development by providing transport and drastically expanding access to land which could be used for agriculture. They were supplemented by new modes of exploitation through changed land and labor relations, which allowed for double-cropping and dramatic increases in rice production (Biggs 2012;Brocheux and Hémery 2009: 121-122). In the 1930s, however, economic recession and a build-up of social grievances and ecological problems in the colony of Indochina led to increasing challenges to French rule. Colonial administrators, in turn, pointed directly to their achievements of hydraulic management and booms in rice production, population and cultivated land as proof of the development their presence had wrought (Biggs 2012: 87-89). Colonial dredging, while offering the vehicle for the expansion of the colonial state, also came to be used as a justification for its presence. Rationalities of high modernism continued to inform hydraulic planning and interventions in the southern Republic of Vietnam under American involvement (Biggs 2006). This was partly due to constraints of earlier hydraulic interventions that had left their imprint on the landscape, but was also driven by American adherence to the 'hydraulic mission' of high modernist management and the Confucian authoritarian-style rule of the government in the south (Biggs 2008;Reis 2012: 36).

These global colonial rationalities of exploitation and high modernist development led to programs that were much less compatible with local conditions than previous water management programs driven by domestic rationalities. This was partially due to the scale of such interventions under high modernist programs. While the Vietnamese had also built canals to aid in transport and territorial control, the French and later the American-backed administration extended this system significantly and, in doing so, disturbed the carefully balanced ecological, social, and economic conditions that had already been constructed (Biggs 2012: 41). It was also due to diverging perspectives inherent in local rationalities versus those of global high modernism. The latter viewed nature as something to be ordered by human administration, whereas local rationalities generally accepted the limitations of human interventions. This was evident in divergent perceptions of the silted 'dead points' that quickly built up in canals, sometimes rendering them impassable. Colonial engineers understood these as hindrances, but locals considered them meeting points and places to stop and rest on a journey. This reflected the more general approach of living with natural water flows instead of trying to manage them (Biggs 2012: 32-34).
The communist period
In the new Democratic Republic of Vietnam (DRV) established in the north of Vietnam in 1945, domestic rationalities of state-building informed and hinged on water management infrastructure. Declassified American intelligence reports following the development of the DRV in the 1950s give valuable insights into the role of irrigation at that time (Table 1). Rationalities of political control, social stability, food security, and economic development all coalesce in the water management of this period. They supported the broad goal of state-building through agricultural collectivization (Fontenelle 2001) and were in turn supported by the French and earlier Vietnamese hydraulic interventions on which they often were founded (Fforde 2010).
Table 1: Excerpt from the declassified American intelligence reports on irrigation in the DRV. July 19, 1955: "The rapid rehabilitation of transportation and irrigation facilities, which has been the principal …"

These same state-building rationalities extended into the post-war period with the establishment of the current Socialist Republic of Vietnam. Here, the new nation-state faced three main problems: "…how to develop rural areas in order to curb uncontrolled urbanization, how to achieve national food security for a growing number of people and how to develop and modernise rural areas to stabilise them politically" (Benedikter 2014b: 37). The solution to these foundational challenges? Irrigated agriculture. A 1977 World Bank report, written at the genesis of Bank involvement in Vietnam, states that "the clear priority of the Government's development program is agricultural development…with investment initially being concentrated on irrigation infrastructure" (World Bank 1977: 53). In Vietnam, irrigation is generally taken broadly to include all of the components supporting irrigation -not only irrigation channels, but also the pumping stations, drainage functions, dams and reservoirs, and water retention pools that make up the entire system. Irrigated agriculture thus represented an entire set of water management programs.
The new nation continued to incorporate irrigation infrastructure as a main pillar of its development. A 'rice everywhere' campaign promoted the establishment of irrigation across the country, addressing multiple challenges with one initiative (Hoanh et al. 2014). First, irrigation contributed to social and political stabilization, being "focused primarily to occupy the labour force…as the government feared that a free labour force might result in or contribute to the country's political instability" (Hoanh et al. 2014: 68). Public labor campaigns were organized by new irrigation authorities at provincial and district levels to harness available manpower for work on canals, embankments and irrigation channels (Benedikter 2014b). Second, the spread of irrigation was also a way of institutionalizing the new communist systems (Hoanh et al. 2014). Labor conducted on these irrigation schemes was reimbursed with stamps under the subsidy system, which could be redeemed for food and other goods. In practice, however, there were often shortages and a quite limited selection of goods available. In addition, subsequent management of irrigation systems by cooperatives or districts shifted decision-making away from households in line with collectivization policies (Benedikter 2014b). Finally, echoing some of the earliest priorities of the state, irrigation works were also central in securing food security for the population. Through these multiple rationalities and pathways, "water was capitalised by the new regime in order to create a new social order based on socialist institutions led by the one-party state" (Benedikter 2014b: 41). Thus, several state-building goals -political stabilization and institutionalization as well as food security -were served through irrigation schemes at a time when resources were very limited.
While domestic rationalities of state-building initially drove these irrigation programs, global rationalities of high modernist development fueled their proliferation. In this more recent iteration, high modernist rationalities were ushered in through the revolutionary communist government, which espoused "the rational design of social order" including "an increasing control over nature" (Scott 1998: 89). This continued adherence to high modernist rationalities -now linking to global communist movements -contributed in the water sector to a 'hydraulic bureaucracy' emerging in Vietnam. Literature on bureaucracies instantiated through water management -hydraulic bureaucracies -boomed after Wittfogel's provocative (1957) thesis that administering water control could lead to authoritarian styles of government. This line of argument, while heavily critiqued, continues to inspire scholarship on the relationship between political authorities and water management (Obertreis et al. 2016). It has continued to be influential in literature on Vietnam due to the tight linkages between water management and its experts, political authority, and the lived water environment (Benedikter 2014a, 2014b; Biggs 2008, 2012; Biggs et al. 2009; Evers and Benedikter 2009; Fforde 2010; Reis 2012; Reis and Mollinga 2015).
The culmination of this long history of translation of political rationalities into water management programs was the establishment of a hydraulic bureaucracy in Vietnam. The hydraulic bureaucracy is characterized by well-established, influential hydraulic institutions and extensive hydraulic works and policies. Hydraulic institutions in Vietnam manage water levels and flows, maintain and operate hydraulic infrastructure, and design and construct new hydraulic interventions. They are an integral part of daily life -from the water on tap, to the seasonal flooding and draining of rice paddy, to protection from storms. While adaptive approaches are gaining renewed traction in Vietnam, high modernist approaches grounded in a faith in technical solutions and mastery of nature remain paramount (Biggs et al. 2009;Mollinga 2008). Huge government expenditures, representing multiple percentage points of the government's own budget, go into maintaining and extending water management infrastructure. The vast majority of all water use in Vietnam is accounted for by irrigation (Evers and Benedikter 2009). Water management secures entire cities, regions, crops and exports. It has become foundational for Vietnamese society as it is currently ordered. This, however, again illustrates the difference in rationalities emerging from global rather than domestic scales. Intensive programs of water management driven by high modernist rationalities are often at odds with natural waterscapes and must be constantly maintained (Biggs 2012: 37-38), creating an infrastructural juggernaut upon which the current Vietnamese society and economy is dependent.
Political rationalities of water management in Vietnam have changed over time, prompting shifts in concrete hydraulic programs and the lived environment. Such programs have so changed local conditions -environmental, institutional, and economic -that in some areas it would be extremely difficult and costly to shift away from them. This illustrates the extent to which water management programs in Vietnam have organized the lived environment to conform to dominant political rationalities, demonstrating the inherent power of rationalities and those who wield them. Political rationalities have also been re-scaled in the domestication of global rationalities. In Vietnam, this is evident in the domestic adoption and institutionalization of global high modernist rationalities over the course of decades (Benedikter 2014b;Biggs 2012). Together, these two points indicate that political rationalities from various scales powerfully shape dominant understandings of the environment and how authorities act upon it (and the societies that inhabit it) through concrete programs. This history -and its physical legacy -sets the stage for current climate change adaptation initiatives.
Scaled rationalities in climate change programs
The rationalities influencing historical water management in Vietnam shape present programs of water management as adaptation. This section outlines how current water management programs identified as adaptation more clearly reflect these historical, domestic(ated) rationalities and their outcomes, rather than novel global climate change adaptation agendas.
The water management and climate change context
The water management interventions of the past and the rationalities that drove them have transformed Hue Province through myriad hydraulic programs. The development of water management in Hue Province has occurred against the backdrop of a complex water environment influential for life and livelihoods in the province. The province's main river, the Huong River, has a basin that extends over half of the province's area and is characterized by a complex hydrology, where the western mountains swiftly descend to lowlands and the eastern shore (JICA 2017). This water environment is managed through extensive infrastructure systems: In the field of irrigation, a total of over 550 constructions have been built in the province, including two saltwater intrusion prevention dams (Thao Long and Cua Lac), two medium-sized reservoirs (Hoa My and Truoi), 251 small-sized reservoirs, hundreds of small dams at mountainous districts, and 295 small and medium-sized electrical pumps in the plains. Moreover, four large reservoirs (Ta …) have also been built. Dams, canals, pumps, and reservoirs are interposed across the landscape and, in some areas, unite to form a closed water system tenuously linked to its surroundings by systems of pumps and inevitable precipitation (Huong Phong Commune 2015). Current programs focus on optimizing existing systems, completing major projects seen as cornerstones of the province's water management (DARD 2015b), securing the availability of adequate fresh water resources, preventing flooding and, to a lesser extent, generating electricity (Hue People's Committee 2007;JICA 2017;Thao Long Management Company 2015).
These systems of hydraulic control support the fulfilment of high modernist development rationalities. In the words of the Province's Irrigation Master Plan, they support Hue's "potential to develop the general economy and the opportunity for economic growth in the period 2005-2010 and from 2011-2020" (Hue People's Committee 2007: Part III). They allow for agricultural intensification, especially on land otherwise unsuited to intensive agriculture, supporting the usually poorer population living in these areas. They also improve physical and economic security from flooding and extreme events and help provide a more constant, reliable flow of fresh water for residents for daily use as well as for industry (CCAP 2012;Hue People's Committee 2007). Such infrastructural programs support an above average provincial growth rate and a growth rate in the agricultural sector of over 4 percent 'despite difficult conditions' (Hue People's Committee 2007).
Beyond embodying ideals of managing the environment as an input to development, these water management systems create the context within which current adaptation operates. Climate change is significantly affecting daily life in the province in a variety of ways. Some of the points highlighted in the Province's Climate Change Action Plan (CCAP) include increasing frequency, intensity, fluctuations, and extremes in 'dangerous weather phenomena' as well as swings in temperature, water inundation and shortages, and salinity intrusion (2012). This range of impacts is seen to affect everything from human health and safety to industrial and agricultural production. Importantly, these issues are generally problematized as issues of water management rather than unsuitable development trajectories, so critical reflection on the premises of high modernist development is sidelined in favor of finding the proper technical solutions (CCAP 2012). This reflects a broad tendency towards depoliticization in dominant climate change adaptation approaches (Lindegaard 2018;Nightingale et al. 2019;Swyngedouw 2010: 64-65;Taylor 2015: 51-52).
Among the climate change impacts in the province are increasingly drastic and frequent extremes in temperatures and precipitation, which contribute to flooding on the one hand and water shortages and droughts on the other. The latter have become a more or less annual occurrence with major repercussions for socioeconomic development (Hue People's Committee 2007). From 1993-1994, droughts damaged almost 13,000 ha of rice (out of roughly 48,500 ha (GSO 2017)), leading to an estimated loss of 20,000 tons of paddy (CCAP 2012); and during the dry season of 2002, salinity intrusion into the Province's Huong River System extended up to 30 km inland. Due to such severe saltwater intrusion, the tap water of the entire city of Hue, located roughly 10 km upriver of the coast with a population of over 300,000, sometimes turned salty in the dry season (DARD 2015b). This drier weather was also problematic for local rice production, especially the second rice crop, which is typically planted long after the deluge of the rainy season had drained out into the rivers, brackish lagoon, and ocean (Thuan Hoa 2014).
To deal with climate change impacts, sub-national officials at provincial, district, and commune levels have sought to improve irrigation infrastructure. Provincial planning documents clearly illustrate the linkage between climate change adaptation and irrigation infrastructure. An advocacy document on raising climate change awareness produced by the provincial Department of Natural Resources and Environment (DONRE) discusses "using water resources effectively in the irrigation system taking into consideration the impact of climate change" and "programs to implement irrigation management methods on rice fields" amidst a general focus on larger-scale projects (DONRE 2013a). Furthermore, the CCAP (2012) mentions irrigation over thirty times, and the provincial irrigation master plan states: "due to the extreme changes of weather, heavy rains, prolonged drought, floods caused by typhoons, flash floods, landslides, etc., the irrigation and water supply capacity, flood control, and a program of completing the existing irrigation constructions should be reviewed" (Hue People's Committee 2007). Related irrigation programs include major dams currently under construction in the mountainous areas of the province, which provide reservoirs, and concreting irrigation channels to optimize use of available fresh water.
Irrigation programs function as a critical adaptation tool. They are communicated as being a way to mediate the impact of climate change on provincial development, reflecting the continued influence of the rationality of modernist development. The CCAP specifically includes a section on irrigation under 'Socioeconomic development planning in Thua Thien Hue province in 2015 and orientations towards 2020' (CCAP 2012). This focus is also evident in the irrigation master plan, which notes: "Due to the natural conditions with risks caused for the economy in the past few years and the impact of irrigation schemes in the province area, in order to grow and develop the economy in a sustainable way, Thua Thien Hue needs to develop irrigation, especially the construction of works for integrated use" (Hue People's Committee 2007: Part III). In a setting of intensifying climatic changes, irrigation programs are thus presented as a way in which political authorities retain control of the environment and continue on the pathway of modernist development.
Thao Long Dam
The specific case of Thao Long Dam is illustrative of these political impetuses as well as of the complex historical dynamics and rationalities of current adaptation. Thao Long Dam is a roughly 600 m wide barrage dam (composed of a series of gates) by the mouth of the Huong River. It sits as a gatekeeper, strategically releasing or retaining the fresh water that drains from the Huong River System. Particularly, it prevents the incursion of saline water inland during the dry season, protecting freshwater resources for irrigation, industry, and domestic consumption. It is presented as having "thoroughly controlled" the province's "salinization situation" (CCAP 2008: 82). The importance of this is evident in the province's CCAP, which notes that "[o]ne of the greatest and the most specific effects of climate change on water resources in Thua Thien Hue is the expanding range of the influence of salinity intrusion" (CCAP 2012: 82). Thao Long follows on the heels of earlier salinity prevention interventions (Hue People's Committee 2007; Thao Long Management Company 2015). The previous construction at the mouth of the Huong River, built before 1970 and repaired after reunification, was rudimentary, composed of a cement and stone foundation and an array of wood attached to cement poles to form the dam (Thao Long Management Company 2015; Hue People's Committee 2007). Consequently, the dam could not be opened to allow floodwaters to escape, and the wooden barrage was eventually penetrated by saline water, becoming, in the words of one water management director, "completely useless" (Irrigation Management Company No. 5 2015). Thao Long Dam, completed in 2007, was built in its wake.
The case of Thao Long Dam seems clear-cut. It is presented in official documents as a key infrastructural solution to managing the province's climate change challenges. Yet what emerged in interviews with government officials was something else entirely. The dam was first conceived of in the 1980s as part of a vast provincial water management scheme, typifying the high modernist planning approaches of both colonial and communist governments. In this scheme major upland dams would be built to create reservoirs to store additional freshwater that could then be released slowly during the dry season. Thao Long would help retain this released water and prevent incursion of brackish water, securing freshwater for the secondary and tertiary irrigation canals -and a second rice crop -as well as for industry and domestic consumption. It could also be opened to allow for drainage of upstream floodwaters, which is important in a situation of increasingly extreme and erratic rainfall events. This system would provide the hydraulic engineering necessary to ensure continued exploitation of water resources to support the planned development of the province.
The development of this master scheme began at a time when Vietnam was undergoing profound transition. Critical shortages of food and the means of production had contributed to crisis, and the reform policies of the 1980s were praised as having "brought the country out of crisis and bankruptcy and saved its people from starvation" (Nguyen 2006: 332). It was in this setting that a far-reaching hydrological intervention was framed, yet funds were lacking. The province requested financial support from the national government, which supported the concept. This was unsurprising in light of the national government's stated goals of economic growth and considering the development challenges posed by salinity intrusion. The dam project also incorporated a bridge, which has helped link peripheral coastal areas to the rest of the province, another important contribution to rural development. However, the national government, in the face of seemingly overwhelming resource demands across the country, only provided financial support after the increasing agricultural devastation resulting from droughts in the mid-1990s.
The initial steps for the project were undertaken from the late 1990s under the purview of the Ministry of Agriculture and Rural Development (MARD), with the Ministry of Construction also involved. These steps included the pre-feasibility study, project appraisal, feasibility study, design, and design appraisal, with each of these six phases involving a different consultancy or design company (some if not all of which were likely government companies) (Irrigation Management Company No. 5 2015). An environmental impact assessment was not required at the time and was not conducted, perhaps reflecting high modernist approaches to the subjection of nature (Thao Long Management Company 2014). Project construction began in 2001, ending in 2007. The dam went into operation under the responsibility of the Department of Agriculture and Rural Development (DARD), the line ministry of MARD at the provincial level, and under the direct management of Central Vietnam's semi-public Irrigation Management Company No. 5 (ibid). A department within the company was established to oversee the everyday operation of the dam, consisting of management, technical staff, and administrative personnel. This example of design, construction, and management illustrates Vietnam's extensive hydraulic bureaucracy.
The dam has indeed helped with issues of salinity intrusion. During the dry season, the dam is able to manage water flow quite precisely by opening and closing one or more of its 15 gates as needed; during the rainy season, some or all of the gates can be opened to allow runoff to drain. This has aided industry, tourism, and agriculture in the province, as well as daily life (CCAP 2012). Locally, the dam has also transformed almost 300 hectares of marginal agricultural areas close to the sea, with 274 ha converted from single to double rice cropping and 8 ha of previously unused land now being farmed (Huong Phong Agricultural Cooperative 2015; Huong Phong Commune 2014).
Tracing the framing of this ostensible adaptation program captures the complexity of the process of translation from rationality to program. Such translation occurs not in a vacuum, but in a pre-existing policy and physical environment, which influences the formulation and subsequent portrayal of new programs. Thao Long Dam was framed in the transition from state-building to global high modernist rationalities. The initial conception of the program in the 1980s occurred against a backdrop of volatile national policy and lack of local development. In the 1990s, when the program framing coalesced, conditions were still severe. In 1993 -during the period of extreme drought in Hue -over half of Vietnam's population lived on less than US$1.90 a day (compared to just 3 percent today) (World Bank 2017). Issues of basic development and subsistence were at the forefront of official concerns as political authorities sought to implement sweeping reforms and re-forge the socialist development path. At this time, water management programs in the province were generally small-scale. Interviews indicate that it was roughly at this time that the digging of irrigation infrastructure began to be mechanized (DARD 2015b). Yet the dam reflected a forward-looking vision of high modernist control of the province's water resources through a set of dams envisioned to one day form a controlled water environment in the province. By the time the dam was under construction in the 2000s, rationalities of economic development through high modernist management had come to the fore; concerns with subsistence had largely faded, and surpluses, industry, exports, and growth targets had gained ground. These provided a rubric upon which the government's high modernist development could be judged, and government legitimacy became increasingly linked to meeting economic targets (Thayer 2010). This is evident in the framing of Thao Long in current interviews with officials, who present the dam as a development intervention. Officials specifically highlight its ability to control salinity intrusion and retain fresh water, supporting economic development across sectors (DARD 2013, 2014, 2015a; Huong Phong Commune 2014). In interviews, officials rarely linked Thao Long to rationalities of climate change adaptation (ibid). This likely reflects the fact that adaptation as a domestic political rationality emerged later in the dam's lifetime. In the 2000s, when Thao Long was built, climate change as a political agenda was still gaining momentum. The formulation of domestic policies on climate change commenced with the initial 2003 National Communication under the United Nations Framework Convention on Climate Change (GoV 2003), and it was only after the completion of the dam that more extensive national climate change policies were revealed (Zink 2013: 142-157) -one in 2008, one in 2011, and two in 2012 (MONRE 2012). This process was largely donor-driven, yet reflected existing political interests. Donor-driven climate change policies and activities were seen as an opportunity to address what Vietnamese policy makers saw as more pressing needs (Zink 2013: 142). One of these pressing needs may have been the undermining of economic development by salinity intrusion. The third sentence of the 2011 National Climate Change Strategy notes that: "Higher temperature and sea level rising will cause inundation and water salinity which can bring about negative effects on agriculture and high risks to industry and socioeconomic systems in the future" (GoV 2011: 1).
This rebranding of existing interests as climate change adaptation is also evident at provincial level and within the specific program of Thao Long. At the provincial level, a Climate Change Action Plan was produced in 2012 in accordance with national (donor-driven) legislation, but mainstreaming across departments and sectors has been uneven (CCAP 2012; DONRE 2013b). Many of the departments with responsibility for sectors affected by climate change have quietly been dealing with it for years, and the 'actions' of the CCAP evidence rebranding of existing project proposals. With Thao Long, while sub-national officials highlight aspects of the dam that offer crucial adaptation functions, these functions are grounded in established rationalities of economic development. The representation of Thao Long as a central adaptation intervention for the province seems first to emerge in the CCAP of 2012, which can be traced back to a push from donors for climate change policies (Zink 2013: 142-157).
The rationalities evident across roughly thirty years from the conception of Thao Long Dam to its present portrayal have a long history. They draw on rationalities of security in the Red River Delta, which were translated into water management programs, and the territorial expansion of the Vietnamese, which spread these rationalities and programs to what is now Hue Province. They then extend through the high modernist visions of colonialism and communism and enter the near past of Hue Province at a time of political crisis and palpable need. Ultimately, a new rationality of climate change adaptation has emerged, yet only after program completion. Though Thao Long Dam is indeed a central adaptation initiative in the province in that it helps address climate change impacts, it was not formulated in reference to rationalities of adaptation. Instead, it can be seen as a provincial mascot of national political strategies, which are in turn linked to global climate change agendas through donors. The fact that it can be cast as adaptation illustrates how many adaptation initiatives overlap with broader development and environmental management rationalities and the reality that many government officials (as well as many others) have been conducting 'adaptation' for years, just not under that label. Excavating this history of rationalities is instructive and illustrates how governmental programs can grow out of a diversity of rationalities. This is especially true of large-scale interventions with a lengthier lifetime. In addition, the case illustrates how the label of climate change adaptation can be misleading. When viewing concrete programs as translations of governmental rationalities, the case of Thao Long suggests that they should be analyzed as expressions of shifting rationalities, intertwined across scales.
Adaptation: rationalities, scales, and implications
The case of Thao Long Dam provides broader insights into climate change adaptation. First, it illustrates the weaknesses of the dominant rationalities driving global adaptation efforts. Their universalizing tendencies ignore that adaptation enters into diverse contexts shaped by historical rationalities and resulting programs (Taylor 2015: 17-18); this entails particular institutional structures and processes, approaches to environmental management, and socio-economic configurations, among other things. Approaching adaptation as a separate, decontextualized area of intervention, for instance as seemingly universally applicable technical solutions, can limit its relevance in light of pre-existing rationalities and programs.
Second, domestic(ated) rationalities dominate adaptation. This is especially evident in the context of strong political institutions in Vietnam, but suggests caution in gauging the reach of global rationalities and programs more generally. Rationalities behind Thao Long are largely domestic or domesticated ones of global origins. Those global rationalities that emerge in project framing have been domesticated at national and subnational scales over decades, through a process of rescaling political rationalities. The national planning organs, engineering programs, technical organizations, local management offices, and particular infrastructural interventions this entails illustrate how a global high modernist mindset has truly been taken up throughout the political and organizational landscape of Vietnam. It has melded with existing rationalities in what can be seen as a dual process of 'glocalization' (Swyngedouw 2004). Institutions, processes, and physical constructions both result from and reinforce this trajectory and continue to influence the room for formulation of new programs. In practice, this suggests that global climate adaptation rationalities that are not emic to or influential at the site of adaptation formulation, may have difficulty gaining traction in program formulation. However in Vietnam, this process of domestication is beginning to emerge in climate change adaptation. The Province's CCAP is a direct result of national policy requirements. National climate change policy in Vietnam, in turn, has been extremely donor driven. Donors have supported the drafting process and have provided funding contingent on the passing of national climate change legislation (Zink 2013: 142-157). Yet, policies reflect not only donor wish-lists, but also domestic interests, echoing similar findings from other countries showing how global adaptation policy processes are used as a vessel for domestic politics (Funder et al. 2018;Smucker et al. 2015).
A third important aspect that emerges from the case is the power dynamic evident in determining which rationalities come to be translated into programs. Programs "express and re-constitute physical, social, cultural, economic or political power relations" (Swyngedouw 2007: 10). In terms of scales, the comparative weighting of local versus global (or national, meso, community, household, etc.) rationalities, or the rescaling of these, will necessarily impact the framing of adaptation needs and solutions as well as outcomes and implications (Bulkeley 2005). In the case of Thao Long Dam, rationalities of climate change adaptation were added long after program formulation and implementation, reflecting a broader trend in Vietnam. An analysis comparing programs identified as climate change-related up to 2006 with those from 2007-2008 shows that in the first period, only 35% of programs were conceptualized specifically in terms of climate change, while in the subsequent period, 88% were (Zink 2013: 129). Thao Long Dam falls squarely in this first group. Officials have invoked adaptation only after its construction, seemingly in reference to the extension of global climate change policy interventions into the domestic political scene. Program formulation drew instead on domesticated rationalities of economic development through high modernist environmental control. Implementation then instantiated these rationalities through the built environment. The future of climate change adaptation programs in Vietnam and beyond, and the understanding and experience of climate change for those affected, are thus dependent on which rationalities inform adaptation programs. This question of 'which rationalities?' points to the power of those wielding rationalities in program formulation and suggests the need for further exploration of trusteeship (Li 2007: 4-5) and expertise (Rose and Miller 1992: 187-189) in climate change adaptation.
Finally, climate change adaptation programs have ramifications not only for adaptation, but also for governance more generally. Indeed, "state policies and interventions on climate change adaptation in development can be understood as a form of governmentality through which state agencies seek to assert control over rural citizens and resources" (Funder et al. 2017: 2). In the case study area, climate change adaptation initiatives lean heavily on water management infrastructure, which structures daily life and perpetuates certain forms of environmental control (Lindegaard 2018). They reinforce current pathways of development and associated social and political dynamics and disparities. They feed into authorities' efforts to "shape the beliefs and conduct of others…by acting upon their will, their circumstances or their environment" (Rose and Miller 1992: 175). The approach taken here situates climate change adaptation programs within broader efforts to order societies and environments. It forefronts the power dynamics of adaptation, which often fall victim to depoliticizing discourse and practices (Ojha et al. 2016;Swyngedouw 2010;Taylor 2015: 64-65).
Conclusions
The article formulates an analytical framework that joins political ecology with a 'problematics of government' approach (Rose and Miller 1992), allowing for a novel understanding of the interplay between scales, governance and program formulation in climate change adaptation. In this article, it entails a scaled, historical approach to understanding how diverging political rationalities shape climate change adaptation programs.
Program formulation is more heavily influenced by historical, domestic(ated) rationalities than by novel global agendas. This rebuffs understandings of climate change adaptation as a universal enterprise (Field et al. 2014). However, it also documents how political rationalities can be rescaled, gaining traction within new scales and thus factoring into subsequent program formulation. This is a contribution to the existing literature on the politics of scale and rescaling, offering a new approach through which to apply scale to studies of governance.
The article also probes the power dynamics evident in programming. Water management and climate change adaptation programs reproduce the rationalities they instantiate. Those wielding rationalities in program formulation have power, and further exploration is needed of trusteeship and expertise in climate change adaptation in particular. This has implications beyond adaptation or water management. Such programs further the premises of the systems of governance that engender them, thereby extending existing power relations and discrepancies. This article thus joins emerging literature that highlights the inherently political nature of climate change adaptation (Eguavoen et al. 2013;Eriksen et al. 2015;Funder et al. 2018).
For policy and practice, the findings suggest a critical approach. Analyses of national adaptation policies have consistently indicated the major influence of international organizations and donors in their formulation, for instance in Zambia (Funder et al. 2018), Nepal (Ojha et al. 2016) and Vietnam (Zink 2013: 142-157). This poses a risk of the imposition of universal rationalities of adaptation that are unsuitable for local realities. In Zambia and Vietnam, national elites and politicians have to some extent exploited donor agendas for their own ends, incorporating their own political rationalities in adaptation policy. However, the role of localized rationalities remains limited. This is out of sync with the significance of situated, historical factors in both climate change impacts and adaptation programs. It indicates a mismatch between policy and practice that may well undermine efforts to respond to climate change.
"year": 2020,
"sha1": "67a7742b16675a8882806f78622cc30b02afe1ae",
"oa_license": "CCBY",
"oa_url": "https://journals.uair.arizona.edu/index.php/JPE/article/download/22049/22381",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "803cf7af15b9fa986bec11a54c492e563f93a998",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Riboflavin Targets the Cellular Metabolic and Ribosomal Pathways of Candida albicans In Vitro and Exhibits Efficacy against Oropharyngeal Candidiasis
ABSTRACT Oropharyngeal candidiasis (OPC), which has a high incidence in immunocompromised and denture stomatitis patients, is commonly caused by Candida albicans infection and in some cases develops into disseminated candidiasis throughout the throat and esophagus, resulting in high mortality. New drugs are needed to combat OPC because of the limited treatment options currently available and increasing resistance to existing drugs. Here, we confirmed that riboflavin (RF), the precursor of the cofactors flavin mononucleotide and flavin adenine dinucleotide, has broad-spectrum anti-Candida activity. The formation of C. albicans hyphae and biofilm was inhibited by RF. Mechanistically, RF disrupted membrane and cell wall integrity and promoted the accumulation of reactive oxygen species and pyruvate. Furthermore, RF targeted multiple essential pathways via functional disruption of thiamine and RF metabolic pathways, central carbon metabolism, and ribosome metabolism. Similar to the results in vitro, the inhibitory effect of RF on C. albicans hyphae was confirmed in a mouse model of OPC. Moreover, after 5 consecutive days of intraperitoneal injection, RF exhibited therapeutic efficacy, as demonstrated by phenotype investigation, the fungal burden, and histopathological analysis. These findings revealed that RF exerts a multifaceted anti-Candida effect and has potential benefits in the treatment of OPC. IMPORTANCE Candida species are common pathogens in fungal infections, causing mucosal infection and invasive infection in immunodeficient patients. Given the limited classes of drugs and resistance to these drugs, new antifungal agents need to be developed. Drug repurposing is a potential method for antifungal drug development. This study demonstrated that riboflavin (RF) exhibited broad-spectrum anti-Candida activity. RF affected multiple targets involving the membrane and cell wall integrity, the accumulation of reactive oxygen species and pyruvate, and the altered metabolic pathways in C. albicans. Moreover, RF exhibited efficacy in the treatment of C. albicans in an oropharyngeal candidiasis mouse model. Taken together, the antifungal activity and the promising clinical application of RF were highlighted.
RESULTS
Broad-spectrum activity of RF inhibits Candida growth in vitro. The MICs of RF for four standard strains and five clinical isolates were determined. RF exhibited broad-spectrum activity against Candida species, with an MIC of 0.4 mg/mL. As the dose of RF increased, the anti-Candida effect became more pronounced in a spot assay (Fig. 1A). A growth curve assay further characterized the fungicidal effects of RF. FCZ (fluconazole) was used as a positive-control drug. Compared with the control group, 1× MIC of RF inhibited more than 90% of C. albicans, Candida krusei, and Candida parapsilosis growth between 2 and 24 h (Fig. 1B), and 1× MIC of RF was more effective than the FCZ group (64 mg/mL).
RF inhibits Candida hypha and biofilm formation in vitro. The formation of Candida hyphae plays an important role in the invasion and damage of epithelial and endothelial cells, as well as in causing bloodstream infections (23,24). The effect of RF on C. albicans hyphal formation under a hypha-induced medium was elucidated. Hypha cells and both tubular and multicellular forms of C. albicans were inhibited by RF in liquid medium (Fig. 1C). The antihypha capacity of RF was further confirmed by the transition from crenulated to smooth colonies on solid medium (Fig. 1D).
Filamentous development is critical for biofilm formation, and biofilm formation on medical devices is the main cause of nosocomial fungal infections (25,26). The effect of RF on biofilm formation was further evaluated. A crystal violet (CV) assay and a 2,3-bis-(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide (XTT; Macklin, Shanghai, China) assay were used to analyze the effect of RF on the formation, development, and maturity of biofilms. As shown in Fig. 1E and F, 1× MIC of RF seriously affected early biofilm formation but had only a minor effect on destroying the mature biofilm. To better define the effect of RF on biofilm formation, confocal laser scanning microscopy (CLSM; Leica, Beijing, China) investigation was performed and revealed that C. albicans cells were compact and connected into sheets in the absence of RF treatment. Conversely, cells treated with RF showed low density and could not connect into sheets (Fig. 1G). Subsequently, the transcription levels of genes related to hyphal and biofilm formation were evaluated by reverse transcription-quantitative PCR (RT-qPCR). Hyphal-formation-specific genes (including ALS3, HGT2, and HWP1) and a biofilm-development-specific gene (BCR1) were analyzed. As expected, all genes were downregulated after RF treatment (Fig. 1H). Moreover, to further confirm the anti-hyphal and anti-biofilm activity against other Candida species, a similar effect was observed on C. parapsilosis ATCC 22019 hyphal and biofilm formation after RF treatment (see Fig. S1 in the supplemental material).
RF treatment damages Candida cell wall integrity. The fungal cell wall is involved in cell morphological maintenance and polarized growth (27). Candida can survive by altering its cell wall composition and structure in changed environments (28). As shown in Fig. 2A, calcofluor white (CFW; Sigma, Shanghai, China) was homogeneously distributed in yeast cells in the absence of RF treatment. However, more CFW was aggregated and distributed in the outer layer of C. albicans cells, especially at the budding end following RF treatment (red arrows). Even the elliptical structure of some yeast cells was disrupted (yellow arrows). Furthermore, investigation of CFW fluorescence indicated that the chitin content was increased by RF (Fig. 2B), and the expression of CDA2 (encoding chitin deacetylase, which catalyzes chitin to chitosan [29,30]) was obviously downregulated, implying that the increase in chitin occurred as a result of the blocked consumption pathway (Fig. 2C). In addition, the aniline blue staining assay showed that total glucan levels were decreased following RF treatment (Fig. 2D). Investigation of the susceptibility of C. albicans cells to RF under CFW stress conditions also showed that pretreatment with RF inhibited cell growth (Fig. 2E, red rectangle). These data suggest that RF remodels the C. albicans cell wall structure.
RF treatment alters Candida cell membrane permeability. The transport of RF has been reported to be associated with the cell membrane in Bacillus subtilis (25). To further explore the effect of RF on the structure of Candida, its cell membrane integrity was evaluated. Propidium iodide (PI) enters cells through a damaged cell membrane and binds to nucleic acids, producing red fluorescence. As shown in Fig. 3A, the number of red cells increased after RF treatment, while there were no red cells in the control group. Similarly, a flow cytometry assay further confirmed that 36.96% and 58.00% of C. albicans cells were stained fluorescently following RF treatment with 1× MIC and 2× MIC, respectively (Fig. 3B and C). It is worth noting that cells with damaged cell membranes cannot survive after being spread onto yeast extract-peptone-dextrose (YPD) agar (Fig. 1B). Additionally, there is a similar effect on the C. parapsilosis ATCC 22019 cell membrane (Fig. S2). Ergosterol, the main component of the Candida cell membrane, has become a focus of the development of antifungal drugs (31). In this study, the genes involved in the ergosterol synthesis pathway (including ERG2 and ERG11) were analyzed by RT-qPCR. As shown in Fig. 3D, the expression of ERG2 and ERG11 was downregulated after RF treatment. However, compared with the control group, the content of ergosterol in C. albicans showed no statistically significant difference after treatment with 1× MIC and 2× MIC of RF (Fig. 3E). In short, RF enhanced cell membrane permeability without affecting the ergosterol content.
RF treatment induces ROS accumulation. ROS are superoxides produced in mitochondria, and excessive levels of intracellular ROS affect the integrity of cell walls and cell membranes (32). Thus, ROS production was measured. As shown in Fig. 4A, significantly increased fluorescence was observed after treatment with RF. Flow cytometry further confirmed that 65.45% and 77.23% of cells were ROS positive after RF treatment with 1× MIC and 2× MIC, respectively, while only 10.95% of cells were ROS positive in the control group (Fig. 4B). These results suggested that RF markedly promotes the excessive production of ROS (Fig. 4C).
RF induces coenzyme metabolic disorder. The accumulation of ROS induces protein, lipid, and nucleic acid damage, resulting in C. albicans metabolic disorder and cell death (33). To analyze the mechanism of action of RF in greater depth, RNA-sequencing (RNA-Seq) was performed. The results showed that 563 genes were upregulated and 469 genes were downregulated. The downregulated differentially expressed genes were further analyzed and were found to be enriched in energy production and conversion, amino acid transport and metabolism, and carbohydrate transport and metabolism (Fig. 5A). Unexpectedly, genes involved in coenzyme transport and metabolism were most significantly altered, especially those in the coenzyme thiamine and RF metabolism pathways. Therefore, the expression of key genes in the thiamine (THI4 and THI13, Fig. 5B) and RF (RIB4, and RIB5, Fig. 5C) metabolic pathways was subsequently analyzed. The results showed that exogenous RF significantly decreased the expression of RIB5, THI4, and THI13, while the expression of RIB4 was increased by RF treatment (Fig. 5D). RF-treated C. albicans was spotted onto YPD medium containing thiamine pyrophosphate (TPP) and RF, which are the metabolites of the coenzyme metabolic pathway. As shown in Fig. 5E, the growth of RF-treated C. albicans was inhibited following the addition of TPP or RF stress (red rectangle). These data suggested that RF induces thiamine and RF metabolic disorder in C. albicans.
RF affects central carbohydrate metabolism. Coenzymes FAD and TPP, as members of the pyruvate oxidative decarboxylase complex, are involved in the enzymatic conversion of pyruvate to acetyl coenzyme A (34, 35) (Fig. 6A and B). It has been confirmed that pyruvate accumulation is the first line of fungal cell defense against ROS caused by heat stress (36). Thus, the content of pyruvate was measured. As shown in Fig. 6C, the concentration of pyruvate was increased in C. albicans after treatment with RF. In addition, pyruvate is an important regulator of central carbohydrate metabolism in the cell, linking the glycolytic pathway to the tricarboxylic acid cycle (34). Suppression in central carbohydrate metabolism was confirmed by RNA-Seq (Fig. 5A) and the downregulation of related genes (including ADH2, FDH1, MLS1, PCK1, PGM2, RHR2, and TPI1) (Fig. 6D). These results implied that exogenous RF targets internal RF and the thiamine metabolic pathway, increasing the concentration of pyruvate, which in turn induces central carbohydrate metabolic disorders in C. albicans.
RF affects ribosome metabolism. The upregulated differentially expressed genes were also analyzed, and 134 out of 563 upregulated genes were enriched in ribosome metabolism (Fig. 6E). To confirm this finding, eight genes were selected for additional RT-qPCR analysis. Ribosomes consist of a large 60S subunit (encoded by RPL3, RPL4B, RPL10, RPL17B, and RPP1A) and a small 40S subunit (encoded by YST1, RPS14B, and ASC1). The expression of all of these genes was obviously downregulated following RF treatment for 6 or 12 h, with the exception of RPP1A (Fig. 6F). Ribosomes are responsible for reading mRNA to synthesize protein (37). Therefore, we measured the total protein content of C. albicans following RF treatment and found it to be significantly decreased (Fig. 6G), indicating that RF induced the dysfunction of ribosome metabolism.
RF improved the symptoms of OPC in vivo. The efficacy of RF in vivo was investigated in an OPC murine model. First, immunosuppressed mice were infected with C. albicans. After 2 h of incubation, RF or normal saline was injected intraperitoneally (Fig. 7A). As shown in Fig. 7B, the back of the tongue was ruddy and smooth in the control group, whereas a thick white plaque was evident on the dorsum of the tongue in the Ca group (OPC mice treated with normal saline). After administering 0.125 or 1 mg of RF per kg of body weight or 13.6 mg/kg of FCZ for 5 days, the white plaques were thinner and showed less coverage. In addition, the loss in body weight in the RF-treated group was less than that in the Ca group (Fig. 7C). Importantly, compared with the Ca group, there was a statistically significant reduction in the fungal burden on the tongue and kidneys in the RF-treated group (Fig. 7D and E); however, there was no statistical difference in the liver (data not shown). To assess the curative efficacy of RF in vivo, histopathological changes on the mouse tongue were evaluated by periodic acid-Schiff (PAS) staining. In the Ca group, the tongue epithelium was covered with a large number of hyphal cells, which destroyed the papillae (Fig. 7F). In contrast, fewer hyphal cells and a thin, loose biofilm were found in the tongue epithelium of the RF-treated group. In addition, in the RF-treated group, damage to the tongue papillae was relieved (Fig. 7F). These results indicated that RF has an antifungal effect in an OPC murine model.
DISCUSSION
The prevalence of novel pathogens has been on the rise over the past few decades, as evidenced by the emergence of novel coronavirus COVID-19/SARS-CoV-2, Ebola viruses, Middle East respiratory viruses, C. auris, and novel mutant bacteria (38)(39)(40). However, drug development has not kept pace with the rise of novel pathogens and the resistance to antimicrobial drugs, which provides an opportunity for drug repurposing (11). Drug repurposing is the search for new uses or characteristics from approved drugs, such as molecular targets, mode of action, or pharmacological effects (11). Such a strategy reduces the risk, time taken, and cost expended on new drug development. Recent studies have emphasized the important role of B vitamins in the human nervous system, immune response, and tumor development (41)(42)(43). In addition, B vitamins also have value in patients with kidney disease, sepsis, and COVID-19 (44)(45)(46). Our previous review summarized the anti-infection effect of RF and suggested that RF may have direct antimicrobial effects (14). Here, in vitro and in vivo phenotypic investigations confirmed that RF has anti-Candida ability. In vitro, RF is able to inhibit biofilm formation. Adhesion, which is the first step in biofilm formation, was also found to be reduced after RF treatment (see Fig. S3A and B in the supplemental material).
Low doses of ROS are signaling molecules involved in the normal physiological functions and development of fungi, while excessive ROS doses can lead to pathological processes or cell death through their toxic effects (47). In this study, RF disrupted the membrane and cell wall integrity, resulting in the ROS accumulation. Although mitochondria are one of the main sources of ROS production, our investigation found that RF did not affect the mitochondrial respiratory chain (Fig. S4). The production of ROS may connect total metabolism via thiamine metabolism, RF metabolism, and ribosome metabolism caused by exogenous RF (Fig. 8) (48)(49)(50)(51). More specifically, exogenous RF targets the thiamine and RF metabolic pathways, thus disrupting central carbon metabolism. The production of ROS was accompanied by an accumulation of pyruvate, which is consistent with previous work (36,52). Ribosome metabolism was also affected by exogenous RF via the production of ROS. These data were confirmed by the downregulation of ribosome-related genes and a reduction in the total protein concentration (Fig. 6). Moreover, our findings were consistent with those of a previous investigation which showed the damaging effect of ROS on the structure and activity of proteins (50,53). Thus, the accumulation of ROS induced by RF may contribute to the dysfunction of total metabolism. Taken together, our biochemical and transcriptome results revealed the multiple mechanisms of action of RF against C. albicans, leading to damage to the membrane and cell wall integrity, the accumulation of ROS, disordered central carbon metabolism, and protein damage (Fig. 8).
Central carbon metabolism plays an important role in fungal cells in obtaining energy and sustaining their survival. Our results suggested that exogenous RF may target endogenous thiamine and RF metabolism, resulting in a central carbohydrate metabolic disorder. Moreover, the growth of RF and thiamine metabolism-repressed C. albicans was not restored by the addition of extra RF or TPP, and extra RF or TPP had only a minor inhibitory effect on C. albicans (Fig. 5E). This was contrary to previous results showing that strains mutated in the RF or thiamine metabolic pathways would recover growth after supplementation with RF or thiamine (54,55). Based on our findings, we speculated that exogenous RF may influence a key factor in the vitamin regulatory system (including endogenous synthesis and exogenous uptake) of C. albicans. However, further investigations are needed to confirm this key factor. Importantly, these results indicated that the endogenous RF and thiamine metabolic pathways could provide favorable targets for the development of antimicrobial drugs. Reasons for this include that thiamine and RF metabolism specifically exists in pathogens and does not exist in humans (14), and these metabolic pathways are essential for pathogen virulence (56). In fact, other B-group vitamins are important cofactors for fungal cells, including niacin (VB 3 ), pantothenic acid (VB 5 ), pyridoxine (VB 6 ), biotin (VB 7 ), folate (B 9 ), and cobalamin (VB 12 ), which are also potential candidates for the development of antifungal drugs (56).
In vivo, the inhibitory effect of RF on C. albicans virulence factors, including hyphae and biofilms, was also observed in the phenotype of tongues and histopathological analysis in an OPC mouse model. Additionally, the results of histopathological analysis and fungal burden qualitatively confirmed that the infection of mice was reduced by RF (Fig. 7). Interestingly, the effective concentration of RF (0.125 or 1 mg/kg) in vivo is much lower than that of FCZ (13.6 mg/kg). RF treatment showed a 2-order-of-magnitude reduction in kidney fungal burden, which was superior to the 1-order-of-magnitude reduction in the tongue. The metabolism of RF is inextricably linked to the kidney (12), suggesting that understanding the absorption and transport of RF is vital to studying the mechanism of anti-infection activity in vivo. In a mouse ulcer model, RF alleviated oral ulcers by exerting antioxidant and antibacterial activity and reducing the local secretion of inflammatory factors (57). Previously, 20 to 100 mg/kg of RF combined with azithromycin was shown to reduce the expression of tumor necrosis factor alpha (TNF-α), gamma interferon (IFN-γ), interleukin-6 (IL-6), and monocyte chemoattractant protein 1 (MCP-1) in S. aureus infection (58). In addition, mucosa-associated invariant T cells are reportedly involved in the pathogenesis of oral diseases and can be activated by RF in an MR1-dependent manner (59). These data imply that RF may play a strong role in immune regulation in vivo. However, the mechanism of action of the direct anti-Candida effect and the immune reaction to RF in the OPC mouse model require further investigation.
FIG 8 Schematic diagram of the mechanism of action of RF against C. albicans in vitro. Exogenous RF destroys the membrane and cell wall integrity, increasing the accumulation of ROS in C. albicans. Meanwhile, exogenous RF targets the coenzyme (including endogenous RF and thiamine metabolism) and ribosome metabolism, resulting in total metabolic disorders. These effects together lead to the death of C. albicans. Specifically, repression of endogenous RF and thiamine metabolism caused by exogenous RF induces the accumulation of pyruvate (red arrow), which therefore affects central carbon metabolism. Exogenous RF induces ribosome metabolism disorders, inducing a block in protein synthesis (green arrow). This further aggravates the disorder in thiamine metabolism, since histidine is the substrate for thiamine synthesis. Moreover, ROS are capable of inducing protein damage and interacting with endogenous RF and thiamine metabolism (35, 48-51). The accumulating pyruvate may defend against ROS (36). However, in the presence of exogenous RF, the relationship between ROS and endogenous RF or thiamine metabolism, proteins, and pyruvate (dashed line) requires further study.
In summary, the present study reveals the anti-Candida effect of RF in vitro and in vivo. Explorations of the anti-Candida mechanisms in vitro confirmed a multitarget effect. Among the repressed metabolic pathways, the thiamine and RF metabolic pathways were uniquely affected by the action of RF, which provides insight into the metabolic targets of RF.
MATERIALS AND METHODS
Strains, media, and chemicals. Nine Candida species were used in this study. Standard strains C. albicans ATCC MYA-2876, C. albicans ATCC 90028, C. parapsilosis ATCC 22019, and C. krusei ATCC 6258 were purchased from the American Type Culture Collection (ATCC). Clinical isolates C. parapsilosis Cp1 and Cp2, Candida tropicalis Ct2 and Ct3, and Candida glabrata Cg1 were from the affiliated hospital of Southwest Medical University. RF (Sigma, Shanghai, China), RF injection (commercially available), and RPMI 1640 medium (HyClone, Chengdu, China) were purchased as reagents. Fungal cells were cultured and maintained in YPD medium (1% yeast extract, 2% peptone, and 2% glucose).
Antifungal susceptibility assay. According to CLSI standard M27-A3, a microdilution broth susceptibility assay was conducted to determine the MIC (60). Briefly, the activated yeast cells were adjusted to 0.5 × 10⁴ to 2.5 × 10⁴ cells/mL and 100 μL of yeast suspension was added into 96-well plates. To each well, 100 μL of working solution ranging from 0.0 to 1.6 mg/mL of RF (diluted in RPMI 1640 medium) was added. The plate was incubated for 24 h at 37°C, and then the optical density at 600 nm (OD600) was measured. The MIC was defined as the lowest concentration that suppressed fungal growth by 90%.
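To make the 90%-inhibition endpoint concrete, the sketch below derives an MIC from a row of OD600 readings; the concentrations mirror those described above, but the OD values and helper function are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch: deriving the MIC from OD600 readings of a broth microdilution plate,
# using the 90%-inhibition endpoint described in the text.
# The OD values below are illustrative placeholders, not measurements from the study.

rf_mg_per_ml = [0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6]      # RF concentrations tested
od600        = [1.20, 1.15, 0.95, 0.60, 0.08, 0.05, 0.04]  # growth after 24 h at 37°C

def mic_90(concentrations, readings):
    """Return the lowest concentration suppressing growth by >= 90% vs. the drug-free control."""
    control = readings[0]                       # the 0 mg/mL well serves as the growth control
    for conc, od in zip(concentrations, readings):
        if conc > 0 and od <= 0.10 * control:   # >= 90% inhibition
            return conc
    return None                                 # no tested concentration reached the endpoint

print("MIC =", mic_90(rf_mg_per_ml, od600), "mg/mL")   # -> 0.4 with these placeholder values
```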
Next, a spot assay determined the inhibitory activity on the solid medium. The activated cells were adjusted to 1 × 10² to 3 × 10², 10³, and 10⁴ cells/mL, and then 3 μL was spotted onto YPD agar, which contained 0.1, 0.2, 0.3, or 0.4 mg/mL RF. Phosphate-buffered saline (PBS) was added to the YPD plate as a control. After incubation for 48 h at 37°C, images of the colonies were recorded using a digital camera.
Growth curve assay. The strains grown overnight were diluted to 5 × 10⁵ cells/mL in YPD medium containing 0.4 mg/mL of RF or 64 mg/mL of FCZ. The control was free of RF and FCZ. The cells were incubated at 37°C with constant shaking (200 rpm). At 0, 2, 4, 8, 12, 16, and 24 h, cells were harvested and washed with PBS before being spread onto YPD agar medium. Samples supplemented with water were considered controls. The fungal CFU were determined after incubation for 24 h at 37°C.
Hyphal formation. The activated yeast cells were adjusted to a final concentration of 1 × 10⁶ to 3 × 10⁶ cells/mL in RPMI 1640 plus 10% (vol/vol) fetal bovine serum (FBS) liquid medium or YPD plus 10% (vol/vol) FBS solid medium containing 1× MIC or 2× MIC of RF. Sterile water was added as a control group. Liquid and solid media were incubated for 6 h and 5 days at 37°C, respectively.
Biofilm assay. Biofilm susceptibility was measured by CV and XTT assays (61). In brief, the activated yeast cells were suspended in RPMI 1640 (1 × 10⁶ to 3 × 10⁶ cells/mL) and 200 μL was added to 96-well plates. For the initial phase, cultures were preincubated for 1.5 h and RF was added for 6 h at 37°C. Then, the supernatant was removed and incubation was continued for 45.5 h. For the developmental phase, cultures were preincubated for 12 h and RF was added for 6 h at 37°C. Then, the supernatant was removed and incubation was continued for 36 h. For the maturation phase, cultures were preincubated for 48 h and RF was added for 6 h at 37°C. The biofilm biomass and activity were measured by the CV and XTT methods, as follows. (i) For the CV method, 100 μL of 15% CV was added to each well and incubated at room temperature for 10 min. After washing with PBS, 200 μL of 15% acetic acid was added, and after 30 min at room temperature, the OD600 was measured. (ii) For the XTT method, 1 mg/mL of XTT (in PBS) and 0.4 mM menadione (in ethanol) were prepared in a 5:1 ratio. To each well, 200 μL PBS and 12 μL XTT-menadione solution were added, and after 2 h of incubation at 37°C in the dark, the OD490 was measured.
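The XTT results reported in the text as relative percentages imply a simple normalization to the untreated control; one possible version of that calculation is sketched below with invented OD490 readings (the blank correction is an assumption, not a detail stated in the protocol).

```python
# Sketch: expressing biofilm metabolic activity (XTT, OD490) as a percentage of the
# untreated control, as reported for the biofilm assays. Values are illustrative only.
control_od490 = [0.82, 0.79, 0.85]   # untreated biofilm, triplicate wells
treated_od490 = [0.31, 0.28, 0.35]   # 1x MIC RF added during early biofilm formation
blank = 0.05                          # medium-only background (assumed correction)

def relative_activity(treated, control, blank):
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (mean(treated) - blank) / (mean(control) - blank)

print(f"metabolic activity: {relative_activity(treated_od490, control_od490, blank):.1f}% of control")
```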
Confocal laser scanning microscopy. The biomass and three-dimensional structure of the biofilm were analyzed by CLSM (62). The activated fungal cells were resuspended in fresh RPMI 1640 at a final concentration of 1 × 10⁶ to 3 × 10⁶ cells/mL. Then, 3 mL of fungal cells was added to 6-well plates with circular microscope cover glasses in the bottom of the wells. After incubation for 90 min at 37°C, RF was added to a final concentration of 1× MIC and continuously incubated for 6 h. For the control groups, sterile water was added. Subsequently, plates were washed with PBS and samples were incubated with fresh RPMI 1640 medium for 24 h at 37°C. The biofilm was washed with PBS and stained with 1 mL of CFW for 3 min in the dark at room temperature. The circular microscope cover glasses were then transferred into antifade mounting medium, before being placed upside down on a glass slide, and the biofilm was observed through a TCS SP8 CLSM.
Cell wall assay to determine cell wall integrity and glucan and chitin content. The fungal cells were treated with 1× MIC or 2× MIC of RF for 12 h at 37°C with shaking at 200 rpm. Sterile water was added as a control. Cells were harvested and washed with PBS, and then the fungal suspension was treated in three ways. (i) To the fungal suspension, 3 μL of 1% KOH and 7 μL of CFW were added, and after 2 to 3 min in the dark at room temperature, the cell morphology was observed through a DP80 fluorescence microscope (Olympus, Tokyo, Japan). (ii) The fungal suspension was adjusted to 1 × 10² to 3 × 10², 1 × 10³, and 1 × 10⁴ cells/mL and then spotted onto YPD agar medium containing 50 mg/mL of CFW. After being incubated for 5 days at 37°C, the colonies were recorded using a digital camera. (iii) The fungal suspension was adjusted to 5 × 10⁷ cells/mL, and then the OD600 was measured. To determine the chitin content, CFW (at a final concentration of 3.5 mg/mL) was added and staining was carried out at 37°C for 10 min. The fluorescence intensity was measured at 365-nm excitation and 435-nm emission wavelengths on a BioTek Varioskan Synergy H1 plate reader (Thermo Fisher Scientific, Shanghai, China). To determine the total glucan content, aniline blue (1%) was added and incubated at 80°C for 15 min in the dark. Fluorescence intensity was measured at 398-nm excitation and 508-nm emission wavelengths.
PI staining. The cell membrane integrity was determined by PI (Solarbio, Beijing, China) staining (60), an ergosterol content assay, and gene expression analysis. Sample collection was the same as for the cell wall integrity assay. The cells were resuspended to 0.5 × 10⁷ to 1.0 × 10⁷ cells/mL. Then, PI (10 mg/mL) was added and incubated for 30 min at 37°C in the dark, followed by PBS washing. The cell membrane permeability was determined using a FACSAria flow cytometer (BD Biosciences, NJ, USA) and an inverted fluorescence microscope. The blank group was not treated with PI. The positive-control group was pretreated with hydrogen peroxide for 20 min at 37°C.
HPLC assay. The ergosterol content of Candida was evaluated by high-performance liquid chromatography (HPLC) (1260 Infinity II; Agilent). In brief, after centrifuging and washing, the wet samples were adjusted to 0.5 g. Then, 25 mL of methanol was added to the samples and the total weight was recorded, followed by ultrasound treatment for 1 h. Following cooling to room temperature, methanol was added to restore the original weight, and the solution was filtered through a 0.45-μm Millipore filter and stored at −20°C. The ergosterol was determined at 283 nm by HPLC.
ROS level measurement. The fresh fungal cells (1 × 10⁶ cells/mL) were treated with 1× MIC or 2× MIC RF for 6 h at 37°C with constant shaking (200 rpm). Samples were resuspended to 0.5 × 10⁷ to 1.0 × 10⁷ cells/mL in PBS and stained with 10 mM dichlorodihydrofluorescein diacetate (DCFH-DA) (Sigma, Shanghai, China) for 30 min at 37°C. After centrifuging and removing excess DCFH-DA, cells were resuspended. A FACSAria flow cytometer and inverted fluorescence microscope were used to analyze the ROS levels.
Spot assay. The fresh fungal cells (1 × 10⁶ cells/mL) were treated with 1× MIC or 2× MIC RF for 12 h at 37°C with constant shaking (200 rpm). Cells were washed with PBS and diluted over a gradient. To investigate the effect of RF on C. albicans RF and thiamine metabolism, 3 μL of fungal suspension was spotted onto YPD agar (containing RF or thiamine pyrophosphate) and cultured for 2 to 4 days at 37°C. To assess the effect of RF on the ability for carbon source utilization, a gradient dilution of the fungal suspension was spotted onto YPD agar (glucose source) or glycerol agar (1% yeast extract, 2% peptone, 2% glycerol, and 2% agar) and cultured for 2 to 4 days at 37°C.
RNA sequencing. C. albicans ATCC MYA-2876 (1 × 10⁷ cells/mL) was treated with 1× MIC RF or without RF for 12 h at 37°C with constant shaking (200 rpm). Samples were collected by centrifugation for 2 min at 12,000 × g and 4°C. Then, total RNA was isolated using yeast processing reagent (TaKaRa, Dalian, China). The transcriptome data were processed by Biomarker (Beijing and Qingdao, China) using Oxford Nanopore Technologies Long Read Processing (63). The standard for the screening of differentially expressed genes was a |fold change| of ≥2 and a false-discovery rate of ≤0.01.
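As a rough illustration of the screening criteria stated above, the following sketch filters a differential-expression table on |fold change| ≥ 2 (equivalently |log2 fold change| ≥ 1) and FDR ≤ 0.01; the column names and values are assumptions for illustration, not the output format of the actual pipeline.

```python
# Sketch: applying the stated DEG screening criteria (|fold change| >= 2, FDR <= 0.01)
# to a differential-expression table. Gene names, columns, and values are illustrative.
import pandas as pd

deg = pd.DataFrame({
    "gene":   ["RIB5", "THI4", "RPL3", "ACT1"],
    "log2fc": [-1.8, -2.4, 1.3, 0.1],     # log2 fold change (treated vs. control)
    "fdr":    [0.001, 0.0005, 0.004, 0.60],
})

# |fold change| >= 2 is the same as |log2 fold change| >= 1
hits = deg[(deg["log2fc"].abs() >= 1) & (deg["fdr"] <= 0.01)]
print(hits)
```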
RT-qPCR analysis. The activated C. albicans ATCC MYA-2876 was adjusted to 1 × 10⁷ cells/mL and grown in YPD liquid medium at 37°C with shaking at 200 rpm. Samples were collected at 0, 6, and 12 h. Total RNA was isolated as previously described. The PrimeScript RT reagent kit with genomic DNA (gDNA) eraser and TB Green Premix Ex Taq II (TaKaRa, Beijing, China) were used to generate cDNA by reverse transcription and for RT-qPCR. The transcript of the β-actin (ACT1) gene was used as an internal standard. All primers are shown in Table S1 in the supplemental material. The threshold cycle (2^-ΔΔCT) method was used to determine the relative change in gene expression (64).
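A minimal sketch of the 2^-ΔΔCT arithmetic referenced above, with ACT1 as the internal reference; the CT values are placeholders rather than measured data from the study.

```python
# Sketch of the 2^-ΔΔCT calculation used for RT-qPCR, with ACT1 as the internal control.
# CT values are placeholders for illustration only.

def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    delta_treated = ct_target_treated - ct_ref_treated   # ΔCT, RF-treated sample
    delta_control = ct_target_control - ct_ref_control    # ΔCT, untreated control
    ddct = delta_treated - delta_control                   # ΔΔCT
    return 2 ** (-ddct)                                    # fold change relative to control

# Example: a target gene after 12 h of RF treatment vs. untreated C. albicans
print(relative_expression(ct_target_treated=26.1, ct_ref_treated=18.0,
                          ct_target_control=24.0, ct_ref_control=18.2))
```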
Total protein assay. C. albicans ATCC MYA-2876 (1 × 10⁷ cells/mL) was treated with 1× MIC or 2× MIC RF or without RF for 12 h at 37°C with constant shaking (200 rpm). Samples were collected by centrifugation, washed three times, and adjusted to 1 × 10⁸ cells/mL. Total protein was extracted using a yeast total protein extraction kit (Sangon Biotech, Shanghai, China). Subsequently, the protein level was tested by an enhanced bicinchoninic acid (BCA) protein assay kit (Beyotime, Shanghai, China).
Quantification of the pyruvate concentration. C. albicans ATCC MYA-2876 was cultured as described for the total protein assay. Samples were adjusted to 2 × 10⁷ cells/mL and 5 × 10⁷ cells/mL for the determination of pyruvate and acetyl coenzyme A (acetyl-CoA), respectively. Samples were collected by centrifugation and ground with liquid nitrogen, and then the concentration of pyruvate was quantified by a pyruvic acid content assay kit (Boxbio, Beijing, China).
Antifungal effect in vivo. All experimental protocols were approved by the Southwest Medical University Institutional Animal Care and Use Committee (2020540). C57BL/6J male mice (6 to 8 weeks old; SiPeiFu, Beijing, China) were housed at 25 ± 1°C for 1 week before the experiment. The mice were assigned to five groups of eight mice each. One day before infection, and on the 1st and 3rd days after infection, mice were subcutaneously injected with 200 μL of the immunosuppressant cortisone acetate (225 mg/kg), which was dissolved in normal saline containing 0.5% Tween 80 (65,66). On the day of infection, all groups were narcotized by intraperitoneal injection with pentobarbital (50 mg/kg) and the dorsum of the tongue, free of redness and hemorrhage, was scratched with a scalpel. Subsequently, a cotton ball saturated with 100 μL of C. albicans ATCC (2 × 10⁸ cells/mL) was placed in the oral cavity for 90 min. Two hours after infection, RF (0.125 or 1 mg/kg) or FCZ (13.6 mg/kg) was intraperitoneally injected into the treatment group, and normal saline was intraperitoneally injected into the control group and the Ca (infection) group. With an intraperitoneal injection volume of 100 μL, 0.125 mg/kg and 1 mg/kg of RF correspond to 1/16× MIC and 1/2× MIC of RF. As previously described (65), 13.6 mg/kg of FCZ was used to treat oropharyngeal candidiasis in mice. The mice were injected once every 24 h for 5 consecutive days. After the last treatment, the mice were euthanized and the tongue, kidney, and liver tissues were collected for determination of the fungal load and pathological analysis.
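The quoted correspondence between the injected doses and fractions of the in vitro MIC follows from simple arithmetic on dose, body weight, and injection volume; the short check below assumes a nominal 20 g mouse, which is not stated in the text.

```python
# Sketch: checking that 0.125 and 1 mg/kg RF delivered in a 100 µL injection correspond
# to roughly 1/16x and 1/2x the in vitro MIC (0.4 mg/mL). A 20 g body weight is assumed.
mic_mg_per_ml = 0.4
injection_volume_ml = 0.1
body_weight_kg = 0.020   # nominal mouse weight (assumption)

for dose_mg_per_kg in (0.125, 1.0):
    dose_mg = dose_mg_per_kg * body_weight_kg            # absolute dose per mouse
    conc_mg_per_ml = dose_mg / injection_volume_ml       # concentration in the injected volume
    print(f"{dose_mg_per_kg} mg/kg -> {conc_mg_per_ml:.3f} mg/mL "
          f"= {conc_mg_per_ml / mic_mg_per_ml:.4f} x MIC")
# -> 0.125 mg/kg gives 0.025 mg/mL (1/16x MIC); 1 mg/kg gives 0.2 mg/mL (1/2x MIC)
```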
Statistical analysis. The in vitro experiments and RNA-Seq were performed as three independent experiments, with the exception of the animal experiments. The differences between the groups were compared by a t test or one-way analysis of variance (ANOVA) followed by a least significant difference (LSD) test or Tamhane T2 test using IBM SPSS Statistics 26 (IBM SPSS Inc., Chicago, IL, USA). P values of <0.05 were considered statistically significant. GraphPad Prism 9.0 software was used to generate all of the figures.
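A minimal illustration of the two kinds of comparisons named above (a t test for two groups, one-way ANOVA for more than two), using SciPy rather than the SPSS workflow described in the text; the replicate values are placeholders.

```python
# Sketch: the group comparisons described above, using SciPy. Replicate values are invented.
from scipy import stats

control = [0.82, 0.79, 0.85]
rf_1x   = [0.31, 0.28, 0.35]
rf_2x   = [0.22, 0.25, 0.20]

t, p_t = stats.ttest_ind(control, rf_1x)          # two-group comparison (t test)
f, p_f = stats.f_oneway(control, rf_1x, rf_2x)    # three-group comparison (one-way ANOVA)

print(f"t test: p = {p_t:.4f}; one-way ANOVA: p = {p_f:.4f}")  # p < 0.05 considered significant
```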
Data availability. Sequence data were deposited in the Beijing Institute of Genomics Genome Sequence Archive (accession no. PRJCA007860).
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.5 MB.
"year": 2023,
"sha1": "6068605ff87f51a16f42aecdddbc1ef5e835b828",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ASMUSA",
"pdf_hash": "94c4c3f2b1716e204d6a8840bce90d9371fdf14f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
COVID-19 related risk of in-hospital death in Silesia, Poland.
INTRODUCTION
The situation around COVID-19 in Poland is rapidly evolving. Because of this, it is important to investigate COVID-19 mortality and its predictors in one of the most densely populated regions of the country, the Silesian Voivodeship.
OBJECTIVE
The goal of this study was to assess the level of in-hospital mortality due to COVID-19 as well as the impact of sex, age, and coexisting diseases on the risk of death.
PATIENTS AND METHODS
Data analysis was based on discharge reports of COVID-19 patients hospitalized between March and June 2020 in all hospitals in the region. Age, sex, hospital discharge status, and the presence of coexisting diseases were abstracted from the charts.
RESULTS
In a group of 2830 COVID-19 in-patients, 325 died during hospitalization. COVID-19 deaths were associated with male sex (odds ratio [OR], 1.52; 95% CI, 1.17-1.96), older age (OR, 6.11; 95% CI, 4.50-8.31), and the presence of three or more coexisting diseases (OR, 4.78; 95% CI, 3.52-6.49). The most prevalent coexisting diseases were chronic cardiovascular and respiratory diseases.
CONCLUSION
The estimated in-hospital fatality ratio of COVID-19 was 11.5%, which is smaller than the average COVID-19 fatality ratio in other European countries. The risk of in-hospital death was associated with sex, age, and the number of coexisting diseases such as chronic cardiovascular and respiratory diseases.
Hospitalized patients with COVID-19 are predominantly adults and 70% are aged 50 years or older. 5 The age structure explains the high prevalence of comorbidities in this population; for example, among patients hospitalized with COVID-19, approximately 35% had coexisting cardiovascular disease, 20% had coexisting chronic lung disease, and 40% had coexisting metabolic disease. 5 Several studies have documented the role of coexisting diseases in COVID-19 mortality. 6,7 The results of a recent comprehensive review and meta-analysis showed that 70% of COVID-19 fatalities were associated with such comorbidities as hypertension (48%), diabetes (25%), and respiratory disease (11%). 8 However, cardiovascular diseases did not appear to relate to fatality, a finding that the authors explained as being due to the small sample size.
In Poland, by the end of 2020, the confirmed cumulative number of COVID-19 cases was 1 257 799 and the related overall fatality rate was 2.1%. 1
Statistical analysis
The data analysis involved reports with a diagnosis of COVID-19 on discharge. 12 The available information included age, sex, hospital discharge status, and coexisting medical conditions. Comorbidities were grouped into categories based on the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) codes and included cardiovascular diseases (I00-I99), respiratory diseases (J00-J99), and metabolic diseases (E00-E99). The age of the patients was presented as mean (SD), while categorical values were presented as absolute and relative frequencies.
Differences between independent groups were tested by the Mann-Whitney test for continuous measures (non-normal distribution according to the Shapiro-Wilk test) or the χ² test for categorical measures. The significance of trends was assessed using the χ² test for trend. The associations between age, sex, and the number of comorbidities and death were calculated using raw odds ratios (ORs) and 95% CIs. Results of simple analyses were verified using multivariable logistic regression with survival status as the dependent variable and number of comorbidities, age, and sex as explanatory variables. A P value below 0.05 was considered significant throughout the analysis, which was performed with the Statistica 13.0 package (TIBCO Software, Inc., Palo Alto, California, United States). The study protocol was approved by the Ethics Committee of the Medical University of Silesia (PCN/0022/KB1/61/20). As we were working on secondary, anonymous data collected at the province level, written informed consent to participate in the study was not required.
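The raw odds ratios and the multivariable verification described above follow standard formulas; a rough sketch of both steps in Python is shown below. The 2×2 counts are approximate values back-calculated from the percentages quoted later in the text, the simulated dataset and coefficients are purely illustrative, and NumPy/statsmodels are assumed to be available (the study itself used Statistica).

```python
# Sketch: a raw odds ratio with a Wald 95% CI from a 2x2 table, plus a multivariable
# logistic regression of survival status on sex, age group, and comorbidity count.
# All numbers below are illustrative, not the registry data analyzed in the study.
import math
import numpy as np
import statsmodels.api as sm

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = deaths/survivors in the exposed group, c/d = deaths/survivors in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Approximate counts back-calculated from the reported proportions (men: 12.9% of ~1460,
# women: 9.9% of ~1370); illustrative only.
print(odds_ratio_ci(188, 1272, 136, 1234))   # -> roughly (1.34, 1.06, 1.70)

# Multivariable check on simulated data: death ~ male + age>=65 + number of comorbidities
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([rng.integers(0, 2, n),   # male (0/1)
                     rng.integers(0, 2, n),   # age >= 65 (0/1)
                     rng.integers(0, 4, n)])  # number of comorbidities (0-3)
logit_p = -3.0 + 0.3 * X[:, 0] + 1.6 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(fit.params[1:]))                 # adjusted odds ratios for the three covariates
```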
We analyzed the data provided by the available hospital discharge records from all hospitals in Silesia Province. We extracted data on age, sex, diagnosis of COVID-19, coexisting diseases, and the outcome of hospitalization (death or survival). In particular, we aimed to examine the effect of the number of coexisting diseases and of major types of coexisting diseases, including cardiovascular, respiratory, and metabolic diseases.
RESULTS
The cumulative (March-June) in-hospital COVID-19 fatality rate was 11.5%. The mean (SD) age of all 2830 hospitalized patients was 58 (19.4) years. Women made up 48.4% of the study population. The age distribution of the study group by sex and death is presented in TABLE 1. Those who died were older on average than survivors.
The results of molecular and antigen tests for SARS-CoV-2 in Silesia Province indicate that in the first half of 2020 (the first wave of the epidemic), the infection rate was approximately 5% and the symptomatic form of the disease made up approximately 20% of cases. 9 An analysis of the frequency of infection and mortality in the region revealed substantial variation in both; however, the authors emphasized that this difference could not be explained due to the analysis being descriptive and the epidemiological data being secondary.
However, the proportion of in-hospital mortality due to COVID-19 within the overall mortality due to COVID-19 remains unknown. Between-population variation in COVID-19 mortality may depend on many factors, and coexisting chronic diseases are likely to contribute to this apparent variability. With this in mind, we analyzed available registry data to assess in-hospital mortality due to COVID-19 and the impact of coexisting diseases on the risk of COVID-19 death.
PATIENTS AND METHODS
The source of data for this study was hospital discharge reports available from March to June 2020. Data were obtained from the Department of Health's Department for Monitoring, Analyses, and Medical Statistics of the Silesia Province Office in Katowice, Poland, as part of routine reporting on "Analysis of Statistical Data." They covered all COVID-19-related hospital deaths (deaths due solely to COVID-19 and deaths due to comorbidity). 10 The reports were submitted from all hospitals (private and public) located in Silesia Province, Poland, though it should be pointed out that all COVID-19-dedicated hospitals in Poland are public.
In this large administrative district (the population is 4.5 million and the population density is 366.3 people/km²), the number of monthly COVID-19 hospitalizations significantly increased between March and June 2020. Over the same period, the absolute number of COVID-19 deaths also increased, reaching 122 fatalities in June, which resulted in an in-hospital case fatality rate (CFR) of 9.2% (FIGURE 1). According to the World Health Organization guidelines, a confirmed case of COVID-19 was defined as a positive result from a real-time reverse transcriptase-polymerase chain reaction assay of nasal or pharyngeal swab specimens. 11
WHAT'S NEW?
The risk of death in patients hospitalized with COVID-19 is greater in men than in women (odds ratio [OR], 1.35; 95% CI, 1.07-1.7), increases with age (on average, survivors are 18 years younger than deceased patients), and increases with the number of coexisting diseases, mostly chronic cardiovascular, respiratory, and metabolic diseases. Sex, age, and comorbidities are independent factors that influence the risk of death from COVID-19. Having 3 or more comorbidities increases the risk of death in all patients; however, its impact is greater in patients under the age of 65 years (OR, 13.32) than in older patients (OR, 6.48).
In total, the frequency of death was 9.9% in women and 12.9% in men. The difference was significant (P = 0.014) and the association showed an increased risk of death for men (OR, 1.35; 95% CI, 1.07-1.7). When comparing age groups, the respective proportions of those dying were 3.5% and 22.8% for those under 65 years of age and those 65 years or older, respectively (P <0.001; OR, 8.01; 95% CI, 5.97-10.75).
The association between the number of coexisting diseases and COVID-19 deaths for all patients as well as by sex and age is presented in TABLE 2. In all patients and all subgroups, the risk of death increased with an increasing number of comorbidities. The coexistence of 2 or more diseases was more strongly associated with the risk of COVID-19 death in younger hospitalized patients (<65 years) than in older ones.
The results from simple analyses were verified using a multivariable logistic regression model with the number of comorbidities, age category, and sex as explanatory variables. The multivariable analysis confirmed that, after adjustment for age and sex, the risk of death due to COVID-19 was related to the number of coexisting diseases and that it followed an exposure-response pattern.
The second objective of the study was to investigate the impact of 3 major disease categories (cardiovascular, respiratory, and metabolic diseases) on the risk of death due to COVID-19. Among patients with only 1 comorbidity, there were no cases of metabolic diseases, and among patients with 2 comorbidities, there were no cases with coexisting diagnoses of cardiovascular and metabolic diseases. Among 504 patients with a single comorbidity, the most prevalent diagnoses were cardiovascular (19.2%) and respiratory (28.3%) diseases. The respective COVID-19 fatality rates in these subgroups were 6.2% and 11.2%. The remaining 264 patients with a single comorbidity had other diagnoses and in this subgroup the COVID-19 fatality rate was 3.8%. Two coexisting diseases were found in 219 patients. The coexistence of cardiovascular and respiratory diseases (n = 38) was associated with a COVID-19 fatality rate of 31.6%. In the remaining 181 cases with 2 comorbidities, the COVID-19 fatality rate was 21%. The 3 abovementioned diseases were not found to coexist in the study group. Cardiovascular disease, either by itself or as one of the coexisting comorbidities, was found in 511 patients and was associated with a COVID-19 fatality rate of 25.6%. Respiratory disease was found in 431 patients and was associated with a COVID-19 fatality rate of 32.5%.
DISCUSSION
The serious epidemiological situation of COVID-19 in Poland justifies the analysis of mortality in one of the most densely populated regions of the country, Silesia Province. Recent data (reported on December 31, 2020) revealed that the total mortality rate due to COVID-19 remains high in Poland (74.1/100 000 population), although somewhat lower than in Sweden (86.1/100 000) and significantly lower than in Czechia (108/100 000), the United Kingdom (106.6/100 000), Belgium (167.4/100 000), or Spain (108.4/100 000). The mortality rate is lower in Germany, where it stands at 39.5/100 000. 13 In our study, we focused not on general COVID-19 mortality but on in-hospital mortality, including the COVID-19 fatality rate.
The results of our study show that the unadjusted cumulative in-hospital COVID-19 fatality rate was 11.5% over the period from March to June 2020. A comprehensive review of evidence published in mid-2020 provided slightly higher in-hospital mortality figures across Europe and the United States, at 22.2% and 22.9%, respectively. 14 The lower value of the cumulative in-hospital COVID-19 fatality in our study, compared with foreign centers, may result from different eligibility criteria for hospital admissions. Polish rules are regulated by an ordinance of the Chief Sanitary Inspector. 15 The figure obtained in our study is lower than expected, even considering differences in the clinical status of admitted patients and other unknown factors. Moreover, this figure does not reflect the dynamics of in-hospital mortality over time that have been seen by others. 16 As expected, the in-hospital COVID-19 fatality rate is larger than the population-based COVID-19 fatality rates because of the different denominators used in the calculations. In Poland, the fatality rate of COVID-19 observed in the general population (where the denominator represents all cases) is 2.2%, as compared with 2% in Germany or 3.5% in Italy. 17 Our findings confirm that the risk of death among hospitalized male patients with COVID-19 is greater than for women. This observation is in line with evidence published in other reports. 4,18 Moreover, the risk of mortality increases with the increasing age of patients, a finding which is consistent with the results of the SARSTer study. 19 In our study, the mean ages of surviving and deceased patients were 55.9 and 73.7 years, respectively. The mean age at death of our patients with COVID-19 was similar to that reported in other studies. 6,7 The experience gained during the Italian epidemic points to age as being one of the most important risk factors for COVID-19 mortality, 20 and this conclusion is supported by findings from Spain. 21 Regarding the second objective of our study, our findings confirm that the risk of death due to COVID-19 increases with the increasing number of coexisting diseases, reaching a peak when there are 3 or more comorbidities. Moreover, the results of our analysis suggest that sex, age, and occurrence of comorbidities are independent factors that influence the risk of COVID-19 death, and that the impact of the last factor (number of comorbidities) follows an exposure-response relationship.
Chronic coexisting cardiovascular, respiratory, and metabolic diseases appear to contribute to the risk of COVID-19 death. Such a constellation of diseases is similar to the published evidence on the role of coronary artery disease, cerebrovascular disease, and dyspnea, all classified as independent risk factors for COVID-19 mortality. 6 Another disorder that has been confirmed to increase the risk of death is arterial hypertension. 7 The results of a meta-analysis suggest that hypertension may be associated with up to a 2.5-fold higher risk of severe or fatal COVID-19, especially in older individuals. 22 A significant contribution from sex, age, and coexisting diseases to COVID-19 fatality has been reported in several studies. A report from the United Kingdom highlights the impact of male sex, greater age, and the presence of diabetes, severe asthma, and various other medical conditions. 18 Age, sex, and the presence of pneumonia, diabetes, arterial hypertension, obesity, immunosuppression, and end-stage kidney disease were identified as major independent risk factors for COVID-19 mortality in Mexico. 23 An interesting observation provided by our study is the greater risk of death in younger (<65 years) than in older (65+ years) patients with comorbidities. In the younger age group, the occurrence of 3 or more comorbidities moved the risk of COVID-19 death to an OR of 13.32, whereas in the older age group it was 6.48. A similar between-group difference in ORs was found when the condition of 2 coexisting diseases was taken into account (3.58 vs 2.37, respectively). A resulting "mirror image" is difficult to explain given the lack of specific information concerning the clinical condition of the patients. It cannot be excluded, however, that other factors contribute to this gradient and sometimes the likely cause exceeds biological phenomena. For example, the higher COVID-19 mortality of patients younger than 19 years in Brazil was ascribed to insufficient provision of intensive care unit beds for this age group, 24 though such a cause is not likely in Poland.
Our study has several important limitations. The protocol covers a relatively short follow-up (from March to June 2020) and current patterns remain unknown. However, this is unavoidable due to delays in data releases; likewise, other published reports dealing with hospital records also have a limited follow-up. Another limitation stems from the fact that we used secondary epidemiological data. Despite this, the hospital discharge reports came from all hospitals in a large administrative district. A sample of this size justifies extrapolating the findings to the full national population. The format of the discharge files was restricted to basic information, so it was not possible to extract any information regarding clinical severity, treatment, and duration of COVID-19 and coexisting chronic diseases. In addition, there may be residual confounding due to the lack of additional sociodemographic information. The last unknown factor potentially affecting our findings is the possibility of the reported diagnoses being misclassified, which has also been discussed by others. 25-27 However, given the very high public health concern and the availability of all necessary diagnostic procedures in the hospitals of Silesia Province, such confounding seems very unlikely.
Conclusions
In conclusion, the estimated unadjusted in-hospital fatality rate of COVID-19 in Silesia Province (Poland) is 11.5%. This value is lower than the mean COVID-19 fatality rate in other European countries and this finding deserves more specific investigation. In-hospital mortality was associated independently with sex, age, and the number of coexisting chronic diseases, in an exposure-effect manner concerning the latter factor. The increased risk of death in patients with COVID-19 is associated with the presence of chronic cardiovascular and respiratory diseases.
"year": 2021,
"sha1": "b968a4a8c1527aad1aba7cc2aa9e4c41e4aed3bf",
"oa_license": null,
"oa_url": "https://www.mp.pl/paim/en/node/15893/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "38e7f10e7313d1112d8b9ecabe3fbc9174ed816e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233610292 | pes2o/s2orc | v3-fos-license | Fundamental Arbitrage under the Microscope: Evidence from Detailed Hedge Fund Transaction Data
We exploit detailed transaction and position data for a sample of long-short equity hedge funds to study the trading activity of fundamental investors. We find that hedge funds exhibit skill in opening positions, but that they close their positions too early, thereby forgoing about a third of the trades' potential profitability. We explain this behavior with the limits of arbitrage: hedge funds close positions early in order to reallocate their capital to more profitable investments and/or to accommodate tightened financial constraints. Consistent with this view, we document that hedge funds leave more money on the table after opening new positions, negative returns, or increases in funding constraints and volatility.
" [The] approach of exiting a position when it is no longer as compelling as other opportunities means that we often are selling stocks that we still believe offer meaningful upside. However, if that investment is no longer one of our most compelling, then we redeploy that capital into a stock that is." -Lee Ainslee III., quoted from Pedersen (2015) Fundamental trading-i.e., trading on information acquired through fundamental research-resembles an arbitrage: while "standard" (relative-value) arbitrage exploits price discrepancies between (almost) identical assets, "fundamental arbitrage" exploits the difference between an asset's market price and its fundamental value. Like other forms of arbitrage, fundamental trading is crucial for price efficiency. Indeed, without it, prices could be far away from fundamentals even though they might look "right" relative to each other. 1 While several papers have studied relative-value arbitrage (e.g. Pontiff (1996), Gagnon and Karolyi (2010), Fleckenstein, Longstaff, and Lustig (2014)), we know very little about fundamental trading-about its constraints, how severe they are, and how they affect actual trading behaviour. Indeed, fundamental arbitrage opportunities are notoriously difficult to observe and fundamental investors are secretive in trading on them, making it difficult to identify the limits of fundamental arbitrage in practice.
In this paper, we conduct the first detailed study on the limits of fundamental arbitrage by exploiting a rich proprietary transaction dataset for a sample of 21 hedge funds over a ten-year period. 2 Two features make the data uniquely suitable for our purpose. First, it exclusively covers discretionary long-short equity hedge funds, which routinely undertake independent long and short investments ("directional bets")-making them archetypical fundamental arbitrageurs. Second, our data comprises the funds' entire equity trading histories as well as daily position updates, allowing us to exactly pinpoint the dates when they enter and close their arbitrage positions. This level of detail is crucial: By studying post-opening returns, we confirm that our sample hedge funds are skilled fundamental traders. By studying post-closure returns, we gain insight into the nature and severity of their constraints. Indeed, we argue that, as suggested by Lee Ainslee III's quote above, constrained fundamental arbitrageurs close stock positions early in order to redeploy their scarce capital into other, more profitable opportunities. 3 This makes foregone profits from prematurely closed arbitrage positions a gauge that allows us to quantify the severity of arbitrage constraints.
We find that the long-short equity hedge funds in our sample behave like informed but constrained fundamental investors. Specifically, we show that their openings of long and short positions are followed by significant 4-factor alphas with an absolute magnitude of about 1% over the next 125 trading days, suggesting that these hedge funds are skilled. When measured over the holding period (i.e., from opening to close), the difference in 4-factor alpha between long and short positions amounts to 2%. In stark contrast, we find that closing trades are followed by returns in the opposite direction of the closing trade. When we design a trading strategy that goes long in stocks in which hedge funds just closed a long position (long sells) and shorts stocks from closed short positions (short buys), we obtain a significant 4-factor alpha of about 0.9% over the next 125 trading days. This figure implies that the hedge funds in our sample forgo about a third (≈0.9%/(2%+0.9%)) of the trade's potential profitability. We thus establish that the constraints faced by long-short equity hedge funds are economically important as they force them to "leave substantial money on the table." Early position closures arise from the limits of arbitrage in a world in which investment opportunities exhibit alpha decay: as the expected profitability of an existing position declines, new trading opportunities become more attractive. This triggers a reallocation of the funds' limited risk capital into these more profitable opportunities, explaining why hedge funds close positions that continue to generate alpha going forward. An immediate implication of our argument is that, at any point in time, the profits from newly opened positions should exceed the profits from older existing positions, which should in turn exceed the foregone profits from closed positions. We test and confirm these predictions in our data: over the next 125 trading days, the foregone alphas from closed positions are 0.4% lower than the alphas of positions held by the same fund at the same point in time that are not closed; and they are 0.6% lower than the alphas of newly opened positions.
Having established that hedge funds profitably reallocate their capital across positions, we next investigate the nature of the constraints that give rise to this behavior. To this end, we conduct multiple sample splits for the trading strategy built around hedge funds' closing trades-i.e., going long (short) in stocks from closed long (short) positions-which yields an estimate of how much return hedge funds forgo by closing early. We start by examining whether this strategy is more profitable when hedge funds experience a tightening of funding constraints. First, financial constraints should tighten when hedge funds face higher opportunity costs in the form of new trading opportunities. Indeed, we find that our sample hedge funds forgo substantially more return after an increase rather than after a decrease in the number of open positions.
Next, we conduct sample splits based on past returns for the specific stock and the rest of the hedge fund's portfolio. We find that in both cases funds leave more money on the table after negative returns.
Interestingly, this effect is slightly stronger for past returns of the specific stock, suggesting that hedge funds' financing constraints operate both at the fund and the position level. Finally, we examine a split by fund flows over the prior month. We only find a modest difference in this sample split, consistent with hedge funds successfully managing redemption risk by means of advance notice periods and/or holding cash buffers.
Our next set of sample splits investigates whether certain fund characteristics are associated with larger foregone post-closing returns. We start by examining leverage as a direct measure of the severity of hedge funds' financial constraints. We find that highly-levered funds leave more money on the table, thereby providing micro-level evidence in support of theories of financially-constrained arbitrage (e.g., Gromb and Vayanos (2002), Brunnermeier and Pedersen (2009)). We next analyze two characteristics-a fund's track record and portfolio liquidity-that can help alleviate financial constraints stemming from agency frictions (e.g., Kreps et al. (1982), Hart and Moore (1994)). Consistent with this argument, we find that funds engage in fewer premature position closures if they have a longer track record, a more successful track record, or more liquid assets. We then test a recent idea by Gupta and Sachdeva (2018): hedge funds with high inside ownership (by their managers) may not want to dilute the returns on this inside capital and may thus be reluctant to raise additional equity capital from investors. We indeed find that hedge funds with high inside capital leave more money on the table, in line with these funds choosing to operate on a smaller scale.
Our results on leverage hint at the importance of debt as a funding source for hedge funds. To corroborate this view, we conduct several sample splits by changes in market-wide funding constraints in the financial intermediary sector. Using four different measures of funding constraints (the TED spread, the He, Kelly, and Manela (2017) intermediary risk factor, VIX, and primary dealer stock returns), we consistently find that hedge funds leave more money on the table after market-wide funding constraints tighten. Overall, our results suggest that, for the hedge funds in our sample, funding constraints feeding through the lending channel appear to be more important than equity outflows.
We then move on to study the role of risk for explaining early position closures. If hedge funds operate under a risk constraint, we would expect them to close positions earlier after an increase in volatility and this is indeed what we find. We also examine whether hedge funds pay particular attention to industry risk exposures or a specific position's contribution to overall portfolio risk. We find that post-closure returns are larger for stocks in industries to which the hedge fund recently increased its exposure, as well as for stocks that contribute positively to portfolio risk. These findings suggest that hedge funds use sophisticated risk management strategies to guide their position closure decisions.
To summarize, we find that our hedge funds' opening trades are profitable, but that they close their positions prematurely in response to tightened constraints. We provide an in-depth study of the sources of these constraints and show that they are related to both fund-specific characteristics as well as market-wide funding squeezes and that both cash flow shocks (such as negative returns) as well as changes in volatility matter. We also show that the emergence of new investment opportunities, by raising the opportunity cost of capital, can constrain the trading in existing positions. To the best of our knowledge, we are the first to document this interdependence of trading positions, thereby providing support for recent multi-asset models on the limits of arbitrage (e.g., Gromb and Vayanos (2018), Dow, Han, and Sangiorgi (2019)). Moreover, our approach allows us to provide a first quantitative estimate for the severity of the constraints faced by real-world arbitrageurs-a task usually made impossible by the inability to observe the would-be trades prevented by the constraints. We find that the limits of fundamental arbitrage are economically severe as they force hedge funds to forgo one third of the potential profitability of their trades.
In supplementary analyses, we discuss the representativeness and potential biases of our data as well as alternative explanations for our results. First, we document that our hedge funds have very similar factor loadings as the Credit Suisse long-short equity hedge fund index and funds in the comprehensive hedge fund database studied in Kruttli, Patton and Ramodorai (2015). Second, we note that our funds represent a variety of different sizes, trade across industries and invest in equity markets worldwide with a tilt toward larger stocks. All this is typical for long-short equity hedge funds. Third, we show that our sample hedge funds rarely engage in popular relative-value arbitrage strategies such as pairs trading or merger arbitrage, and that their trades predict subsequent earnings surprises. Hence, the long-short equity hedge funds in our sample behave as fundamental traders. Fourth, we show that our data are unlikely to be plagued by survivorship or back-filling bias. Fifth, we emphasize that a key part of our analysis is about describing how long-short equity hedge funds respond to the existence of financial constraints. As such constraints are pervasive, we expect these qualitative results to generalize to the broader population. Finally, we entertain the possibility that our results, instead of being due to the limits of arbitrage, are driven by the disposition effect, biased beliefs, lack of skill, or price pressure effects in illiquid stocks. As we argue in detail in the robustness section, neither of these alternative explanations is able to explain our collective results.
Our paper contributes to several strands of research. First and foremost, we contribute to the literature on the limits of arbitrage. Theoretical papers in this field have highlighted different channels as to why arbitrageurs may be forced to liquidate their positions (De Long, Shleifer, Summers, and Waldmann (1990), Shleifer and Vishny (1997), Kyle and Xiong (2001), Gromb and Vayanos (2002, 2018), Brunnermeier and Pedersen (2009), Acharya and Viswanathan (2011), Liu and Mello (2011)). 4 We contribute by documenting how these frictions affect the trading activity of fundamental investors. We thereby complement existing empirical work that is mostly at the macro level and explores, for example, how liquidity, price dislocations and risk premia respond to aggregate funding shocks (Hameed, Kang and Viswanathan (2010), Nagel (2020)). We contribute to this literature by providing evidence for the limits of arbitrage at the transaction level. Our study thereby offers a unique glimpse into the process by which hedge funds "recycle" their limited arbitrage capital, i.e., how and when they close existing positions and redeploy their capital.
Our second contribution is to the literature on hedge funds. Existing research mostly focuses on self-reported returns or quarterly snapshots of long-only holdings data and reaches mixed conclusions about hedge fund performance. 5 We add to this literature by examining hedge funds' trading skill using complete equity trading and position records for both long and short positions. We find that the long-short equity funds in our sample possess the skill to identify mispriced stocks, thereby complementing previous work that emphasizes hedge funds' role as liquidity providers (Aragon and Strahan (2012)). Closest to our work are studies that examine hedge fund positions gleaned from merging institutional transaction data from ANcerno with quarterly holdings from 13F filings. They find that position openings in their sample do not predict long-term returns and that short positions are profitable only over the short term (up to 5 trading days), suggesting that these hedge funds make the bulk of their profits from liquidity provision. Our data, while comprising fewer funds, is more comprehensive and covers the trading activity of one particular class of hedge funds, discretionary long-short equity, as opposed to the trading by different hedge funds belonging to the same hedge fund family. 6 We argue that our focus on (long-term) fundamental traders explains why we find different results for the long-term predictability of stock trades.

4 See also Pontiff (1996) for evidence of the noise trader risk channel in the realm of closed-end mutual funds. 5 For studies based on returns, see for example Ackermann, McEnally, and Ravenscraft (1999), Amin and Kat (2003), Kosowski, Naik, and Teo (2007), Jagannathan, Malakhov, and Novikov (2010), Agarwal, Boyson, and Naik (2009), Patton and Ramadorai (2013), Aragon and Martin (2012), Agarwal, Fos, and Jiang (2013), Bali, Brown, and Demirtas (2013), and Bali, Brown, and Caglayan (2011, 2012, 2014). For studies based on quarterly holdings, see Griffin and Xu (2009), Cao et al. (2018), and Grinblatt et al. (2020). For comprehensive surveys, see Agarwal, Mullally, and Naik (2015) or Getmansky, Lee, and Lo (2015).
Third, we contribute to the literature on short selling. Several papers focus on the opening of short positions or the change in short interest and find that both predict future returns. 7 We contribute by examining the profitability of both the opening and closing of short positions. The only other paper examining returns following the closing of short positions is Boehmer, Duong, and Huszar (2018). Contrary to us, they show evidence of positive return predictability for closing trades. However, their analysis is based on the mandatory disclosure of very large positions and may thus be influenced by price impact and signaling effects.
Finally, we note that our paper is related to Di Mascio, Lines and Naik (2016), who study a transaction dataset for a sample of long-only mutual funds from the same data provider. Their focus is on how mutual funds open and strategically build up their positions in order to limit price impact, while we focus instead on position closures and show how they relate to the limits of arbitrage. Notably, their findings for long-only mutual funds are consistent with ours. Indeed, they similarly find positive returns after both the opening and closing of long positions, but do not explain the latter.

6 In Internet Appendix B.1, we describe in detail why the ANcerno data is not suitable for the purpose of studying the limits of fundamental arbitrage. First, we show that the hedge funds in ANcerno, unlike the funds in our data, are not representative of long-short equity funds in terms of portfolio size and average holding period. Second, we demonstrate that it is impossible to accurately measure the timing of hedge funds' position openings and closings with the ANcerno data, rendering it ill-suited for our analysis. 7 See for example Desai, Thiagarajan, and Balachandran (2002), Boehmer, Jones, and Zhang (2008), Diether, Lee, and Werner (2009), Asquith, Pathak, and Ritter (2005), and Engelberg, Reed, and Ringgenberg (2012).

The remainder of this paper is organized as follows. Section I lays out our empirical predictions. Section II describes the data and provides summary statistics. Section III focuses on the post-order profitability of (long and short) position openings and closings. Section IV examines the drivers of hedge funds' early position closures. Section V discusses additional results regarding data representativeness, selection concerns, as well as alternative explanations for our results. Section VI concludes.
I. Hypotheses
Discretionary long-short equity hedge funds resemble fundamental traders; that is, they take on a limited number of long and short bets on individual stocks based on fundamental analyses. The starting point of our empirical investigation is to see whether the long and short stock positions opened by hedge funds in our sample deliver risk-adjusted returns (alpha). Prior research on hedge fund performance and managerial skill is hampered by data constraints and reaches mixed conclusions (see, for instance, the survey by Agarwal, Mullally, and Naik (2015)). Given the novelty and granularity of our data, our performance analysis constitutes a valuable contribution in its own right. The focus of our analysis, however, lies on studying hedge funds' position closures. Indeed, we argue that the returns following position closures are particularly revealing about the nature and severity of hedge funds' arbitrage constraints. We review, and ultimately dismiss, alternative interpretations for post-closing returns in Section V below.
To understand why position closures are driven by arbitrage constraints, we develop in Internet Appendix F a simple trading model in which a hedge fund (1) faces a risk constraint, (2) incurs position monitoring costs, and (3) operates in a world where new investment opportunities exhibit alpha decay. These assumptions are realistic and enjoy empirical support. Alpha decay, i.e., a declining profitability of available trading opportunities, arises naturally in models of informed trading with multiple speculators (Foster and Viswanathan (1996), Back, Cao, and Willard (2000), Bernhardt and Miao (2004)) and appears in our data (Internet Appendix E.1) as well as elsewhere (Chen, Da and Huang (2016), Di Mascio, Lines and Naik (2016)). The risk constraint captures, in a simplified way, common risk management practices such as risk parity investment (see Pedersen (2015)). 8 A straightforward implication of this constraint is that position sizes are inversely related to changes in portfolio volatility; a prediction that we confirm empirically (Internet Appendix E.2). The position monitoring cost represents a fixed attention cost for monitoring a given position (the hedge fund may want to check, for example, whether the trading signal that induced the opening of the position is still valid after the arrival of new information). 9 A natural implication of this assumption is that larger funds have more open positions; a prediction that we again verify in the data (Internet Appendix E.3).
Our model predicts that hedge funds trade off diversification benefits with position monitoring costs: because hedge funds do not want to spread their limited capital too thinly, they close existing stock positions even though they are still expected to be profitable when more promising investment opportunities arrive. 10 Put differently, we expect our hedge funds to reallocate their capital efficiently; that is, we expect newly opened positions (or positions that are kept open) to be followed by higher risk-adjusted returns than closed positions.
In follow-up analysis, we carefully dissect foregone post-closing returns in order to shed light on the question where the limits of arbitrage ultimately come from. The predominant view in the literature emphasizes the importance of financial constraints for understanding why arbitrage is limited (see, e.g., Gromb and Vayanos (2010) for a survey). Our model predicts that a tightening of financial constraints forces hedge funds to close their positions earlier, thereby leading to higher foregone post-closing returns.
We therefore predict that hedge funds leave more money on the table after a tightening of fund-specific or market-wide funding constraints.

Concerns for risk are another important driver of position closing decisions. Specifically, our model implies that, when portfolios become more volatile, hedge funds must downscale their positions in order to satisfy the risk constraint. Because of the fixed monitoring costs, this downscaling leads to additional position closings and thus to larger foregone post-closing returns. In addition, we expect hedge funds to close positions earlier when they have a large contribution to overall portfolio risk and/or after increases in the position's industry risk exposure.
A. Inalytics data
Our data on long-short equity hedge funds is provided by Inalytics Ltd., a company that provides portfolio monitoring services for institutional asset owners as well as consulting services for asset managers. Inalytics mostly works with long-only equity mutual funds, and this data has been previously studied in Di Mascio, Lines and Naik (2016). In addition, Inalytics has obtained data from a small number of long-short equity hedge funds, and we are the first to obtain and work with this data.
There are two ways in which a hedge fund can enter our database: Either the hedge fund submits its trading data directly to Inalytics to obtain feedback on and verification of its trading performance, or an institutional client, e.g. a plan sponsor, asks Inalytics to monitor the hedge fund's trades and performance on its behalf.
In both cases, funds are obligated to submit their complete equity trades and position updates to Inalytics.
Furthermore, Inalytics verifies the data carefully for its accuracy.
Our dataset covers the years 2005 to 2015 and contains complete trading and holding information for the equity portfolios of 21 distinct hedge funds, allowing us to precisely track their long and short stock positions over time. This ability to precisely identify hedge fund trades is a unique feature of our data and crucial for our analyses. 12 Specifically, we have access to two datasets: The first is a transaction-level dataset containing all trades. Variables in this dataset include stock identifiers (ISIN, SEDOL, and CUSIP), the date of the trade, the number of shares traded, and the execution price. The second dataset is a stock-day level dataset of each fund's portfolio holdings. This dataset contains stock identifiers, the number of shares held, and the price of the stock at the end of the day. All prices are expressed in the base currency of the fund and in the local currency of the stock. Our data does not cover derivative positions, but conversations with Inalytics suggest that hedge funds in our sample use them little and, if they do, mostly for hedging their market exposure (for example using index options). 13 Thus, their equity trades likely offer a comprehensive reflection of the fundamental bets that they engage in.

12 The data that comes closest to ours in its level of detail is obtained via a fuzzy name-matching between the hedge fund trades contained in the ANcerno institutional transaction data and quarterly equity holdings reported in 13F filings. However, funds covered by ANcerno only make available a subset of their transaction records and identifying inventory positions is thus very noisy; a fact that we demonstrate in Internet Appendix B.1. Moreover, while our data is at the fund level, the ANcerno data is at the fund-family level. Consequently, we find that hedge fund portfolios in ANcerno are substantially larger and not representative of the portfolios of long-short equity hedge funds (Internet Appendix B.1). Finally, prior work finds that hedge fund families in ANcerno make most of their profits from short-term liquidity provision as opposed to long-term trading (Çötelioğlu, Franzoni, and Plazzi (2020), Jame (2018), Choi, Pearson and Sandy (2016)), which is again untypical for long-short equity hedge funds.
We use a merged dataset that combines the holdings and trading data (details on merging these two datasets can be found in Internet Appendix A). Hedge funds often split their orders into several trades that are executed on different days to reduce the market impact of their orders. To avoid double counting, we follow Di Mascio, Lines, and Naik (2016) and aggregate trades likely belonging to the same investment decision into orders. Specifically, we assume that trades belong to the same order if a hedge fund trades the same stock in the same direction and the distance between them is two trading days or less. Seventy-three percent of the orders comprise only one trading day, and we show in Internet Appendix D.9 that our results are robust to not aggregating trades into orders.
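To make the aggregation rule concrete, the following is a minimal Python sketch (not the authors' code). The column names fund, stock, direction, trade_date, and shares are hypothetical stand-ins for the identifiers in the merged dataset, and calendar-day gaps are used here as a rough proxy for the two-trading-day rule.

```python
import pandas as pd

def aggregate_trades_into_orders(trades: pd.DataFrame, max_gap: int = 2) -> pd.DataFrame:
    """Group trades by the same fund, in the same stock and direction, into one order
    whenever consecutive trades are at most `max_gap` days apart (the rule in the text;
    calendar days stand in for trading days in this sketch)."""
    trades = trades.sort_values(["fund", "stock", "direction", "trade_date"]).copy()
    # Gap (in days) to the previous trade within the same fund/stock/direction group.
    gap = trades.groupby(["fund", "stock", "direction"])["trade_date"].diff().dt.days
    # A new order starts at the first trade of a group or when the gap exceeds the threshold.
    trades["order_id"] = (gap.isna() | (gap > max_gap)).cumsum()
    return (trades.groupby(["fund", "stock", "direction", "order_id"], as_index=False)
                  .agg(first_day=("trade_date", "min"),
                       last_day=("trade_date", "max"),
                       shares=("shares", "sum")))
```

Numbering the break points cumulatively assigns every trade to exactly one order, which is all that the later order-level analyses require.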
B. Summary statistics
In Table 1 Panel A, we display summary statistics by fund. Funds hold on average 50 long positions and 24 short positions (median values are 36 and 19). In terms of USD, short positions make up about 30% of the combined portfolio value, implying that the funds are not market neutral. Having a larger long than short portfolio is typical for long-short equity hedge funds (see, e.g., Fung and Hsieh (2011)). Our funds conduct on average 6 orders per day. Compared to an average of 74 positions, this corresponds to a new order per stock position every 12 trading days. The daily fund turnover (trading volume over total portfolio holdings) is 5.4% on average (median 2.8%). Our funds span a large range of different sizes. The median fund holds about USD 350 million in assets, while the 10th and 90th percentile funds range from USD 115 million to USD 6,400 million, respectively. These numbers suggest that the funds in our data are above average in terms of size. For instance, assuming an average leverage of 2.13 as reported in Ang, Gorovyy, and van Inwegen (2011), we estimate that our median fund has about USD 164 million of assets under management, which is slightly above the 75th percentile of the size distribution in the Lipper TASS database (see Lim, Sensoy and Weisbach (2016)).

13 Aragon and Martin (2012) study derivative positions disclosed in 13F filings for a subsample of hedge fund management companies and find evidence of return predictability. However, as they note on p. 438, "the raw required filings are at the level of the advisor and do not contain entire portfolios at the fund level. A single filing might, therefore, correspond to several hedge funds with different strategies under management and even be mixed with mutual funds."
[Insert Table 1 about here.]

The investment areas of our funds are shown in Figure 1. We have 7 Europe, 3 US, 3 UK and 2 Australia focused funds, as well as 6 funds that invest worldwide. In line with their investment focus, funds mainly invest in North America, Europe, and Australia (included in Asia-Pacific). The EME and Japan regions both make up less than 1% of the sample. Additional descriptive statistics are provided in Internet Appendix B, where we compare Inalytics to ANcerno data, report summary statistics for each individual fund, and document that funds overweight large companies in their portfolios, similar to other institutional investors (e.g., Lee, Shleifer and Thaler (1991)). Otherwise, they split their investments relatively evenly across different industries and value vs. growth stocks.
We display gross fund profitability computed from portfolio holdings in Figure 2. Panel A shows the average fund profitability by year. We measure profitability as the position-weighted average signed return; that is, returns multiplied by -1 in case of a short position. Because most funds have more long than short positions, this profitability co-moves with the market. The worst year is 2008 when equity markets worldwide crashed in the wake of the Lehman bankruptcy. In 2009, equity markets recovered and our sample hedge funds experienced their best year. To get a better idea of their risk-adjusted performance, Panel B shows average signed 4-factor Carhart (1997) alphas. Our funds display positive signed 4-factor alphas in every year of the sample except 2008, suggesting that they exhibit persistent stock-picking skill.
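As a small illustration of the signed-return convention (a sketch with made-up numbers, not the authors' code), the profitability measure can be computed as follows; the column names ret, value, and side are assumptions.

```python
import pandas as pd

def signed_profitability(positions: pd.DataFrame) -> float:
    """Position-weighted average signed return for one fund-period.
    `positions` holds one row per position with columns 'ret' (raw stock return),
    'value' (absolute position size in USD), and 'side' (+1 long, -1 short)."""
    signed_ret = positions["ret"] * positions["side"]           # flip the sign for shorts
    weights = positions["value"] / positions["value"].sum()     # position weights
    return float((signed_ret * weights).sum())

# Example: two longs and one short; a rising price on the short counts as a loss.
df = pd.DataFrame({"ret": [0.04, -0.01, 0.03],
                   "value": [100.0, 50.0, 60.0],
                   "side": [1, 1, -1]})
print(signed_profitability(df))   # about 0.0081, i.e., 0.81%
```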
In Table 1 Panel B, we display summary statistics by position. A position lasts from its opening, i.e., the first buy (sell) for long (short) positions, to its close, i.e., the moment when the stock holding goes back to zero. Next, we examine summary statistics at the order level. We distinguish between three types of orders: opening orders that initiate the position, closing orders that close the position, and follow-up orders that adjust the size of the position in between. We display summary statistics for each type of order separately in Panels C to E. When measured against the maximum position size, opening and closing orders are significantly larger than follow-up orders: while opening and closing orders on average make up around 77% of the maximum position size (median 100%), follow-up orders make up only 15.5% (median 8.5%). 14 In our analyses, we focus on position opening and closing decisions, because the former are most likely to be driven by information while the latter allow us to precisely measure the foregone profits arising from early position closures. 15 Finally, we note that hedge funds do not split orders over multiple days very often: the average number of trading days per order is only about 1.6 (median 1) for each order type.
C. Datastream and Worldscope data
We obtain international stock return and balance sheet data from Datastream (Refinitiv) and Worldscope Fundamentals (Refinitiv), respectively. We complement Datastream with stock return information provided by Inalytics (this affects approximately 14% of our stocks). 16 To adjust returns for risk, we implement a regional version of the Carhart (1997) 4-factor model. Following Karolyi and Wu (2014), we categorize stock markets into 5 regions (Japan, North America, Europe, Asia-Pacific and Emerging Markets). The assignment of countries to regions is displayed in Internet Appendix A.1. We use a market factor, a High-minus-Low book-to-market (HML) factor, a Small-minus-Big (SMB) factor, and a Momentum (MOM) factor of winners minus losers. For America, Asia-Pacific, Europe, and Japan, we obtain daily factors from Kenneth French's website; for the emerging market region, we compute the factors ourselves (as detailed in Internet Appendix A.5). 17 We use the U.S. 1-month T-bill rate as the risk-free rate. Returns and all other variables are measured in U.S. dollars.
For each stock c and each month, we estimate betas by regressing daily excess returns on the regional factors over the past 12 months:

r_{c,t} - r_{f,t} = a_c + \beta_{c,MKT} (r_{m,t} - r_{f,t}) + \beta_{c,SMB} SMB_t + \beta_{c,HML} HML_t + \beta_{c,MOM} MOM_t + \varepsilon_{c,t},

where r_{c,t} is the daily company return, r_{m,t} is the daily market return and r_{f,t} is the daily risk-free rate. We only keep betas that are based on at least 50 days of non-missing return data. As recommended by Levi and Welch (2016), we shrink the resulting beta estimates toward their cross-sectional mean:

\beta_{c,F}^{shrunk} = 0.7 \cdot \beta_{c,F} + 0.3 \cdot \bar{\beta}_{r,F} \quad \text{for } F \in \{MKT, SMB, HML, MOM\},

where \bar{\beta}_{r,F} is the equal-weighted average of \beta_{c,F} estimated in stock c's region r. We then compute daily 4-factor alphas as:

\alpha_{c,t} = r_{c,t} - r_{f,t} - \beta_{c,MKT}^{shrunk} (r_{m,t} - r_{f,t}) - \beta_{c,SMB}^{shrunk} SMB_t - \beta_{c,HML}^{shrunk} HML_t - \beta_{c,MOM}^{shrunk} MOM_t.

16 We show in Internet Appendix D.7 that our results are robust if we only use return data from Datastream. 17 Our results are also robust to excluding the EME region completely, see Internet Appendix D.6. Furthermore, some stocks cannot be assigned to a region. In this case, we compute alphas relative to the global factors provided by Kenneth French, but we show in Internet Appendix D.6 that our results are robust to excluding these stocks.
Finally, we winsorize 4-factor alphas at the 1%-level on both sides.
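A minimal Python sketch of this pipeline is given below for concreteness. It is an illustration under assumptions (factor column names MKT_RF, SMB, HML, MOM; statsmodels OLS as the estimator), not the authors' code.

```python
from typing import Optional

import pandas as pd
import statsmodels.api as sm

FACTORS = ["MKT_RF", "SMB", "HML", "MOM"]

def estimate_shrunk_betas(excess_ret: pd.Series,
                          factors: pd.DataFrame,
                          regional_mean_betas: pd.Series,
                          min_obs: int = 50,
                          shrink: float = 0.7) -> Optional[pd.Series]:
    """Estimate 4-factor betas from the past 12 months of daily data and shrink them
    70/30 toward the regional cross-sectional mean, as described in the text."""
    data = pd.concat([excess_ret.rename("ex_ret"), factors[FACTORS]], axis=1).dropna()
    if len(data) < min_obs:                       # require at least 50 non-missing days
        return None
    X = sm.add_constant(data[FACTORS])
    betas = sm.OLS(data["ex_ret"], X).fit().params[FACTORS]
    return shrink * betas + (1.0 - shrink) * regional_mean_betas[FACTORS]

def daily_alpha(excess_ret_t: float, factors_t: pd.Series, shrunk_betas: pd.Series) -> float:
    """Daily 4-factor alpha: excess return minus the factor-explained component."""
    return excess_ret_t - float((shrunk_betas * factors_t[FACTORS]).sum())

def winsorize(alphas: pd.Series, p: float = 0.01) -> pd.Series:
    """Two-sided winsorization at the p level."""
    return alphas.clip(lower=alphas.quantile(p), upper=alphas.quantile(1 - p))
```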
In Internet Appendix C, we show that our results are robust to using characteristics-adjusted returns following the methodology of Daniel, Grinblatt, Titman, and Wermers (1997). In Internet Appendix D.2, we further show that they are robust to using benchmark-adjusted returns with respect to the self-reported fund-specific benchmark.
D. Other data
For additional cross-sectional tests, we employ data from a variety of sources: we use fund flows, leverage, and track record from the HFR hedge fund database, and data on hedge funds' insider ownership from SEC form ADV. These data sources force us to work with a subset of our data (e.g., 14 out of 21 funds in the case of HFR) as they rely on determining hedge funds' identities. In Internet Appendix A.9, we describe in detail our process of establishing hedge funds' identities in the Inalytics data.
A. Profitability of opening and closing trades
As shown in Figure 2, our sample hedge funds are profitable on average. We now examine their trading skill in more detail. We start with a simple graphical analysis presented in Figure 3, in which we show cumulative 4-factor alphas in the 125 trading days following opening or closing orders (as noted before, follow-up orders are excluded from our analysis). To be conservative, we always measure cumulative returns starting on the trading day following the last day of the order. 18 Following the initiation of a long (short) position, cumulative benchmark-adjusted returns are slightly above (below) 1% (-1%). 19 Moreover, on both the long and the short side, most of these returns are realized in the first 60 trading days (3 months) following the opening order. In other words, the post-opening alphas (per unit of time) decay over time: they are highest immediately after the position is established and then gradually shrink as time progresses. 20 In contrast to position openings, closings of long and short positions do not seem to be informed. Long sells, for instance, are not followed by negative alphas, but rather by positive ones. In the 125 trading days following the closing of a long position, cumulative 4-factor alphas are about 0.5%. Similarly, the closing of a short position is followed by negative alphas of about -0.5% over 125 trading days.

18 In Internet Appendix D.8, we measure returns from actual transaction prices and show that this only strengthens our results. 19 This number may seem small given that the average of yearly returns in Figure 2 Panel B is 3.5%. This difference is driven by two facts. First, the number of orders per year does not stay constant over time. Thus, a simple average over yearly values in Figure 2 is not representative of the whole sample. Second, many fund positions are kept open for less than 125 trading days and, due to alpha decay, the annualized returns over these shorter holding periods are much higher. For example, after 60 trading days, the average cumulative alpha is about 0.7%, which is 3.0% annualized.
In Table 2 Panel A, we show results of a regression analysis that tests for the significance of the return difference between long and short positions. Specifically, in Columns 1 to 3, we focus on position openings and run a regression of 4-factor alphas following the order on D(Long Position), a dummy variable equal to one if the order opens a long position (and zero if it opens a short position). In Columns 1 and 2, we examine alphas for holding periods of 60 and 125 trading days, respectively. We choose these holding periods because they straddle the average holding period (see Table 1 Panel B) and Figure 3 reveals that most of the trade profitability accrues in this time. We include fund fixed effects to ensure that we compare long and short position openings by the same fund; we include month fixed effects to control for macroeconomic conditions. Standard errors are two-way clustered by stock and last date of order.
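The specification can be sketched as follows (an illustration on synthetic data, not the authors' code; the use of linearmodels' PanelOLS, the column names, and the planted 2% long-short gap are all assumptions made for the example).

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Synthetic stand-in for the order-level sample: one row per opening order with the
# post-order 125-day 4-factor alpha, a long/short dummy, and fund/month identifiers.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "stock": [f"S{i:03d}" for i in range(n)],   # unique stock per order (sketch only)
    "date": pd.to_datetime("2010-01-01") + pd.to_timedelta(rng.integers(0, 250, n), unit="D"),
    "fund": rng.choice([f"F{i}" for i in range(10)], n),
    "long": rng.integers(0, 2, n),
})
df["month"] = df["date"].dt.to_period("M").astype(str)
df["alpha_125"] = 0.02 * df["long"] + rng.normal(0, 0.05, n)   # planted 2% long-short gap

panel = df.set_index(["stock", "date"])                 # entity = stock, time = order date
other_fe = panel[["fund", "month"]].astype("category")  # fund and calendar-month fixed effects

res = PanelOLS(panel["alpha_125"], panel[["long"]],
               other_effects=other_fe).fit(cov_type="clustered",
                                           cluster_entity=True,   # cluster by stock
                                           cluster_time=True)     # and by order date
print(res.params["long"], res.std_errors["long"])        # long-short alpha gap and its SE
```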
[Insert Table 2 about here.]

Given our specification, the coefficient estimate for the D(Long Position) dummy can be interpreted as the return difference between long and short positions. This return difference is 1.6% over 60 days and 2.0% over 125 days, and it is statistically significant at the 1% level. In Column 3, we repeat the regression for holding-period alphas, i.e., alphas from the day following the opening order to the day prior to the closing order. As noted before, this approach is conservative because it excludes within-order profits; results including within-order profits are even larger (Internet Appendix D.8). The difference in holding-period 4-factor alphas between long and short positions amounts to 2.0% and is significant at the 1% level. 21 These findings confirm that our sample hedge funds possess investment skill.
In Table 2 Panel A Columns 4 and 5, we examine post-trade returns for closing orders using the same regression setup as in Columns 1 and 2. We find that the 4-factor alpha difference between closed long and closed short positions is again positive at 0.5% over 60 days and 0.9% over 125 days, significant at the 10% and 5%-level respectively. In Section IV below, we show that the post-closing return difference becomes substantially larger (and statistically more significant) when we focus on subsamples of constrained hedge funds.
In Table 2 Panel B, we repeat our analysis after controlling for various stock characteristics. These controls address the concern that hedge funds open or close positions in response to past returns or changes in stock liquidity and/or volatility, and that these stock characteristics are responsible for the return predictability.
The results show that the long-short return difference for opened and closed positions is barely affected by the inclusion of controls for past stock returns, share turnover, Amihud illiquidity, and return volatility (measured over the previous 60 trading days), perhaps because our left-hand-side variable (4-factor alphas) already controls for these effects. We therefore omit these controls in our subsequent analyses.
Taken together, our results imply that the hedge funds in our sample close their positions too early in the sense that they forgo a substantial fraction of the trade's potential profitability. Specifically, while long positions outperform short positions by about 2.9 percentage points (=2.0%+0.9%) from opening to 125 trading days after the close, we find that our hedge funds only capture about 68% (=2.0%/2.9%) of this return, implying that they leave a staggering 32% "on the table." This back-of-the-envelope calculation shows that early position closures are economically important. 21 We confirm in Internet Appendix D.1 that we find similar predictability for the average returns during the holding period.
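The back-of-the-envelope split can be reproduced directly from the two headline numbers (a trivial worked example using the figures quoted above):

```python
# 2.0% captured from opening to close; 0.9% forgone over the 125 days after the close.
captured, foregone = 0.020, 0.009
potential = captured + foregone                 # about 2.9 percentage points in total
print(f"captured share:    {captured / potential:.2f}")   # roughly two thirds
print(f"left on the table: {foregone / potential:.2f}")   # roughly one third
```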
Our results also offer an important insight for researchers studying the informativeness of individual buy and sell transactions. Indeed, they suggest that for the long-short equity hedge funds in our sample, only opening trades are informative, whereas closing trades are not only uninformative but rather predict returns in the opposite direction of the closing trade. It is therefore important to determine whether individual trades open or close a stock position, which is only possible with access to portfolio data such as we use here.
Without this distinction, opening and closing trades are lumped together, causing a downward bias when assessing investors' trading skills.
B. Profitable capital reallocation
We argue that early position closures arise because constrained hedge funds want to free-up capital in order to invest into new, more promising trading opportunities. Of course, this argument only makes sense when these new investments deliver higher returns than those that are forgone by closing existing positions. We test this prediction in Table 3 Panel A.
Specifically, in Columns 1 and 2, we regress post-opening and post-closing returns on D(Position Opening), a dummy variable that takes the value one for opening orders and zero otherwise. We use signed 4-factor alphas as the dependent variable, which are defined as alphas for long positions and minus one times alpha for short positions. Hence, for both long and short positions, a larger value implies a larger profitability. By including fund-portfolio-month fixed effects, we compare openings and closures undertaken by the same fund, on the same side of the portfolio (long or short), and in the same month; that is, when it is likely that the closure provided the capital for the new position opening. We find a significantly positive coefficient of about 0.5-0.6% for D(Position Opening), implying that newly opened positions are indeed more profitable than existing positions that are closed within the same month.
[Insert Table 3 about here.] In Columns 3 and 4 of Panel A, we complement our regression approach with an even finer matching analysis. Specifically, we match every position closing with position openings done by the same fund between the first day of the closing order and up to three trading days after the last day of the closing order.
We show the average return difference between matched openings and closings for the following 60 and 125 trading days, respectively. We find that newly opened positions outperform recently closed positions by 0.65% (0.65%) over the next 60 (125) trading days. The fact that our results get slightly stronger as we tighten the time link between openings and closings confirms that hedge funds redeploy their capital profitably.
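A minimal sketch of this matching step is shown below (hypothetical column names; calendar days approximate trading days; not the authors' code).

```python
import pandas as pd

def avg_reallocation_gain(closings: pd.DataFrame, openings: pd.DataFrame,
                          window_days: int = 3) -> float:
    """Match each closing order with opening orders by the same fund whose first day
    falls between the closing order's first day and `window_days` days after its last
    day, then average the signed-alpha gap (opened minus closed).
    Assumed columns: fund, first_day, last_day, signed_alpha in both frames."""
    pairs = closings.merge(openings, on="fund", suffixes=("_close", "_open"))
    in_window = (
        (pairs["first_day_open"] >= pairs["first_day_close"])
        & (pairs["first_day_open"] <= pairs["last_day_close"] + pd.Timedelta(days=window_days))
    )
    matched = pairs.loc[in_window]
    # Positive values mean newly opened positions outperform the positions just closed.
    return float((matched["signed_alpha_open"] - matched["signed_alpha_close"]).mean())
```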
In Panel B, we repeat our analysis after adding large position increases and decreases to our sample of opening and closing orders. We define large position increases (decreases) as orders that at least double (at least halve) the size of the existing position.

In summary, the results of this section show that the hedge funds in our sample possess investment skill but face constraints: they open stock positions that generate alpha, but close them before this alpha is fully exploited in order to recycle their capital into new investment opportunities. In the next section, we investigate further where these constraints come from.
IV. Examining Constraints to Fundamental Arbitrage
Our stylized trading model in Internet Appendix F shows that early position closures are driven by opportunity costs in the form of new investment opportunities, a tightening of funding constraints, and an increase in risk exposure. In this section, we examine the empirical relevance of these predictions. 22
A. Sample splits by changes in fund-specific funding constraints
Our first set of tests relates to changes in fund-specific funding constraints. When funds become more financially constrained, we expect them to close positions earlier, leading to higher foregone post-closing returns. In Table 4, we test this prediction by splitting the sample of closing orders along several dimensions of fund-specific funding constraints. Since we do not observe hedge funds' actual borrowing activity and/or cash positions, we rely on empirical proxies of funding constraints for this analysis.
Funding constraints tighten when more new trading opportunities emerge that require additional capital.
Accordingly, we split the sample by whether the hedge fund increased or decreased the number of open positions over the previous five or ten days. For each of these subsamples, we then regress post-closing returns over the following 125 trading days on the D(Long Position) dummy (as in Table 2 Panel A). The results in Table 4 Panel A show that the post-closure alpha difference between long and short positions is substantially larger after an increase than after a decrease in the number of open positions, consistent with new trading opportunities tightening the funds' constraints.

[Insert Table 4 about here.]

Next, we study a tightening of funding constraints due to negative prior returns, which can force levered hedge funds to close existing positions. To tease out whether hedge funds' funding constraints operate at the position or at the fund level, we separately examine losses on the particular position and on the hedge fund portfolio as a whole (excluding the specific position). Specifically, in Table 4 Panels B and C, we split the sample of closing orders by whether returns on the specific position (or the portfolio excluding that position) were positive or negative over the prior 5 or 10 trading days. As expected, we find that hedge funds leave more money on the table after negative returns of both the specific stock and their overall portfolio. Interestingly, the effect of the stock-specific return appears to be somewhat stronger than the effect of the portfolio return, suggesting that funding constraints operate both at the position and at the fund level.
Investor redemptions may constitute another important source of funding constraints (e.g., Shleifer and Vishny (1997)). We examine this idea in Table 4 Panel D by splitting closing orders by whether the fund experienced an outflow or an inflow in the prior month. 23 We find that the direction of past fund flow does not seem to be very important as the post-closure alpha difference between long and short positions in the two subsamples is fairly similar. This suggests that hedge funds successfully manage redemption risk by means of advance notice periods and/or holding cash buffers.
B. Sample splits by level in fund-specific funding constraints
In this section, we examine sample splits by the level of fund-specific funding constraints. Funds with higher leverage, worse track records, and less liquid assets may find it more difficult to obtain additional financing, forcing them to close positions earlier.
In Table 5 Panel A, we start with a sample split by hedge fund leverage. In Columns 1 and 2, we compute leverage as the ratio of the total portfolio value in Inalytics over the net asset value reported in HFR. In Columns 3 and 4, we instead use the coarse but more widely available leverage classification provided in HFR. In both cases, the post-closing return difference between long and short positions is higher for funds with high leverage. When put in perspective with the long-short return difference over the holding period, highly-levered hedge funds leave approximately 40-50% of the trades' potential profitability on the table, in line with the intuition that they are more constrained. 24

[Insert Table 5 about here.]

Next, we look at portfolio liquidity. There are two reasons why owning a liquid portfolio can alleviate funding constraints. First, prime brokers look at liquidity in order to determine the haircut for the fund's portfolio. The more liquid the portfolio, the more pledgeable it is, which alleviates funding constraints by serving as collateral (e.g., Hart and Moore (1994)). Second, hedge funds with a liquid portfolio may be willing to borrow more money because they know that they can easily divest these assets if needed. We therefore expect hedge funds with more liquid portfolios to be less constrained, which should coincide with lower post-closing returns. To examine this prediction, we split the sample by the liquidity of the fund's portfolio in Table 5 Panel B and find that funds with more liquid holdings indeed leave less money on the table.

Reputation is another way in which hedge funds can mitigate the agency frictions giving rise to financial constraints (e.g., Kreps et al. (1982) and Kreps (1990)). We examine this idea by looking at hedge funds' track records in Table 5 Panel C. Specifically, we split the sample by whether the length of the track record (Columns 1 and 2) or the average return since fund inception (Columns 3 and 4) is above or below the median. We find that a good reputation in the form of a positive track record helps to reduce financial constraints: the return difference between closed long and short positions amounts to 2.2% (1.5%) for funds with short (unsuccessful) track records, while it is only 0.2% (0.5%) for funds with long (successful) track records. 25

Finally, we consider inside ownership by hedge fund managers. The premise is that hedge fund managers with large personal stakes may be reluctant to accept outside equity capital because, facing decreasing returns to scale, they do not want to dilute the returns on their inside capital. Consistent with this view, Gupta and Sachdeva (2018) show that funds with large inside ownership have higher and more persistent alphas. At the trade level, we would then expect these fund managers to close positions earlier in order to focus their limited capital on the most attractive bets. In Panel D, we therefore split the sample by the level of inside ownership, which we obtain from SEC Form ADV. We indeed find that the post-closure alpha difference between long and short positions is 2.6% for funds with high insider ownership while it is only 0.2% for funds with low insider ownership. 26 This finding is consistent with high-inside-ownership funds deliberately operating on a smaller scale, forcing them to close existing positions earlier (thereby leaving about half of the position's profitability on the table). 27

25 While the economic magnitude appears to be relatively large, we note that our sample is relatively small. Indeed, we only have track record information for 13 out of the 21 hedge funds in our sample. Thus, the standard errors are relatively large, and the economic magnitude should be interpreted with caution. 26 Consistent with Gupta and Sachdeva (2018), we also find that hedge funds with high insider ownership have larger post-opening alphas compared to funds with low insider ownership (results available upon request).
C. Sample splits by change in market-wide funding constraints
Our previous results underscore the importance of debt financing for hedge funds. In this case, we expect portfolio closure decisions to be sensitive to changes in prime brokers' funding constraints. In Table 6, we test this prediction by offering sample splits along four widely-used measures of market wide funding constraints: changes in the TED spread, the intermediary risk factor of He, Manela and Kelly (2016) (henceforth HKM intermediary factor), changes in the VIX, and stock returns of primary dealers. 28 [Insert Table 6 about here.] For all four measures, our results paint a consistent picture. The return gap between closed long and short positions opens up after a tightening of funding constraints measured over the previous 5 or 10 trading days (i.e., when the TED spread increases, the HKM intermediary risk factor is negative, the VIX increases or stock returns of primary dealers are negative). This shows that tighter funding constraints in the intermediary sector are passed on to our sample hedge funds, forcing them to close their positions prematurely (resulting in foregone profits of about 40% of the position's potential profitability). 27 Alternatively, these hedge funds may for some reason be unable to attract outside capital, implying that they involuntarily remain more constrained. 28 The TED spread is defined as three-month LIBOR minus three-month T-Bill rate and is a bellwether of the financial sector's health (e.g., Brunnermeier (2009), Garleanu and Pedersen (2011)). The HKM intermediary risk factor reflects changes to the capital ratios of primary dealer counterparties of the New York Federal Reserve and He, Manela and Kelly (2016) find that it has significant explanatory power for the cross-section of returns in various asset classes. The VIX index is a measure of the implied volatility of S&P 500 index options, calculated and published by the Chicago Board Options Exchange (CBOE). Increases in the VIX are generally interpreted as reflecting an increase in risk aversion and tighter funding constraints. The intermediary stock returns, described in He, Manela and Kelly (2016), are value-weighted portfolio returns of all publicly-traded holding companies of primary dealer counterparties of the New York Federal Reserve. Negative returns signal that primary dealers have less capital and are more likely to tighten funding constraints for client hedge funds.
D. Sample splits by risk management variables
Our model shows that, in addition to financial constraints, risk considerations should be an important determinant of position closure decisions. We now test this hypothesis.
We start by examining changes to the volatility of hedge fund returns. Our model predicts that, when the volatility of portfolio stocks increases, hedge funds' risk constraints tighten, forcing them to close positions prematurely. Table 7 Panel A presents the results of two sample splits for different volatility measures. In Columns 1 and 2, we look at the change in portfolio return volatility, which is measured as the sum of squared fund portfolio returns over the previous 20 trading days. In Columns 3 and 4, we split the sample based on the change in the average stock position volatility, defined as the position-weighted average of the sum of squared stock returns over the previous 20 trading days. The results confirm our prediction for both volatility measures. The alpha difference between closed long and short positions amounts to a significant 1.3% after an increase in fund volatility, while it is 0.6% or less after a decrease in volatility.
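A stylized version of the two volatility-change measures used for the Panel A splits is sketched below; the wide return matrices and dollar weights are assumed input shapes, not the actual data structure used in the paper.

```python
import pandas as pd

def volatility_changes(fund_ret: pd.Series, stock_ret: pd.DataFrame,
                       weights: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """Fund return volatility = sum of squared daily fund returns over the prior
    `window` days; average position volatility = position-weighted average of the
    sum of squared daily stock returns over the same window. The reported change
    compares each measure with its value over the preceding `window`-day block."""
    fund_vol = fund_ret.pow(2).rolling(window).sum()
    stock_vol = stock_ret.pow(2).rolling(window).sum()            # per-stock realized variance
    avg_pos_vol = (stock_vol * weights).sum(axis=1) / weights.sum(axis=1)
    levels = pd.DataFrame({"fund_vol": fund_vol, "avg_position_vol": avg_pos_vol})
    return levels - levels.shift(window)                          # change vs. the window before
```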
[Insert Table 7 about here.] Next, we examine whether funds are more willing to forgo high post-closing returns in stocks that add more risk to their portfolio. Indeed, Pontiff (1996, 2006) shows that exposure to idiosyncratic risk is an important cost to arbitrage because traders-like our sample hedge funds-need to take concentrated positions in order to take advantage of an arbitrage opportunity. In Columns 1 and 2 of Panel B, we therefore test whether funds are more willing to close a position prematurely if it exhibited an increase in idiosyncratic volatility. Indeed, we find that hedge funds leave more money on the table when they close positions that exhibited an increase in idiosyncratic volatility, confirming that idiosyncratic volatility is an important arbitrage cost as shown by Pontiff (1996).
Our detailed portfolio data allows us to measure more precisely how much an individual stock position contributes to overall portfolio volatility. Indeed, given that our hedge funds also hold many short positions, how much risk a position actually adds to the portfolio may deviate from idiosyncratic risk. To assess how much an individual stock position contributes to portfolio volatility, we compare the squared fund returns over the previous 60 trading days between two hypothetical portfolios: the current portfolio including the stock position that was closed and the portfolio excluding that position. We then split the sample of post-closing returns by whether the stock position increases or decreases portfolio volatility. As shown in Columns 3 and 4 of Panel B, we find that positions that increase portfolio volatility (whose closure thus decreases fund volatility) are followed by a long-short alpha difference of about 2.0%, while the difference is close to 0% for positions that decrease portfolio volatility (whose closure thus increases fund volatility).
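The with/without comparison described above can be written down compactly; the sketch below assumes signed position weights (negative for shorts) indexed by stock and a wide matrix of daily stock returns, which are illustrative assumptions about the data layout rather than the paper's implementation.

```python
import pandas as pd

def volatility_contribution(stock_ret: pd.DataFrame, weights: pd.Series,
                            closed_stock: str, window: int = 60) -> float:
    """Realized portfolio variance (sum of squared daily portfolio returns over the
    previous `window` days) of the portfolio including the closed position minus
    that of the portfolio excluding it. Positive values mean the position added
    risk, so closing it reduces fund volatility."""
    w_incl = weights / weights.abs().sum()
    ret_incl = stock_ret.iloc[-window:].mul(w_incl, axis=1).sum(axis=1)

    w_excl = weights.drop(closed_stock)
    w_excl = w_excl / w_excl.abs().sum()
    ret_excl = stock_ret.iloc[-window:].drop(columns=closed_stock).mul(w_excl, axis=1).sum(axis=1)

    return float((ret_incl ** 2).sum() - (ret_excl ** 2).sum())
```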
Finally, in Panel C, we examine risk at the industry level. We conjecture that hedge funds may try to avoid being overly exposed to a specific industry. Thus, after experiencing an increase in exposure to a certain industry, they may be more willing to prematurely close positions in that industry. To examine this prediction, we conduct sample splits by whether hedge funds increased or decreased their exposure to the industry of the closed stock over the prior 20 trading days. Using industry groupings based on 12 Fama-French industries and 2-digit SIC codes, we find a higher long-short difference in post-closure returns after an increase in industry exposure, suggesting that hedge funds are wary of taking on too much industry risk.
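A minimal sketch of the industry-exposure change underlying the Panel C splits follows; the position tables, the stock-to-industry mapping, and all column names are assumptions made for illustration, and the gross-exposure share is a simplification of the long/short exposure used in the paper.

```python
import pandas as pd

def industry_exposure_change(pos_now: pd.DataFrame, pos_20d_ago: pd.DataFrame,
                             industry_of: pd.Series, industry: str) -> float:
    """Change in the share of a fund's gross exposure allocated to one industry
    (e.g., a Fama-French 12 group or a 2-digit SIC code) relative to 20 trading
    days earlier."""
    def share(pos: pd.DataFrame) -> float:
        ind = pos["stock_id"].map(industry_of)
        gross = pos["position_value_usd"].abs().sum()
        return pos.loc[ind == industry, "position_value_usd"].abs().sum() / gross
    return share(pos_now) - share(pos_20d_ago)
```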
Taken together, these results suggest that hedge funds engage in active risk management, which leads them to close positions that are still profitable in order to reduce their risk exposure.
V. Additional Results and Robustness Checks
In this section, we summarize additional results and robustness checks that are presented in the Internet Appendix.
A. Long-short equity hedge funds as fundamental arbitrageurs
Long-short equity funds are described as fundamental investors that make independent long and short bets based on a fundamental analysis (Pedersen (2015), Getmansky, Lee and Lo (2015)). Our finding that hedge funds' opening trades are followed by abnormal returns over the subsequent 6 months (and more) is consistent with informed trading on long-lived information. In Internet Appendix E.5, we provide further evidence that our funds trade on fundamentals by showing that their trades predict future earnings surprises.
This finding suggests that our hedge funds are able to identify fundamentally under-or overvalued stocks.
In Internet Appendix E.6, we further show that our hedge funds rarely engage in merger arbitrage or pairs trading-two of the most popular convergence strategies involving equities. This suggests that hedge fund trades in different stocks represent independent speculative bets as implicitly assumed by our analysis.
B. Follow-up orders are different from opening and closing orders
Our analysis focuses on opening and closing orders, as follow-up orders are likely to be driven by different considerations. For example, Di Mascio, Lines and Naik (2016)
C. Representativeness
We acknowledge that the relatively small number of funds raises questions about the representativeness of our data. Given that similar data for a comparison is not available, the best we can do is to compare our imputed hedge fund returns to the returns reported in standard hedge fund databases such as Lipper TASS or HFR. We find broadly consistent factor loadings across the different hedge fund return series (see Internet Appendix E.7). We thus conclude that the hedge funds in our sample appear to be similar to long-short equity funds that report to standard databases. Moreover, the trading behavior displayed in our sample looks consistent with what is commonly assumed for long-short equity hedge funds: hedge funds' trades predict fundamentals (Internet Appendix E.5), are independent (Internet Appendix E.6), and are spread over different industries with a tilt toward large stocks (Internet Appendix B.4).
D. Potential data biases and selection concerns
In this subsection, we discuss potential data biases and selection concerns. We begin by noting that several sample biases that have been identified for standard hedge fund databases are not a major concern for us.
Indeed, hedge funds that engage with Inalytics provide most of their transaction data in real time, limiting the scope for window dressing and back-filling. Moreover, since our data includes funds that have already been terminated, survivorship bias is not an issue.
One potential concern is sample selection. Here, the biggest worry is that successful hedge funds strategically engage with Inalytics in order to advertise their trading success-implying that the documented trade profitability would be biased upward. Alternatively, it could be that institutional clients demand that poorly-performing hedge funds submit their trades to Inalytics for monitoring and verification purposes, or that poorly performing funds engage with Inalytics to learn how they can improve their trading processes.
In this case, the trade profitability would be biased downward.
We study the extent of sample selection in Internet Appendix E.8. We find that fund returns are not elevated (reduced) shortly after entering (before leaving) the sample. Hence, hedge funds do not opportunistically enter (leave) the sample in order to advertise (hide) their performance, suggesting that sample selection is not a big concern for our data. Finally, we argue that selection should only affect the magnitude of the documented trade performance. However, it should not invalidate our micro evidence on how financial constraints affect the trading behavior of long-short equity funds. Indeed, financial constraints are ubiquitous and we expect our qualitative results on early position closures and hedge funds' capital reallocations to apply more generally. 29
E. Alternative explanations
29 Consistent with this point, we note that Di Mascio, Lines, and Naik (2016) also find positive abnormal returns after the closure of long positions (but they do not explain this result).
Here we briefly discuss three alternative explanations and explain why they are unlikely to drive early position closures. First, we study how our results relate to the disposition effect; i.e., investors' tendency to close winning positions too early and hold losing positions too long (Odean (1998), Jin and Scherbina (2011)). In Internet Appendix E.9, we show that our hedge funds do not exhibit the disposition effect. In fact, they are more likely to close positions trading at a loss rather than those trading at a gain, which is the exact opposite of the disposition effect. Thus, our findings are not explained by the disposition effect.
Second, we study whether early position closures can be explained by lack of skill or biased beliefs. In Internet Appendix E.10, we show that average post-opening and post-closing returns are significantly positively correlated across managers; that is, the same hedge funds that leave more money on the table are also those that open more profitable positions. This is consistent with Table 3, which shows that, within a given fund, the forgone profits from early position closures are outweighed by the profits from new position openings. Hence, early closures do not appear to be mere trading errors, but rather conscious decisions to reallocate funds into more profitable investment opportunities.
Finally, we address the concern that post-closure returns are due to price pressure in illiquid stocks. We first note that this explanation fails to explain our numerous sample split results (Tables 4-7). For example, there is little reason to think that the track record of an individual hedge fund affects the liquidity condition of the stock market. Hence, price pressure cannot explain why only hedge funds with poor track records exhibit a significant post-closure return difference (Table 5 Panel C). In Internet Appendix D.10, we further show that our results for foregone post-closing returns are robust to measuring cumulative alphas starting three trading days after the last day of the closing order; that is, after leaving out the trading days that are expected to be most contaminated by price pressure originating from the closing trade.
F. Other robustness checks
In Internet Appendix D, we report additional robustness checks. For instance, we show there that our results are robust to using benchmark-adjusted returns instead of alphas (D.2), excluding stocks with converted prices of less than $1 (D.3), excluding trades around merger events (D.5), excluding stocks from emerging markets or without regional assignment (D.6), only using return data from Datastream (D.7), including within-order returns (D.8), or not aggregating trades into orders (D.9).
VI. Conclusion
Fundamental investors play a vital role in financial markets: they acquire and synthesize value-relevant information through their research and impound it into prices through their trading. Yet, like other real-world arbitrageurs, fundamental investors face constraints that impede their trading activity.
In this paper, we provide an in-depth study on such limits of "fundamental arbitrage." Specifically, we exploit proprietary trading data for a sample of discretionary long-short equity hedge funds-presumably the most important fundamental investors in today's markets-to offer a microscopic analysis of their trading activity. We first establish that positions opened by these funds predict risk-adjusted returns over a horizon of six months, suggesting that their trades are informed. We then show that their closing trades predict returns in the opposite direction of the closing trade. In other words, our sample hedge funds close their positions too early, thereby foregoing about a third of the total trade profitability.
We argue that this behavior arises naturally from the limits of arbitrage (we show this formally with the help of a simple trading model in Internet Appendix F): hedge funds rationally decide to close positions that are still expected to generate profits in order to accommodate tightened financial constraints and/or to invest their limited capital in even more profitable trading opportunities. Our results broadly confirm this view: funds engage in more premature position closures when new trading opportunities arise or when they become more constrained due to negative fund returns, increases in volatility, or increases in market-wide funding costs. Similarly, hedge funds with short track records, high leverage, or less liquid assets engage in more premature position closures. Somewhat surprisingly, investor outflows seem to matter less for portfolio closing decisions, perhaps because hedge funds manage redemption risk with the help of advance notice and lockup periods. Finally, we also find that hedge funds with large inside ownership leave more money on the table, consistent with a recent argument that large inside-ownership funds choose to accept less outside capital in order to not dilute their returns (Gupta and Sachdeva (2018)).
To the best of our knowledge, our paper is the first to provide micro-level evidence on how constrained fundamental investors decide to abandon a profitable trading opportunity in order to recycle their capital.
As the trading opportunity is not fully exploited, mispricing persists. Thus, despite the presence of informed fundamental traders, market prices can remain removed from their fundamental values.
Figure 1: Investment areas of funds
Panel A shows the investment areas of our sample of funds. We base these areas on their chosen benchmark, but verify that the funds indeed invest predominantly in these areas. Panel B depicts the regions of the stocks held by the funds. We compute this average over the number of positions over the entire sample period. The definitions of the regions are displayed in Internet Appendix A.1.
[Figure 1, Panel B legend: Asia-Pacific, Europe, North America, Japan + EME]
Figure 2: Fund returns and alphas
In this figure, we display fund returns and alphas by year. In Panel A, we use raw returns. In Panel B, we use 4-factor alphas. Specifically, for each fund, we first compute the (position-weighted) daily average signed return of positions the fund holds (for Panel B we compute the daily average signed 4-factor alpha of positions the fund holds). Then on each day, we compute the (equal-weighted) average across funds. Finally, we compound these returns over the year. Signed returns are equal to the stock's raw return for long positions and the stock's raw return times minus one for short positions. Signed 4-factor alphas are equal to the alpha according to the Carhart (1997) model estimated at the regional level for long positions and the 4-factor alpha times minus one for short positions.
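The aggregation described in this caption can be expressed as a short routine; the long-format input (columns date, fund, signed_ret, weight) is an assumed layout for illustration, not the paper's actual data structure.

```python
import pandas as pd

def yearly_fund_performance(panel: pd.DataFrame) -> pd.Series:
    """Position-weighted daily average signed return (or signed 4-factor alpha)
    per fund, averaged equally across funds each day, then compounded within
    each calendar year."""
    fund_daily = (panel.assign(wret=panel["signed_ret"] * panel["weight"])
                        .groupby(["date", "fund"])
                        .apply(lambda g: g["wret"].sum() / g["weight"].sum()))
    avg_daily = fund_daily.groupby(level="date").mean()
    avg_daily.index = pd.to_datetime(avg_daily.index)
    return (1 + avg_daily).groupby(avg_daily.index.year).prod() - 1
```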
Figure 3: Alpha following orders
This figure displays cumulative 4-factor alphas for 125 trading days following orders that open or close a position. Open Long Position is the buy order establishing a long position ("long buy"). Open Short Position is the sell order establishing a short position ("short sale"). Close Short Position is the buy order closing a short position ("short buy"). Close Long Position is the sell order closing a long position ("long sell"). Four-factor alpha is the alpha according to the Carhart (1997) model estimated at the regional level. The return index is set to 100 at the last day of the order. This table examines returns following opening and closing orders. In Panel A, we regress average 4-factor alphas following the order on a dummy variable whether the order is related to a long position. In Panel B, we add additional controls for past stock returns, share turnover, Amihud illiquidity, and return volatility (measured over the past 60 trading days). In Columns 1 to 3, we include only opening orders. In Columns 4 and 5, we include only closing orders. In Columns 1, 2, 4, and 5, the dependent variable is the cumulative 4-factor alpha expressed in percent for 60 and 125 trading days following the last day of the order. In Column 3, the dependent variable is the cumulative alpha from the day after the last day of the opening order to the day before the first day of the closing order (i.e., the holding period). We include fund fixed effects and month fixed effects (based on the month of the last day of the order). Details on variable constructions can be found in Appendix A. All standard errors are two-way clustered by stock and last date of order. We report t-statistics below the coefficients in parenthesis. ***, **, * indicate significance at the 1%, 5% and 10% level. This table examines if hedge funds reallocate their capital optimally. In Panel A, we compare returns following the opening and closing of (long and short) positions. In Columns 1 and 2, the dependent variables are signed 4-factor alphas (equal to the alpha for long positions and the alpha times minus one for short positions), which we regress on a dummy variable equal to one if it is an opening order. We include fund-portfolio-month fixed effects in these regressions (where portfolio distinguishes between the hedge fund's long and short portfolio). In Columns 3 and 4, we present results for a matching analysis in which position closings are matched with position openings made by the same fund between the first day of the closing order and three trading days after the last day of the closing order. The columns report the mean difference between matched openings and closings in signed cumulative 4-factor alphas for 60 and 125 trading days, respectively. In Panel B, we redo our analyses after adding large position increases (defined as orders that at least double the position) and large position decreases (defined as orders that decrease the position by at least 50%) to our sample of opening and closing orders. In Panel C, we compare returns following positions closed and positions kept open for long and short positions together. In Columns 1 and 2, the sample contains all positions a fund holds at the beginning of a day on which a position is closed (last day of order). For this sample, we regress signed position alphas on a dummy variable equal to one if the position is kept open (not closed) on that day. We include fund-portfolio-date fixed effects in these regressions. 
In columns 3 and 4, we present results for a matching analysis in which position closings are matched with all positions that the fund did not close on the same trading day. The columns report the mean difference between matched positions that are not closed and those that are closed in signed cumulative 4-factor alphas for 60 and 125 trading days, respectively. Details on variable constructions can be found in Appendix A. All standard errors are two-way clustered by stock and last date of order. We report t-statistics below the regression coefficients (mean difference in the matching analyses) in parenthesis. ***, **, * indicate significance at the 1%, 5% and 10% level.

In this table, we examine returns following closing orders (as in Table 2, Panel A, Column 5), but split the sample by changes in fund-specific funding constraints. For the different subsamples, we regress cumulative 4-factor alphas expressed in percent for 125 trading days following the last day of the order on a dummy variable whether the order is related to a long position. In Panel A, we split the sample by change in number of positions in the 5 days or 10 days prior to the order. In Panel B, we split the sample by the stock-specific signed stock return in the 5 days or 10 days prior to the order. In Panel C, we split the sample by the signed fund return (excluding the specific stock) in the 5 days or 10 days prior to the order. In Panel D, we split the sample by fund flows over the prior month computed from HFR data. Details on variable constructions can be found in Appendix A. We include fund fixed effects and month fixed effects (based on the month of the last day of the order). All standard errors are two-way clustered by stock and last date of order. We report t-statistics below the coefficients in parenthesis. ***, **, * indicate significance at the 1%, 5% and 10% level.

In this table, we examine returns following closing orders (as in Table 2, Panel A, Column 5), but split the sample by the level of fund-specific funding constraints. For the different subsamples, we regress cumulative 4-factor alphas expressed in percent for 125 trading days following the last day of the order on a dummy variable whether the order is related to a long position. In Panel A, we split the sample by fund leverage. In Columns 1 and 2 of Panel A, we split the sample by whether the computed fund leverage, defined as the ratio of the fund's portfolio value over its net asset value, is above or below median. In Columns 3 and 4 of Panel A, we use leverage as reported in HFR. We treat "2-5" and "unspecified" leverage as high leverage and "1-2" as low leverage (if we exclude observations with "unspecified" leverage, the economic difference between the two groups is even larger). In Panel B, we split the sample by the portfolio's value-weighted average liquidity of the funds' stock positions measured using the Amihud Illiquidity measure over the previous 60 or 125 trading days. In Panel C, we split the sample by fund track record. In Columns 1 and 2 of Panel C, we split the sample by whether the time since fund inception is above or below median. In Columns 3 and 4 of Panel C, we split the sample by whether the average fund return (taken from HFR) since inception was above or below median. In Panel D, we split the sample by whether the fund's insider ownership according to ADV data is above or below median. Details on variable constructions can be found in Appendix A.
We include fund fixed effects and month fixed effects (based on the month of the last day of the order). All standard errors are two-way clustered by stock and last date of order. We report t-statistics below the coefficients in parenthesis. ***, **, * indicate significance at the 1%, 5% and 10% level. In this table, we examine returns following closing orders (as in Table 2, Panel A, Column 5), but split the sample by changes in market-wide funding constraints. For the different subsamples, we regress cumulative 4-factor alphas expressed in percent for 125 trading days following the last day of the order on a dummy variable whether the order is related to a long position. In Panel A, we split the sample by whether the TED spread, defined as the difference between the three-month LIBOR and the three-month T-bill interest rate, has increased or decreased over the prior 5 or 10 trading days.
In Panel B, we split our sample by whether the HKM intermediary risk factor aggregated over the past 5 or 10 trading days is negative or positive. The HKM intermediary risk factor measures innovations to the capital ratio of financial intermediaries (He, Kelly and Manela (2017)). A negative risk factor implies lower capital ratios and thus tighter funding constraints. In Panel C, our proxy for funding constraints is the change in the VIX index over the prior 5 or 10 trading days. In Panel D, we split the sample by the cumulative intermediary stock return, which is the value-weighted portfolio return of all publicly-traded holding companies of primary dealer counterparties of the New York Fed. Details on variable constructions can be found in Appendix A. We include fund fixed effects and month fixed effects (based on the month of the last day of the order). All standard errors are two-way clustered by stock and last date of order. We report t-statistics below the coefficients in parenthesis. ***, **, * indicate significance at the 1%, 5% and 10% level. In this table, we examine returns following closing orders (as in Table 2, Panel A, Column 5), but split the sample by several risk management variables. For the different subsamples, we regress cumulative 4-factor alphas expressed in percent for 125 trading days following the last day of the order on a dummy variable whether the order is related to a long position. In Panel A, we split the sample by change in fund return volatility. In Columns 1 and 2 of Panel A, we measure fund return volatility as the sum of squared fund returns over the previous 20 trading days. In Columns 3 and 4 of Panel A, we measure fund return volatility as the average sum of squared stock returns over the previous 20 trading days. In both cases, we compare our volatility measures to their values over a 20-day window before that. In Panel B, we split our sample by how the individual stock position affects portfolio volatility. In Columns 1 and 2 of Panel B, we split by the change in idiosyncratic stock volatility, where idiosyncratic volatility is measured as the sum of squared 4-factor alphas over the previous 20 trading days. In Columns 3 and 4 of Panel B, we split the sample by whether the position closure decreases or increases the volatility of the fund's portfolio. We determine this by comparing the squared fund returns over the previous 60 trading days between two portfolios: the (actual) portfolio excluding the closed stock position and the (hypothetical) portfolio that the fund would have had if it had not closed the position. In Panel C, we split the sample by whether there is an increase or a decrease in the fund's long (or short) exposure in the same industry relative to 20 trading days prior. In Columns 1 and 2 of Panel C, we use Fama-French 12 industry classification. In Columns 3 and 4 of Panel C, we use 2-digit SIC codes. Details on variable constructions can be found in Appendix A. We include fund fixed effects and month fixed effects (based on the month of the last day of the order). All standard errors are two-way clustered by stock and last date of order. We report t-statistics below the coefficients in parenthesis. ***, **, * indicate significance at the 1%, 5% and 10% level.
Appendix A: Variable definitions
This table displays the variable definitions for all variables used in the regressions. Return measures over the 60 trading days following the order are set to missing if we have less than 50 daily return observations. Return measures over the 125 trading days following the order are set to missing if we have less than 100 daily return observations. Cumulative holding period returns are set to missing if more than 10% of the daily return observations are missing. All return measures are winsorized at the 1% level on both sides.
Variable Name: Definition

Stock Return: Return in USD from Datastream or Inalytics.

4-Factor Alpha: \alpha_{i,t} = r_{i,t} - r_{f,t} - \beta_{MKT}(r_{m,t} - r_{f,t}) - \beta_{SMB} SMB_t - \beta_{HML} HML_t - \beta_{MOM} MOM_t. For more details see Section 2.C.

DGTW Return: r_{i,t} - r_{similar stocks,t}. Similar stocks are stocks in the same quintile of market capitalization, book-to-market ratio and past 12 months stock return within the same region. For more details see Daniel et al. (1997) and Internet Appendix A.6.

Benchmark-Adjusted Return: r_{i,t} - r_{benchmark,t}. For more details see Internet Appendix A.6.

Benchmark Return: USD return of the benchmark specified by the fund. The benchmark is specific for the fund, but is the same for both long and short positions of the fund. Data is provided by Inalytics.

Signed 4-Factor Alpha: Four-factor alpha for long positions and 4-factor alpha multiplied by minus one for short positions.

Signed DGTW Return: DGTW return for long positions and DGTW return multiplied by minus one for short positions.

Signed Benchmark-Adj. Return: Benchmark-adjusted return for long positions and benchmark-adjusted return multiplied by minus one for short positions.

D(Long Position): Dummy variable equal to one if the order is related to a long position (i.e., a long buy or a long sell) and zero if it is related to a short position (i.e., a short sell or a short buy).

D(Position Opening): Dummy variable equal to one if the order is related to a position opening (i.e., a long buy or a short sell) and zero if the order is related to a position closure (i.e., a long sell or a short buy).

HKM Intermediary Risk Factor: Measures innovations to the capital ratio of financial intermediaries (primary dealer counterparties of the New York Federal Reserve). The data are available at http://apps.olin.wustl.edu/faculty/manela/data.html. More specifically, He, Kelly, and Manela (2017) calculate aggregate dealer capital ratios as \eta_t = \sum_i ME_{i,t} / \sum_i (ME_{i,t} + BD_{i,t}), where ME is market equity and BD is book debt, and compute innovations in this variable using an AR(1) process \eta_t = \rho_0 + \rho \eta_{t-1} + u_t. The risk factor is then defined as the growth rate of these innovations: u_t / \eta_{t-1}.

Fund Return Volatility [1,20]: Sum of squared daily fund returns over the previous 20 trading days. It is set to missing if there are 16 or fewer daily fund observations available in the last 20 trading days. Alpha is based on non-shrunk betas to stay consistent with the prior literature.

Average Position Return Volatility [1,20]: \sum_i w_i [\sum_{d=1}^{20} r_{i,t-d}^2] / \sum_i w_i, where the weights w_i are the dollar value invested. Daily stock returns are winsorized at 10% and -10%. A stock's volatility is set to missing if there are 16 or fewer daily stock return observations available in the last 20 trading days.

SUE IBES: (E_t - F_t) / \sigma_{t-8,t-1}(E - F), where E is actual quarterly earnings and F is the analyst consensus forecast. Analyst forecasts are taken from the I/B/E/S detail history North America file for U.S. and Canadian companies and from the I/B/E/S detail history International file for other companies. For each analyst, only the last forecast is retained if it has been issued no more than 60 days prior to the earnings announcement date. The data is quarterly.

SUE Worldscope: (E_t - E_{t-4}) / \sigma_{t-8,t-1}(E_t - E_{t-4}). Quarterly earnings data is taken from Worldscope.

HF imbalance [5,20]: This variable takes the value one (minus one) if sample hedge funds open a long (short) position from t-20 to t-5 days prior to the earnings announcement and zero if there is no newly opened position. If there are opened positions in both directions, the variable takes the value one (minus one) if the newly opened long (short) positions are larger in terms of the number of traded stocks.

D(First 60 days in sample): Dummy variable equal to one in the first 60 days that a fund is in our sample.

D(First 125 days in sample): Dummy variable equal to one in the first 125 days that a fund is in our sample.

D(Last 60 days in sample): Dummy variable equal to one in the last 60 days that a fund is in our sample.

D(Last 125 days in sample): Dummy variable equal to one in the last 125 days that a fund is in our sample. | 2019-06-20T07:45:24.609Z | 2021-02-24T00:00:00.000 | {
"year": 2021,
"sha1": "8f3fe7688d71c2c12075c17fe4e5ec9424c2e2a1",
"oa_license": null,
"oa_url": "https://doi.org/10.17016/feds.2021.022",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "7f148089e2edcff42aa8344b6c74dcf4e8a26306",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
15495933 | pes2o/s2orc | v3-fos-license | The Presence of Modifiable Residues in the Core Peptide Part of Precursor Nisin Is Not Crucial for Precursor Nisin Interactions with NisB and NisC
Precursor nisin is a model posttranslationally modified precursor lantibiotic that can be structurally divided into a leader peptide sequence and a modifiable core peptide part. The nisin core peptide clearly plays an important role in the precursor nisin – nisin modification enzymes interactions, since it has previously been shown that the construct containing only the nisin leader sequence is not sufficient to pull-down the nisin modification enzymes NisB and NisC. Serines and threonines in the core peptide part are the residues that NisB specifically dehydrates, and cysteines are the residues that NisC stereospecifically couples to the dehydrated amino acids. Here, we demonstrate that increasing the number of negatively charged residues in the core peptide part of precursor nisin, which are absent in wild-type nisin, does not abolish binding of precursor nisin to the modification enzymes NisB and NisC, but dramatically decreases the antimicrobial potency of these nisin mutants. An unnatural precursor nisin variant lacking all serines and threonines in the core peptide part and an unnatural precursor nisin variant lacking all cysteines in the core peptide part still bind the nisin modification enzymes NisB and NisC, suggesting that these residues are not essential for direct interactions with the nisin modification enzymes NisB and NisC. These results are important for lantibiotic engineering studies.
Introduction
Nisin is a prominent lantibiotic produced by some strains of Lactococcus lactis. Lantibiotics form a class of ribosomally synthesized antimicrobial peptides, highly active against Gram-positive bacteria. The spread of multi-drug resistant bacteria is an alarming issue, especially in view of the decreasing effectiveness of conventional antibiotics. To tackle this problem, novel antimicrobial agents are urgently needed. Lantibiotics are very promising candidates to replace conventional antibiotics. Their unique characteristics include the presence of lanthionine and (methyl)lanthionine rings and a high stability upon heat or proteolytic treatment [1,2]. Precursor nisin consists of an N-terminal leader peptide sequence and a C-terminal modifiable core peptide part. Dehydrated residues are introduced posttranslationally by NisB, specifically into the core peptide part, through dehydration of serines and threonines, yielding Dha (dehydroalanine) and Dhb (dehydrobutyrine), respectively. Lanthionine and (methyl)lanthionine rings are formed by intramolecular coupling of cysteines to the dehydrated residues in a stereospecific manner by the cyclase NisC. Fully modified precursor nisin is transported by a dedicated ABC transporter, NisT, to the outside of the cell, where the protease NisP cleaves off the leader, liberating active nisin [3,4]. Recently, a complex of nisin modification enzymes has been isolated, consisting of NisB, NisC and NisT [5].
Dehydration and cyclization processes for class I and class II lantibiotics have been shown to proceed from the N-terminal to the C-terminal end [6,7] and an alternating mode of action between NisB and NisC has been proposed [6,8]. Interestingly, in contrast to class I and II lantibiotics, class III lantibiotics synthetase LabKC has a C-to N-terminus processing mode [9].
Nisin exerts at least two modes of antimicrobial action: i) by binding to lipid II, an essential intermediate for cell wall biosynthesis, it inhibits cell wall synthesis [10,11]; and ii) by forming pores in the cell membrane, it releases cellular constituents [11]. Nuclear magnetic resonance studies performed with nisin and lipid II in sodium dodecyl sulfate micelles demonstrated that the rings A and B of nisin are important for the nisin-lipid II interactions [12]. In particular, formation of two intermolecular hydrogen bonds between Dhb2 (dehydrobutyrine) and Abu8 (α-aminobutyric acid) of nisin and the pyrophosphate moiety of a lipid II analog was demonstrated experimentally [13]. Moreover, it was shown that the formation of intermolecular hydrogen bonds and contact of nisin with the electronegative pyrophosphates of lipid II triggers the formation of a cage-like structure, consisting of rings A and B [13]. Due to sequestering of lipid II, nisin is active at nM concentrations. Nisin additionally inhibits the outgrowth of spores via lipid II-dependent pore formation [14]. Class I lantibiotics possess two modification enzymes, i.e. a dehydratase LanB and a cyclase LanC. Class II lantibiotics are modified by a bi-functional enzyme called LanM, which is able to perform both the dehydration and the cyclization reactions. The C-terminal part of LanM enzymes has a low sequence similarity to that of LanC enzymes and is anticipated to perform the cyclization reaction. However, the N-terminal part of LanM does not share any homology with NisB. Experiments applying various unnatural substrates have demonstrated that the class I NisBTC machinery as well as the class II LanM enzymes can modify non-lantibiotic peptides, if they N-terminally possess the class I nisin leader or a class II leader [15-21]. Moreover, the class II LctM enzyme has been shown to modify peptides where the core peptide and the leader peptide parts are separated by several alanine residues and, more excitingly, peptides that had non-proteinogenic amino acids in the leader or in the core peptide parts were also modified [18]. This suggests that lantibiotic modification enzymes have very relaxed substrate specificities.
The nisin leader is an important site of interaction between precursor nisin and the modification enzymes NisB and NisC [5,22-25]. Furthermore, the class I and class II lantibiotics have been observed to be (partially) modified in the absence of the leader [20,22]. However, the core peptide part has also been demonstrated to be involved in the interactions with NisB and NisC [22]. This notion is important in light of the increased interest in the engineering of lantibiotics. Which particular residues in the core peptide are involved in the interactions with the nisin modification enzymes NisB and NisC is not known.
Successful examples of the enhancement of the antimicrobial activity of nisin by protein engineering exist [26,27]. Ring engineering studies of nisin have demonstrated that to exert antimicrobial activity, the thioether rings ABC of nisin should be present [28]. Opening of ring A results in significant loss of antimicrobial activity against Micrococcus luteus NCDO 8166 and leads to a complete loss of antimicrobial activity against Lactococcus lactis MG1614 [28]. Opening of ring B results in a lack of antimicrobial activity against L. lactis LL108 [29]. Ring C has been shown to be essential for nisin antimicrobial activity by a) thermolysin cleavage of the Ala15-Leu16 and subsequently the Leu16-Met17 bonds in ring C of nisin(1-29); b) introducing a tryptic cleavage site into ring C by substituting the Met17 by Lys and subsequent trypsin treatment [30]; c) engineering a disulfide bond instead of a lanthionine that, upon reduction, caused a total loss of activity [31].
Negatively charged amino acids are not naturally found in the core peptide part of precursor nisin and closely related peptides such as subtilin and epidermin [32]. The absence of negatively charged amino acids in the core peptide region of precursor nisin was intriguing to us and might be an indication that negatively charged residues are unfavourable for the interactions with the nisin modification machinery and/or for modification. Another possibility is that negatively charged residues have a negative impact on the antimicrobial activity of nisin. Which exact residues in precursor nisin are involved in the interactions with the nisin modification enzymes NisB and NisC is not known. The development of a nisin binding assay [5] facilitates molecular characterization of the interactions between precursor nisin and its modification enzymes [5]. In this study, we perform co-purification studies of the nisin modification enzymes NisB and NisC with structurally unnatural precursor nisin molecules lacking modifiable residues in the core peptide part as well as with mutants that have various numbers of negatively charged residues in the core peptide part, to investigate whether binding of the modifying enzymes to precursor nisin and the substrate-enzyme complex formation are hampered, and whether modifications can still occur.
Synthetic nisin variants
Synthetic DNA encoding the nisin variants was ordered from Life Technologies and subsequently cloned into the pNZnisAE3 vector [15] by excising the nisA gene using the BglII and HindIII restriction sites and inserting the synthetic DNA encoding precursor nisin via the same restriction sites.
Recombinant DNA techniques
Standard genetic manipulations were essentially performed as described by Sambrook et al. [33]. Plasmid isolation was performed by means of the Plasmid DNA Isolation Kit (Roche Applied Science). Restriction analysis was performed with restriction enzymes from Fermentas. DNA ligation was performed with T4 DNA ligase (Fermentas) and round PCR amplification was done with Phusion DNA polymerase (Finnzymes).
Bacterial strains and growth conditions
Strain Lactococcus lactis NZ9000 [34] was used as an expression host in this study. In short, cells were grown as described previously [5] at 30°C in M17 medium (Difco) supplemented with 0.5% (w/v) glucose and 5 μg/ml chloramphenicol or 5 μg/ml erythromycin where appropriate. In case both antibiotics were used simultaneously, 4 μg/ml chloramphenicol and 4 μg/ml erythromycin were applied.
Antimicrobial activity assay
The indicator strain L. lactis NZ9000 bearing plasmid pNZnisPT was grown overnight in M17 medium supplemented with 0.5% glucose. The next morning it was re-inoculated into fresh M17 medium containing 0.5 ng/ml nisin for induction. When the OD600 reached 0.6, 100 μl of the culture was added to 100 ml of liquid M17 agar at 40°C. Plates were dried, and wells were made. 50 μl of the Ni-NTA purified samples was applied to the wells and the plates were left overnight at 30°C.
Ni-NTA purification
Ni-NTA purification was performed as previously described [5]. In brief, 1.5 ml of 50% superflow Ni-NTA column resin (Qiagen) was equilibrated twice with 38.5 ml lysis buffer (50 mM NaH2PO4, 300 mM NaCl, 10 mM imidazole, pH 8) in a 50 ml tube by mixing on a rotor for 30 minutes. Subsequently, the column material was resuspended in 4-8 ml of cytoplasmic fraction and transferred into a 15 ml tube, lysis buffer was added to a final volume of 12 ml, and the His-tagged protein was allowed to bind to the column material on a rotor in the cold room at 4°C for 2 hours. Subsequently, the column was washed twice with 35 ml of wash buffer (50 mM NaH2PO4, 300 mM NaCl, 20 mM imidazole, pH 8). Elutions were collected in four fractions of 0.5 ml each with elution buffer (50 mM NaH2PO4, 300 mM NaCl, 250 mM imidazole, pH 8). Fractions were analyzed by SDS-PAGE and Western blot.
SDS-PAGE and Western Blot
Western blots were performed using anti-NisB, anti-NisC or anti-leader antibodies [5]. SDS-PAGE was done using standard molecular biology techniques [33]. Samples were not boiled before applying to SDS-PAGE.
Results
In order to investigate the substrate specificity of the nisin modification enzymes and to determine the binding requirements of precursor nisin to the modification enzymes NisB and NisC, we engineered two synthetic nisin variants, i.e. one lacking all serine and threonine residues in the core peptide part, and a second one lacking all cysteines. Additionally, precursor nisin variants with increasing numbers of negatively charged amino acids were made ( Fig. 1; Table 1). A previously described prenisin interaction assay was used that allows co-purification of the nisin modification enzymes NisB and NisC using a precursor nisin with a C-terminal His-tag extension as bait [5].
Ser/Thr-less and Cys-less variants of precursor nisin still bind NisB and NisC
Unnatural precursor nisin variants, one lacking all serines/threonines and another one lacking all cysteines (all these residues were replaced by Ala residues), were expressed (Fig. 1) and employed as substrates for in vivo modification. Both synthesized variants were still able to bind NisB and NisC and able to pull them down (Fig. 2A and 2C). This observation indicates that the presence of serines and threonines in the core peptide part of precursor nisin is not essential for NisB binding, and the presence of cysteines is not absolutely necessary for NisB or NisC binding. However, in the case of the Ser/Thr-less precursor nisin, only NisC can be detected (Fig. 2C, lane 2) but not the NisC-NisA complex, as detected with the Cys-less precursor nisin (Fig. 2C, lane 1).
Negatively charged residues in the core peptide do not influence NisB and NisC interactions
The requirements for binding of the nisin modification enzymes NisB and NisC to precursor nisin are currently unknown. The leader peptide is crucially important, but electrostatic interactions with the core peptide might also have an influence. To investigate this hypothesis, we introduced additional negative charges into the core peptide part of precursor nisin (Fig. 1). Three variants of precursor nisin with an increasing number of negatively charged residues were made, namely NisA-H 6 T2D P9D with two negatively charged residues, NisA-H 6 T2D P9D K12D N20E with four negatively charged residues and NisA-H 6 T2D P9D K12D N20E H27D K34E with six negatively charged residues (Fig. 1). These precursor nisin mutants were able to co-purify with NisB, indicating binding to NisB (Fig. 2B).

Effect of negatively charged residues in the core peptide on the antimicrobial activity of precursor nisin

Introduction of negatively charged residues at different positions of the core peptide part of precursor nisin results in a severe decrease in antimicrobial activity (Fig. 3). The notion that the backbones of rings A and B interact with lipid II [13] and that the C-terminal part of nisin is responsible for the interactions with the lipid membrane [35] allows us to classify the substitutions into categories: the T2D-lipid II interactions, the P9D-lipid II interactions, the K12D-membrane interactions, the N20E-membrane interactions, the H27D-membrane interactions, the K34E-membrane interactions. The general trend that is observed constitutes a severe decrease in the antimicrobial potency of nisin with increasing numbers of negatively charged residues introduced into the core peptide part.
Discussion
The substrate specificity of lantibiotic modification enzymes is not yet well understood, although some rules for the posttranslational modifications have been proposed [32]. Despite many attempts, obtaining the crystal structure of NisB is still a challenge. However, recently the long-awaited in vitro activity of NisB has been successfully reconstituted [36], which will lead to a more detailed understanding of the mechanism of NisB. In order to shed more light on the substrate specificities of this interesting enzyme, we made specific mutations in the core peptide part of precursor nisin and analyzed these variants a) for binding to the nisin modification enzymes NisB and NisC, and b) in the case of the charge-altered mutants, for their antimicrobial activity.
Although serines and threonines are the residues that NisB specifically modifies, here we demonstrate that precursor nisin lacking all serines and threonines in the core peptide part still binds to and can co-purify NisB and NisC in the precursor nisin-modification enzyme pull-down assay [5]. This suggests that the serines and threonines are not involved in direct binding to NisB and strengthens the notion that the leader peptide in particular has the strongest contribution to modification enzyme binding. Notably, the Cys-less precursor nisin pulls down both NisC and the NisC-NisA complex, whereas the Ser/Thr-less precursor pulls down only NisC. This may indicate the importance of cysteines in precursor nisin for a release of NisA from NisC, i.e. the absence of cysteines may lead to incomplete reactions and thus inefficient release of NisA from NisC.
Our study demonstrates that the presence of increasing numbers of negatively charged residues in the core peptide part does not lead to decreased NisB-precursor nisin interactions either. Notably, although negatively charged residues in the core peptide part do not interfere with the binding of precursor nisin mutants to the modification enzymes NisB and NisC, NisA-H 6 T2D P9E, NisA-H 6 T2D P9E K12D N20E and NisA-H 6 T2D P9E K12D N20E H27D K34E mutants have strongly decreased antimicrobial activity levels, demonstrating that the negatively charged residues at positions T2, P9, K12, N20, H27 and K34 have an adverse effect on the antimicrobial activity of nisin. However, the observed antimicrobial activity suggests the presence of at least three lanthionine rings and indicates that these mutants are partially dehydrated.
The C-terminal region of nisin is responsible for interactions with the target membrane, in particular with negatively charged lipids [35]. The nisin rings A and B are responsible for the interactions with the pyrophosphates of lipid II [12,13]. Depending on the position, the decreased levels of the antimicrobial activity of the nisin mutants containing negatively charged residues in the core peptide part are probably due to a) decreased affinity for the pyrophosphate moiety of lipid II (positions T2, P9), b) decreased affinity for negatively charged phospholipids (positions K12, N20, H27 and K34) and c) a lack of one or more lanthionine rings. The first two thioether rings in nisin, i.e. rings A and B (Fig. 1), form a cage-like structure that interacts with the electronegative pyrophosphate moiety of lipid II. The negatively charged residues introduced in the N-terminal part of nisin, i.e. T2D P9D K12D N20E, are highly likely to be unfavourable in terms of electrostatic interactions with lipid II. Changing T to D at position 2 of nisin results in a more bulky side chain. Most likely, in addition to the unfavourable electrostatic effect, this leads to steric hindrance of the altered cage-like structure with the pyrophosphate moiety. The steric hindrance effect is also supported by the notion that mutation of the adjacent residue, i.e. S3T, leads to a 12-fold reduced antimicrobial activity [30]. The Ser at position 3 is in the D-configuration, and the decrease in the antimicrobial activity caused by changing the thioether ring A from lanthionine to methyllanthionine remained unexplained until NMR studies [13] demonstrated that the additional methyl group would point into the space surrounded by the cage structure of the rings. To the best of our knowledge, the T2D substitution has never been studied before, while the T2A, the T2S and the T2V single point nisin mutants did not lead to a significant change in antimicrobial activity [37].
Our results show that the presence of negatively charged residues in the C-terminus of NisA (NisA-H 6 T2D P9D K12D N20E H27D K34E mutant) further decreases the antimicrobial activity of nisin (Fig. 3). This result is in agreement with previously published data, where the nisin Z V32E mutant, containing an extra negative charge, was analyzed for interactions with negatively charged lipids and for its antimicrobial activity [35]. In this study, NisZ V32E was demonstrated to have a decreased affinity towards negatively charged lipids together with a concomitant decreased antimicrobial activity [35]. Our study demonstrates a general trend that the increase in negative charges in the nisin core part lowers the antimicrobial activity of the mutants. With a few negative charges, at least three lanthionine rings should be present, because mutants with rings A, B and C correctly formed are known to retain some activity. In the case of high numbers of negative charges, we cannot exclude that a lack of multiple modifications, as also suggested by Rink et al. [32], is also responsible for the negligible activity observed. However, the substrate peptides can still be bound by NisBC. The presence of negatively charged residues in the core peptide part of precursor nisin leads to severely decreased antimicrobial activity of the nisin mutants bearing negatively charged residues, caused by incomplete modification reactions, reduced interactions with lipid II and/or membranes, or both. Serines/threonines and cysteines are not necessary for the interactions of the precursor nisin core peptide with the nisin modification enzymes. These findings can guide further engineering of lantibiotics as a potential class of alternative antibiotics.
"year": 2013,
"sha1": "3b108f949ed822a4b644540ff84dedeec9a402a1",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0074890&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8d9361469fcdedc47fbe2b7272eab86500d7830",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
1087613 | pes2o/s2orc | v3-fos-license | Cell death in amastigote forms of Leishmania amazonensis induced by parthenolide
Background Leishmania amazonensis infection results in diverse clinical manifestations: cutaneous, mucocutaneous or visceral leishmaniasis. The arsenal of drugs available for treating Leishmania infections is limited. Therefore, new, effective, and less toxic leishmaniasis treatments are still needed. We verified cell death in amastigote forms of Leishmania amazonensis induced by the sesquiterpene lactone parthenolide. Results The tested compound was able to concentration-dependently affect axenic and intracellular amastigotes, with IC50 values of 1.3 μM and 2.9 μM, respectively after 72 h incubation. No genotoxic effects were observed in a micronucleus test in mice. Parthenolide induced morphological and ultrastructural changes in axenic amastigotes, including a loss of membrane integrity, swelling of the mitochondrion, cytoplasmic vacuoles, and intense exocytic activity in the region of the flagellar pocket. These results led us to investigate the occurrence of autophagic vacuoles with monodansylcadaverine and the integrity of the plasma membrane and mitochondrial membrane potential using flow cytometry. In all of the tests, parthenolide had positive results. Conclusions Our results indicate that the antileishmanial action of parthenolide is associated with autophagic vacuole appearance, a reduction of fluidity, a loss of membrane integrity, and mitochondrial dysfunction. Considering the limited repertoire of existing antileishmanial compounds, the products derived from medicinal plants has been one the greatest advances to help develop new chemotherapeutic approaches.
Background
Leishmaniasis is associated with high morbidity but low mortality. It is a poverty-related disease and has become a serious impediment to socioeconomic development. The true burden of this illness remains unclear because the notification of the disease is compulsory in only 32 of the 88 affected countries, and most of the affected people live in remote areas. Additionally, the disfiguring scars caused by Leishmania keep patients hidden. An estimated 1.5 million new cases of cutaneous leishmaniasis and 500,000 cases of visceral leishmaniasis occur annually, with approximately 12 million people currently infected [1]. Moreover, cases of Leishmania and human immunodeficiency virus co-infection are becoming more frequent [2,3].
Leishmania (Leishmania) amazonensis infection results in diverse clinical manifestations, ranging from cutaneous to mucocutaneous or visceral involvement [4]. This is attributable to the genetic diversity of L. amazonensis strains, and this divergence extends to variations of chromosome size [5].
The arsenal of drugs available for treating Leishmania infections is limited. The basic treatment consists of administering pentavalent antimonial compounds [6]. However, the choice of medication depends on the species involved and type of clinical manifestation [7]. The usefulness of antileishmanial drugs has been limited by their toxicity, and treatment failure is often attributable to drug resistance [8]. To solve this problem, developing less toxic drugs and discovering cellular and molecular markers in parasites to identify the phenotype of chemoresistance against leishmanicidal drugs are necessary [8,9]. These problems led to the development of additional antileishmanial drugs. Some drug-delivery systems, plants, and synthetic compounds are being developed as effective treatments for the disease [7].
Previous studies demonstrated the in vitro activity of parthenolide, a sesquiterpene lactone purified from Tanacetum parthenium, against promastigotes and intracellular amastigotes (inside J774G8 macrophages) of L. amazonensis [10]. Moreover, significant alterations in promastigote forms were demonstrated by light microscopy and scanning and transmission electron microscopy [11].
We evaluated the activity of parthenolide against L. amazonensis axenic amastigotes and demonstrated a possible mechanism of action of this compound in this life stage of the parasite.
Antileishmanial assays
The addition of 4.0 μM parthenolide to the culture of axenic amastigotes induced growth arrest and partial cell lysis after 48 h (i.e., growth inhibition up to 90%). When the cells were treated with 2.0 μM parthenolide, the percentage of growth inhibition was approximately 70%. Parthenolide had an IC50 of 1.3 μM after culturing for 72 h (Figure 1A). A concentration of 1.0 or 0.5 μM of the reference drug amphotericin B inhibited more than 93% of L. amazonensis amastigote cell growth. This drug had an IC50 and IC90 of 0.22 μM and 0.45 μM, respectively, after culturing for 72 h (Figure 1B).
Parthenolide also inhibited the growth of intracellular amastigotes in mouse resident peritoneal macrophages after 24 h incubation. Treatment with 4.0, 3.2, 2.4, and 1.6 μM parthenolide reduced the proliferation of parasites into macrophages (survival index) by 82.5, 59.4, 37.3, and 6.1%, respectively, compared with the control. The survival index indicated that parthenolide inhibited the intracellular viability and multiplication of Leishmania in infected murine macrophages and showed 50% inhibition of cell survival at a concentration of 2.9 μM (Figure 2).
Figure 1. Effects of parthenolide (A) and amphotericin B (B) on the growth of L. amazonensis axenic amastigotes. After treatment with different concentrations of the drugs, parasites were counted, and the percentage of parasite growth inhibition was determined daily for 120 h. The data indicate the average of two independent experiments, each performed in duplicate. Statistical analysis: the data of each incubation period were compared statistically at p < 0.05. Bars that are not indicated with letters in common are statistically different.
Previous studies showed that when J774G8 murine macrophages were treated with parthenolide, the 50% cytotoxic concentration (CC 50 ) was 56.4 μM [10]. By comparing the toxicity for J774G8 macrophages with the activity against intracellular amastigotes, the selectivity index ratio can be obtained (CC 50 for J774G8 cells/IC 50 for protozoa). In the present study, parthenolide had an IC 50 of 2.9 μM, giving a selectivity index ratio of 19.4 (i.e., the compound is 19.4-times more selective against parasites than against host cells).
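As a quick check of the ratio described above, a minimal Python sketch of the selectivity-index arithmetic is shown below; the CC 50 and IC 50 values are taken directly from the text, and the calculation itself is generic rather than specific to any software used in the study.
```python
# Selectivity index (SI) = CC50 for the host cell line / IC50 for the parasite.
cc50_j774g8_uM = 56.4      # cytotoxicity for J774G8 macrophages, from ref. [10]
ic50_amastigote_uM = 2.9   # intracellular amastigotes, this study

selectivity_index = cc50_j774g8_uM / ic50_amastigote_uM
print(f"Selectivity index: {selectivity_index:.1f}")  # ~19.4, above the >=10 threshold
```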
Mutagenicity evaluation
The results of the in vivo bone marrow micronucleus test in mice are shown in Table 1. Parthenolide did not induce genotoxic effects at a dose of 3.75 mg/kg body weight, with no significant increase in the frequency of MNPCE (10.0 ± 1.6) compared with the vehicle control group (7.0 ± 1.8). In contrast, a significant increase in the frequency of MNPCE was observed in the positive control group (cyclophosphamide; 27.0 ± 4.0). In the present study, no clinical signs of toxicity were observed in treated animals. However, further studies should be performed with higher doses of parthenolide to exclude the possibility of genotoxicity.
Scanning and transmission electron microscopy
To determine the morphological and ultrastructural changes in L. amazonensis axenic amastigotes induced by parthenolide, the cells were treated with the IC 50 (1.3 μM) of the compound. Untreated controls showed no morphological ( Figure 3A) or ultrastructural ( Figure 3D) differences. Similarly, cells incubated with 0.05% DMSO (i.e., the same concentration used in the final solutions of parthenolide) remained unaltered (data not shown). When treated with parthenolide, changes in form were visualized by scanning electron microscopy ( Figure 3B and C). Transmission electron microscopy showed a loss of membrane integrity associated with amphotericin B exposure at the IC 50 concentration ( Figure 3E). Parthenolide caused intense swelling of the mitochondrion ( Figure 3F) and cytoplasmic blebbing ( Figure 3G). Finally, the ultrastructural analysis showed that amastigotes treated with parthenolide formed multiple cytoplasmic vacuoles ( Figure 3H), and intense exocytic activity was observed in the region of the flagellar pocket, appearing as concentric membranes within the pocket ( Figure 3I).
Labeling of autophagic vacuoles with monodansylcadaverine
We studied the incorporation of monodansylcadaverine (MDC) in cells in which autophagy was stimulated by parthenolide. Axenic amastigotes treated with the IC 50 (Figure 4B) or IC 90 of parthenolide showed fluorescent labeling of cytoplasmic vacuoles with MDC, consistent with the induction of autophagy, whereas such labeling was not evident in untreated controls.
Flow cytometric measurement of amastigote culture
Live L. amazonensis cells were incubated with propidium iodide and rhodamine 123, and fluorescence was measured by flow cytometry. The gated percentage of propidium iodide-stained amastigotes after treatment with amphotericin B (positive control) was 71.4%, much higher than that of untreated parasites (negative control), which presented 6.0% (Figure 5A). When the cells were treated with 20 and 40 μM parthenolide, the percentages of labeled amastigotes were 34.2% and 56.2%, respectively (Figure 5B), possibly indicating a considerable increase in plasma membrane permeability. To confirm that Leishmania cells functionally respond to pharmacological alteration of ΔΨ m , amastigotes were treated with the protonophore carbonyl cyanide m-chlorophenylhydrazone (CCCP), which has been shown to interfere with mitochondrial membrane potential in various cell types [12]. The results showed that 82.5% of the amastigotes without treatment (negative control) presented maximal rhodamine 123 fluorescence, whereas with 200 μM CCCP only 46.7% showed fluorescence, indicating a loss of ΔΨ m (Figure 5C). We next observed reductions of ΔΨ m (68.4% and 56.1%) when the amastigotes were treated with 20 and 40 μM parthenolide, respectively, suggesting that this compound interferes with the mitochondrial membrane potential, leading to altered ATP generation and, in consequence, cell damage.
EPR spectra of spin-labeled Leishmania
The experimental and best-fit EPR spectra of spin-label 5-DSA incorporated into the plasma membrane of Leishmania are shown in Figure 6. These EPR spectra are typical of cellular membranes that contain an appreciable amount of integral protein. Treatment with parthenolide increased two EPR parameters, the outer hyperfine splitting, 2A // , and the rotational correlation time, τ C , indicating a significant reduction of membrane lipid dynamics. 2A // is a practical parameter measured directly from EPR spectra that has been widely used to monitor membrane fluidity, although in principle it is a static parameter associated with the orientation distribution of the spin labels in the membrane. The theoretical EPR spectrum of spin-label 5-DSA in the plasma membrane of Leishmania was best fitted using a model of two spectral components, indicating that the membrane has two populations of spin labels with distinct mobilities. The EPR spectra of spin labels in lipid bilayers that contain proteins are well known to be composed, in some cases, of two spectral components. The more restricted component is associated with boundary lipids, where the spin labels surround the hydrophobic regions of proteins, whereas the more mobile component arises from spin labels located in the bulk bilayer phase, away from the protein [13]. The fitting program provides the τ c and population of each component. Thus, the mean rotational correlation time was calculated as τ c = N 1 *τ c1 + N 2 *τ c2 , in which N 1 and N 2 are the fractions of the population in components 1 and 2, respectively, and τ c1 and τ c2 are the corresponding rotational correlation times.
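The population-weighted mean defined above translates directly into code. The sketch below is illustrative only: the fractions and component correlation times are hypothetical placeholders, since the fitted values are not reproduced in this passage.
```python
# Mean rotational correlation time from a two-component EPR fit:
# tau_c = N1*tau_c1 + N2*tau_c2, with N1 + N2 = 1.
def mean_tau_c(n1: float, tau_c1: float, tau_c2: float) -> float:
    n2 = 1.0 - n1
    return n1 * tau_c1 + n2 * tau_c2

# Hypothetical values in nanoseconds (not the fitted parameters of this study).
print(mean_tau_c(0.6, 5.0, 1.5))  # 0.6*5.0 + 0.4*1.5 = 3.6 ns
```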
Discussion
For many years, parasites of the genus Leishmania have displayed extraordinary plasticity to face modifications in their environment [14]. The expansion of risk factors related to environmental changes and man-made transformations are making leishmaniasis a growing public health concern in many countries worldwide [15]. Leishmaniasis urgently needs novel drugs with improved features, and many compounds primarily derived from plants are promising leads for the development of novel chemotherapeutics [16]. The development of axenic cultures of amastigotes of Leishmania species yielded new opportunities to investigate the antileishmanial activities of new compounds directly at the mammalian stage of the parasite [17]. Assays that use intracellular amastigote cell cultures are relevant because this life cycle stage of the parasite is important to its pathogenicity, and data obtained exclusively from promastigote cell lines are insufficient [16]. Therefore, in the present study, we determined the leishmanicidal activity of parthenolide, which is naturally occurring, in both axenic and intracellular amastigotes.
To discover better leishmanicidal compounds, the isolation and purification of the active ingredients of medicinal plants are gaining attention [18]. Many new natural product groups, such as terpenes, have exhibited antiprotozoal potential and attracted renewed interest with surprising efficacy and selectivity [19].
Parthenolide is a lipophilic terpenoid compound built from isoprene units. The accumulation of lipophilic compounds in the cytoplasmic membrane and membrane constituents of microorganisms has considerable effects on the loss of cellular integrity and the inhibition of respiratory activity in mitochondria [20]. This interaction with cell membranes eventually leads to cell death. In our research, parthenolide had antileishmanial effects against axenic and intracellular amastigotes of L. amazonensis, presenting IC 50 values of 1.3 μM after 72 h and 2.9 μM after 24 h of growth, respectively. The difference in IC 50 values can be explained by the fact that experiments with axenic amastigotes act directly on the relevant stage of the parasite, whereas the use of intracellular amastigotes gives essential information on the capacity of the drugs to reach and target intracellular organisms. The role played by the macrophages in drug-mediated toxicity may also be important, because their presence may limit the availability of the compounds under evaluation [21,22].
The toxicity for J774G8 macrophages and the activity against intracellular amastigotes were compared by using the selectivity index ratio (CC 50 for J774G8 cells/IC 50 for protozoa) [10]. Parthenolide was more selective against the intracellular amastigotes than against the mammalian cells, with a selectivity index ratio of 19.4. It is generally considered that biological efficacy is not due to in vitro cytotoxicity when this index is ≥ 10 [23,24].
The low toxicity against mammalian cells is an important criterion in the search for active compounds with antiprotozoal activity. For this purpose, the genotoxicity of parthenolide in a mouse model was determined using a micronucleus test and cyclophosphamide as the positive control because it is a known genotoxin [25]. Micronuclei are masses of cytoplasmic chromatin that appear outside the main nucleus as a result of chromosomal damage or damage to the mitotic apparatus in the erythroblasts of the test species, and they can be used as an indicator of the effects of agents that cause DNA damage [26]. In mice, micronuclei in mature erythrocytes in peripheral blood live approximately 1 month, providing a measure of average chromosomal damage [27]. Our results showed no differences in the frequency of MNPCE compared with the negative control, demonstrating no toxic effects on bone marrow at the dose tested (3.75 mg/kg body weight).
Electron microscopic studies revealed extensive cytoplasmic vacuolization, leading us to examine the possibility that parthenolide induces autophagic cell death. Autophagic cell death is a process that is thought to occur in all eukaryotes and is characterized by an accumulation of autophagic vacuoles. This mechanism provides energy for survival when cells recycle their cytoplasmic contents during periods of environmental stress or certain stages of development. A double-membrane vesicle called the autophagosome forms in the cytosol, engulfing organelles and bulk cytoplasm. Subsequently, these vesicles fuse with lysosomes, where their contents are degraded and recycled [28]. One of the most frequently used methods to examine autophagy is staining with acidotropic dyes [29], and MDC is considered an autofluorescent compound and specific marker for autophagic vacuoles [30]. MDC staining is only obtained when the compartments into which it loads are acidic; neutralization of these compartments leads to a swift loss of MDC staining or a lack of MDC uptake [31]. Therefore, we suggest that the vacuoles that were observed under the transmission electron microscope are autophagosomes. MDC has also been used as a marker to analyze, at the molecular level, the machinery involved in the autophagic process [32] and to demonstrate that antimicrobial peptides induce autophagic cell death in L. donovani [33].
Amphotericin B was used as a positive control in some of our experiments because this polyene antibiotic forms aqueous and nonaqueous pores in membranes, which is the basis of its leishmanicidal action [34]. Using transmission electron microscopy, we could see the loss of membrane integrity induced by this antimicrobial agent. Similarly, alterations in the cytoplasmic membrane, including membrane blebbing and disruption, could be visualized in axenic amastigotes treated with parthenolide. Studies have shown that a flow cytometric membrane potential assay can be used as a reliable tool for studying the interactions between amphotericin B and the Leishmania membrane [35]. Alterations in membrane permeability are detected with the propidium iodide nucleic acid stain, which passes only through compromised plasma membranes and binds to DNA, emitting high fluorescence when excited by an argon ion laser [36]. Since its introduction, the propidium iodide flow cytometric assay has also been widely used as a quantitative measure of cell apoptosis, because DNA fragmentation during apoptosis leads to a subsequent loss of cellular DNA content [37].
Terpenoid compounds can produce major changes in the cellular and mitochondrial membrane structures of different pathogenic agents, modifying their permeability and integrity [20]. Ultrastructural findings also revealed mitochondrial damage induced by parthenolide. We used flow cytometry to determine whether the compound interferes with the mitochondrial membrane potential of the amastigotes. The flow cytometry results showed that the transmembrane potential decreased, reflected by a reduction of rhodamine 123 fluorescence. Rhodamine 123 is a fluorescent cationic stain for mitochondria in living cells and is washed out of the cells once the mitochondrial membrane potential is lost [38]. The present results indicate an increase in proton permeability through the inner mitochondrial membrane, inhibition of electron transport, or a decrease in mitochondrial substrate transport/oxidation, any of which would impair proton pumping by mitochondrial complexes and reduce adenosine triphosphate synthesis, resulting in parasite cell death [39]. CCCP was used as a positive control because it is an uncoupler of oxidative phosphorylation and reduces mitochondrial membrane potential by directly dissipating the proton gradient across the inner mitochondrial membrane [12,40]. Amastigotes treated with parthenolide presented severe plasma membrane and mitochondrial damage, suggesting an autophagic process [39].
Treatment with parthenolide induced shedding of the membranes into the flagellar pocket, appearing as concentric membranes and suggesting intense exocytic activity because this site is where endocytosis and exocytosis occur in trypanosomatids. Treatment of promastigote forms of L. amazonensis with edelfosine for 1 day [41] and parthenolide for 3 days [10] also led to the appearance of a large number of vesicles inside the flagellar pocket, suggesting a process of exacerbated protein production by cells as they attempt to survive.
Other studies indicated that the plasma membrane of human promyelocytic leukemic HL-60 cells appears to be one of the targets of parthenolide because its integrity is lost very early during cell death, reflected by atypical apoptosis and primary necrosis (i.e., lysis of the membrane) [42].
The lipid spin probe 5-DSA was incorporated into the plasma membrane of Leishmania in the usual way, and the EPR spectra obtained were typical of cell membranes. Interestingly, the spectra of the Leishmania membrane were very similar to those of the same spin label in erythrocyte membranes [43]. The erythrocyte membrane has been well characterized by EPR spectroscopy of spin-labeled lipids and is considered to have a certain rigidity, particularly because of its high content of protein and cholesterol. The sesquiterpene parthenolide significantly increased the rigidity of the Leishmania membrane when applied to the cell suspension at a ratio of 3 × 10 9 parthenolide molecules/cell. Parthenolide also showed dose-dependent anti-Leishmania activity against the amastigote form: the IC 50 was 1.3 μM parthenolide for a cell concentration of 1 × 10 6 cells/ml. Therefore, the effect of parthenolide against the amastigote forms of Leishmania was observed at a ratio of 7.8 × 10 8 parthenolide molecules/cell, and the greatest change in membrane fluidity was observed at a concentration 3.8-fold higher than that required for growth inhibition. Membrane stiffness, assessed by EPR spectroscopy of the spin label, has been associated with lipid peroxidation [44,45]. A detailed study of the interaction between parthenolide and membranes, and of its role as a pro-oxidant in simpler systems, is necessary to determine whether the membrane rigidity observed here was attributable to lipid peroxidation.
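The molecules-per-cell figures quoted above follow from a simple unit conversion; a short Python sketch using only values stated in the text is given below.
```python
# Convert the IC50 (1.3 uM parthenolide at 1e6 cells/ml) into molecules per cell.
AVOGADRO = 6.022e23     # molecules per mole

ic50_molar = 1.3e-6     # mol/L
volume_l = 1e-3         # 1 ml of culture
cells = 1e6             # cells in that 1 ml

molecules_per_cell = ic50_molar * volume_l * AVOGADRO / cells
print(f"{molecules_per_cell:.1e} molecules/cell")        # ~7.8e8, as stated

epr_ratio = 3.0e9       # molecules/cell used in the EPR experiments
print(f"fold difference: {epr_ratio / molecules_per_cell:.1f}")  # ~3.8-fold
```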
Conclusions
Our results indicated that the antileishmanial action of parthenolide is associated with the appearance of autophagic vacuoles, membrane stiffening, the loss of membrane integrity, and mitochondrial dysfunction. These results indicate that parthenolide induced amastigote cell death by autophagy, although other mechanisms of cell death, such as apoptosis and necrosis, cannot be dismissed. Considering the limited repertoire of existing antileishmanial compounds, continuously developing new leishmanicidal compounds is essential. In the ongoing search for better antileishmanial compounds, products derived from plants are gaining ground, and the isolation and purification of the active components of medicinal plants have been among the greatest advances. Additionally, delineation of the biochemical mechanisms that mediate the effects of these compounds would help in developing new chemotherapeutic approaches.
Drugs
Parthenolide (minimum 90%) was purchased from Sigma-Aldrich (Steinheim, Germany). Amphotericin B (Cristália, Produtos Químicos Farmacêuticos Ltda, Itapira, SP, Brazil) was used as a positive control. In all of the tests, 0.05% dimethyl sulfoxide (DMSO; Sigma, St. Louis, MO, USA) was used to dissolve the highest dose of the compounds and had no effect on the parasites' proliferation or morphology.
Axenic amastigotes
Promastigotes of the Leishmania species differentiate to amastigotes with the combination of low pH and high temperature [46]. The WHOM/BR/75/Josefa strain of Leishmania amazonensis, isolated by C.A. Cuba-Cuba (University of Brasília, Brasília, Distrito Federal, Brazil) from a human case of diffuse cutaneous leishmaniasis, was used in the present study. Axenic amastigote cultures were obtained by the in vitro differentiation of promastigotes from the stationary phase in 25 cm 2 tissue culture flasks by progressive temperature increase and pH decrease [47]. The cultures were maintained at 32°C in Schneider's insect medium (Sigma, St. Louis, MO, USA), pH 4.6, with 20% fetal bovine serum through weekly serial sub-culturing for further studies.
Antiproliferative effect
For the parasite growth inhibition assays, L. amazonensis axenic amastigotes were harvested during the exponential phase of growth, and 10 6 cells were added to each well of a 24-well plate and treated with different concentrations of parthenolide and amphotericin B. Medium alone and 0.05% DMSO were used as negative controls. For each treatment, the parasites were observed and counted daily using a Neubauer chamber with an optical microscope. Each experiment was performed in duplicate and twice on different occasions. The antiproliferative effect (percentage of growth inhibition) was evaluated over 5 days of treatment, and the data are expressed as the mean ± standard error of the mean (Microsoft Excel). The corresponding 50% and 90% inhibitory concentrations (IC 50 and IC 90 ) were determined from the concentration-response curves (Excel software). Data were compared via one-way analysis of variance (ANOVA) followed by Tukey's multiple range test, with statistical significance set at p < 0.05.
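The IC 50 and IC 90 values were read from the concentration-response curves in Excel; the sketch below shows one common way to perform an equivalent interpolation programmatically. The concentration and inhibition values are illustrative placeholders, not the measured data.
```python
import numpy as np

# Log-linear interpolation of a concentration-vs-percent-inhibition curve.
conc_uM = np.array([0.5, 1.0, 2.0, 4.0])             # illustrative concentrations
inhibition_pct = np.array([25.0, 45.0, 70.0, 90.0])  # illustrative responses

def ic_value(target_pct, conc, inhib):
    # interpolate on log10(concentration), where the response is roughly linear
    return 10 ** np.interp(target_pct, inhib, np.log10(conc))

print(f"IC50 ~ {ic_value(50, conc_uM, inhibition_pct):.2f} uM")
print(f"IC90 ~ {ic_value(90, conc_uM, inhibition_pct):.2f} uM")
```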
Activity of parthenolide in infection of murine macrophages
The effect of parthenolide on L. amazonensis-infected mouse peritoneal macrophages was evaluated. The experimental protocol was approved by the Animal Ethics Committee of the Universidade Estadual de Maringá (no. 013/ 2010). BALB/c mice resident peritoneal cells were harvested in phosphate-buffered saline (PBS; 0.01 M, pH 7.2) and centrifuged, and the sediment was resuspended in RPMI 1640 medium supplemented with 10% fetal bovine serum. Cells (1 × 10 5 ) were seeded on 13-mm coverslips in 24-well plates and incubated at 37°C in a 5% CO 2 atmosphere. After 15 h, macrophages were infected with promastigotes at a 10:1 parasite:cell ratio and incubated again for 6 h. The remaining noninternalized parasites were removed. The infected host cells were treated with parthenolide at concentrations of 4.0, 3.2, 2.4, and 1.6 μM. After 24 h, the coverslips were washed with PBS, fixed in methanol, stained with Giemsa, mounted in Entellan (Merck), and examined under an optical microscope. The rate of cell infection and number of amastigotes per cell were evaluated by counting 200 random cells in duplicate cultures in at least two independent experiments. The survival index was calculated by multiplying the percentage of infected macrophages and mean number of internalized parasites per macrophage. Data were compared via one-way analysis of variance (ANOVA) followed by Tukey's multiple range test for statistically significant differences at p < 0.05.
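The survival index and the percentage reduction relative to the untreated control reduce to simple arithmetic, sketched below with hypothetical counts (the per-coverslip counts themselves are not reproduced in the text).
```python
# Survival index = (% infected macrophages) x (mean internalized amastigotes per macrophage).
def survival_index(pct_infected: float, amastigotes_per_macrophage: float) -> float:
    return pct_infected * amastigotes_per_macrophage

def pct_reduction(treated_si: float, control_si: float) -> float:
    return 100.0 * (1.0 - treated_si / control_si)

control = survival_index(80.0, 5.0)   # hypothetical untreated control
treated = survival_index(30.0, 2.0)   # hypothetical parthenolide-treated culture
print(f"Reduction in survival index: {pct_reduction(treated, control):.1f}%")
```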
Genotoxicity study
To assess the toxicity of parthenolide in mice, a micronucleus test was conducted in groups of five male and five female Swiss albino mice (Mus musculus) that weighed approximately 42 g. The animals were obtained from the Central Animal House of the Universidade Estadual de Maringá, Paraná, Brazil. They were housed in plastic cages at 22 ± 1°C and 55 ± 10% humidity, with a 12 h/12 h light/ dark cycle and free access to water and food (Nuvilab Cr1). The study was conducted according to experimental standards approved by the Animal Ethics Committee of the Universidade Estadual de Maringá (protocol no. 013/2010).
The animals received 3.75 mg parthenolide/kg body weight suspended in 10% DMSO by oral gavage. The negative control was a vehicle group, and the positive control was a group that received 40 mg cyclophosphamide/kg body weight. The mice were examined regularly for mortality and clinical signs of toxicity until sacrifice by carbon dioxide asphyxiation, which occurred 24 h after treatment. Both femurs were dissected, and bone marrow was flushed with fetal calf serum. After centrifugation for 5 min at 2,000 × g, 10 μl of the sediment was smeared on glass slides and air-dried. The smears were fixed with absolute methanol for 5 min and stained with May-Grünwald-Giemsa to detect micronucleated polychromatic erythrocytes (MNPCE). The number of micronucleated cells was counted in 2,000 reticulocytes per animal using an Olympus BH-2 microscope at 1,000× magnification [26]. The statistical analyses were performed with a one-way analysis of variance (ANOVA) followed by Dunnett's test. Differences were considered significant at a p value of less than 0.05.
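The group comparison described above (one-way ANOVA on MNPCE counts) can be reproduced as in the sketch below; the per-animal counts are simulated around the reported group means, and the Dunnett post hoc step is not shown because it requires an additional multiple-comparison routine.
```python
import numpy as np
from scipy.stats import f_oneway

# Simulated MNPCE counts per 2,000 reticulocytes for 10 animals per group,
# loosely based on the reported means/SDs (vehicle 7.0 +/- 1.8,
# parthenolide 10.0 +/- 1.6, cyclophosphamide 27.0 +/- 4.0).
rng = np.random.default_rng(1)
vehicle = rng.normal(7.0, 1.8, 10)
parthenolide = rng.normal(10.0, 1.6, 10)
cyclophosphamide = rng.normal(27.0, 4.0, 10)

f_stat, p_value = f_oneway(vehicle, parthenolide, cyclophosphamide)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")
```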
Scanning and transmission electron microscopy
After treatment with the IC 50 (72 h) of parthenolide, axenic amastigotes were washed in PBS and fixed in 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer at 4°C. For scanning electron microscopy, amastigotes were placed on a specimen support with a poly-L-lysine-coated coverslip and washed in cacodylate buffer. The cells were dehydrated in an increasing ethanol gradient, critical-point dried in CO 2 , sputter-coated with gold, and observed in a Shimadzu SS-550 scanning electron microscope.
For transmission electron microscopy, amastigote forms were treated with the IC 50 of parthenolide and the IC 50 of amphotericin B and fixed as described above. The cells were postfixed in a solution that contained 1% osmium tetroxide, 0.8% potassium ferrocyanide, and 10 mM calcium chloride in 0.1 M cacodylate buffer, dehydrated in an increasing acetone gradient, and embedded in Epon resin. Ultrathin sections were stained with uranyl acetate and lead citrate, and the images were examined in a Zeiss 900 transmission electron microscope.
Fluorescence of monodansylcadaverine during cell death
Axenic amastigotes were treated with IC 50 and IC 90 equivalents of parthenolide. After 72 h, the cells were washed and resuspended in PBS. To verify the induction of autophagy by parthenolide, the cells were incubated with 0.05 mM monodansylcadaverine (MDC) at 37°C for 10 min. After incubation, the cells were washed three times with PBS to remove excess MDC, immediately analyzed by fluorescence microscopy at an excitation wavelength of 360-380 nm and emission wavelength of 525 nm, and photographed using a charge-coupled-device camera. This study was qualitative.
Flow cytometry
The antileishmanial activity of parthenolide (20 and 40 μM) on the integrity of the plasma membrane and mitochondrial membrane potential of axenic amastigotes (5 × 10 6 cells/ml) was determined after 3 h treatment. Amphotericin B (5.0 μM) and carbonyl cyanide m-chlorophenylhydrazone (200 μM) were used as positive controls. Untreated amastigotes were used as a negative control. Each flow-cytometric technique was evaluated by repeating each experiment three times to verify reproducibility.
The integrity of the plasma membrane was assessed using L. amazonensis amastigotes at an average density of 5 × 10 6 cells suspended in 500 μl PBS and stained with 50 μl propidium iodide (2 μg/ml) for 5 min at room temperature.
To measure mitochondrial membrane potential (ΔΨ m ), 1 ml of saline that contained 1 × 10 6 of treated amastigotes was mixed with 1 μl rhodamine 123 (5 mg/mL) for 15 min at 37°C. The cells were washed, resuspended in PBS, and incubated at the same temperature for 30 min.
A total of 10,000 events were analyzed per sample using a FACSCalibur cytometer, and numeric data were processed with Cellquest software (both from Becton Dickinson). Propidium iodide and rhodamine 123 are excited with a 480 nm argon ion laser, and fluorescence emission occurs at 560-580 nm and 515-530 nm, respectively.
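The quoted percentages of labeled cells come from a gating step applied to the recorded events; the sketch below mimics that step on simulated fluorescence values (the real analysis was done in Cellquest on FACSCalibur data, and the threshold placement here is an assumption for illustration).
```python
import numpy as np

# Fraction of events whose propidium iodide fluorescence exceeds a gate set
# from the untreated (negative control) population.
rng = np.random.default_rng(0)
control = rng.lognormal(mean=2.0, sigma=0.4, size=10_000)   # untreated amastigotes
treated = np.concatenate([
    rng.lognormal(2.0, 0.4, 5_000),                         # still PI-negative
    rng.lognormal(4.0, 0.4, 5_000),                         # membrane-compromised
])

threshold = np.percentile(control, 99)          # gate above 99% of control events
pct_positive = 100.0 * np.mean(treated > threshold)
print(f"PI-positive events: {pct_positive:.1f}%")
```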
Electron paramagnetic resonance spectroscopy
Spin-label 5-doxyl stearic acid (5-DSA), with a nitroxide radical moiety (doxyl) in the fifth carbon atom of the acyl chain, was purchased from Sigma (St. Louis, MO, USA). A small aliquot (3 μl) of stock solution of the spin label in ethanol (2 mg/ml) was transferred to a glass tube. After the solvent evaporated, approximately 2.4 × 10 8 cells of Leishmania suspended in 40 μl PBS was added to the film of the spin label with gentle agitation. In a second tube, 6 μl of a stock solution of parthenolide in chloroform (201 mM) was added. After evaporation of the solvent, the first spin-labeled cell suspension was placed on the parthenolide film and gently agitated. The cells were then introduced into a 1 mm inner diameter capillary column for electron paramagnetic resonance (EPR) measurements, which was sealed by flame. Samples were also prepared that contained double and triple the concentrations of parthenolide used in the first sample (using 12 and 18 μl of the solution of parthenolide in chloroform, respectively).
Electron paramagnetic resonance spectroscopy was performed with a Bruker ESP 300 spectrometer (Rheinstetten, Germany) equipped with an ER 4102 ST resonator. The instrument settings were the following: microwave power, 10 mW; modulation frequency, 100 kHz; modulation amplitude, 1.0 G. Electron paramagnetic resonance spectra simulations were performed using the NLLS program developed by Budil and coworkers [48]. In the spectral calculations, the NLLS program includes the magnetic g- and A-tensors and the rotational diffusion tensor, R, which are expressed in a system of Cartesian axes fixed in the spin-labeled molecule. To reduce the number of parameters in the fittings and simplify the simulation, the average rotational diffusion rate, R bar , was calculated by the fitting program using the relationship R bar = (R per 2 •R par ) 1/3 , in which R per is the perpendicular component of the rotational diffusion, and R par is the parallel component of the rotational diffusion. R bar was converted to the rotational correlation time, τ c , following the relationship τ c = 1/(6R bar ). Similar to previous studies [49,50], the magnetic parameters were determined based on a global analysis of the overall spectra obtained in this work, and all of the EPR spectra were simulated using the same predetermined parameters. In this work, the spectra were simulated with a model of two spectral components. | 2017-06-21T01:22:10.295Z | 2014-06-10T00:00:00.000 | {
"year": 2014,
"sha1": "eb76dc922c5f92ef5a25a3769e197a57841c4b88",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-14-152",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb76dc922c5f92ef5a25a3769e197a57841c4b88",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
3424765 | pes2o/s2orc | v3-fos-license | Mutations in SNORD118 cause the cerebral microangiopathy leukoencephalopathy with calcifications and cysts
Although ribosomes are ubiquitously expressed and essential for life, recent data indicate that monogenic causes of ribosomal dysfunction can confer a remarkable degree of specificity in terms of human disease phenotype. Box C/D small nucleolar RNAs (snoRNAs) are evolutionarily conserved non-protein encoding RNAs involved in ribosome biogenesis. Here we show that biallelic mutations in the gene SNORD118, encoding the box C/D snoRNA U8, cause the cerebral microangiopathy leukoencephalopathy with calcifications and cysts (LCC), presenting at any age from early childhood to late adulthood. These mutations affect U8 expression, processing and protein binding and thus implicate U8 as essential in cerebral vascular homeostasis.
In 1996, Pierre Landrieu and colleagues described three unrelated children with a purely neurological disorder characterized by the radiological triad of cerebral white matter disease (leukoencephalopathy), intracranial calcifications and cysts (LCC) (Fig. 1) 1 . This disease manifests as progressive cerebral degeneration, in which microangiopathy characterized by angiomatous-like blood vessels with gliosis and Rosenthal fiber deposition is the most striking pathological feature (Supplementary Fig. 1) [1][2][3] . The observation of sibling pairs, including affected females, suggested that LCC is a genetic disorder, likely inherited as an autosomal recessive trait. However, uncertainty has remained over the molecular basis of this phenotype, not least because of the wide range of age at presentation that has been observed 4 . Here we define LCC as an apparently single-gene disorder due to germline biallelic mutations in the gene encoding the box C/D snoRNA U8, thus indicating that LCC may represent a novel ribosomopathy. Furthermore, the genetic architecture of LCC illustrates some of the difficulties associated with the attribution of causation to sequence variants in non-protein-coding genomic DNA.
Identification of SNORD118 mutations in patients with LCC
Over a period of 12 years, we collected clinical data and biological samples relating to 40 patients affected with LCC (Supplementary Table 1). Surprisingly for a rare autosomal recessive disorder, there was a relative paucity of consanguineous families in our cohort (only 2 of 33 pedigrees), strongly suggesting that homozygous mutations might not typically be associated with this phenotype. These 40 patients, who were mainly of European ancestry, demonstrated an age at presentation ranging from early infancy to 54 years of age. In an attempt to define the genetic basis of LCC, we sequenced the exomes of 18 affected individuals and analyzed the data in the context of both an autosomal recessive trait and an autosomal dominant model with reduced penetrance. However, no candidate causal mutations were identified (data not shown). We then pursued a different strategy, using linkage and haplotype analysis in five pairs of affected siblings born to unrelated parents and two singletons who were the offspring of separate consanguineous unions. In this way, across the genome, we were able to identify a single region of >1 Mb in size with a logarithm of odds (LOD) score >3. The minimal mapping locus of 1.2 Mb was located on chromosome 17 (genomic coordinates 7,721,931-8,930,080, GRCh37) (LOD score of 6.02), indicating that LCC disease-causing variants lie within this interval (Supplementary Fig. 2).
Considering the absence of any obvious pathogenic variants on reexamination of our sequence data covering the coding exons and essential splice sites in this mapping region, we undertook a capture sequencing assay of 3 Mb of genomic DNA on chromosome 17 (coordinates 7,000,000-10,000,000) using samples from ten unrelated patients. In each of these affected individuals, we identified two rare variants (defined as alleles with a frequency of <0.005 in the Exome Aggregation Consortium (ExAC) database) lying within a 199-bp stretch of DNA (8,076,076,960) encompassing the gene SNORD118 (NR_033294.1). Sanger sequencing confirmed these variants, leading us to analyze a further 30 patients demonstrating typical clinical and neuroradiological characteristics of LCC. In total, we observed two rare sequence variants that segregated with phenotypic status (40 affected individuals and 5 unaffected full siblings) in all 33 families in our cohort (Fig. 2, Table 1 and Supplementary Table 2). Where DNA was available (18 families), all parents showed appropriate heterozygosity for a single variant, except in two cases: in F819, the mother carried two rare variants, and her two affected children each inherited a distinct rare maternal allele in combination with a paternally derived genomic deletion of SNORD118 (Supplementary Fig. 3); and, in F906, a c.103G>A nucleotide alteration arose de novo on the paternal allele (with microsatellite analysis confirming paternity; Supplementary Table 3).
Of the total of 36 rare putative pathogenic variants observed in SNORD118, 13 were new (not annotated in the ExAC database, comprising more than 112,000 annotated alleles for each of these 12 variants, or our in-house data set of >5,000 exomes). In 15 and 16 of 33 families, the affected individuals were compound heterozygous for either one completely new allele and one rare allele or for two rare alleles (where the rarer of the two variants had an ExAC allele frequency of <0.00009756 in all cases), respectively. Consistent with our linkage analysis, two unrelated patients born to consanguineous parents demonstrated homozygosity for a rare allele (with allele frequencies of 0.0005781 and 0.00000887 for these putative mutations and no recorded homozygotes in the ExAC database). Further supporting the notion that biallelic variants in SNORD118 are causative of LCC, we noted recurrent putative mutant alleles in our cohort. Specifically, 11 new or rare variants were observed in more than one family, with a mutant allele shared by 4 or more different sets of families at 5 distinct nucleotide positions. One of these alleles, n.131C>G, was seen in four LCC families but was not recorded in the ExAC database of more than 112,000 alleles at this position, while an n.*5C>G variant, observed in the compound-heterozygous state in 8 disease pedigrees (8 of 66 alleles in affected individuals), had an ExAC frequency of 0.0005781 (1 in 1,730 control alleles) (8 in 66 versus 1 in 1,730; χ 2 test, P < 0.000005). Notably, screening of a panel of 677 European controls to determine the frequency of biallelic new or rare variants in the same person, which is not possible to derive from ExAC data, found only 4 individuals carrying two rare variants on distinct alleles (4 of 677 versus 20 of 20 LCC probands where it was possible to test for or impute biallelic inheritance; χ 2 test, P < 0.000005) (Supplementary Tables 4 and 5). Of further note, none of these alleles were new and only one was seen in our patient cohort. Thus, despite a remarkable variability in age at presentation for some patients, ranging from the neonatal period (for example, F172 and F344) to the sixth decade of life (for example, F433 and F1172), these data indicate that LCC is a genetically homogeneous disorder and that we have identified the disease-causing genomic variants in all 40 patients conforming to the LCC phenotype in our study.
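The two χ 2 comparisons quoted above can be recomputed from the stated counts; a short sketch using SciPy is given below. Note that SciPy applies a Yates continuity correction to 2 × 2 tables by default and that some expected cell counts are small, so the exact P values may differ slightly from other implementations, while remaining far below the reported threshold.
```python
from scipy.stats import chi2_contingency

# n.*5C>G allele: 8 of 66 patient alleles vs 1 of 1,730 control alleles.
allele_table = [[8, 66 - 8], [1, 1730 - 1]]
chi2, p, dof, expected = chi2_contingency(allele_table)
print(f"allele comparison: chi2 = {chi2:.1f}, p = {p:.1e}")

# Biallelic rare variants: 20 of 20 LCC probands vs 4 of 677 controls.
biallelic_table = [[20, 0], [4, 677 - 4]]
chi2, p, dof, expected = chi2_contingency(biallelic_table)
print(f"biallelic comparison: chi2 = {chi2:.1f}, p = {p:.1e}")
```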
Functional analysis of a selection of SNORD118 variants
SNORD118 encodes the box C/D snoRNA U8. Box C/D snoRNAs are evolutionarily conserved RNAs involved in ribosomal biogenesis and function 5 . The U8 snoRNA is independently transcribed, producing precursor U8 snoRNAs that are processed to mature box C/D U8 snoRNAs 6 . The conserved box C/D motif binds to four core proteins, namely, 15.5K, NOP56, NOP58 and fibrillarin 7 . These proteins assemble onto the box C/D motif in a stepwise manner, with initial binding of 15.5K necessary for recruitment of the three remaining proteins, thus facilitating stabilization, processing and trafficking. U8 also contains a well-conserved LSm (like-Sm)-binding site 8 . Seven individual LSm proteins assemble into a ring structure that binds U8 to form a ribonucleoprotein complex. This U8 snoRNP is necessary for ribosomal RNA processing 9 .
Of the 36 mutations affecting U8 that we identified, one (c.-54_-49del) lies within a recognized 5′ enhancer element important for U8 expression 6 , 29 are within the U8 sequence and 4 are situated immediately 3′ to the U8 sequence in a region that is transcribed but then removed by processing of the precursor U8 transcript 10 . We also identified one complete gene deletion (in F819) and a further variant just 5′ to the start of the U8 sequence (F1424) of undetermined functional consequence. Within the mature U8 sequence itself, ten variants were observed in the box C/D and LSm-binding site, affecting nucleotides that are strictly conserved across evolution (Supplementary Fig. 4). Additionally, the variants at n.103, n.104 and n.113 lie within the stem of a hairpin loop that is very well conserved and would be predicted to decrease the stability of this structure (Supplementary Fig. 5), while the three variants at n.81 and the variant at n.82 are situated within the highly conserved GAUU motif of the LSm-binding site.
To demonstrate a functional consequence for a selection of these U8 variants, we first generated a reporter construct encompassing the entire U8 promoter region, either with the wild-type sequence or the n.-54_-49del variant. In keeping with the promoter function of this region, we observed 109-fold higher reporter gene expression with the wild-type sequence in comparison to empty vector. In contrast, the n.-54_-49del variant was associated with a profound reduction in transcriptional activity relative to the wild-type sequence (Fig. 3). We then assessed the effect of four variants (n.57G>A, n.58A>G, n.61A>G and n.60_61insT) observed in the U8 box C region responsible for the interaction of U8 with the 15.5K protein. These patient-associated putatively causal mutations all resulted in reduced binding to the 15.5K protein in comparison to wild-type sequence (Fig. 4). Binding of His-tagged 15.5K to the n.57G>A and n.58A>G U8 was severely impaired (Fig. 4a and Supplementary Fig. 6), concordant with previous studies that demonstrated that disruption of the essential GA dinucleotide within the asymmetric internal loop completely abrogates 15.5K binding 11 . A more subtle reduction in 15.5K binding was seen with the n.61A>G and n.60_61insT variants (Fig. 4a,b). Binding of His-15.5K and n.61A>G demonstrated a shift in mobility at 100 nM of His-15.5K; however, this shift was less than that observed in wild-type RNA at the same concentration, and excess free RNA could be seen at all concentrations up to 500 nM, indicating that binding was impaired. Similarly, binding of His-15.5K and n.60_61insT demonstrated a shift in mobility at 100 nM. This shift was less than that observed in wild-type RNA, and an equivalent shift was only achieved at the highest concentration tested (500 nM), indicating that binding was impaired. The nucleotides at positions n.60 and n.61 form part of the conserved stem loop II of the box C/D motif that is necessary for assembly of the box C/D snoRNP 12 .
Finally, to explore the effect of variants in the region immediately downstream of U8, including the n.*5C>G sequence alteration seen in eight families, we examined the in vitro processing of precursor U8 snoRNAs with extended 3′ regions encompassing four distinct variants. Previous work has defined a highly stereotyped pattern of processing of the SNORD118 transcript 10 , and we observed disturbed processing of the precursor U8 RNAs with the four mutations in comparison to wild-type RNA ( Fig. 5 and Supplementary Fig. 7a). Additionally, we examined in vitro processing of precursor U8 with the box C mutation n.58A>G located away from the extended 3′ region. The processing pattern observed for n.58A>G U8 was indistinguishable from that of wild-type U8 (Supplementary Fig. 7b), indicating that variants located in the 3′ extension of U8 specifically affect processing of the precursor U8 snoRNA.
Together, these data provide compelling evidence that the diseaseassociated variants that we identified are likely to have a functional effect on U8 activity.
LCC patient fibroblasts demonstrate a cellular phenotype
Figure 2 legend. The gray box represents mature SNORD118. From 5′ to 3′, the orange boxes highlight the distal (DSE) and proximal (PSE) sequence elements. From 5′ to 3′, box C, the LSm-binding site and box D in SNORD118 are highlighted by the red boxes. The violet box highlights the 3′ box (at the end of the precursor transcript). The blue line corresponds to the sequence encompassing the 3′-processed precursor transcripts of SNORD118 that are intermediates for the mature transcript. Variants that were found in the ExAC browser are shown above the schematic, with new variants not seen in ExAC shown below. The number of LCC families with each variant is shown in parentheses. Deletions and duplications are represented by blue boxes beneath the schematic. A number sign indicates two rare variants that were seen in F344 in the homozygous state. However, n.8G>C was also observed in F278, suggesting that this is the likely pathogenic variant.
Table 1 footnotes. a All genomic coordinates should be preceded by chr17(GRCh37). b In F344, both of the rare variants were seen in the homozygous state. However, n.8G>C was also observed in F278, suggesting that this is the likely pathogenic variant. c Deletion extends beyond these boundaries, but the boundaries have not been fully defined. One sibling carries g.8076762G>A plus g.8076696_8076977del. The other sibling carries g.8076770G>A plus g.8076696_8076977del. The mother carries g.8076762G>A plus g.8076770G>A.
Although the clinical features of LCC are apparently limited to the central nervous system, we observed a marked reduction in expression of SNORD118 (U8) in patient-derived fibroblasts in comparison to control cells (Fig. 6a). Furthermore, primary fibroblasts from patients with LCC did not grow as well as cells from controls (Fig. 6b).
A higher percentage of LCC fibroblasts were senescent in comparison to control cells (Fig. 6c), and a proliferation defect was confirmed by carboxyfluorescein succinimidyl ester (CFSE) labeling (Fig. 6d,e). This proliferative defect was not associated with increased incidence of apoptosis (Fig. 6f), and we did not observe a disturbance in cell cycling under basal conditions or after treatment with the DNA crosslinking agent mitomycin C (Supplementary Fig. 8). Fibroblasts from patients with Coats plus also demonstrate proliferative defects (secondary to a problem with telomere homeostasis) 13,14 , but qRT-PCR data for fibroblasts from a patient with Coats plus (F345) showed U8 levels comparable to those in controls. Because of an inability to grow enough cells, we were unable to investigate translation efficiency in fibroblasts. Epstein-Barr virus (EBV)-transformed patient-derived lymphoblasts, which grow normally (data not shown), did not demonstrate any abnormality in polyribosome (polysome) fractionation analysis (Supplementary Fig. 9).
Although the phenotype of the exclusively neurological disease LCC is clearly distinctive, it is not pathognomonic, as a similar radiological association can also be seen in the context of the multisystem disorder Coats plus (Supplementary Fig. 10) 15 . Coats plus is most frequently caused by mutations in CTC1, encoding conserved telomere maintenance component 1, but such mutations were not found in patients without systemic involvement, indicating that Coats plus and LCC are genetically distinct entities 13,16 . Noting that SNORD118 is situated 50 kb upstream of CTC1 on chromosome 17p, we considered whether the LCC disease-associated variants across SNORD118 might be affecting a control element for the expression of CTC1. In the absence of a functional antibody against CTC1, expression was tested in patientderived fibroblasts using qRT-PCR and was similar to that in controls (Supplementary Fig. 11). As expected, we observed increased DNA damage and numbers of telomere-dysfunction-induced foci (TIFs) in primary fibroblasts from patients with biallelic CTC1 hypomorphic mutations. In contrast, primary fibroblasts from patients with LCC displayed a similar number of DNA damage foci as controls and showed no evidence of disturbed telomeric integrity ( Supplementary Fig. 12). We also looked at telomere length in four families using Flow-FISH, but we saw no consistent association with genotype status (data not shown). Taken in the context of the data relating to U8 per se, these results indicate that the physical proximity of SNORD118 and CTC1 is likely not causally relevant to LCC.
LCC fibroblasts do not demonstrate dysfunction of TMEM107
Finally, SNORD118 is located within the 3′ UTR of the protein-coding gene TMEM107 (NM_032354.3). Although dysfunction of TMEM107 could, in theory, be causal in the LCC phenotype, we did not observe any exonic variants in this gene. Moreover, expression of TMEM107 at the RNA and protein levels was similar for patients and controls in primary fibroblasts (Supplementary Fig. 13). An N-ethyl-N-nitrosourea (ENU)-mutagenesis-derived mouse with a homozygous loss-of-function mutation in Tmem107 demonstrates a defect in ciliogenesis leading to polydactyly, neural tube defects and microphthalmia (none of which are seen in patients with LCC 17 ), and mutations in TMEM107 have been reported to result in a typical human ciliopathy phenotype 18 . These observations lead us to conclude that LCC is not consequent to TMEM107 dysfunction.
DISCUSSION
Figure 5 legend. 3′-end precursor processing of U8 variants. Lanes: WT, n.*1C>T, n.*5C>G, n.*9C>T, n.*10C>G. In vitro 3′-end processing of 5′-end-labeled, in vitro-transcribed precursor U8 snoRNA (U8-165) was assessed in HeLa nuclear extracts. At 30 min, multiple pre-U8 snoRNA processing intermediates can be seen for the wild-type pre-U8 snoRNA. Four mutations (n.*1C>T, n.*5C>G, n.*9C>T and n.*10C>G) in the pre-U8 snoRNA, located 1, 5, 9 and 10 nucleotides (nt) downstream of the 3′ end of mature U8, were assessed. All four mutant RNAs exhibited a difference in processing intermediates in comparison to wild-type pre-U8 snoRNA. Furthermore, all mutant RNAs displayed additional, shorter intermediates that are not present with wild-type RNA. Data are representative of three independent experiments.
Here we demonstrate that mutations in SNORD118 cause the cerebral microangiopathy LCC. We identified completely new or rare biallelic sequence variants in all 40 radiologically stereotyped patients belonging to every one of the 33 families in our cohort. Moreover, the additional observation of sharing of new and rare alleles across multiple pedigrees and the results of screening of a large panel of control samples to determine the frequency of biallelic rare variants in unaffected individuals provide strong evidence of genetic causality. Of particular note, although LCC is rare and inherited as an autosomal recessive trait, only 2 of 33 mutation-positive families that we identified are consanguineous. In keeping with this finding, the affected individuals in 31 families were compound heterozygotes for two different SNORD118 variants, likely comprising one 'severe' and one 'mild' mutation. Nineteen families in our cohort carry one of six putative pathogenic variants with an overall allelic frequency of >0.001 in the ExAC control database (akin to, for example, the known pathogenic mutations p.Ala177Thr in RNASEH2B and p.Pro193Ala in ADAR1, which have allele frequencies of 0.001 and 0.002, respectively). In each of these 19 families, the second allele is either not present at all in ExAC (is completely novel; 9 families) or is only present at a very low frequency (<0.00009756; 1 in 10,250 alleles in all cases) (Supplementary Table 6). We suggest that these more frequent, but still rare, variants are likely hypomorphic, whereas the still rarer variants confer a greater deleterious effect on U8. Similar to the situation observed in certain other Mendelian autosomal recessive diseases 13,19,20 , these data lead to the conclusion that biallelic null mutations in SNORD118 are likely incompatible with development, whereas homozygosity for (most) hypomorphic variants may be associated with no disease, later-onset pathology or a different phenotype not ascertained here. Such a possibility would explain the presence in the ExAC database of a small number of homozygotes for 5 of the 36 putatively causal mutations that we report (Supplementary Tables 2 and 6). Related to this understanding, it is of possible relevance that one, F446, of the two individuals in our cohort with homozygous SNORD118 mutations died at the youngest age of any of our patients (13 years), and we also note that two of our patients were completely asymptomatic until the sixth decade of life. Such variability in age at presentation likely relates to the overall functional effect of the two molecular lesions in combination.
Furthermore, it is interesting that the mother in F819 carries two rare variants (on different alleles), both of which were seen in affected patients. However, she has not demonstrated any disease features to the age of 60 years. In contrast, both of her children, each carrying a different maternally derived rare allele in combination with a paternally inherited whole-gene deletion, experienced the onset of symptoms in their teenage years.
As the apparatus of protein synthesis, the ribosome is one of the most precisely constructed and regulated molecular machines in the cell 21 . The ribosome consists of rRNA and ribosomal proteins, with snoRNAs being an evolutionarily conserved group of non-protein-coding RNAs involved in the modification and processing of rRNAs. U8 is a vertebrate-specific factor, which is the only known snoRNA essential for maturation of the 60S large ribosomal subunit RNAs 5.8S and 28S (refs. 22-24). It is thought that U8 snoRNA binds to newly transcribed pre-rRNA, thereby facilitating its proper folding, but later needs to be displaced for further processing to occur. We provide evidence that a variant in the promoter region of SNORD118 affects expression of the U8 snoRNA. Furthermore, alterations in box C disturb association of U8 with the snoRNA-binding protein 15.5K, while variants at the 3′ end of the gene confer aberrant processing of the precursor U8 snoRNA. In total, we recorded seven putatively causal mutations in the invariant box C/D motif, three within the stem of a hairpin loop that is very well conserved and would be predicted to decrease the stability of this structure and three within the highly conserved GAUU motif of the LSm-binding site. Taken together, these data indicate that the variants we describe are pathogenic and likely act as loss-of-function mutations. LCC, which to our knowledge constitutes the first example of a human disease associated with point mutations in a gene encoding a box C/D snoRNA, illustrates some of the difficulties associated with the attribution of causation to sequence variants in a non-protein-coding portion of genomic DNA, where it is more challenging to use conventional in silico algorithms to assess the pathogenicity of sequence variants. This fact, together with the high degree of sequence variation across SNORD118 (Supplementary Table 7) and the finding that the disease occurs almost invariably in the context of compound heterozygosity, leads us to suggest that identification of the genetic basis of LCC would likely only have been possible with a large cohort of patients such as the one presented here. The degree of SNORD118 sequence variability in the general population is noteworthy, perhaps indicating that this variation confers important biological effects worthy of further study.
Alterations in ribosomal components, structure or function can cause a heterogeneous class of diseases referred to as the ribosomopathies 25 , with this diversity in clinical phenotype informing a developing understanding of the multiple specialized roles of the ribosome in normal physiology 26,27 . Despite the U8 snoRNA being ubiquitously expressed, germline mutations in SNORD118 cause an exclusively neurological, progressive microangiopathy, thus suggesting further subtleties in ribosomal activity directly relevant to human health and disease. That being said, the precise link between U8 and cerebral vascular homeostasis awaits elucidation and may conceivably involve a direct effect on translation or other yet to be defined functions of U8.
METHODS
Methods and any associated references are available in the online version of the paper.
ONLINE METHODS
Subject ascertainment. We ascertained subjects demonstrating a characteristic neuroradiological picture of white matter disease (leukoencephalopathy), intracranial calcification and brain cysts internationally through colleagues in pediatric neurology and clinical genetics. All subjects had previously been screened negative for mutations in CTC1. Written informed consent was obtained for all participants. The study had ethical approval from the Leeds Multicentre Research Ethics Committee (07/Q1206/7).
Exome sequencing. Genomic DNA was extracted from lymphocytes from affected individuals by standard techniques. For whole-exome analysis, targeted enrichment and sequencing were performed on DNA extracted from peripheral blood from 18 patients: F281, F330, F331 (2 affected individuals), F343, F344, F362 (2 affected individuals), F426 (2 affected individuals), F433, F446, F454 (2 affected individuals), F521 (2 affected individuals), F551 and F564. Enrichment was undertaken using the SureSelect Human All Exon kits following the manufacturer's protocol (Agilent Technologies), and samples underwent paired-end sequencing on either an Illumina HiSeq 2000 or SOLiD platform. Sequence data were mapped using Burrows-Wheeler aligner (BWA) and the hg18 (NCBI36) human genome as a reference. Variants were called using SOAPsnp and SOAPindel (from the Short Oligonucleotide Analysis Package) with medium stringency. Access to exome data can be discussed by contacting the corresponding author.
Linkage analysis. We used the Merlin package 28 to perform nonparametric linkage analysis in five pairs of affected siblings (F331, F426, F454, F521 and F780) born to unrelated parents (with the inclusion of one unaffected sibling in F780) and two singletons (F344 and F446) who were the product of independent consanguineous unions (first-and second-cousin parents, respectively). We assumed allele frequencies as provided by Affymetrix for the European-ancestry population and calculated a LOD score using the Kong and Cox exponential model 29 . Data were annotated according to the hg19 human genome.
Genomic capture. Genomic capture, next-generation sequencing and bioinformatic analysis were performed as a service at BGI Tech (Hong Kong) on ten affected individuals from ten separate families (F330, F331, F362, F426, F433, F446, F454, F521, F691 and F780). A 3-Mb interval (chr. 17: 7,000,000-10,000,000) was captured using Agilent SureSelect technology and was then sequenced using the Illumina HiSeq 2000 platform. Genomes were aligned to the hg19 reference genome with BWA, and all subsequent analyses were performed with the hg19 genome.
Sanger sequencing. Mutation analysis was performed by direct sequencing of purified genomic PCR products using the BigDye Terminator v3.1 cycle sequencer system (Applied Biosystems) and an ABI 3130 DNA sequencer. Primers were designed for a 544-bp region around SNORD118. Primers used are listed in Supplementary Table 8. Mutation description is based on the NR_033294.1 reference sequence for SNORD118.
Copy number analysis. Copy number analysis of SNORD118 in F819 was performed using DNA from the two affected individuals and their mother (paternal DNA not available). Parental samples from F426, F780 and F619 were used as controls. Copy number analysis was performed using TaqMan Universal PCR Master Mix (Applied Biosystems) with 40 ng of DNA. The copy number probes chosen were Hs01424421_cn (76 bp upstream of U8) and Hs01021672_cn (72 bp downstream of U8). F619 Mum was chosen as the calibrator sample, and copy number was assessed with Applied Biosystems StepOne Software v2.1 and Applied Biosystems CopyCaller software V2.0.
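The copy number call itself rests on the comparative-Ct calculation that the CopyCaller software automates. The sketch below shows only that arithmetic; the Ct values are hypothetical, and the calibrator is assumed to carry two copies of the locus, as when F619 Mum is used as the calibrator.

```python
# Illustrative comparative-Ct (2^-ddCt) copy number estimate.
# All Ct values below are hypothetical and only demonstrate the arithmetic.

def copy_number(ct_target, ct_reference, calib_ct_target, calib_ct_reference,
                calibrator_copies=2):
    """Estimate copy number of the target locus relative to a calibrator
    sample assumed to carry `calibrator_copies` copies."""
    d_ct_sample = ct_target - ct_reference            # normalize to the reference assay
    d_ct_calib = calib_ct_target - calib_ct_reference
    dd_ct = d_ct_sample - d_ct_calib
    return calibrator_copies * 2 ** (-dd_ct)

# Hypothetical example: a sample whose normalized target Ct lags the calibrator
# by about one cycle suggests a single-copy deletion (~1 copy).
print(copy_number(ct_target=27.1, ct_reference=25.0,
                  calib_ct_target=26.0, calib_ct_reference=25.0))
```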
Control panel sequencing.
Human Random Control DNA Panels (UK Caucasian blood donors) HCR-1 to HCR-5 were purchased from Public Health England. DNA panels NDPT099 and NDPT095 were acquired from the NINDS Human Genetics Resource Center DNA and Cell Line Repository. All panels were sequenced as above for the 544-bp region around SNORD118. Where two rare variants were identified, the PCR products were cloned into the pGEM-T Easy vector (Promega) so that each allele could be sequenced separately.
In silico analysis. In silico analysis of variants identified in patients with LCC was carried out using RNAfold (version 2.1.9), from the ViennaRNA suite.
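For orientation, the kind of folding comparison performed with RNAfold can also be run programmatically. The sketch below assumes the ViennaRNA Python bindings are installed (the study itself used the RNAfold 2.1.9 command-line program), and the sequences are arbitrary placeholders rather than the U8 snoRNA or any patient variant.

```python
import RNA  # ViennaRNA Python bindings; the paper used the RNAfold CLI (v2.1.9)

# Placeholder sequences -- NOT the real U8 snoRNA or any patient variant.
wild_type = "GGGAAACGCUUCGGCGUUUCCC"
variant   = "GGGAAACGCUUAGGCGUUUCCC"

for name, seq in [("wild type", wild_type), ("variant", variant)]:
    structure, mfe = RNA.fold(seq)   # dot-bracket structure and MFE in kcal/mol
    print(f"{name}: {structure}  ({mfe:.2f} kcal/mol)")
```

Comparing the predicted minimum free energies of wild-type and variant sequences in this way gives a rough indication of whether a variant is expected to destabilize a conserved stem.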
U8 cloning.
A 525-bp region of chromosome 17 encompassing the entire U8 coding region was subcloned into the pGEM-T Easy vector. Site-directed mutagenesis was carried out using the QuikChange Site-Directed Mutagenesis protocol (Stratagene). Primers used for the cloning and site-directed mutagenesis are listed in Supplementary Table 6.
In vitro transcription of U8 snoRNA. RNA was transcribed from PCR templates (primer sequences are listed in Supplementary Table 6) amplified from pGEM constructs containing wild-type or mutant U8 snoRNA sequence, using the T7 Ribomax Express Large-Scale RNA kit (Promega) in accordance with the manufacturer's instructions. Transcribed RNAs were purified on a 6% acrylamide/7 M urea gel. After passive elution from the gel, RNA was precipitated and resuspended in 20 µl of water. Quantification of RNA was carried out on the Boeco S-22 UV/Vis spectrophotometer.
Dephosphorylation and labeling of transcribed RNA. Dephosphorylation of transcribed RNA was carried out using 10 U of calf intestinal alkaline phosphatase (New England BioLabs) in CutSmart buffer. Reactions were incubated for 1 h at 37 °C, followed by phenol extraction and resuspension in water to give a 1 µM final concentration. Dephosphorylated RNA (at 1 µM concentration) was end-labeled with 3,000 Ci/mmol [γ-32P]ATP (Hartmann Analytic) using T4 polynucleotide kinase in PNK buffer (New England BioLabs). Labeled RNA was phenol extracted, precipitated and resuspended in water. The amount of labeled RNA was then determined using the Bioscan QC 2000 radioactivity counter.
15.5K cloning, expression and purification. The 15.5K ORF was amplified from human cDNA (see Supplementary Table 6 for primer sequences) and then cloned into the NdeI and XhoI sites of pET28a (Novagen) to produce the pET28a-TEV-15.5K plasmid. Positive clones were sequenced to confirm correct fusion of the human 15.5K ORF with the sequences encoding an N-terminal His 6 tag and TEV protease cleavage site. The pET28a-TEV-15.5K plasmid was transformed into Rosetta 2 (DE3) (Novagen), and expression of His 6 -TEV-15.5K was induced by using Overnight Express Instant TB medium (Novagen). His 6 -TEV-15.5K protein was purified as previously described for Snu13p 30 .
Electrophoretic mobility shift assays. For EMSAs, recombinant His-15.5K was incubated with 50,000 dpm 32 P-end-labeled U8 snoRNA in EMSA buffer (20 mM HEPES-KOH, 150 mM KCl, 1.5 mM MgCl 2 , 0.2 mM EDTA and 0.1% Triton X-100) for 30-45 min on ice. Resulting RNA-protein complexes were resolved on a native 7% acrylamide gel for 8.5 h at 4 °C. Gels were dried and exposed to X-ray film for approximately 8 h at −80 °C in the presence of an intensifying screen. Binding between His-15.5K and 32 P-end-labeled RNA was quantified using the Typhoon FLA 7000IP phosphorimager. For supershift assays, recombinant His-15.5K was incubated for 30 min on ice with 6×His antibody ab18184 (Abcam) before addition of 50,000 dpm 32 P-end-labeled U8 snoRNA. For the competition assays, recombinant His-15.5K was incubated for 30 min on ice with 2.5 µg of unlabeled U8 snoRNA before addition of 50,000 dpm 32 P-end-labeled U8 snoRNA.
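Phosphorimager quantification of an EMSA titration yields a fraction-bound value at each protein concentration. The paper does not report a binding-constant fit, but as a purely illustrative sketch such data could be fit to a simple one-site binding isotherm; all numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical titration data: His-15.5K concentration (nM) vs fraction of
# labeled U8 snoRNA shifted into the RNA-protein complex.
protein_nm = np.array([0, 5, 10, 25, 50, 100, 250, 500], dtype=float)
fraction_bound = np.array([0.00, 0.08, 0.16, 0.33, 0.48, 0.63, 0.78, 0.85])

def one_site(p, bmax, kd):
    """Simple hyperbolic (one-site) binding model."""
    return bmax * p / (kd + p)

(bmax, kd), _ = curve_fit(one_site, protein_nm, fraction_bound, p0=(1.0, 50.0))
print(f"Bmax ~ {bmax:.2f}, apparent Kd ~ {kd:.0f} nM (illustrative values only)")
```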
3′-processing assays. For 3′-processing assays, 32 P-end-labeled U8 snoRNA was incubated with HeLa nuclear extract (CIL Biotech) at 30 °C in buffer containing 0.25 mM ATP, 10 mM phosphocreatine, 3.2 mM MgCl 2 , 20 mM HEPES KOH, pH 7.9, 2.6% PVA and 240 U RNasin (Promega). At 0 and 30 min, 10 µl of the reaction was removed and added to a tube containing 4 µl of stop solution (1 mg/ml proteinase K, 50 mM EDTA and 1% SDS). Reactions were then incubated at 37 °C for 15 min, phenol extracted, precipitated and resolved on a 6% acrylamide/7 M urea gel. Gels were dried and exposed to X-ray film overnight at −80 °C in the presence of an intensifying screen.
Luciferase assays. A 298-bp region of chromosome 17 encompassing the entire U8 promoter region from control DNA was cloned into the pGEM-T Easy vector. Site-directed mutagenesis was carried out to create the n.-54_-49del mutation. DNA from wild-type and mutant clones was then subcloned into the pGL3 Basic vector (Promega). The primers used for the cloning are listed in Supplementary Table 6.
HeLa-M cells were maintained in 75-cm 2 flasks in DMEM with GlutaMAX (Life Technologies, 10566-016) supplemented with 10% FBS (Sigma, F7524). Cells for transfection were plated out in triplicate wells in Corning 96-well clear-bottom plates at 1 × 10 4 cells per well in a 100-µl volume of culture medium. Transfections were carried out using the Effectene kit (Qiagen, 301425) according to the manufacturer's instructions. Cells in each well were cotransfected with 200 ng of pGL3 vector and 20 ng of pRL-TK control vector (10:1 ratio). Plates were incubated for 24 h at 37 °C in 5% CO 2 . Twenty-four hours after transfection, medium containing transfection reagent was removed from all wells and replaced with 75 µl of fresh culture medium. Cells were assayed for firefly and Renilla luciferase activity using the Dual-GLO Luciferase Assay System (Promega, E2920) in accordance with the manufacturer's instructions. Luciferase activity was measured on the BMG Labtech Fluostar Omega microplate reader. The firefly/Renilla luciferase ratio was calculated for each well after subtraction of average background values from untransfected cells. The mean firefly/Renilla ratio was calculated from triplicate values and used to calculate the fold increase in RLU as compared to empty pGL3 vector.
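The dual-luciferase normalization described above is simple arithmetic. The sketch below illustrates it with hypothetical raw luminescence readings and a hypothetical empty-vector ratio; it is not the authors' analysis script.

```python
import numpy as np

# Hypothetical triplicate luminescence readings (arbitrary units).
background_firefly, background_renilla = 120.0, 90.0        # untransfected wells
firefly = np.array([15200.0, 14800.0, 15900.0]) - background_firefly
renilla = np.array([5100.0, 4900.0, 5300.0]) - background_renilla

empty_vector_ratio = 0.35   # hypothetical mean firefly/Renilla ratio for empty pGL3

ratios = firefly / renilla                   # per-well normalized activity
mean_ratio = ratios.mean()
fold_over_empty = mean_ratio / empty_vector_ratio
print(f"mean firefly/Renilla = {mean_ratio:.2f}, "
      f"fold increase over empty pGL3 = {fold_over_empty:.1f}")
```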
Polysome fractionation and profiling. Human lymphoblastoid cell lines (LCLs) from control (CTRL4) and patient (F433 and F446) samples were used for polysome profiling. Approximately 2 × 10 7 cells were incubated in RPMI medium with 100 µg/ml cycloheximide for 5 min at 37 °C in 5% CO 2 , followed by a wash in 1× PBS containing 100 µg/ml cycloheximide. Cells were collected by centrifugation at 200g for 5 min at 4 °C and resuspended in 425 µl of hypotonic buffer (5 mM Tris-HCl (pH7.5), 2.5 mM MgCl 2 , 1.5 mM KCl and 1× protease inhibitor cocktail, EDTA free). The resuspended cell mixture was supplemented with 5 µl of 10 mg/ml cycloheximide, 1 µl of 1M DTT and 100 U of RNasin. The cell mixture was incubated on ice for 10 min followed by 5 s of vortexing. The mixture was further supplemented with 25 µl of 10% Triton X-100 and 25 µl of 10% sodium deoxycholate and was vortexed again for 5 s. To pellet debris, the lysates were centrifuged at 16,000g for 7 min at 4 °C and the supernatant was transferred to a new prechilled tube. The optical density at 260 nm was measured for all lysates and adjusted so that all samples had the same optical density. Lysates were loaded onto sucrose gradients (15-50%) and centrifuged for 2.5 h at 274,000g. Polysome profiles were recorded using the Isco UA-6 UV/Vis detector.
Cell culture. Human primary fibroblasts and HeLa-M cells were cultured in DMEM (Gibco) supplemented with 10% heat-inactivated FCS and 1% penicillin-streptomycin. Human LCLs were cultured in RPMI supplemented with 10% heat-inactivated FCS and 1% penicillin-streptomycin. The control fibroblast line CTRL1 is a commercially available line purchased from the American Type Culture Collection. All other fibroblasts were generated by the referring institutes and shipped as growing cultures. All LCLs were generated as a service at St Mary's Hospital, Manchester. HeLa-M cells were a kind gift from the Manson laboratory at the University of Manchester. All cells are routinely tested for mycoplasma contamination. All putative patient cell lines were genotyped for relevant variants in SNORD118 to confirm authenticity.
Cell growth and proliferation. Proliferation was assessed for primary fibroblast cells from patients (F281, F334, F691 and F906) and healthy controls (CTRL1-CTRL3). Cells were plated in triplicate for each patient and control at 25 cells per well in a 96-well plate. Every cell from each well was counted under the microscope at 12 h after plating (day 0), and then at day 3 and day 6. Cell proliferation was measured in patient and healthy control primary fibroblasts using the CellTrace CFSE Cell Proliferation kit (Life Technologies). Cells were labeled following the manufacturer's protocol and then plated as 5,000 cells per well in a 24-well plate. CFSE content was analyzed at 30 min and 2 d after staining by flow cytometry on a Gallios flow cytometer with Kaluza acquisition software.
Apoptosis. Apoptosis was measured in patient and healthy control primary fibroblasts using an Annexin V-FITC kit (Beckman Coulter). Briefly, cells were seeded in 24-well plates and incubated for 24 h. Cells were collected, washed twice in PBS, and then stained with Annexin V-FITC and propidium iodide according to the manufacturer's instructions. The resulting fluorescence was detected by flow cytometry on a Gallios flow cytometer with Kaluza acquisition software.
Senescence. Senescence was measured in patient and healthy control primary fibroblasts using the Senescence β-Galactosidase Staining kit from Cell Signaling Technology. Briefly, cells were seeded in 24-well plates and incubated for 24 h. Cells were labeled following the manufacturer's instructions. The percentage of senescence was calculated with the following formula: (number of β-galactosidase-positive cells in five fields/total number of cells in these five fields) × 100.
DNA content. DNA content was measured in patient and healthy control primary fibroblasts. Briefly, cells were plated at 5,000 cells per well in 24-well plates and treated with 50 ng/ml mitomycin C (Sigma-Aldrich). Cells from a patient with Fanconi anemia were used as a positive control. After 5 d of treatment, cells were collected, washed twice in PBS and then incubated for 30 min in 70% ethanol. Cells were washed three times in PBS and incubated in PBS containing 1 µg/ml DAPI and 0.1% Triton X-100 (Sigma-Aldrich) for 30 min on ice; they were then analyzed by flow cytometry on a Gallios flow cytometer with Kaluza acquisition software.
ImmunoFISH for telomere-dysfunction-induced foci. Primary fibroblasts from healthy controls, CTC1-mutation-positive patients and patients with LCC were fixed for 5 min in 4% formaldehyde in water and permeabilized in PBS with 1% BSA and 0.1% Triton X-100. Cells were incubated with primary anti-53BP1 antibody (Santa Cruz Biotechnology) for 1 h at room temperature and then with Alexa Fluor 488-labeled secondary antibody (Life Technologies). Samples were fixed for 5 min in 4% paraformaldehyde and dehydrated in successive 5-min baths of 70% ethanol, 95% ethanol and 100% ethanol. PNA-Cy3-Telo-C probes (Dako) were hybridized according to the supplier's recommendations. Briefly, probes were incubated with the samples for 5 min at 80 °C and left in the dark at room temperature for 90 min. Samples were then washed twice in 70% formamide, 10 mM Tris-HCl and PBS and were mounted with DAPI mounting medium (Vectashield).
Protein analysis.
Whole-cell lysates were prepared from patient primary fibroblasts (5 × 10 6 cells per sample) using 10 mM EDTA/RIPA buffer containing protease inhibitors (Roche) and Halt phosphatase inhibitor (Thermo Scientific). For immunoblot analysis, 30 µg of total protein was loaded onto 15% SDS-PAGE gels, and electrophoresis was performed using the Mini-PROTEAN system (Bio-Rad Laboratories). Following wet-blotting transfer of protein onto nitrocellulose membrane (Li-Cor), non-specific antibody binding was blocked using Odyssey Blocking Buffer (PBS) for a minimum of 1 h at room temperature. Rabbit primary antibody to TMEM107 (Abcam, ab181396) was incubated with the membrane for 1 h at room temperature with agitation, using a 1:1,000 dilution in blocking buffer. As a loading control, membranes were incubated with a 1:3,000 dilution of mouse primary antibody to β-actin (Abcam, ab125248). Following washes with PBS, membranes were incubated with IRDye 800CW donkey anti-mouse IgG (Li-Cor, 925-32212) and IRDye 680RD donkey anti-rabbit IgG (Li-Cor, 925-68073) diluted 1:10,000 in blocking buffer for 45 min at room temperature. Following further washes, signal was detected using the Odyssey digital infrared imaging system (Li-Cor), and images were analyzed using Image Studio v5 (Li-Cor). | 2017-10-30T00:54:27.526Z | 2016-08-20T00:00:00.000 | {
"year": 2016,
"sha1": "7cbf0219b917ea22debe06636e2fa99144074da7",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc5045717?pdf=render",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "9f852b723f4c2c2adfe6571f1596e6b7735c9e10",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
4793154 | pes2o/s2orc | v3-fos-license | Delayed and repeated intranasal delivery of bone marrow stromal cells increases regeneration and functional recovery after ischemic stroke in mice
Background Stroke is a leading cause of death and disability worldwide, yet there are limited treatments available. Intranasal administration is a novel non-invasive strategy to deliver cell therapy into the brain. Cells delivered via the intranasal route can migrate from the nasal mucosa to the ischemic infarct and show acute neuroprotection as well as functional benefits. However, there is little information about the regenerative effects of this transplantation method in the delayed phase of stroke. We hypothesized that repeated intranasal deliveries of bone marrow stromal cells (BMSCs) would be feasible and could enhance delayed neurovascular repair and functional recovery after ischemic stroke. Results Reverse transcription polymerase chain reaction and immunocytochemistry were performed to analyze the expression of regenerative factors including SDF-1α, CXCR4, VEGF and FAK in BMSCs. Ischemic stroke targeting the somatosensory cortex was induced in adult C57BL/6 mice by permanently occluding the right middle cerebral artery and temporarily occluding both common carotid arteries. Hypoxic preconditioned (HP) BMSCs (HP-BMSCs) with increased expression of survival factors HIF-1α and Bcl-xl (1 × 10 6 cells/100 μl per mouse) or cell media were administered intranasally at 3, 4, 5, and 6 days after stroke. Mice received daily BrdU (50 mg/kg) injections until sacrifice. BMSCs were prelabeled with Hoechst 33342 and detected within the peri-infarct area 6 and 24 h after transplantation. In immunohistochemical staining, significant increases in NeuN/BrdU and Glut-1/BrdU double positive cells were seen in stroke mice that received HP-BMSCs compared to those that received regular BMSCs. HP-BMSC transplantation significantly increased local cerebral blood flow and improved performance in the adhesive removal test. Conclusions This study suggests that delayed and repeated intranasal deliveries of HP-treated BMSCs are an effective treatment to encourage regeneration after stroke.
Background
Stroke is a leading cause of death and disability worldwide with only one FDA-approved drug treatment, tissue plasminogen activator (tPA), in the U.S. [1]. tPA is a thrombolytic agent that may show therapeutic effect acutely after stroke; delayed administration of tPA after its therapeutic window of 4.5 h carries an increased risk of hemorrhagic conversion [2]. Additionally, a set of exclusion criteria precludes many patients from receiving the tPA treatment regardless of the time window. These may include uncontrolled hypertension, indication of intracranial hemorrhage, seizure at the onset of stroke, and a history of arteriovenous malformation or aneurysm. As another impediment, patients who were asleep during the onset of stroke cannot accurately pinpoint the time of occurrence. Thus, it appears necessary and important to develop delayed treatments several hours or even several days after stroke.
Transplantation of cells such as bone marrow stromal and bone marrow stem cells (BMSC) has been explored as a regenerative avenue for stroke therapy [3][4][5]. The regenerative phase is thought to have a wide therapeutic window. Cells can be transplanted in the delayed phase of stroke, from days to a month after stroke [6,7]. BMSCs are already clinically used for therapy in autologous and allogeneic transplantation for diseases such as leukemia and sickle cell anemia [8,9]. Similar to other types of stem and progenitor cells, BMSCs produce trophic factors that are beneficial for recovery from brain damage. For example, BMSC-conditioned media enhances neurite outgrowth and neurite length in Ntera-2 neurons, demonstrating the paracrine effects of BMSCs in vitro [10]. Adaptive factors released by mesenchymal stem cells include cytoprotective factors (endothelin-1), angiogenic factors (VEGF, Smad4, and Smad7), and pro-migration factors (LRP-1, LRP-6) [11]. Previous studies showed that trophic factors secreted by transplanted BMSCs contributed to tissue repair, ultimately leading to improved functional recovery [12,13]. An intravenous infusion of BMSCs and BMSC-conditioned media led to neurogenesis and an attenuation of macrophage/microglia invasion in the brains of ischemic stroke mice [14]. Although the exact mechanism remains to be further identified, it is suggested that transplanted BMSCs can serve as vehicles for regenerative and anti-apoptotic factor delivery through their paracrine actions after administration [11].
Endogenous regeneration occurs in the adult mammalian brain through processes such as neurogenesis in which neural progenitor cells are continuously generated in regenerative niches such as the subventricular zone (SVZ). Following ischemic injury, neuroblasts are diverted from the rostral migratory stream toward the ischemic region by chemoattractive factors specifically the stromal cell-derived factor-1 α (SDF-1α) [15][16][17]. This response appears to be an attempt at self-repair after a CNS injury. However, it is estimated that as many as 80% of SVZ-derived new neurons at the ischemic site die 6 weeks post-ischemia, possibly due to the detrimental cytotoxic effects of the injured environment [18]. Treatments aimed at bolstering this endogenous repair may prove to be a promising strategy for stroke therapy.
There are currently several methods of stem cell delivery for brain disorders: intravenous/intra-arterial infusion, intracerebral injection, and more recently, intranasal administration. Previous studies with BMSC transplantation focused on intracerebral and intravascular routes [19][20][21]. Even though these studies demonstrate the therapeutic potential of BMSCs after stroke, these delivery methods are either invasive or inefficient. For example, intracranial administration requires a craniotomy surgery and cell injection with a needle or a pipette that can damage brain tissue. Cells delivered systemically have low homing rates to the brain; the majority of cells are found in peripheral organs primarily in the lungs, liver, and spleen following delivery [22,23].
As a way to bolster transplantation efficacy, our group has reported that hypoxic preconditioning (HP) of BMSCs and neural progenitor cells conveys several benefits [3,24,25]. HP increases the survival of transplanted cells in the ischemic heart and brain [3,25] and enhances BMSC homing capacity to the brain infarct region through increased expression of the SDF-1 receptor CXCR4 [3]. Intranasally delivered BMSCs in the acute phase of stroke reduced the volume of the ischemic infarct, decreased the number of TUNEL-positive cells in the peri-infarct region, and improved sensorimotor functional recovery after stroke [3]. In the present investigation, we aimed to demonstrate that the time window for intranasal administration of BMSCs can be extended past 24 h after stroke. Taking advantage of the noninvasive nature of the nasal route, we demonstrated that delayed and repeated administrations of BMSCs could be applied for enhancing endogenous regenerative activities and sustainable functional recovery after stroke.
Methods
The experimental timeline is summarized in Fig. 1.
BMSC cell culture
BMSCs were isolated and cultured as previously described [3,24]. Briefly, GFP-expressing BMSCs were dissected from the tibias of postnatal day 21 transgenic Wistar rats (Charles River Laboratories, Wilmington, MA). Cells were cultured and maintained in DMEM media with 15% BMSC Fetal Bovine Serum (FBS) and 1% Penicillin-Streptomycin to prevent contamination of cell culture. Cells were trypsinized with 0.25% trypsin-EDTA and then deactivated with 15% FBS media before being plated into dishes. After 24 h, non-adherent cells were removed and fresh medium was added to adherent cells. Upon isolation, the BMSC population was characterized via fluorescence-activated cell sorting using cell surface markers CD34, CD45, CD73, CD90, and CD105 (eBioscience, San Diego, CA or BD Pharmingen, Rockville, MD). All cells used in this study were freshly isolated, used at low passages (within 4 passages) when they were 80-90% confluent, and maintained at normoxic oxygen until hypoxic preconditioning.
Immunocytochemistry
Cells in 3.5 cm tissue culture dishes were fixed with 4% paraformaldehyde for 10 min. The dishes of cells were washed with 1× phosphate buffered saline (PBS) 3 times for 5 min per wash. Ethanol:acetic acid (2:1) was applied for 10 min and washed with PBS. 0.2% Triton X-100 was applied for 10 min for cell permeabilization and washed out with PBS (3 times, 5 min each). The cells were blocked with 1% cold fish gelatin (Sigma, St. Louis, MO) for 1 h and primary antibodies were applied at a concentration of 1:100 overnight at 4 °C for SDF-1α (MAB350, R&D Systems, Minneapolis, MN), CXCR4 (R&D Systems), VEGF (Millipore, Billerica, MA) and FAK (c-20; Santa Cruz Biotechnology). A secondary antibody (Jackson ImmunoResearch, West Grove, PA) corresponding to the host animal of the primary antibody was applied and incubated at room temperature for 1 h then washed with PBS. Hoechst 33342 was applied at a concentration of 1:25,000 for 5 min and washed with PBS. Dishes were cover-slipped with Vectashield mounting media (Vector labs, Burlingame, CA). Photographs were taken with a fluorescent microscope (BX51, Olympus, Tokyo, Japan).
Following reoxygenation, cells were either harvested for PCR analysis, fixed for immunohistochemistry, or harvested for intranasal delivery (Fig. 1). Only HP-treated BMSCs were used for transplantation in this investigation; comparisons between HP-BMSCs and non-HP BMSCs have been established previously, and the benefits of HP were great enough for us to adopt HP of BMSCs as our standard protocol [3].
PCR analysis
mRNA was harvested from the control BMSC and HP-BMSC culture dishes using Trizol reagent (Invitrogen Life Technologies, Grand Island, New York). 250 μl of Trizol was used per 3.5 cm dish of BMSC. The cells were scraped in Trizol and collected, vortexed, and briefly incubated for 5 min to allow for the full dissociation of nucleoprotein complexes. Fifty μl of chloroform was added to the mixture to separate the RNA into a colorless aqueous phase. This colorless aqueous phase was separated into a new tube. One hundred and twenty-five μl of isopropyl alcohol was used to precipitate the RNA. The RNA was centrifuged down into a pellet and washed with 75% ethanol two times. The alcohol was discarded and the RNA pellet was dried and resuspended in molecular grade water. RNA concentration was measured (Take3, BioTek Instruments, Winooski, VT).
Fig. 1 Experimental timeline. Focal ischemic stroke was induced in adult male mice on "day 0". Three days after stroke, a total volume of 100 μl of BMSC suspension (~ 1 × 10 6 cells) or control media was administered via the intranasal route at 3, 4, 5, and 6 days after stroke. Starting on the same day as BMSC administration, all mice received daily BrdU injections intraperitoneally (i.p.) until the day of euthanasia. Laser-Doppler scanning was performed 21 days after stroke to measure changes in the local blood flow in the penumbra. The adhesive removal test of sensorimotor function was performed at 7 and 14 days after stroke. Immunohistochemistry was performed at 14 days after stroke to analyze regeneration.
RNA was converted to cDNA using the High Capacity RNA-to-cDNA kit ™ (Life Technologies). Reverse transcription PCR was performed on 1 μg of total cDNA from each sample for control and HP-BMSCs. Each PCR reaction was performed with a mixture of Taq polymerase (New England Biolabs, Ipswich, MA) and its corresponding 10x Taq buffer (New England Biolabs), forward and reverse primers, 10 mM dNTP, water, and cDNA. In particular, we used primers probing for anti-apoptotic and trophic factors. PCR samples were run out on a 1.8% agarose gel with ethidium bromide and visualized under UV light. The factors that we probed for and their primer pairs are listed below alphabetically. All lane intensities were normalized against the corresponding 18S control. PCR gels were captured with a gel imaging system and levels of intensity were quantified with ImageJ.
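The normalization of band intensities against the 18S control described above reduces to simple arithmetic; the sketch below illustrates it with hypothetical ImageJ lane intensities rather than measured values.

```python
# Hypothetical ImageJ band intensities (arbitrary units); not measured data.
lanes = {
    "HIF-1a": {"control": 1200.0, "HP": 5400.0},
    "Bcl-xl": {"control": 2100.0, "HP": 4300.0},
}
s18 = {"control": 9800.0, "HP": 10150.0}   # 18S loading control per condition

for gene, bands in lanes.items():
    norm_control = bands["control"] / s18["control"]
    norm_hp = bands["HP"] / s18["HP"]
    print(f"{gene}: control={norm_control:.2f}, HP={norm_hp:.2f}, "
          f"fold change={norm_hp / norm_control:.1f}")
```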
Focal ischemia stroke model of the adult mouse
All animal experiments and surgical procedures were approved by the Emory University Animal Research Committee and met NIH standards. The animal protocol (2001290-021015BN) specifies the housing location of animals in the temperature- and humidity-controlled rooms in the Emory University animal facility. The justification of using the mouse stroke model and the number of animals were provided in the protocol. The sterile method, surgery procedures, and pre-surgery and post-surgery procedures are specified. The animal monitoring methods for anesthesia, during surgery, after surgery, and the signs of pain and infection are specified in the protocol. The post-surgery care, including food and water supplies and prevention of pain and infection using antibiotics and analgesic drugs, is described. At the endpoint of experiments, animals are euthanized using an overdose of isoflurane.
The sensorimotor cortex ischemic stroke was induced based on previous reports [26,27]. 8-10 week-old adult male C57BL/6 mice from Jackson Laboratories weighing 26-30 g were used in this investigation. The ischemic surgery procedure was performed following our published method [27]. Briefly, anesthesia was induced using 3.5% isoflurane followed by the maintenance dose of 1.5% isoflurane. Both the tail and paws of the animal were pinch-tested for anesthetic depth. The right middle cerebral artery (MCA) was permanently ligated using a 10-0 suture (Surgical Specialties Co., Reading, PA), accompanied by a bilateral common carotid artery (CCA) ligation for 10 min. This ischemic procedure was suitable and sufficient for the induction of focal ischemia in the mouse brain, resulting in specific infarct formation in the right sensorimotor cortex [27]. Body temperature was maintained at 37 °C using a heating pad and a temperature-controlled incubator. Three days after stroke, all mice received daily 50 mg/kg BrdU injections i.p. until they were euthanized (Fig. 1).
Intranasal administration of BMSCs
BMSCs were treated with HP as described above. Prior to transplantation, HP-BMSCs were incubated in Hoechst 33342 (1:10,000) for 1 h during reoxygenation. The cells were rinsed with PBS and dissociated from the dish with 0.25% Trypsin-EDTA. 15% FBS growth medium was added to inactivate the trypsin and the cell suspension was collected and centrifuged at 1000×g for 3 min, the media was removed, and cells were resuspended at approximately 1 × 10 6 cells/100 μl. Three, 4, 5, and 6 days after stroke and 30 min prior to BMSC administration, each mouse received a total of 10 μl (10 mg/ml) hyaluronidase (Sigma, St. Louis, MO; dissolved in sterile PBS) delivered into the nasal cavity (5 μl in each nostril). Hyaluronidase increases tissue permeability of the nasopharyngeal mucosa that facilitates stem cell invasion into the brain [28]. One set of animals was randomly designated as the control group receiving cell culture media (100 μl total/animal) and the other set was given BMSCs (approximately 1 × 10 6 cells/100 μl). Rat cells were used in this experiment due to the greater yield of cells from rats compared to mice. Five drops containing control media or cell suspension were pipetted in each nostril, alternating each nostril with 1-min intervals.
Tracking BMSCs after transplantation
Six and 24 h after intranasal administration of BMSC, mice were anesthetized with 4% chloral hydrate (10 ml/kg, i.p.) and euthanized once deemed non-responsive. Their brains were dissected out, flattened for tissue sectioning tangential to the surface of the cortex, and mounted in Optimal Cutting Temperature (OCT) compound (Sakura Finetek USA Inc., Torrance, CA, USA) on dry ice. Tissues were sectioned at 10 μm thickness and counterstained with propidium iodide (PI) for nuclear labeling. Co-labeling of Hoechst 33342 dye-positive cells with PI counterstain verified true nuclear labeling of BMSCs in the brain. The peri-infarct area of the cortex was examined for transplanted BMSCs.
Immunohistochemistry and quantification
Immunohistochemistry was performed to analyze neurogenesis and angiogenesis in vivo. Design-based stereology was used when sectioning fresh frozen brains coronally at 10 μm thickness on a cryostat (CM 1950, Leica Biosystems, Buffalo Grove, IL). Every tenth section was collected such that two adjacent tissues were at least 100 μm apart to avoid counting the same cell twice during analysis. Tissues were collected to include the peri-infarct and infarct areas 1 mm anterior and 1 mm posterior to bregma.
Brain sections were dehydrated on a slide warmer for 15 min and fixed with 10% buffered formalin for 10 min. The sections were washed with PBS (1×, pH 7.4) three times and fixed with methanol twice for 7 min each. Slides were air-dried for several seconds then rehydrated in PBS. Sections were incubated in 2 N HCl for 1 h at 37 °C and then washed in borate buffer for 10 min. Tissue sections were permeabilized with 0.2% Triton X-100 for 45 min and washed in PBS three times. Brain sections were blocked with 1% cold fish gelatin (Sigma) and incubated overnight at 4 °C with the following primary antibodies: Ms anti-NeuN (1:200; MAB377, Millipore, Billerica, MA), Rat anti-BrdU (1:400; AbD Serotec, Hercules, CA), and Rabbit anti-Glut-1 (Chemicon Millipore). Slides were then incubated for 1 h at room temperature with the following secondary antibodies: BrdU: Cy3 anti-rat (1:300, Jackson ImmunoResearch); NeuN: anti-mouse (1:100, Alexa Fluor 488, Life Technologies, Grand Island, NY); and Glut-1: Cy5 anti-rabbit. Slides were mounted with Vectashield mounting media, cover-slipped, and stored at − 20 °C.
Brain sections were imaged by fluorescence microscopy. Six fields per section were photographed at 40x magnification on both sides of the peri-infarct area in the cortex. Six tissue sections per animal were photographed. The numbers of BrdU/NeuN co-labeled cells and BrdU/Glut-1 co-labeled cells were quantified using the ImageJ software (NIH). The reported number for each animal is the total number of NeuN/BrdU and Glut-1/BrdU co-labeled cells in the image sampling.
Local cerebral blood flow (LCBF) measurement
Animals were anesthetized with 4% chloral hydrate (10 ml/kg, i.p.). The skin was incised to expose the skull over the peri-infarct area for blood flow measurement. Laser scanning imaging was performed by Laser Doppler flowmetry [29] (PeriFlux System 5000-PF5010 LDPM unit, Perimed, Stockholm, Sweden) and used to estimate the LCBF. The laser was placed to scan above the right MCA and blood flow was measured by the LDPI program. Blood flow was recorded at the same location at the stroke penumbra before stroke, during, and 21 days after stroke. Values were averaged from 6 repeated readings from each time point for both BMSC and control mice. Quantification of the post-stroke mean intensity values were normalized to pre-stroke mean values to measure the change in blood flow over time.
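The blood-flow quantification amounts to averaging the repeated Doppler readings and expressing post-stroke values relative to the pre-stroke baseline. A minimal sketch with hypothetical perfusion values follows.

```python
import numpy as np

# Hypothetical laser-Doppler perfusion readings (6 repeats per time point).
pre_stroke = np.array([410., 398., 405., 412., 401., 399.])
during     = np.array([ 95., 102.,  99., 101.,  97., 100.])
day21      = np.array([310., 318., 305., 322., 309., 315.])

baseline = pre_stroke.mean()
for label, readings in [("during occlusion", during), ("21 d post-stroke", day21)]:
    # Express the mean post-stroke perfusion as a percentage of the baseline.
    print(f"{label}: {100.0 * readings.mean() / baseline:.0f}% of pre-stroke LCBF")
```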
Adhesive removal functional behavior test
The adhesive removal test was used to evaluate the animal's sensorimotor impairment after an ischemic stroke as previously described [30]. The stroke affects the forepaw somatosensory cortex, thus affecting forepaw sensation in this focal ischemia model. Briefly, an adhesive sticker (Tough-Spots 3/8″ diameter, Diversified BioTech, Dedham, MA) was cut into quarters and placed on one paw of the animal. The time required for the animal to remove the adhesive sticker was recorded in seconds. All mice were trained for 3 days before stroke with one trial run per day to ensure that animals were able to remove the adhesive sticker. Animals were then tested 3 days before, 7, and 14 days after ischemia. The time was averaged from 4 trials per animal. Stereotypically, the unimpaired right paw (contralateral to the unimpaired cortex) has a quicker detection and removal time compared to an impaired left paw (contralateral to the impaired cortex).
Statistical analysis
Data were graphed and statistically analyzed using GraphPad Prism, version 4 (GraphPad Software, Inc., San Diego, CA). Data analysis was performed using a Student's two-tailed t-test for the comparison of the two experimental groups (hypoxic preconditioning PCR, angiogenesis, and neurogenesis data) and two-way ANOVA for multiple comparisons with Bonferroni post hoc tests (Laser Doppler blood flow and adhesive removal test). Changes were identified as significant if p was less than 0.05.
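For illustration, the two statistical procedures named above can be reproduced in Python with SciPy and statsmodels rather than GraphPad Prism; the group sizes, values, and factor names below are placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Two-group comparison (e.g., normalized gene expression): unpaired t-test.
control = rng.normal(1.0, 0.2, 6)
treated = rng.normal(1.6, 0.2, 6)
t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.2f}, p = {p:.4f}")

# Two-way ANOVA (treatment x day) on hypothetical adhesive-removal times.
df = pd.DataFrame({
    "time":  rng.normal(30, 5, 32),
    "group": np.repeat(["vehicle", "BMSC"], 16),
    "day":   np.tile(np.repeat(["d7", "d14"], 8), 2),
})
model = ols("time ~ C(group) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Bonferroni correction of post hoc pairwise p values (placeholder p values).
reject, p_adj, _, _ = multipletests([0.01, 0.04, 0.20], method="bonferroni")
print(p_adj)
```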
Regenerative factors expressed in BMSCs
Immunocytochemical staining ( Fig. 2A-E) and reverse transcriptase PCR analysis (Fig. 2F) revealed that rat BMSCs in cultures expressed several regenerative and migration factors such as SDF-1α, VEGF, CXCR4 and FAK.
Hypoxic-preconditioning increases survival factors in BMSCs
To enhance the survival and regenerative potential of BMSCs, cultured BMSCs were subjected to 24-h hypoxic preconditioning (HP) treatment in a hypoxic chamber of 0.1-0.3% oxygen. To verify that BMSCs were responsive to hypoxia, RT-PCR showed that the HIF-1α expression was drastically higher after HP (Fig. 3a). Bcl-xl is a prominent downstream factor of HIF-1α [31]. The Bcl-xl level was also significantly increased in BMSCs treated with HP compared to control (Fig. 3b).
Intranasally delivered BMSCs migrated to the peri-infarct region of the ischemic brain
In a mouse model of focal ischemic stroke targeting the right sensorimotor cortex, HP treated rat BMSCs (HP-BMSCs; 1 × 10 6 cells/100 μl) were intranasally administered at 3, 4, 5, and 6 days after stroke. Transplanted BMSCs were pre-labeled with Hoechst 33342 for in vivo tracking. Due to the well-established and overwhelming benefits of HP, we opted to only use preconditioned cells in this investigation [3]. BMSCs labeled with Hoechst 33342 were detected in the peri-infarct region 6 and 24 h after single BMSC administrations (Fig. 4).
Intranasally delivered BMSC increased neurogenesis and angiogenesis in the ischemic brain
To assess neurovascular regeneration, we quantified NeuN/BrdU and Glut-1/BrdU co-labeled cells in the peri-infarct area of each animal (Fig. 5). In the peri-infarct cortex 21 days after stroke (18 days after the first delivery of BMSCs), there was a significant increase in NeuN/BrdU co-labeled cells (Fig. 5C, D) and Glut-1/BrdU co-labeled cells (Fig. 5E, F) compared to vehicle-treated stroke controls.
BMSC transplantation increased local cerebral blood flow (LCBF)
The laser Doppler scanning method was used to survey the LCBF at the stroke peri-infarct region before, during, and 21 days after focal stroke [29]. The measurement was taken at the same location over the right somatosensory cortex bordering the ischemic core (Fig. 6a). The mean post-stroke blood flow value was normalized to the mean pre-stroke value. There was a significant increase in LCBF in the BMSC treatment group compared to vehicle controls (Fig. 6b).
BMSC transplantation increased functional recovery after stroke
The adhesive removal test was used to assess sensorimotor deficits in mice 7 and 14 days after stroke induction.
In the focal stroke model with damaged right somatosensory cortex, the left paw of the animal is affected. We tested both left and right paws for comparisons. The data were then normalized to the pre-stroke baseline of each individual mouse to account for the natural variation of their removal times before injury. The BMSC treatment group showed significant reduction in removal time of the left paw at 14 days compared with vehicle-treated mice (Fig. 7).
Fig. 4 Intranasally delivered BMSCs migrated to the peri-infarct region of the ischemic brain. A-C Brain sections at the stroke penumbra region were collected and analyzed to examine cell presence of BMSCs delivered intranasally. Hoechst-positive BMSCs (blue) counterstained with Propidium Iodide (PI, red) were detected in the peri-infarct area of the cortex at 6 and 24 h after BMSC delivery at 3 days after stroke. Scale bars = 20 μm. D An illustration of the mouse ischemic brain where the cells were found in the cortex.
Discussion
In the present investigation, we demonstrated that delayed multiple intranasal deliveries of BMSCs showed improved vascular and neural regeneration and functional recovery in a focal ischemic stroke model of adult mouse. We confirmed previous findings that BMSCs express several trophic and migratory factors including SDF-1α, CXCR4, VEGF and FAK [24]. These factors play prominent roles in cellular migration, neurogenesis, and angiogenesis [15,24,32,33]. It is predicted that the BMSCs acted as carriers to deliver these factors to the ischemic brain for recovery. As a strategy to enhance the endogenous repair potential of these cells, hypoxic preconditioning was applied as a routine pre-treatment of transplanted cells [3,34]. Our present study further confirms that HP treatment effectively increases the expression of several pro-survival and pro-regenerative factors in BMSCs, which prime them for greater therapeutic benefits after transplantation.
Fig. 7 The adhesive removal test was used to assess the sensorimotor impairment after ischemic stroke. The test on both control and treatment groups was performed at 3 days before stroke, and 7 and 14 days after stroke. The BMSC treatment group at 14 days displayed significant improvement (shorter time) to remove the sticky dot compared to control mice. N = 8, *p = 0.0503, two-way ANOVA.
For a regenerative therapy to be successful, several obstacles must be overcome. One of the major issues of stem cell transplantation after stroke is cell survival after engraftment. Previous studies report that the survival rate of transplanted cells is low; most cells die within 3 days of engraftment, likely due to the cytotoxicity from the ischemic environment [35][36][37]. The timing of stem cell administration may impact graft survival. To maximize transplant efficacy, delayed administration of BMSCs until after levels of inflammatory cytokines have significantly subsided is a feasible approach [38]. To increase the tolerance of transplanted cells to the harsh ischemic environment of the post-stroke brain, we have shown before and in the current investigation that preconditioning BMSCs with hypoxia before transplantation is an effective strategy to increase the expression of specific gene transcripts that are adaptive to low oxygen conditions [39,40].
The intranasal route can be used to non-invasively deliver neuropeptides and drugs to the brain, but is a relatively new method for cell treatment of stroke [41][42][43]. It was necessary to demonstrate the feasibility and efficacy of delayed and repeated administration of BMSCs. Intranasally delivered cells can bypass the BBB at the nasal mucosa, pass through the nasal epithelium, and enter the brain via cranial nerve pathways, such as the olfactory and trigeminal nerves [28,42]. It is also possible that intranasally delivered cells can gain access to the CSF and perivascular spaces, further facilitating their transport into deeper brain regions. In the present study, intranasally delivered BMSCs reached the peri-infarct region as early as 6 h after delivery. This is consistent with our previous report showing the ability of the cells to migrate quickly to the brain parenchyma within 1.5 h after intranasal delivery [3].
The homing mechanisms of the SDF-1α/CXCR4 axis and FAK pathways are known to contribute to the migration of HP-BMSCs [3,44]. SDF-1α is endogenously upregulated in the brain after ischemic injury, forming a chemoattractive gradient that is strongest in the core starting at 7 days after stroke [17]. HP of BMSCs increases CXCR4 expression, suggesting an important role of the SDF-1α/CXCR4 axis in HP-mediated homing of HP-BMSCs to the stroke region [3,26]. We demonstrate that CXCR4 is expressed in BMSCs; thus, these cells can respond to SDF-1α signaling for directed migration toward the lesion site. A previous study of BMSCs injected into the retro-orbital venous sinus of stroke animals revealed that BMSCs migrated to the ischemic region, closely associating with reactive astrocytes and vessels expressing SDF-1α [44].
FAK is an essential downstream signaling partner in the CXCR4 signaling cascade. FAK phosphorylation can increase cellular migration. We demonstrated previously that the HP-increased BMSC homing to the stroke site was mediated by a mechanism involving FAK and CXCR4 upregulation [3]. FAK is a crucial protein kinase integrating extracellular signaling and cellular migration [45]. HP increases total FAK levels as well as the activated form of phosphorylated FAK [24]. Taken together, our previous and current data indicate that HP promotes the migration mechanism that allows efficient homing of intranasally delivered BMSCs to the peri-infarct region.
Our goal was to test whether or not delayed and repeated administration of BMSCs could increase regenerative activities after stroke. In the current study, we observed that BMSC transplantation exhibits a number of beneficial effects to the injured brain [3,46,47]. We found significant increases in neurogenesis and angiogenesis in the peri-infarct area 14 days after stroke in animals that received intranasal BMSCs. It is thought that the trophic factors expressed by BMSCs played a role in neurogenesis. Transplantation of BMSCs with SDF-1α released into this area may attract more endogenous neural progenitors to the peri-infarct area. VEGF is a major trophic factor for the stimulation of angiogenesis. For example, VEGF increased tubule formation and vessel growth of human endothelial cells (HUVEC) in vitro [48]. In stem cell transplantation, neural stem cells that secrete VEGF increased the neovascularization and overall blood vessel density in the peri-infarct area [49]. Further, neural stem cell transplantation enhances angiogenic pathways with increased levels of VEGF and its ligands [49]. VEGF can be neuroprotective and neurogenic. For example, VEGF was shown to promote neurogenesis in the SVZ and SGZ and endogenous migration of neural progenitors from the SVZ [50]. Mice over-expressing VEGF had fewer neurological deficits and smaller infarct volumes after a stroke [51]. Release of VEGF by intranasally administered HP-BMSCs may play a similar role in facilitating angiogenesis.
The neural-vascular interaction in the neurovascular niche plays a major role for functional recovery after stroke [52]. New neurons must interact with the vasculature for the remodeling process and ultimately for animal functional recovery. We tested the effect of intranasally delivered BMSC on blood flow recovery by measuring cerebral blood flow in the peri-infarct area. Mice that received BMSCs had increased local cerebral blood flow 21 days after stroke indicative of vascular improvement. | 2018-04-13T17:24:15.994Z | 2018-04-12T00:00:00.000 | {
"year": 2018,
"sha1": "57bb8da903433f45eaf66f15a2352cc47ef0df55",
"oa_license": "CCBY",
"oa_url": "https://bmcneurosci.biomedcentral.com/track/pdf/10.1186/s12868-018-0418-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a6d9e5cc612acb6722c01bcc506b054e9b4d151",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118458839 | pes2o/s2orc | v3-fos-license | Microscopic dynamical description of proton-induced fission with the Constrained Molecular Dynamics (CoMD) Model
The microscopic description of nuclear fission still remains a topic of intense basic research. Understanding nuclear fission, apart from a theoretical point of view, is of practical importance for energy production and the transmutation of nuclear waste. In nuclear astrophysics, fission sets the upper limit to the nucleosynthesis of heavy elements via the r-process. In this work we initiated a systematic study of intermediate energy proton-induced fission using the Constrained Molecular Dynamics (CoMD) code. The CoMD code implements an effective interaction with a nuclear matter compressibility of K=200 (soft EOS) with several forms of the density dependence of the nucleon-nucleon symmetry potential. Moreover, a constraint is imposed in the phase-space occupation for each nucleon restoring the Pauli principle at each time step of the collision. A proper choice of the surface parameter of the effective interaction has been made to describe fission. In this work, we present results of fission calculations for proton-induced reactions on: a) 232 Th at 27 and 63 MeV, b) 235 U at 10, 30, 60 and 100 MeV, and c) 238 U at 100 and 660 MeV. The calculated observables include fission-fragment mass distributions, total fission energies, neutron multiplicities and fission times. These observables are compared to available experimental data. We show that the microscopic CoMD code is able to describe the complicated many-body dynamics of the fission process at intermediate and high energy and give a reasonable estimate of the fission time scale. Sensitivity of the results to the density dependence of the nucleon symmetry potential (and, thus, the nuclear symmetry energy) is found. Further improvements of the code are necessary to achieve a satisfactory description of low energy fission in which shell effects play a dominant role.
I. INTRODUCTION
Nuclear fission, since its discovery more than 75 years ago, has played a prominent role in applications as well as basic nuclear research. Among the wide range of modern applications and given the increasing energy demand worldwide [1], nuclear energy production in Generation-IV reactors [2,3] and the incineration of nuclear waste in Accelerator-Driven Systems (ADS) [4,5] are areas of current intense efforts. In parallel, fission offers an important mechanism to produce a variety of isotopes for medical and industrial use [6]. It is also one of the main approaches to produce exotic neutron-rich nuclei in Rare Isotope Beam (RIB) facilities [7][8][9][10][11][12].
From an astrophysical point of view, fission is a key reaction of the rapid neutron capture process (r-process) and essentially sets the upper boundary on the synthesis of heavy elements [13][14][15][16][17]. In the same vein, fission largely determines the stability and the properties of superheavy elements [18][19][20][21]. the random neck-rupture model of Brosa [27]. A current version of the latter is the temperature-dependent Brosa model developed in [28]. These models offer main ingredients in current statistical models of fission (e.g. [29][30][31][32][33]). Two widely used current models that contain a statistical description of fission are the code GEMINI++ [34] in which fission is described by the transition-state approach of Moretto [35] and the code SMM ( [36], and references therein) in which low-energy fission is described by an empirical parametrization and higher-energy fission is treated as one of the possible channels of statistical multifragmentation.
Along with the statistical description of fission, the dynamical approach to the fission process was put forward early on in the seminal paper of Kramers [37]. Based on the assumption that the deformation degree of freedom can be viewed as a Brownian particle interacting stochastically with the single-particle degrees of freedom that constitute a heat bath, Kramers analytically solved the appropriate 1D Fokker-Planck equation and predicted that fission is actually retarded relative to its rate obtained from purely phase-space arguments. Along these lines, dynamical approaches based on the dissipative character of the nuclear shape motion were developed based either on the Fokker-Planck equation or the Langevin equations in more than one dimension describing the deformation degrees of freedom. A detailed review of these efforts is given in [38,39]. Refined dynamical approaches along these lines have continued with increasing degree of sophistication and success (e.g. [40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56]).
Notable to all efforts to describe fission based on the macroscopic LDM is the prediction of symmetric mass yields, in contrast to the large body of experimental data of minor actinides that indicate asymmetric mass yield distributions of low-energy fission. This discrepancy was remedied by Swiatecki [57] by the inclusion of a microscopic correction to the macroscopic LDM part of the potential energy surface (PES) of the nucleus. A detailed shape-dependent macroscopic-microscopic PES was obtained by Strutinsky [58,59] forming the basis of the successful shell-correction approach to the PES.
The Strutinsky macroscopic-microscopic approach, due to its simple physical foundation and numerical flexibility, has seen continuous development and success. Detailed relevant reviews concerning the description of fission barriers can be found in [60][61][62]. The most detailed prediction of PES based on the macroscopic-microscopic approach was performed by Möller et al. in a five-dimensional deformation space [60,63]. This detailed PES description forms the basis of the recent dynamical description of fission in the limit of strong dissipative coupling (Smoluchowski limit) in which the fission process resembles a random walk on the multidimensional PES [52][53][54][55].
Apart from the macroscopic-microscopic description of the heavy-element PES relevant to the description of nuclear fission, fully microscopic approaches based on the nuclear Density Functional Theory (DFT) have been developed. Some recent representative works are [64][65][66][67][68][69][70][71][72] in which extensive reference to previous works can be found. We also report the recent work [73] on unconstrained DFT calculations in which properties of very deformed nuclei pertinent to fission are described.
Whereas the static properties of the PES are very well accounted for by the modern DFT-type approaches, the microscopic description of the full dynamics of the fission process still remains a daunting project for nuclear theory.
Two main quantal approaches have been adopted in the past to describe the fission dynamics. First in 1978, the time-dependent Hartree-Fock (TDHF) theory was applied to the fission process [74]. However, as documented in recent studies of heavy-ion collisions [75][76][77], the TDHF approach, being a one-body (mean-field) approach has essentially no correlations beyond the mean field and, as such, cannot fully describe fluctuations (as, e.g., encountered in nucleon exchange or fragment formation). The lack of fluctuations also affects the correct triggering of scission in the time evolution of a deforming nucleus.
The second approach is based on the adiabatic approximation and involves the adiabatic TDHF theory [78] and the time-dependent generator coordinate method (GCM) [79]. In these approaches, the adiabatic hypothesis is invoked for the fission process, namely, that the nucleonic degrees of freedom are fully equilibrated during the slow evolution of the deforming nucleus over the macroscopic PES determined by the appropriate deformation degrees of freedom. We note that the adiabatic approximation is also inherent to the Langevin (or Fokker-Planck) type of approaches mentioned earlier. Under this approximation, the dynamics is described up to the scission configuration. At this point a sudden approximation is invoked (as in the early scission model [26]) in order to obtain the fission fragments and their characteristics (mass, charge, kinetic energy). However, as stated in recent works [80,81], ignoring the non-adiabatic effects during and past scission leads to limitations in the predictive power of the models regarding the fragment characteristics (e.g., the kinetic energy).
The recent dynamical study [81] attempts to bridge the two regimes. The adiabatic phase of the fission process is described with a static mean-field (DFT) approach and the nonadiabatic phase is carefully described with TDHF. It is found that the proper treatment of the nonadiabatic phase results in an accurate description of the kinetic energy and excitation energy of the resulting fission fragments. Before closing the above review on dynamical approaches, we also report efforts to describe spontaneous fission employing TDHF and Feynman's path integral approach [82], as well as the semiclassical equivalent method employing Vlasov's equation [83,84].
The preceding rather limited overview of the extended literature on nuclear fission dynamics clearly indicates that a full microscopic description of the fission process is unusually challenging and, as of today, "it has not been possible to establish a computationally feasible framework capable to describe real nuclei with realistic interactions" [64]. Nevertheless, we can see that substantial progress in both macroscopic, as well as quantal dynamical approaches continues vividly. Motivated by the current situation regarding fission dynamics, in the present work, we initiated a study of fission based on the semiclassical microscopic N-body Constrained Molecular Dynamics (CoMD) model [85] with regard to its ability to describe the full dynamics of the fission process in proton-induced reactions on Th and U from low to high energies.
In the following discussion, we classify, as customary, the proton-induced fission reaction according to the proton energy E_p as: a) low-energy when E_p < 20 MeV, b) intermediate energy when 20 MeV < E_p < 200 MeV and c) high energy when E_p > 200 MeV. The present paper has the following structure. In section II, we highlight the basic aspects of the CoMD code and present the way that the code is applied to nuclear fission. In section III, we present results of fission calculations for proton-induced reactions on: a) 232 Th at 27 MeV and 63 MeV, b) 235 U at 10 MeV, 30 MeV, 60 MeV and 100 MeV, and c) 238 U at 100 MeV and 660 MeV. The fission fragment mass yield distributions are presented, as well as total fission cross sections, total energies, neutron multiplicities and fission times with respect to incident proton energy. In section IV, discussion of the results and concluding remarks are given.
II. DESCRIPTION OF THE THEORETICAL MODEL
The theoretical model employed in this work is the microscopic Constrained Molecular Dynamics (CoMD) model originally designed for reactions near and below the Fermi energy [85][86][87]. Following the general approach of Quantum Molecular Dynamics (QMD) models [88], in the CoMD code nucleons are described as localized Gaussian wave packets. The wave function of the N-body nuclear system is assumed to be the product of these single-particle wave functions. With the Gaussian description, the N-body time-dependent Schrödinger equation leads to (classical) Hamilton's equations of motion for the centroids of the nucleon wavepackets. The potential part of the Hamiltonian consists of a Skyrme-like effective interaction and a surface term. The isoscalar part of the effective interaction corresponds to a nuclear matter compressibility of K=200 (soft EOS) or K=380 (stiff EOS). For the isovector part, several forms of the density dependence of the nucleon-nucleon symmetry potential are implemented. Two of them will be used in the present work: the "standard" potential [red (solid) lines] and the "soft" potential [blue (dotted) lines] in the figures that follow.
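To make the molecular-dynamics backbone of the model concrete, the toy sketch below propagates wave-packet centroids under Hamilton's equations with a velocity-Verlet step. The pairwise potential, parameters and particle numbers are arbitrary placeholders and do not correspond to the actual Skyrme-like CoMD effective interaction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                    # toy number of "nucleons"
r = rng.normal(0.0, 2.0, (n, 3))         # centroid positions (fm), placeholder
p = rng.normal(0.0, 100.0, (n, 3))       # centroid momenta (MeV/c), placeholder
mass = 938.0                             # nucleon mass (MeV/c^2)

def forces(r, v0=-50.0, a=2.0):
    """Toy pairwise Gaussian attraction (NOT the CoMD effective interaction)."""
    f = np.zeros_like(r)
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            d = r[i] - r[j]
            d2 = np.dot(d, d)
            # force on i from V(d) = v0 * exp(-d^2/a^2); Newton's third law for j
            pair = 2.0 * v0 * np.exp(-d2 / a**2) / a**2 * d
            f[i] += pair
            f[j] -= pair
    return f

dt = 0.5                                  # time step (fm/c), placeholder
f = forces(r)
for step in range(1000):                  # velocity-Verlet (kick-drift-kick)
    p += 0.5 * dt * f
    r += dt * p / mass                    # nonrelativistic drift for simplicity
    f = forces(r)
    p += 0.5 * dt * f
```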
We note that in the CoMD model, while antisymmetrization of the N-body wave function is not explicitly implemented, a constraint on the phase-space occupation of each nucleon is imposed, effectively restoring the Pauli principle at each time step of the (classical) evolution of the system. This constraint restores the fermionic nature of the nucleon motion in the evolving nuclear system. More specifically, at each time step, and for each nucleon, the presence of neighboring nucleons is determined in phase space. If the phase-space occupation probability is greater than 1, then the code changes the direction (not the magnitude) of the nucleon momentum, so that the total momentum and kinetic energy are conserved [85].
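The sketch below illustrates the spirit of this occupation constraint, assuming a crude neighbor count inside a spherical phase-space cell; the cell sizes, the neighbor criterion, and the single-particle momentum redirection (which here omits the compensating changes needed to keep the total momentum exactly conserved) are simplified placeholders rather than the actual CoMD procedure.

```python
import numpy as np

def phase_space_occupation(i, positions, momenta, dr=1.0, dp=100.0):
    """Crude occupation estimate for nucleon i: count neighbors that fall within
    dr (fm) in coordinate space and dp (MeV/c) in momentum space.
    The cell sizes are placeholders, not the CoMD values."""
    near_r = np.linalg.norm(positions - positions[i], axis=1) < dr
    near_p = np.linalg.norm(momenta - momenta[i], axis=1) < dp
    return int(np.sum(near_r & near_p)) - 1  # exclude nucleon i itself

def apply_pauli_constraint(positions, momenta, rng=None):
    """Where the local occupation exceeds 1, redirect the nucleon momentum to a
    random direction while keeping its magnitude, so its kinetic energy is
    unchanged (total-momentum bookkeeping is omitted in this sketch)."""
    rng = rng or np.random.default_rng()
    for i in range(len(momenta)):
        if phase_space_occupation(i, positions, momenta) > 1:
            p_mag = np.linalg.norm(momenta[i])
            direction = rng.normal(size=3)
            momenta[i] = p_mag * direction / np.linalg.norm(direction)
    return momenta
```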
The short range (repulsive) nucleon-nucleon interactions are described as individual nucleon-nucleon collisions governed by the nucleon-nucleon scattering cross section, the available phase space and the Pauli principle, as usually implemented in transport codes (see, e.g. [90]). The present CoMD version fully preserves the total angular momentum (along with linear momentum and energy), features which are critical for the accurate description of observables from heavy-ion collisions and, for the present study, the fission dynamics.
The ground state configurations of the target nuclei were obtained with a simulated annealing approach and were tested for stability for long times (2000-3000 fm/c). These configurations were used in the subsequent particle-induced fission simulations.
In the calculations of the present work, the CoMD code was used mainly with its standard parameters. The soft density-dependent isoscalar potential was chosen (K=200), as well as the standard and soft symmetry potentials, as mentioned above. The surface term was set to zero.
For a given p-induced reaction, a total of 3000-5000 events were collected. For each event, the impact parameter of the collision was chosen in the range b = 0-6 fm, following a triangular distribution. Each event was followed in time up to 15000 fm/c (about 5.0×10⁻²⁰ s), and the phase-space coordinates were registered every 50 fm/c. At each time step, fragments were recognized with the minimum spanning tree method [85], and their properties were recorded. From this information, we followed the evolution of the fissioning system and obtained the properties of the resulting fission fragments. We consider as the fission time (t_fission) the time interval between the implantation of the proton in the target nucleus and the emergence of the two fission fragments. We allowed an additional time of 2000 fm/c after scission for the nascent fission fragments to de-excite. (We varied this time interval from 2000 to 5000 fm/c and did not notice an appreciable change in the characteristics of the fission fragments.) Thus, in the following discussion, for each event the fission fragment properties are reported and studied 2000 fm/c after scission.
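As a small illustration of the event setup, the sketch below draws impact parameters from a triangular distribution on [0, 6] fm; interpreting "triangular" as the usual geometric weighting P(b) ∝ b is an assumption made for illustration, and the sampling code itself is not taken from CoMD.

```python
import numpy as np

def sample_impact_parameters(b_max=6.0, n_events=5000, rng=None):
    """Draw impact parameters in [0, b_max] fm with density P(b) = 2 b / b_max^2
    (triangular, i.e. uniform over the beam cross-section), using the
    inverse-transform relation b = b_max * sqrt(u) with u uniform in [0, 1)."""
    rng = rng or np.random.default_rng()
    return b_max * np.sqrt(rng.uniform(size=n_events))

b = sample_impact_parameters()
print(f"mean b = {b.mean():.2f} fm (analytic expectation 2/3 * b_max = 4.00 fm)")
```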
A typical time evolution of a fissioning system as predicted by CoMD is presented in Fig. 1. The figure refers to p-induced fission of 232Th at 63 MeV and gives a three-dimensional representation of the fissioning system in the center of mass at three time instants. At 0 fm/c (Fig. 1a), the proton approaches the target nucleus 232Th. At 2000 fm/c (Fig. 1b) the nucleus has been substantially deformed; this configuration is near or past the saddle point. (For this event scission occurs at t_fission = 2500 fm/c.) At 4000 fm/c (Fig. 1c), we observe the two fission fragments and the emission of two neutrons departing nearly perpendicular to the fission axis.
In Fig. 2, the time evolution of the axial quadrupole moment Q20 of the fissioning system (Fig. 2a) and its mean radius (Fig. 2b) are presented for the same event. Both quantities increase with time and indicate the course of the system toward fission. (We note that the decreasing value of Q20 after scission is attributed to the continuous rotation of the deformed nucleus and the resulting fission fragments.)
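For reference, the following sketch shows how such shape observables can be computed from the wave-packet centroid coordinates of the nucleons. The unweighted mass-quadrupole definition Q20 = Σ(2z² - x² - y²), with the z-axis taken along the elongation axis and no spherical-harmonic normalization factor, is an assumption made for illustration and may differ from the convention used in the figures.

```python
import numpy as np

def shape_observables(positions):
    """Compute the axial quadrupole moment Q20 = sum(2z^2 - x^2 - y^2) (fm^2) and
    the rms radius (fm) from nucleon centroid coordinates, measured from the
    center of mass (equal nucleon masses assumed)."""
    r = positions - positions.mean(axis=0)      # shift to the center of mass
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    q20 = np.sum(2.0 * z**2 - x**2 - y**2)
    rms_radius = np.sqrt(np.mean(np.sum(r**2, axis=1)))
    return q20, rms_radius
```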
A. Mass yields: Low and Intermediate Energy
We begin our study of the behavior of the CoMD code with comparisons to the recent experimental data described in [29,91]. First we show the CoMD calculations for the proton-induced fission of 232Th at energies of 27 MeV and 63 MeV, using the standard and the soft symmetry potential. In Fig. 3, the mass yield distributions are illustrated for the reaction at 27 MeV. In the experimental data (full points), we observe the asymmetric nature of the fission mass yield, as expected for low-energy fission of actinides. We compare the data with our calculations (open points with statistical error bars) with the standard symmetry potential (Fig. 3a), as well as the soft symmetry potential (Fig. 3b). With both choices of the symmetry potential, we obtain an essentially symmetric mass distribution with a rather flat top. No clear sign of an asymmetric mass yield distribution is seen in the CoMD calculations; at most, a hint of asymmetry may be discerned in Fig. 3b. At this point, we note the finding of Nadtochy et al. [92] that in dynamical Langevin calculations of fission, dominance of asymmetric mass splits relative to the (expected) symmetric mass split may occur as a result of the dissipative dynamical behavior of the system. A similar suggestion that asymmetric fission may result from the hydrodynamical behavior of the system was first reported in [93], before strong shell effects were considered responsible for the asymmetric mass yield distribution of actinides [58].
The main reason for the symmetric mass yield distribution obtained by the CoMD code is that the nucleon-nucleon interaction in the model does not include spin dependence; thus the resulting mean-field potential does not contain a spin-orbit contribution. The model therefore does not predict the correct shell effects in the single-particle motion of the deforming nucleus, which are necessary to lead to the asymmetric fission of 232Th. A closer inspection of the two yield calculations shows that they are slightly different, but neither of them tends to resemble the experimental distribution.
We wish to comment that, while in the present implementation of the CoMD model the interaction has no spin dependence (and thus CoMD cannot describe the correct shell effects), the code emulates the quantum behavior of the deforming nuclear system; we should therefore expect shell effects (at the mean-field level) corresponding to those obtained in a deforming harmonic oscillator or Woods-Saxon potential (without a spin-orbit term) [94]. We also mention that application of the CoMD approach to light atoms has successfully reproduced the electronic binding energies, as well as electron radii revealing shell structure [95]. A study of shell effects in the present implementation of CoMD applied to nuclei has not been performed to date. Such a study with the present CoMD code, and a possible extension of it with spin dependence (in the spirit of recent work on BUU [96]), will be undertaken by us in the near future.
In Fig. 4, the mass yield distribution for the same reaction at a proton energy of 63 MeV is presented. It is evident that the structure of the experimental mass yield curve tends to become more symmetric at this higher energy. This is to be expected, because as the proton energy, and thus the excitation energy of the fissioning system, increases, shell effects begin to fade (see e.g. [52,97]). However, it seems that this beam energy is not high enough to completely wash out the shell effects, as two asymmetric shoulders appear in the experimental mass yield curve. In Fig. 4a, we show the CoMD calculations with the standard symmetry potential and in Fig. 4b, with the soft symmetry potential. As in the lower-energy case (Fig. 3), the two choices of the symmetry potential do not lead to substantial differences in the mass yield shape.
We note that at this higher energy, as the asymmetric mass split is attenuated and the symmetric contribution is enhanced, an overall improvement in the agreement between our CoMD calculations and the experimental data is obtained. Thus, with the current implementation of the CoMD code, intermediate-energy fission mass yields may be correctly described. We will explore this behavior with other fissioning systems in the following.
We continue our comparisons with the recent work [98] on the proton-induced fission of 235U at proton beam energies of 10 and 30 MeV. In Fig. 5, we present the calculated mass yield distributions (open points) of the proton-induced fission of 235U at 10 MeV, using the two forms of the symmetry potential, the standard (Fig. 5a) and the soft (Fig. 5b). The experimental data of [98] are shown as closed points. We note that the yield data in [98] are given in arbitrary units, so we multiplied the yields by a factor of 7 for both reactions to make them comparable with our calculated cross-section results. In the experimental data, the prevalence of the asymmetric fission mode is obvious for this low-energy fission reaction. Our calculations with the standard symmetry potential indicate a rather symmetric distribution (Fig. 5a). Interestingly, the calculations with the soft symmetry potential indicate a hint of asymmetric fission (Fig. 5b). However, as already discussed above, in the absence of the correct shell effects in the CoMD potential, we do not expect to obtain the correct asymmetric mass distribution. In the comparisons of Fig. 5, we observe that the CoMD calculations result in a wider mass yield distribution than the data. Apart from a possible calculational aspect, this may also point to a limitation of the data toward asymmetric mass splits.
In Fig. 6, we present the mass yield distribution from the p-induced fission of 235 U at proton energy 30 MeV. The experimental data come from [98]. In the CoMD calculations, we again used the two forms of the symmetry potential, the standard (Fig. 6a) and the soft (Fig. 6b). Conclusions similar to those from Fig. 5 can be drawn. However, going from 10 MeV to 30 MeV proton energy, in the experimental mass yield distributions we see that the peak-to-valley ratio is reduced as the symmetric contribution increases relative to the asymmetric contribution. Thus, in Fig. 6, the CoMD calculations (yielding a symmetric mass distribution) are obviously in better agreement with the experimental data.
Furthermore, we explored the behavior of CoMD at higher energy for the same system. In Fig. 7 we show the CoMD calculations for proton-induced fission of 235U at 60 MeV, again with the two forms of the symmetry potential, standard (Fig. 7a) and soft (Fig. 7b). We compared our calculations with available experimental data on proton-induced fission of 238U (not of 235U) at this beam energy, taken from [28] (and normalized to our calculated cross sections, as in Figs. 5 and 6). We note that the small difference in the number of neutrons of the fissioning systems at this higher energy is not expected to substantially affect the mass yield comparisons of Fig. 7. The experimental distribution for U indicates that the asymmetric fission mode still prevails. This yield curve presents a plateau, in contrast to the Th distribution at comparable energy, which has a symmetric peak and two shoulders at asymmetric mass splits (Fig. 4). Our calculations, as expected, indicate a symmetric peak, as we saw in the lower-energy cases (Figs. 5 and 6) of the proton-induced fission of U.
B. Mass yields: High Energy
We continue our investigations with the application of CoMD to high-energy fission reactions with protons. We recall that the CoMD code has been successfully applied to the description of a large variety of nuclear reactions (e.g., [85,[99][100][101][102]]). As a fully dynamical code, we expect that it may also perform well for spallation-type reactions with high-energy protons.
In this direction, we performed calculations for the proton-induced fission of 238U at 660 MeV proton energy. The motivation comes from the importance of this energy range in ADS-type applications [4,5] and, specifically, from the recent experimental data for this reaction reported in [103][104][105][106], obtained by off-line gamma-ray techniques. In Fig. 8, we show the experimental data (full symbols), which, despite the large experimental fluctuations, indicate a prevailing symmetric fission mode. (We point out that the yield axis is logarithmic in this figure.) Our CoMD calculations with the two forms of the symmetry potential (Fig. 8a standard, Fig. 8b soft) are in reasonable agreement with the experimental data near symmetric mass splits.
We note that the shape of the experimental mass distribution is characterized by two low-yield, very asymmetric fission components ("super-asymmetric" fission [106]).
Our calculations show an overall symmetric curve that is wide enough to contain these super-asymmetric mass splits, which are predicted, however, with larger cross sections than the data. Our calculations resemble the wide symmetric mass yields observed recently in high-quality mass-spectrometric data obtained in inverse kinematics at relativistic energies [107][108][109]. Comparing the data of [103,104] with the higher-energy inverse-kinematics data (e.g. [107]), we speculate that the former data may be incomplete due to the nature of the measurements, and we suggest that measurements of this very important reaction at ∼600 MeV be performed in inverse kinematics in the same fashion as the higher-energy data.
We wish to point out that the lower-energy fission data discussed above (Figs. 3-7) were acquired with standard on-line fission counters, which cannot provide information on the atomic number Z of the fission fragments. We recall that to obtain Z information, either mass-spectrometric techniques (mainly in inverse kinematics) or off-line gamma-ray methods have to be used. As already mentioned, the data of [103,104] were obtained with gamma-ray techniques and thus can provide information on the Z-A correlation of the observed fission fragments.
In Fig. 9, we first present the experimentally observed mean Z (Fig. 9a), as well as the standard deviation of the Z distribution (Fig. 9b), with respect to the mass number A of the fission fragments. Our CoMD calculations [solid (red) line] show that the fission fragments are more neutron-rich relative to the experimental data. Because, for this high-energy reaction, the code causes one or two protons to be emitted before the scission point, we made a selection on the charge of the fissioning nucleus (Z=93) so that it corresponds to no pre-scission proton emission at the time of scission. With this selection, the CoMD calculations [dashed (blue) line] are in better agreement with the data, especially for the heavier fragments. From the above comparison for this high-energy reaction, we may conclude that the fission fragments, as obtained 2000 fm/c after scission, may still contain enough excitation energy to further evaporate (predominantly) neutrons and thus move closer to the data in the Z-A plot. An explicit de-excitation of these fragments with a standard de-excitation code (e.g. [34,36]) was not performed in the present exploratory work, but will be carried out in the near future within our plan of detailed studies of high-energy p-induced fission of actinides. In Fig. 9b, our calculated standard deviations are higher than the experimental data. However, when the fissioning nucleus is selected so that it corresponds to no pre-scission proton emission (Z=93), the calculations are in better agreement with the data, despite the large fluctuations due to the limited statistics of the calculations after the imposed selection.
C. Fission Cross Sections
After the presentation of the mass yield distribution, which is one of the most characteristic observables of fission reactions, we continue our investigation with several other fission observables, starting with the total fission cross sections presented in Fig. 10. The available experimental data for 232Th are shown with closed (black) triangles connected with dotted lines. The experimental point for 235U at E_p = 60 MeV is shown with a closed (black) circle, and that for 238U at E_p = 660 MeV with a closed (black) square.
Concerning the fission of thorium, we observe that with increasing proton energy (and thus excitation energy of the fissioning nucleus) there is only a slight increase in the calculated cross section. However, the experimental data show an increase of approximately 30%, which our calculations do not reproduce. For the fission of 235U, the calculations show a jump of the cross section as the proton energy increases from 30 MeV to 60 MeV. The experimental point at 60 MeV is 20% larger than our calculated point. At higher energies, for the proton-induced fission of 238U, the total fission cross section is rather constant and in rough agreement with the experimental data within error bars.
In Fig. 11, we present the ratio of the fission cross section to the heavy-residue cross section as a function of the proton energy. This ratio is a very sensitive observable of the relative importance of fission as a de-excitation path for the nuclei examined.
The CoMD calculations (with the standard symmetry potential) are shown with closed (red) symbols connected with solid (red) lines, with exactly the same correspondence as in Fig. 10. In addition, CoMD calculations with the soft symmetry potential are also shown in this figure with closed (blue) symbols connected with dotted (blue) lines (such calculations were not shown in Fig. 10 because they would nearly overlap with the ones shown).
For the fission of thorium we observe an increase in the ratio from 27 to 63 MeV and a rather weak sensitivity to the choice of the symmetry potential, the ratio being slightly larger at 63 MeV with the soft symmetry potential. For the fission of 235U, the ratio increases from 10 to 30 MeV and then diminishes at the higher energies of 60 and 100 MeV. The ratio for 238U also exhibits a decreasing trend from 100 to 660 MeV, possibly pointing to the increasing role of fast evaporation processes for the more excited nuclei involved in the higher-energy reactions.
Finally, focusing our attention on the behavior of the calculated ratio with the soft symmetry potential for the 235U and 238U systems, we observe, interestingly, that this ratio is substantially larger than the corresponding ratio calculated with the standard symmetry potential. A further detailed investigation of the features of this behavior is planned. However, from the present work we conclude that the ratio is a rather sensitive observable of the density dependence of the nucleon-nucleon symmetry potential and, thus, of the nuclear symmetry energy, which is a topic of current importance with regard to studies of the nuclear equation of state (e.g. [110][111][112]).
D. Total Fission Kinetic Energy
In the following, we examine the mean total kinetic energy of the fission fragments as a function of the proton energy for the studied reactions. This is an important kinematical observable characterizing, on average, the degree of deformation, the compactness, and the asymmetry at scission of the fissioning system, and it offers an important testing ground for the overall dynamical description provided by the employed code. In Fig. 12, we illustrate the mean total kinetic energy of the fission fragments for the aforementioned fission reactions. The symbols correspond to the same reactions as in Figs. 10 and 11.
For the p-induced fission of 232Th, the calculated kinetic energy [closed (red) triangles] is nearly the same for the two studied reactions at 27 and 63 MeV. The values are lower than the experimental data [closed (black) triangles], which indicate an increase of the kinetic energy with increasing excitation energy of the fissioning system. For the p-induced fission of U, the calculated kinetic energy [closed (red) circles] increases slightly when we go from 10 to 30 MeV. At higher energies, there is a small but continuous decreasing trend. The available experimental data [91,98] for the first three proton energies [closed (black) circles] are higher than our calculations, with an increasing trend from 30 to 60 MeV. Along with the proton-induced reactions studied in this work, we mention the recent experimental data of [113] on the fission kinetic energies of the neutron-induced fission of 235U in a broad energy range below 50 MeV. We note that the measured fission energies in the neutron energy range 30-45 MeV are approximately 162 MeV, in overall agreement with (albeit higher than) our calculation for the p-induced fission of 235U (Fig. 12).
We can relate the lower kinetic energies obtained by CoMD, as compared to the experimental data, to the observation that the CoMD code implies emission of about two (on average) pre-scission protons even in the lower-energy fission reactions at 10 and 30 MeV. This is unphysical, as can be concluded from our up-to-date experimental and theoretical understanding of the low-energy fission process; we discuss this feature quantitatively in the following. Furthermore, in all reactions studied, the CoMD calculations of the total kinetic energy with the soft symmetry potential [(blue) points connected with (blue) dotted lines] are lower than the corresponding ones with the standard symmetry potential [(red) points connected with (red) solid lines]. This may point to a scission configuration with a more elongated shape (and longer neck) in the soft case, as compared to a more compact shape (and a shorter neck) in the standard case.
E. Pre-scission and Post-scission Particle Emission
In Fig. 13 we show the calculated average pre-scission, post-scission and total proton multiplicities for the studied reactions. Of course, pre-scission proton emission should not be present at the lower energies, but it should compete with pre-scission neutron emission at higher energies. A clear increasing trend is present in the calculations, with a substantial increase at the highest energy of 660 MeV. We think that further detailed investigation is necessary to understand this feature of pre-scission proton emission of the code at low energies (see also below the corresponding situation for neutrons).
We now discuss the predictions of the CoMD code concerning the pre-scission, post-scission and total neutron multiplicities. We recall that the pre-scission neutron multiplicity serves as a clock of the evolution of the fissioning system up to the moment of scission, whereas the post-scission neutron multiplicity can be directly related to the excitation energy of the nascent fission fragments [114]. Both quantities can be obtained experimentally with proper, albeit especially difficult, measurements and model analysis, and they offer very sensitive observables for any dynamical model of fission.
From the present study with the CoMD code, we show in Fig. 14 the pre-scission, post-scission and total neutron multiplicities versus proton energy. These quantities show an overall increasing trend with increasing energy for the studied fissioning systems.
More specifically, for the p-induced fission of 232Th, the calculated pre-scission neutron multiplicity [closed (red) triangles] is nearly 3 at both 27 and 63 MeV (showing a small increasing trend). The experimental value is nearly 1 at 27 MeV, and thus about two units lower than the calculation. This reveals that the CoMD calculation predicts a pre-scission neutron multiplicity that is too large by about two units, as was the case for the pre-scission proton multiplicity examined above (which was assumed responsible for the observed lower total kinetic energy of the fission fragments, as compared to the experimental values). This conclusion, along with the corresponding one regarding pre-scission proton emission, calls for further detailed study of the parameters of the CoMD code. We speculate that a careful fine-tuning of the surface term may be necessary to suppress the observed unrealistic feature of both pre-scission proton and neutron emission of the code at low energies. Furthermore, we note that the experimental value for the pre-scission neutron multiplicity at 63 MeV is in reasonable agreement with the CoMD calculation. Regarding the post-scission multiplicities, the calculation is in fair agreement with the available data, albeit larger at the higher energy.
For the p-induced fission of 235U, the calculated pre-scission and post-scission neutron multiplicities (Figs. 14a and 14b, respectively) increase steadily as the energy increases from 10 to 100 MeV. Agreement is seen with the experimental point at 63 MeV for 238U taken from [29]. For the p-induced fission of 238U at 100 MeV, the pre-scission and post-scission neutron multiplicities are slightly higher than the corresponding values for 235U, reflecting, of course, the larger neutron content of the former nucleus. Similar observations pertain to the total neutron multiplicities (Fig. 14c) for the reactions studied. We also note the agreement of the calculated value with the experimental point at 660 MeV [105].
F. Fission Timescale
We will complete the presentation of the predicted fission characteristics with a discussion of the fission time as obtained directly from our fully dynamical CoMD calculations. We point out that it is a difficult task to extract the fission time scale from experimental data (e.g. [114][115][116]). Furthermore, the extracted values are unavoidably model and method dependent. On the other hand, a fully dynamical code, either a macroscopic one, such as the current advanced Langevin codes (e.g. [50][51][52]), or a microscopic TDHF-type code (e.g. [81]), can in principle provide realistic information on the fission time scale, as long as the code has been extensively benchmarked by comparison of its predicted fission observables with available experimental data on mass yield distributions, kinetic energies and neutron multiplicities.
In the present study, for the first time the semiclassical N-body CoMD code was tested with p-induced fission reactions and, as our presentation so far has indicated, it performed in an overall satisfactory manner, especially for intermediate and high energy fission reactions. The CoMD code, as a fully dynamical code, naturally describes the complete dynamical path of the fission process. Therefore, with CoMD we can determine the fission time in a direct way.
In Fig. 15, we show the extracted average fission time versus the proton energy of the reactions studied. The presentation of the calculations follows the same pattern as in Figs. 10-12. Two main groups of points are shown in Fig. 15. The upper group (closed points) corresponds to the CoMD predictions of the fission time using the full ensemble of the fissioning nuclei for each fission reaction. The lower group (open points) corresponds to the CoMD predictions with a selection made on the fissioning nucleus at the moment of fission to have exactly the initial Z value (thus, assuming no pre-scission proton emission). Within each group, the points connected with full (red) lines correspond to CoMD calculations with the standard symmetry potential, whereas the points connected with dotted (blue) lines are with the soft symmetry potential. When the selection of no pre-scission proton emission is made (lower group), the chosen fissioning nuclei are both more excited and more fissile (compared to those resulting after the emission of pre-scission protons), and thus their fission time is correspondingly lower. As a general observation, within both groups, for each reaction, there is an overall decreasing trend with increasing proton energy. Furthermore, we notice that the choice of the soft symmetry potential results in faster fission dynamics. This can be understood qualitatively by the higher potential energy that the soft symmetry potential implies for the neutron-rich low-density neck region for a highly deformed fissioning nucleus and, in turn, can be related to the corresponding lower total fission energy observed in Fig. 12.
For the Th reactions, the fission time is slightly longer than for the U reactions at nearly the same energy, reflecting the lower fissility of the 232Th nucleus. We notice that the fission time for the p-induced fission of 235U is on average longer than that of 238U at 100 MeV. From Fig. 13 we see that the average pre-scission and post-scission proton multiplicities are similar for these two reactions, whereas from Fig. 14 the pre-scission and post-scission neutron multiplicities are larger in the latter case. We cannot provide a simple explanation of this fission time difference (which is similar in both groups and, within each group, for each choice of the symmetry potential); it should be related to the details of the dynamical evolution of the two different fissioning systems.
The above discussion indicates that the CoMD code can provide detailed information on the fission time under various conditions of excitation energy and fissility. From our study so far, if we exclude low-energy fission, we expect that from 60 MeV and above the time-scale information may be considered realistic. It would be very interesting to compare the present predictions with experimental information when such information becomes available. Furthermore, it is exciting to notice the sensitivity of the fission time scale to the density dependence of the nucleon-nucleon symmetry potential, and thus to the isospin part of the nuclear equation of state, suggesting this observable as an additional probe of the latter [110].
G. CoMD Energies of Fissioning Nuclei
After the above detailed discussion of CoMD observables, we will close our presentation by examining the evolution of the interaction and kinetic energies of fissioning nuclei in their course toward scission.
As a representative example, we show in Fig. 16 the results for the reaction p(30MeV)+ 235 U for the standard symmetry potential [(red) points connected with solid line] and the soft symmetry potential [(blue) points connected with dotted line]. In order to examine the evolution of the average energies in the course to scission, and given the broad distribution of fission times (as seen in Fig. 15), in Fig. 16 the time is referenced with respect to the moment of scission, taken to be at t=0 fm/c.
In Fig. 16a, the event-averaged nuclear interaction energy is presented, taken to be the sum of the two-body, three-body, surface and symmetry energy terms. An overall increase of the interaction energy is observed as the fissioning system approaches the moment of scission, for both the standard and the soft symmetry potentials. Interestingly, the calculation with the soft symmetry potential results in a higher interaction energy of the fissioning system, as can be understood from the overall larger nuclear symmetry energy in the low-density neck region and the overall more repulsive dynamics implied. In Fig. 16b, the total Coulomb energy is presented; it is nearly the same for the two choices of the symmetry potential (only slightly lower for the soft symmetry potential which, as we discussed, results in more elongated configurations). A monotonic decrease of the Coulomb energy is observed as the fissioning system evolves toward scission, this decrease being essentially the main driving force of the nuclear system toward fission.
In Fig. 16c, the total potential energy of the fissioning system, namely the sum of the nuclear interaction energy and the Coulomb energy, is shown. In Fig. 16d, the kinetic energy of the fissioning system is shown; it is lower for the soft symmetry potential, which also results in a higher potential energy (Figs. 16a, 16c), involving more elongated configurations and more repulsive dynamics in the low-density neck region. Finally, in Fig. 16e, the total energy of the fissioning system is shown, being slightly higher for the soft symmetry potential. The decrease with time is due to the pre-scission emission of neutrons and protons (Figs. 13 and 14).
In Fig. 17, we show the variation of the energies of the fissioning system p + 235U as the incident proton energy changes from 10 MeV [(blue) points connected with dotted line] to 30 MeV [(red) points connected with solid line] to 60 MeV [(green) points connected with dashed line]. The calculations are with the standard symmetry potential. In Fig. 17a, we observe that for proton energies of 10 and 30 MeV, the average nuclear interaction energy of the fissioning systems is nearly the same. The Coulomb energy (Fig. 17c) and the kinetic energy (Fig. 17d) increase in going from 10 to 30 MeV proton energy. It appears that the additional energy brought into the fissioning system is stored as kinetic energy (Fermi motion) and Coulomb energy. This increase is reflected in the total energy (Fig. 17e).
Interestingly, in going from 30 to 60 MeV proton energy, we observe that the nuclear interaction energy is increased by nearly this amount of energy (about 30 MeV), whereas the Coulomb energy is nearly the same. The kinetic energy decreases slightly, which is rather counterintuitive: we would expect an increase in the kinetic energy, as we saw in going from 10 to 30 MeV proton energy. Thus, for the case of 60 MeV protons, the additional energy brought in by the proton is stored as nuclear potential energy, indicating that essentially above the Fermi energy the nuclear (mean-field) potential energy is effectively momentum dependent. The origin of this momentum dependence lies in the Pauli correlations imposed by the CoMD procedure, i.e., the phase-space constraint imposed by CoMD to ensure the fermionic behavior of the classically evolving system of Gaussian wave packets (see Section II). The total energy for the case of 60 MeV protons (Fig. 17e) is consistently above that of the two lower energies and decreases with time toward scission due to the emission of pre-scission particles, as at the other two energies.
The above examination of the CoMD energy variations of the fissioning system provides a good check of the consistency and accuracy of the code, as applied to the description of a deforming system as it evolves toward scission. We wish to point out that from the present calculations we cannot obtain information regarding the fission barriers of the involved fissioning nuclei. The reason is that the calculations are performed at high enough energy, so that the fissioning systems are above the fission barrier expected to be near 6-8 MeV. In order to obtain average fission barriers with CoMD, a different methodology has to be followed: the total energy of a fissioning nucleus has to be obtained as a function of deformation, placing the nucleus into a deformed harmonic potential. This interesting project, however, requires further computational effort beyond the scope of the present paper that we plan to undertake in the near future.
IV. DISCUSSION AND CONCLUSIONS
In the present work we employed the semiclassical microscopic N-body code CoMD to describe proton-induced fission of 232Th, 235U and 238U nuclei at various energies. We chose these nuclei because of the availability of recent literature data and because of their significance in current applications of fission. We found that the CoMD code in its present implementation is able to describe fission at higher energies (i.e., above E_p = 60 MeV), where shell effects are mostly washed out. We recall that the effective nucleon-nucleon interaction employed in the code has no spin dependence, and thus the resulting mean field has no spin-orbit contribution.
The total fission cross sections of the studied reactions were reproduced rather well. Furthermore, the ratio of the fission cross section to the residue cross section showed sensitivity to the choice of the nucleon-nucleon symmetry potential and, thus, to the density dependence of the nuclear symmetry energy. Consequently, this ratio can be used as a probe of the nuclear equation of state at low density and moderate excitation energy, corresponding to intermediate- and high-energy light-particle-induced fission. Concerning the total fission kinetic energies and the neutron multiplicities, we found that they were rather adequately reproduced by the CoMD code (except at the lower energies). Finally, information on the fission time scale can be obtained from the present calculations. The obtained fission times show a dependence on the excitation energy of the fissioning nucleus, as well as on the choice of the symmetry potential. Thus, the fission time offers one more observable sensitive to the isospin part of the nucleon-nucleon effective interaction.
Regarding the N/Z properties of the fission fragments, the CoMD code appears to perform well for the case of p (660 MeV) + 238U tested in this work. We note that the majority of the measurements of p-induced fission reactions have been performed in direct kinematics with on-line techniques and, thus, Z information on the fission fragments cannot be obtained. Apart from off-line gamma-ray techniques, such information has been obtained in studies in inverse kinematics, mostly at high energies (e.g. [107]). In parallel to these experimental efforts, we mention the novel mass-spectrometric study of the reaction 238U (6.5 MeV/nucleon) + 12C in which proton-pickup and other channels leading to fission were selected by kinematical reconstruction [117,118]. From an application point of view, it would be very important to obtain experimental information in inverse kinematics at proton energies from 100 MeV to 1000 MeV. As mentioned earlier, we plan to perform detailed calculations of high-energy proton-induced fission in the near future. Of course, the CoMD code can also be used for neutron- and other light-particle-induced fission reactions, heavy-ion fusion-fission reactions, as well as multinucleon-transfer-induced fission reactions, and its predictions can be compared with existing or future experimental data. We also plan to undertake calculational and experimental efforts in this broad direction in the future.
To conclude, in the present study the semiclassical N-body code CoMD was tested for the first time on p-induced fission reactions. We found that the code performs in an overall satisfactory manner, providing a description of the full dynamics of the fission process, especially for intermediate- and high-energy fission reactions. We suggest that inclusion of spin dependence in the nucleon-nucleon effective interaction and further improvements of the code be implemented toward achieving a satisfactory description of lower-energy fission, in which shell effects play a dominant role. We point out that the code parameters, being specified predominantly by ground-state properties of nuclei and nuclear matter, do not depend on the specific reactions being explored; as such, the CoMD code can offer valuable predictive power for the dynamics of the fission process over a broad range of excitation energy. Consequently, the CoMD code can be used to study the fission not only of stable nuclei, but also of very neutron-rich (or very neutron-deficient) nuclei which have not been studied experimentally to date, and it may provide guidance to upcoming RIB experiments. Moreover, this possibility can be further exploited in studies of fission recycling [14][15][16], namely, the termination of the upper end of the r-process nucleosynthesis by the fission of the resulting very neutron-rich heavy nuclei.
V. ACKNOWLEDGEMENTS
We are thankful to M. Papa for his version of the CoMD code, and to Hua Zheng for his rewritten version of CoMD. We are also thankful to W. Loveland for his enlightening comments and suggestions on this work. Furthermore, we wish to acknowledge the motivation and recent discussions with Y.K. Kwon and K. Tshoo.

Figure caption: (Color online) a) Normalized mass distributions (cross sections) of fission fragments from p (63 MeV) + 232Th. Full points (black): experimental data [29]. Open points: CoMD calculations with the standard symmetry potential. b) As above, but with CoMD calculations with the soft symmetry potential.

Figure caption: (Color online) a) Normalized mass distributions (cross sections) of fission fragments from p (660 MeV) + 238U. Full points (black): experimental data [103] and [104]. Open points: CoMD calculations with the standard symmetry potential. b) As above, but with CoMD calculations with the soft symmetry potential.
"year": 2015,
"sha1": "7da86769b0e1758dc17d7775ee67cdf7c7fa44c3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1506.04844",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7da86769b0e1758dc17d7775ee67cdf7c7fa44c3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16769957 | pes2o/s2orc | v3-fos-license | Cytoskeletal network underlying the human erythrocyte membrane. Thin-section electron microscopy.
A filamentous network underlying the human erythrocyte membranes can be clearly visualized in situ by electron microscopy of thin sections of specimens fixed with tannic acid-glutaraldehyde. The network is composed of two layers: the first, a layer of vertical components with granular appearance, which are seen to be directly associated with the membrane proper, and the second, a horizontally disposed, anastomosing meshwork of filamentous components, approximately 9 nm in thickness, which are attached to the vertical components. The diameter and appearance of the filamentous components are similar to those of purified spectrin. EDTA treatment (0.1 mM, pH 8.0), which was used to extract spectrin and actin, resulted in the disappearance of the filamentous meshwork, leaving only the granular components.
The cytoskeletal network underlying the human erythrocyte membrane has often been discussed in light of its roles in determining cell shape and in constraining the lateral movement of membrane-penetrating proteins (14, 19, 23, 30). The accumulated evidence indicates that the network is mainly composed of spectrin (5, 6, 10, 18, 28), which accounts for ~75% of all peripheral membrane proteins (31). It is now proposed that spectrin may form an anastomosing network beneath the erythrocyte membrane in combination with actin (14, 19, 20, 30, 33). Most recently, a series of proteins has been identified which connects spectrin to the erythrocyte membrane (1, 12, 37). Spectrin molecules as heterodimers have a rodlike appearance, 100 nm in length, as revealed by shadow-casting electron microscopy (26, 27). Cytoskeletal networks were observed by scanning electron microscopy (8) and by negative staining after Triton X extraction (24). In spite of recent great progress, our understanding of the ultrastructural aspects of the network is still limited, mainly because of its poor visualization in sectioned preparations.
In the present study, we have tested various chemical fixatives in an effort to clearly visualize the cytoskeletal network, and only tannic acid fixation has revealed network structures that have not been seen before. We believe this method to be particularly useful for ultrastructural analyses of the supramolecular organization of the network, of the mode of association with the membrane proper, and of the dynamic aspects of the network during various membrane activities.
Preparations of Erythrocyte Membranes

Human erythrocyte membranes (ghosts) were prepared from freshly drawn human blood according to the procedure of Fairbanks et al. (7) using sodium phosphate buffer, pH 8.0. In these preparations the hemoglobin component was not detected in SDS slab gel electrophoresis with 13% polyacrylamide gel loaded with 50 µg of membrane protein. Some erythrocyte membranes were prepared by the same method except that pH 7.5 buffer was used. Loose white pellets of the membranes were immediately processed for electron microscopy.
To avoid the use of hypotonic buffer, another method of preparation was employed. Fresh human erythrocytes were washed four times in 5 mM sodium phosphate buffer (pH 8.0)-0.15 M NaCl, and then hemolysed at room temperature by being mixed into 6 vol of 5 mM sodium phosphate buffer (pH 8.0)-0.15 M NaCl containing 1.0 x 10- g/ml saponin (ICN Nutritional Biochemicals, Cleveland, Ohio). After 10 min of treatment, the erythrocyte suspensions were diluted by adding 10 vol of the same buffer-NaCl solution without saponin. The erythrocyte membranes were then pelleted by centrifugation at 24,000 g for 12 min and washed three times in the buffer-NaCl solution. The resulting loose pellets were processed for electron microscopy.
EDTA Extraction of Erythrocyte Membranes
To elute membrane proteins at low ionic strength, the erythrocyte membranes were further treated with 0.1 mM EDTA (pH 8.0) at 37°C for 15 min as described by Fairbanks et al. (7). By this treatment, the erythrocyte membranes were fragmented into small inside-out vesicles, which were collected to be processed for electron microscopy. The degree of protein elution from the membranes was examined by electrophoresis .
Preparation of Spectrin
Erythrocyte membranes were prepared in 5 mM phosphate buffer (pH 8.0) from recently outdated blood and incubated with 2 mM adenine and 20 mM inosine in 145 mM NaCl-5 mM sodium phosphate buffer (pH 7.4) at 37°C for 3 h (16). Spectrin was extracted and partially purified by the method of Marchesi (13), except that the column of Sephadex G-200 was eluted with 0.1 mM EDTA, 2.5 mM β-mercaptoethanol, 100 mM NaCl in 25 mM Tris-HCl buffer (pH 8.0). After the partially purified spectrin was concentrated by ultrafiltration (Amicon Diaflo UM-10 membrane, Amicon Corp., Scientific Sys. Div., Lexington, Mass.), spectrin dimers were purified by gel filtration on a Sephadex CL-4B column in the same solution, according to the method of Ungewickell and Gratzer (34), and were immediately processed for electron microscopy. This preparation showed no trace of proteins other than spectrin after SDS polyacrylamide gel electrophoresis (see Fig. 5d).
Electron Microscopy

THIN-SECTIONED PREPARATIONS: Erythrocyte membranes and purified spectrin were prefixed with 2% tannic acid-2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.4) at 4°C overnight. To replace the fixative, the samples were centrifuged at 500 g for 5 min at each step until the membranes were embedded in agar. The samples were then rinsed with the buffer and postfixed with ice-cold 1% OsO4 in 0.1 M sodium cacodylate buffer (pH 7.4) for 2 h. Some samples were fixed with 1% OsO4 (in 0.1 M sodium cacodylate buffer, pH 7.4) alone, or 0.6% KMnO4 (in 0.1 M Veronal-acetate buffer, pH 7.4) alone, or 2.5% glutaraldehyde followed by 1% OsO4 (in 0.1 M sodium cacodylate buffer, pH 7.4). After being rinsed in distilled water, pellets of the fixed membranes were embedded in 1% agar, stained en bloc with 0.5% aqueous uranyl acetate for 2 h at room temperature, dehydrated in graded concentrations of ethanol, and embedded in Epon 812. Thin sections were cut with a diamond knife on an LKB Ultratome (LKB Instruments, Inc., Rockville, Md.), stained doubly with uranyl acetate and lead citrate, and examined in a Hitachi HU-12 electron microscope operated at 100 kV and (for stereoscopic observations) in a Hitachi H-700 electron microscope equipped with a goniostage operated at 200 kV.

NEGATIVE STAINING: To obtain single-layered erythrocyte membranes for negative staining, two different procedures were applied. In some preparations, fresh human erythrocytes were washed four times in 5 mM sodium phosphate buffer (pH 8.0)-0.15 M NaCl and were added in a drop to 5 mM sodium phosphate buffer (pH 8.0). By this procedure, a small fraction of the cells was hemolysed at the air-water interface and spread flat by surface tension, as originally described by Nicolson and Singer (17). The flattened membranes were picked up on carbon-coated, collodion-filmed grids. In the other preparations, fresh human erythrocytes were washed and attached to poly-L-lysine-coated grids, and then the tops of such attached erythrocytes were ripped off by a stream of 5 mM sodium phosphate buffer, pH 8.0, according to the method used by Clarke et al. (3). The membrane on the grid was fixed for 5 min with a drop of 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.4) and negatively stained with saturated aqueous uranyl acetate. Some membrane preparations were treated with heavy meromyosin (HMM) (1-3 mg/ml) on a grid for 1-10 min at room temperature before negative staining was applied.
SDS Polyacrylamide Gel Electrophoresis
Samples were subjected to electrophoresis in either 9% polyacrylamide slab gels by the method of Laemmli (11) or 5.6% polyacrylamide disc gels by the method of Fairbanks et al. (7).
Cytoskeletal Network in Thin Sections
After tannic acid-glutaraldehyde fixation, a characteristic structure was clearly seen directly associated with the cytoplasmic surface of the erythrocyte membrane (Fig. 1). Such a structure appeared as an anastomosing network of filamentous structures in tangentially or obliquely cut sections of the membrane. In cross section the structure underlying the membrane was resolved into two layers (Fig. 2a). Connected with the membrane proper was a layer of vertical components, which appeared as granular structures with sizes ranging from 10 to 13 nm. Interestingly, corresponding to the position of these granular components, the membrane proper often showed a moderate density in the middle clear zone of its trilamellar structure. In tangential section of the membrane these components were round or ellipsoidal in shape and distributed somewhat equidistantly, separated by spaces of 10 to 30 nm (Fig. 2b). The density of the granular components was difficult to estimate accurately in thin sections, for the area showing only the granular components is limited and some larger granules are likely to be aggregates of the unit components. Our rough estimation indicated that the density of the granular components ranged from 1,600 to 2,300/µm² depending on the membranes, with a mean of 1,800/µm². Abutting directly on this granular layer was a second layer composed of a horizontally arranged filamentous meshwork (Fig. 2a). This meshwork extended parallel to the membrane all over the cytoplasmic surface. The meshwork was made up of filamentous components of a relatively uniform thickness, 9 ± 1 nm SD (No. of measurements: 100), which were connected with each other to form angular meshwork units. The arrangement of the meshwork was very irregular. The distance between different filaments of the meshwork ranged from 9 to 50 nm. One could often recognize the characteristic configuration of the filamentous components as possible repeating units (Fig. 2b), similar to that of purified spectrin (27). Scattered in the filamentous meshworks were seen round spots of 25 ± 3 nm SD (No. of measurements: 100) in diameter, to each of which several filamentous components appeared to attach (Fig. 2b). The distance between adjacent spots ranged from 80 to 280 nm. Because their counterparts could not be unequivocally identified in cross sections or in negatively stained membranes, more work is needed to clarify their involvement in network formation.

FIGURE 1 Thin section of an erythrocyte membrane fixed with tannic acid-glutaraldehyde mixture. The cytoskeletal network underlying the erythrocyte membrane is clearly visualized in obliquely or tangentially cut areas of the membrane (gray areas). E, Extracellular space. I, Intracellular space. Bar, 0.1 µm. x 75,000.

FIGURE 2 High-power electron micrographs of erythrocyte membranes. E, Extracellular space. I, Intracellular space. (a) In cross section, the cytoskeletal network is clearly visible on the cytoplasmic surface of the membrane. The structure is resolved into two layers: vertical granular components (arrowheads) and horizontally arranged filamentous components (F). Note the moderate densities in the middle clear zone of the trilamellar structure, often corresponding to the positions of the granular components of the network. Bar, 0.1 µm. x 250,000. (b) In obliquely cut section, the appearance of the layered structure of the cytoskeletal network is gradually changed. Next to the homogeneous layer of the membrane proper (gray area) is seen the granular layer (see inset), then there appears the filamentous meshwork superimposed on the granular layer, and finally only the filamentous meshwork is observed. The diameter and the appearance of the filamentous components (large arrows) are very similar to those of purified spectrin. Note the scattered occurrence of round spots (small arrows), to each of which several filamentous components appear to attach. Bar, 0.1 µm. x 150,000.
In sections cut oblique to the membrane, the appearance of the layered structure was gradually changed: next to the homogeneous layer of the membrane proper was seen the granular layer where the filamentous meshwork was not visible (Fig. 2b). Then, there appeared an area where the granular components were superimposed vertically over the filamentous components. Careful analysis of both obliquely and transversely cut sections showed that the granular components were cross-bridged between the membrane proper and the filamentous meshwork. Beyond the filamentous meshwork, some fuzzy or fine filamentous materials extended from the network. However, such materials were too irregular in configuration and occurrence to be seen as structures of the erythrocyte membrane. Stereoscopic observations of obliquely or tangentially cut sections showed the cytoskeletal network in three dimensions, confirming the spatial relationship described above (Fig. 3).
The overall structure of the network underlying the erythrocyte membrane was essentially the same among the different membrane preparations examined: hypotonic preparations at pH 8.0 and pH 7.5, and the isotonic preparations at pH 8.0 by use of saponin (data not shown). In saponin-lysed preparations, many holes ~10-15 nm in diameter were found in tangential views of the membranes (data not shown). With respect to the protein composition in different preparations, the electrophoretic pattern turned out to be the same in erythrocyte membranes prepared hypotonically at both pH 8.0 and 7.5, showing the typical pattern obtained by Fairbanks et al. (7) (see Fig. 5a). A similar protein composition, except for a complete lack of band-6 protein, is recognized in the saponin preparation.
When erythrocyte membranes were fixed with OsO4 alone, KMnO4 alone, or glutaraldehyde-OsO4, a layer of fluffy material was observed on the cytoplasmic surface. In tangential section, this layer only rarely revealed a filamentous network (see reference 33), in contrast to that seen in the tannic acid-glutaraldehyde preparation.
When erythrocyte membranes were treated with 0.1 mM EDTA (pH 8.0) to extract spectrin (bands 1 and 2) and actin (band 5), the membranes were fragmented into small vesicles, which were inverted inside-out with the cytoplasmic leaflet facing outside (Fig. 4). These residual membranes varied in size up to 1 µm in diameter and took more or less spherical shapes, but usually did not form closed vesicles. SDS gel electrophoresis of this preparation indicated that ~91% of the spectrin and almost all of the actin were extracted from the membranes (Fig. 5). In thin-section electron microscopy, the layer of filamentous meshwork was not visible on the vesicular fragments, and only the granular components were left behind (Fig. 6). The granular components had a round or ellipsoidal appearance, ranging from 10 to 13 nm for the shorter diameter. The distribution density and pattern of the granular components did not seem to be changed significantly from the unextracted membrane, though they could not be estimated exactly because of the vesicular configuration of the fragments.
Purified Spectrin in Thin Sections
In thin sections, spectrin appeared as a densely packed aggregate, in which filamentous structures could often be discerned as unit constituents (see Fig. 6). They showed a relatively uniform thickness of 9 ± 2 nm SD (No. of measurements: 100), similar to that of the in situ filamentous structures; their length could not be determined because of aggregation.
Cytoskeletal Network in Negative Staining
In single-layered membrane preparations, we expected to observe directly the en face structures of the cytoplasmic surface of the disrupted membranes. The network was clearly observed on the cytoplasmic surface that faced upward on a grid, confirming the observations made on thin sections (Fig. 7). The networks were irregular in pattern and composed of filamentous structures, 8 ± 2 nm SD (no. of measurements: 100) in diameter, somewhat finer than those in thin sections. In the vicinity of the edges of such a disrupted membrane, the network was seen to be stretched in one direction so that it appeared as parallel-arranged filaments, similar in diameter to the filamentous components of the meshwork. On HMM treatment of the disrupted membranes, these filamentous components of the meshwork were never decorated with HMM to form arrowhead complexes.
FIGURE 3 Stereo pair of electron micrographs of a tangentially cut erythrocyte membrane. The section was tilted at +8° and -8°. Bar, 0.1 µm. × 100,000.
DISCUSSION
In the present report, we have demonstrated that tannic acid fixation is useful for visualizing the cytoskeletal network underlying the erythrocyte membrane in situ in thin-section electron microscopy. Since it was first introduced into electron microscopy by Mizuhira and Futaesaku (15), tannic acid has been widely used to obtain an improved fixation, which permits observations of the ultrastructure of a variety of cells, cell organelles, and protein polymers. Indeed, tannic acid enhances preservation and contrast for many cellular structures by acting as both fixative and mordant (15, 29).
The present result confirms observations made by other, different approaches: the network seen in thin section appears to be equivalent to the submembrane reticulum (8) and the Triton shell (24) observed, respectively, by scanning electron microscopy and negative staining. Taken together, these findings indicate that a cytoskeletal network undoubtedly exists beneath the human erythrocyte membrane. In addition, visualization of the cytoskeletal network in thin section has the advantage of allowing its relationship with the membrane proper to be studied, and thus dynamic aspects of the network to be analyzed during various membrane activities such as changes in cell shape and lateral movement of membrane-penetrating proteins.
The observations described here are open to several interpretations. The cytoskeletal network was composed of two layers: a granular layer and a filamentous meshwork. The filamentous meshwork appeared to be connected with the membrane proper through vertical granular components. When spectrin and actin were extracted by EDTA treatment, the horizontally arranged filamentous meshwork disappeared, whereas the granular components were left on the cytoplasmic surface of the fragmented membranes. This suggests that the filamentous meshwork is mainly constructed of spectrin and actin.
Negative staining clearly shows the presence of filamentous structures, which appear beaded and are 8-12 nm in diameter in the Triton shells (24) and 8 ± 2 nm in diameter in our single-layered membrane preparations. These values coincide well with those (8-10 nm) of the constituent filaments of the meshwork in thin section. In solution, spectrin exists as a heterodimer or as a tetramer of two heterodimers (9, 22, 34). Low-angle shadowing reveals that the heterodimer is an asymmetric, flexible molecule ~100 nm in length, and that its monomer polypeptides may lie side by side (26, 27). Our observations demonstrated that after tannic acid fixation, purified spectrin showed a relatively uniform thickness of 9 ± 2 nm, similar to that of the filamentous components of the erythrocyte membrane, though the exact length of the filamentous components was not determined in situ. Furthermore, the characteristic wavy configuration of the filamentous components was often recognized as possible repeating units. These morphological similarities lead us to the speculation that the filamentous components themselves represent spectrin.
FIGURE 6 Purified spectrin fixed in tannic acid-glutaraldehyde mixture. Spectrin shows a relatively uniform thickness, similar to that of the filamentous components of the erythrocyte membrane, though its unit length could not be determined because of aggregation. Bar, 0.1 µm. × 100,000.
On the other hand, EDTA treatment of the erythrocyte membrane invariably elutes actin together with spectrin. Since the first demonstration of the presence of actin in the erythrocyte membrane by Tilney and Detmers (33), the association of actin with the cytoskeletal network has often been proposed (20, 25, 36). However, it is not at present clear whether actin plays an essential role in forming the cytoskeletal network (4). We could not find any microfilaments corresponding to actin filaments. Our efforts to decorate the actin with HMM have failed so far in negative staining as well as in thin section (also see reference 33). If actin exists in the nonfilamentous state, as often suggested (20, 33), there may be no way, at present, to visualize it directly by electron microscopy.
We have no direct evidence for identifying the vertical components of the granular layer. However, our observations showed that these vertical components apparently connected the membrane proper and the filamentous meshwork. Furthermore, the vertical components appeared spatially related to some intramembrane densities. It has been demonstrated that the band-3 protein, a major integral protein, has a cytoplasmic projection, as visualized by freeze-etch study (see reference 32). It is probable that the vertical component comprises the cytoplasmic projection of the band-3 protein. Recently, the spectrin-binding protein(s) (band 2.1 and its derivatives) has been identified (1, 12, 37), which provides a high-affinity membrane attachment site for spectrin. Band 2.1 is shown to be associated with 10-15% of the total band-3 protein (2). On the other hand, the density of the vertical components in thin sections was roughly estimated to be 1,600-2,300/µm², though this value is likely to be an underestimate. This value corresponds to 55-75% of the density of the intramembrane particles as seen by freeze-fracture (21), each of which is believed to include the band-3 protein (35). This discrepancy may suggest that the band-2.1 protein is not associated with every individual vertical component. If this is the case, not all the vertical components may be involved in cytoskeletal network formation.
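As a rough consistency check (our own arithmetic, not part of the original report, and dependent on how the quoted range endpoints are paired), the figures above imply an intramembrane-particle density of

\[ \rho_{\mathrm{IMP}} \approx \frac{1{,}600/\mu\mathrm{m}^2}{0.55} \;\text{to}\; \frac{2{,}300/\mu\mathrm{m}^2}{0.75} \approx 2{,}900 \;\text{to}\; 3{,}100/\mu\mathrm{m}^2 . \]

Band 2.1, at 10-15% of band 3, would then account for only on the order of 300-460 sites/µm², well below the estimated 1,600-2,300 vertical components/µm²; this is the quantitative basis of the discrepancy noted above.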
In thin-section electron microscopy, dense materials underlying the plasma membrane can be observed in a limited number of cell types. Such "undercoating" structures have often been discussed with an emphasis on their possible roles in cell phenomena characteristic of the particular areas of the plasma membranes. The cytoskeletal network of the erythrocyte membrane can now be included in the same category of undercoating structures. With improved methods for specimen preparation, similar undercoating structures, however thin, may be visualized in many other cell types.
In conclusion, the erythrocyte membrane is one of the few examples that have been extensively studied in terms of molecular organization (for reviews, see references 14 and 31). The in situ visibility of the cytoskeletal network may lead to further investigation of the identity of its molecular architecture as well as of its direct association with the membrane proper. A study along these lines is being conducted in our laboratory. | 2014-10-01T00:00:00.000Z | 1980-06-01T00:00:00.000 | {
"year": 1980,
"sha1": "8414cf6bd33b4881da8ffc8836c1bdd0db36ff8e",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/85/3/567/1388667/567.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8414cf6bd33b4881da8ffc8836c1bdd0db36ff8e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
144612801 | pes2o/s2orc | v3-fos-license | Organising for power: solidarities and transformation
Political and social movements are both empowering and power-seeking: they seek both to mobilize civil society and to overwhelm state institutions. As organisations they mobilize collective power, generating solidarities and transforming social structures. As such, political organisations both challenge power and exercise power. This article addresses organizational vehicles for political change in Australia, drawing out limits and possibilities. Three organizational forms, the political party, the non-government organization (NGO), and the social movement, are discussed in terms of their capacity and limits. The social solidarities and social structures that frame political organization are debated, highlighting the impact of political conflicts over ecological change. The article ends with a discussion of the three articles that follow, drawing out shared themes and implications in terms of the post-Howard relationships between the Australian state, political parties, NGOs and movements.
Organization is the precondition for any sustained challenge to the status quo. For social and political movements, the challenge is to create the structures that can generate and express collective solidarities, producing and channeling collective power. This collective agency of movements is always conditioned by the structural power of capitalism, which in large part reproduces subordination through consent. Capitalism is in this sense a system of self-subjection, albeit buttressed by a coercive apparatus. As Antonio Gramsci argued, capitalist hegemony is the product both of coercive 'political society' and consensual 'civil society' (Gramsci 1971). Power is thereby exercised through the populace, as well as over it. Organising for power requires recognition of this double-sided character of capitalist society. Challenges to the existing order must be generative, in terms of creating the capacity to transform capitalist social relations from within. They must also be insurrectional, in terms of the capacity to take power over coercive structures. Any transformation beyond dominant power relations involves asserting a process of de-linking from capitalist values and structures. In this vein, for instance, John Holloway argues that instead of exercising power over others, movements must grasp the power to transform people's lives beyond commodification (Holloway 2002). In doing so movements must reject the idea of 'taking' power, he argues, and instead directly dissolve and supersede power structures in everyday life. The social
movement here centres on civil society, becoming the vehicle for a transformation of social relations and social attitudes. Clearly, though, struggles against capitalist hegemony cannot be confined to civil society. Liberation from social power structures may be the key aspiration, but to be realized movements must, at one moment or another, address political structures: indeed, any violation of capitalist order, whether in the 'everyday' or not, invites coercive intervention.
In Gramscian terms, the transformation of civil society requires the capture of political society. Since the earliest days of capitalism the political structures of accumulation have been the central barrier to social transformation. If anything, these structures are more central today, under late capitalist neo-liberalism. Paradoxically, state power is vital to the process of maintaining, extending and enforcing market rule, and for maintaining social order in the face of resulting volatilities. As David Harvey has argued, the driving dynamic of profit-taking today proceeds primarily through a process of 'accumulation by dispossession', where peoples are literally disinherited and displaced (Harvey 2003). While markets may exert decentred structural power, an 'empire' perhaps, they are embedded in the highly-centred power of sovereign authority (as indeed amply demonstrated by neo-conservatism).
Movements must address these political structures: 'organizing for power' is embedded in existing power relations and must seek power over such structures. It necessarily involves creating formations that will be most effective within the prevailing political conditions.
Movement organizations are thus double-sided, in both affirming and transforming social contexts: they are embedded in social relations, but at the same time reach out beyond them, to envisage and construct new possibilities. That is to say, organizing for power cannot happen in the abstract: it has to happen 'on the ground'. The resulting praxis of reflective action, where movements act on the categories that constitute them, is at the core of social change organizations as reflexive actors that literally remake society. Debates about the organization of collective power very clearly expose these dynamics.
For these various reasons, this Special Issue takes organisation as the first step in its dialogue for alternatives, beyond neo-conservatism. This article introduces the section with a survey of organizational concerns within the Australian political context. It begins by conceptualizing the broad field of social change organizations in terms of organizational categories, drawing out the primary features of the three kinds of organization that are prevalent in the Australian context - the political party, the non-government organization and the social or political movement. Discussion in the second and third sections moves beyond the three-part typology to address the question of social solidarity and social change, asking how emergent social forces and attendant structures influence the logic of organization. In the final section, some specific questions of social change organization in Australia, in the light of Australia's 'extended state' and related social and ecological structures, are addressed, drawing out themes for the ensuing four chapters.
Organisational categories
Debates about organization centre on organizational form, on a spectrum from most hierarchic to most diffuse. The quintessentially hierarchic organization - the military insurgency - demands absolute obedience on pain of death. The most diffuse organizations may collapse the movement into expressions of subjecthood and individual experience. We may then imagine a continuum from military insurgencies, to religious cults, political parties, political movements, community and non-government organisations, social movements, movement networks and affinity groups, extending to the most diffuse attitudinal or affective phenomena, such as what Walgrave and Verhulst call 'emotional movements' (2006), or what McDonald calls 'experience' movements (2006). Conflicts and disputes over political organization are played out along this continuum: anti-colonial struggles were fraught with conflicts between insurgent and electoral approaches; the first Communist International divided between centralism and voluntarism; current movements are patterned by divisions between institutionalism and autonomism (Fominaya 2007; Juris 2008).
The question of which organization is most effective may be understood as a question of 'form' rather than 'content', a tactical rather than a strategic issue. Choice of organizational form may simply hinge on the question of what kind of change is being sought, and what kind of power is being mobilized, and thus what organizational vehicle is required. Within late-capitalist societies, discussion of vehicles for social change revolves around three main organisational forms - the political party, the NGO and the social movement - each of which has its own limits and possibilities. Clearly, if the objective is to deliver votes at a ballot box then forming a political party will be the priority; if the aim is to mobilise informational power we might expect an emphasis on constituting expert status, such as through think-tank NGOs; in contrast, a more participatory social movement model would be needed to deliver popular mobilization and more broad-based social transformation.
As summarised in Table 1, each organisational form targets a specific power source and entails a particular set of assumptions about organizational structure, the meaning of democracy, the nature of the political process and the role of ideology.
Organisation and social solidarity
When understood as schematic 'form' rather than 'content', organization is interpreted instrumentally, as the medium for the message. The question of organization is thereby reduced to tactical consideration of pros and cons given the prevailing power play. We may then be agnostic, or even complacent, on the question of organization. But organisation is not simply an expression of prevailing power structures. The demands of the movement do not of themselves deliver political traction: it is the embedding of these demands in social and political praxis, through organisational structures, that achieves this. Organisation expresses the power of agency, and is inseparable from the broader strategic process of collective mobilization that defines the content of movement demands. Insofar as organization produces collectivity and solidarity, to re-coin Marshall McLuhan's quip on the media, the medium is indeed the message.
Organisation shapes and expresses movement solidarity, framing movement agendas. The Multitude, Hardt and Negri argue, is a 'distributed network', where 'each struggle remains singular and tied to its local conditions but at the same time is immersed in the common web', where the extension of a common field strengthens rather than undermines the singularities of its participants (Hardt and Negri 2004: 217). As with the universalist 'empire' that it challenges, the Multitude has no centre: instead it is diffused across social life, taking root wherever peoples assert self-rule and wrest control of their own means of social reproduction. There is no overarching metaframe for political change - only the liberation of social time from capital accumulation. The new model is presented as a synthesis of unity and difference, combining the commonalities expressed in centralised movement hierarchies with the affective engagement of autonomous difference politics; in doing so, 'the new network model of the multitude displaces both these options' (2004: 217).
These various treatments posit a dichotomy between relatively fixed political categorizations, associated with modern class and state hierarchies, and more fluid formations prevalent in late modernity. The age of global reflexivity is said to be the age of contingency, and therefore, of agency. Movement organizations have little to do in these scenarios: they are submerged by the social impulses said to overwhelm structures of rule. The result may be, as Frank Furedi has argued, a collapse into episodic expressions of public outrage, a personalization of protest where movements become an outlet for individual expression (Furedi 2004a; 2004b). Such 'emotional movements' may themselves be aligned with rather than against authorities, generating an impulse to reaction rather than transformation (Walgrave and Verhulst 2006). Indeed, periods of rapid change have historically offered opportunities for forces of reaction as well as forces for transformation. As Karl Polanyi highlighted, the Great Depression of the 1930s laid the basis for a socialization process that gave rise to fascist as well as social democratic forces (Polanyi 1944). The priorities of the present are now weighed against the developmental survival of future generations and the intrinsic responsibility to avert species extinction.
Addressing climate change thus means reinventing democracy, embedding it in new ethical frameworks for existence.
If global warming creates a new global meta-frame for movements, it also forces a direct confrontation with dominant paradigms of growth and accumulation. Given there is no possibility of adaptation to climate change, it must be directly mitigated 'at source'. Such mitigation, to be effective, forces us beyond consumer capitalism: even the most optimistic scenarios for reducing carbon intensity signal the need to curtail global growth rates (Li 2008). To use the categories developed by Boaventura de Sousa Santos, there is no 'sub-paradigmatic' pathway: only a paradigmatic transition can address the crisis (Sousa Santos 1995). Unlike the social democrats who a century ago sought 'evolutionary socialism' (see Bernstein 1961), there is no scope for a long march through the institutions. While the welfare state could to a degree socialize class division, offering an evolutionary pathway for socialists, it is increasingly difficult to lend credence to evolutionary ecologism. Even if that were possible, time has run out. Indeed, the abject failure of existing institutional orientations, beholden in the first and last instance to vested interests, is already forcing new forms of ecological transformation, indeed revolution, onto the agenda (Goodman 2009). We are faced with a profound clash of paradigms, and one that is irresistible, driven by 'nature's revenge' on capitalism, as Engels put it (see Anderson 2006). That 'revenge' cannot be managed or displaced, and as it intensifies unabated it is giving new traction to movements, generating new visions and possibilities for revolutionary change. In other contexts, the organization itself may move across categories. The effects of political regulation, though, are double-sided. The act of policing itself betrays a vulnerability, as the resort to coercion signals a failure of legitimation. The spectacle of coercive power, displayed for political effect, demonstrates the extent to which an issue has been contested and politicized. This, in itself, can be a key achievement of social movements, in forcing public attention and deliberation. As Sergio Fiedler highlights in his chapter for this Section, social movements most clearly deploy their autonomous power when they engage in mass civil disobedience, forcing the authorities to act. Where the response is coercive, the movement may be no less effective. The process of provoking and confronting coercion in the name of collective goals can have a powerful resonance, and historically has been central to the extension of political rights, of social provision, and of cultural recognition. The two great 'anti-systemic' social movements of the last two centuries that were identified by Immanuel Wallerstein - the movement for social democracy and the movement against colonialism - depended on such approaches (Wallerstein 2003).
Social and ecological change in Australia
Fiedler argues that current globalizing forces now position refugees at the cutting edge of political subjectivity, in forging and claiming new forms of agency. The extension of political community across borders, whether through migration or otherwise, has been a central question for social movements, although one that has often been deferred. That deferral now unravels in the face of cross-national flows: we now see the same phenomenon across multiple sites, including, as emphasised here, the issue of ecological survival. In these contexts, where the meta-political framework, society's historicity, comes under challenge, the social movement organisation becomes most necessary, and most effective.
Conclusion
Organisation sits at the centre of social and political change. Movement organisations transform latent potential into a social and political force capable of transformative action. Social change organisations thereby take on a constitutive role in terms of self-consciously crystallising movement solidarities. Organisational questions are thus both tactical and strategic questions, entailing the creation of alternative agendas as well as a means of enacting them. As argued here, such movements paradoxically act on society from within: by acting on society they confront immediate structures, politicizing and delegitimising power holders; by acting within society they create the power to transform social relations, making society anew. In the current period, where we sit on the cusp of a new social order forced into place by ecological exhaustion and climate change, we can expect movement organizations to move to centre stage.
As existing institutions fail in the face of mounting crisis, the creative praxis of movement organisations offers us the best foundation for the revolutionary transformations that are so urgently required.
Australia's 'extended state'
New movement organizations are built to challenge new configurations of power and exploit new possibilities of mobilisation. In this sense, movement organisations could be understood more as processes than institutions: they are relational, constituted in the dynamics of contestation. With this in mind, how can we approach the question of organising power in Australia, in the post-Howard era? As noted, public political activity in Australia is channeled into parties, NGOs and movements. Most political players in the Australian context either fit closely into one or other category, or explicitly sit across two of the categories. As outlined in Lee Rhiannon's chapter for this Section, environmental organizations are represented across all three categories. The relationship between the resulting organisational imperatives can be crucial in shaping ecological outcomes. Where, for instance, movement and party are in alignment, there can be considerable rewards; alternatively, NGO and movement can come into contradiction, with, for instance, NGOs actively displacing environment movements. In different circumstances, tendencies to expressive movement actions can be complemented by NGO discursive interventions. More centrally, as Lee Rhiannon argues, links between the political party and social movements are an absolute imperative if a Green party, indeed any progressive party, is to withstand pressures to electoral bloc-formation. For political parties dedicated to transformative social change - whether revolutionary or not - linkages to social movements are not an option, they are a necessity. Organisations themselves change over time, across organisational categories. A political movement may for instance create a political party that then takes on a life of its own, autonomous of the movement. Social-democratic parties, for instance, many of them established by trade unions, have often sought to sever organizational links with affiliated trade unions, enabling a shift to a more social-liberal party orientation. The divorce between trade unions and 'their' party or government can be initiated by either side: in NSW for instance in 2007, it was the trade union movement, represented through Unions NSW, that led a broad-based labour movement mobilization against the NSW Labor Government's proposal to privatize electricity. Demonstrating the possibilities of relatively autonomous union mobilization, in this instance in collaboration with environmental organizations, along with overwhelming support from the 2008 NSW ALP conference, the campaign was remarkably successful. With all ALP MPs voting in defiance of Labor conference, a sharp divide was opened up between party and government, dissolving the assumption of 'policy consent' that dated back to the 1940s (Kelly 2008). An internal coup followed: the Party installed a new Premier, privatization of electricity generation and distribution was shelved, and privatisation of electricity retail, along with provision for new private power stations, was quietly brought forward (Kruse 2008).
Table 1: Parties, NGOs, movements: contrasting themes
organizations, and within collective action. For organizations that straddle the logic of party, NGO, or movement, some of the resulting internal tensions can be particularly difficult, but also creative (one example is the World Social Forum). Beyond the question of categories, we may speculate what kinds of social power are most effective in particular contexts, and thus what kinds of organization are most needed, and how this may change over time. Underlying these debates is the two-way relationship between social solidarity and social structure. Social forces generate imperatives for political organization, and enable the solidarities that drive movements. Movements reorder and transform political structures and, in turn, generate new social forces. The schematic typologies of organizational form are embedded in these social dynamics and antagonisms.
Systemic solidarities enable social structures to be adapted and reproduced. Self-conscious solidarities, in contrast, arise from deliberate affiliation, and are distinct in that they create new forms of identification, association and mobilization. It is these self-consciously constructed affiliations that generate transformative collective action, from within the social formation (Bamyeh 2007). They spring from existing solidarities, but in the process gain an autonomy which transforms and supersedes systemic practices. In this respect, movements are agents of change, rather than symptoms of change: but they are not free agents: paradoxically, they act on society from within society. The new dispensation raises qualitatively different problems, and possibilities. Industrial accumulation centres on the contradiction between capital and labour, and creates a 'realisation problem', of how to sell more commodities while constraining the income of workers. Intensive accumulation, by contrast, centres on a capital-nature contradiction, and presents the problem of how to manufacture more products with less ecological impact. As James O'Connor argues, this 'second contradiction' is now dominant, forcing new eco-centric frameworks into view (O'Connor 1998). As the conflict between capital and nature comes to pattern society as a whole, all social issues become unavoidably ecological issues. The point is made most clear by the example of climate change, where all possibilities of social development and social justice, in all contexts worldwide, hinge on the capacity to reduce greenhouse gas emissions. Where Gellner saw nationalism as providing the industrial identity-fix, so today various observers, Ulrich Beck and Anthony Giddens among them, argue that modes of precautionary self-regulation emerge in response to increased exposure to global flows and ecological risk (see Beck, Giddens and Lash 1994). Reflexive modernity, in this reading, forces peoples to live with regard to their 'side-effects', and in the process, by necessity, solidarities become less fixed to territory and much more fluid and contingent. As the principal carrier of collective reflection and action, the movement moves to the core of reflexive modernity, generating the values and affiliations that underpin solidarities. In the process, as Mohamed Bamyeh argues, the emergent movements recombine existing 'primary values' - whether material, universal, spiritual, humanist or emancipatory - to produce the new solidarities and organizations capable of enacting them (2007). For some, the imperative for fluid affiliations supersedes solidarity. Kevin McDonald for instance argues that solidarity requires a single focus for affiliation, and cannot express the capacity to move across identifications that has emerged at the core of movement politics. He argues that the imperative for 'fluidarity', not solidarity, centres the movement on affinity and experience. It dictates diffuse movement organization with loose horizontally-aligned networks that allow for convergence of shared values rather than coordination or control. Here, the very notion of a movement that expresses a singular 'collective identity' is redundant: identification is meshed across a range of affiliations and is enacted not so much through movement organizations as through embodied and affective experience in movement actions, what McDonald calls the 'public experience of self' (McDonald 2002; 2006).
Organisation must therefore reflect movement values: to claim legitimacy for social change, the movement's organizational practices must be in accord with its values. Put simply, movements must 'practice what they preach'. Organisations are not empty shells, they are 'social laboratories' that prefigure aspirations. Movement organisations produce movement agendas, and are in this sense the crucible for movement values and solidarities. Unlike more embedded or systemic solidarities, movement solidarities have to be consciously generated to produce social change, and organization plays a key role in this. The self-conscious process of movement solidarity is generative - it builds political community. As expounded by 'classical' sociologists, systemic solidarities may emerge from the relationships between 'traditional' structures and 'modernising' systems, responding to emergent modes of production, divisions of labour, or rationalization pressures. The problem for movement solidarities is thus a problem of emergence - how can movements generate the solidarities to produce the required value shift? The answer to this question lies within the dynamic relationships between structural conflicts and movement agency. Given that we live under capitalist social relations, it is capitalism's structural conflicts that set the stage for movement agency: understood historically, those conflicts change over time, generating contrasting modes of movement and organisation.
Organisation and social structure
Specific forms of capitalism have very different social effects, and set the framework for different forms of movement organization. At the same time, movements act on and transform capitalist relations, framing the conditions of emergence for succeeding forms of accumulation. Following the Dutch political economist Kees van der Pijl, we can argue that industrial accumulation channeled movements into class-based solidarities, particularly through trade unions (Van Der Pijl 1998). The revolt against labourism and social democracy in the later decades of the Twentieth Century, from the so-called 'new social movements', then prefigures more 'flexible' intensive modes of accumulation. Such movements force new mutations in accumulation, although still in formation, as perhaps most clearly (and astoundingly) played out in the emergence of global carbon trading, set to become the world's largest commodity market. Notwithstanding a continued faith in 'climate-proofed' development, if certain not-so-far-off tipping points are breached we have to expect nothing less than planetary developmental collapse. As Van der Pijl argues, the social is now unalterably subsumed into the ecological. Possibilities for survival under this 'intensive' capitalism hinge on the movement's capacity to generate solidarities and to construct and enact the required values. Global interconnectedness arising from ecological crisis creates new imperatives for reflexivity, which create new forms of ecological solidarity. Just as industrial capitalism created new sets of interchangeable roles, what Ernest Gellner called the 'musical chairs society', so today we see 'intensive' or informational capitalism forcing the emergence of new solidarities. To a significant degree, though, these modes of political mobilization are not new. As Calhoun notes, movements have historically combined affective or expressive forms of mobilization with cognitive and instrumental politics: movements of the early Nineteenth Century in Britain, such as the Chartists, were no less
identity-centred and expressive than current-day movements (Calhoun 2000). There is change in these continuities though, not least as capitalism's freedoms and society's aspirations conflict and react to produce new dispensations. With 'neo-capitalism' breaking the constraints of accumulation in a new wave of intensive commodification, counterscripts in the form of social movement critiques force new agendas into view. In the process, the climate crisis (paradoxically) agglomerates mass individual reflexivity into revolutionary agency.
In its wake, the crisis literally subsumes all other political agendas. Indeed, there is nothing contingent about global warming. In this respect, we can predict, and perhaps witness, a growing global alignment of movements under the singularity of climate crisis. That is not to say there is a uniform response: there are multiple ways of addressing the crisis, and multiple possibilities for social change arising from it. Significantly, though, just as class has done in the past, climate is likely, over time, to emerge as a central fulcrum on which solidarities emerge and converge. Emergent climate action movements are already constructing deep roots. In the first instance they build on a deep-seated affective crisis - an epistemological crisis perhaps. Where Touraine addressed the counter-cultures of the 1960s and 70s in confrontation with the conformity of 'programmed' societies, the current climate action movements address a societal contradiction that literally pits capital against life itself. Fluidarity, however, is in direct conflict with the notion that we are living under ecological crisis: instead of disaggregating movement identities, the advancing crisis of global warming is undoubtedly more totalizing than any of the preceding crises of modernity. In this context, given the scale and imminence of the problem, a deeply transformative and affective response is to be expected. As with movements of the past, though, such responses rest on a shared cognitive interpretation of the causes and effects of climate change. Global warming thereby generates a shared and now globalised political frame, albeit inflected with different perspectives. The effects of that reframing can be profound: on the question of democracy for instance, the crisis of survival produces a new demos that extends across generations, and across species.
NGOs may, advertently or otherwise, find themselves marginalizing and policing movements. Relatively disaggregated environmental NGOs, for instance, actively compete with one another and with movements to capture political space: NGOs often deliberately position themselves as the 'reasonable' voice on an issue, to capture the middle ground in a policy debate, and in the process de-fang environmental movements that may compete for media exposure or membership. An important illustration of this process is the debate over climate change, where a number of environmental NGOs have deliberately taken positions that are known to be unsustainable, simply to gain an entrée into the public debate. In such circumstances the NGO operates as a 'minesweeper' for the government, clearing the pathway of ideological obstacles, enabling the maintenance of a clearly non-viable status quo. Another important aspect of policing, and of the extended state, is the imposition of specific government regulations that delimit political categories, one from the other. Such regulations can directly shape the actions of potential players: in the case of the very large charitable sector for instance, there are very clear injunctions against what are deemed by the Australian Tax Office to be unacceptable 'political activities'.
Unions are thereby contained within a highly restricted field of protected industrial activity. In the context of 'enterprise'-based collective bargaining, unions have often become defined as interest groups, primarily acting to provide a service for their membership rather than as part of a movement that seeks to transform broader social relations. | 2018-12-05T10:57:23.212Z | 2009-08-26T00:00:00.000 | {
"year": 2009,
"sha1": "b951ba96d33aed53059c9c134d4cfbde08d58000",
"oa_license": "CCBY",
"oa_url": "https://epress.lib.uts.edu.au/journals/index.php/mcs/article/download/1087/1304",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "060040ccda7eec57a0db3a40b6752ddc5d05ee6e",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |