Brainstem sources of cardiac vagal tone and respiratory sinus arrhythmia
Key points

Cardiac vagal tone is a strong predictor of health, although its central origins are unknown. Respiratory‐linked fluctuations in cardiac vagal tone give rise to respiratory sinus arrhythmia (RSA), with maximum tone in the post‐inspiratory phase of respiration. In the present study, we investigated whether respiratory modulation of cardiac vagal tone is intrinsically linked to post‐inspiratory respiratory control using the unanaesthetized working heart‐brainstem preparation of the rat. Abolition of post‐inspiration, achieved by inhibition of the pontine Kölliker‐Fuse nucleus, removed post‐inspiratory peaks in efferent cardiac vagal activity and suppressed RSA, whereas substantial cardiac vagal tone persisted. After transection of the caudal pons, part of the remaining tone was removed by inhibition of the nucleus of the solitary tract. We conclude that cardiac vagal tone depends upon at least three sites of the pontomedullary brainstem and that a significant proportion arises independently of RSA.

Abstract

Cardiac vagal tone is a strong predictor of health, although its central origins are unknown. The rat working heart‐brainstem preparation shows strong cardiac vagal tone and pronounced respiratory sinus arrhythmia. In this preparation, recordings from the cut left cardiac vagal branch showed efferent activity that peaked in post‐inspiration, ∼0.5 s before the cyclic minimum in heart rate (HR). We hypothesized that respiratory modulation of cardiac vagal tone and HR is intrinsically linked to the generation of post‐inspiration. Neurons in the pontine Kölliker‐Fuse nucleus (KF) were inhibited with bilateral microinjections of isoguvacine (50–70 nl, 10 mM) to remove the post‐inspiratory phase of respiration. This also abolished the post‐inspiratory peak of cardiac vagal discharge (and cyclical HR modulation), although a substantial level of activity remained. In separate preparations with intact cardiac vagal branches but sympathetically denervated by thoracic spinal pithing, cardiac chronotropic vagal tone was quantified by comparing HR with its final level after systemic atropine (0.5 μM). Bilateral KF inhibition removed 88% of the cyclical fluctuation in HR but, on average, only 52% of the chronotropic vagal tone. Substantial chronotropic vagal tone also remained after transection of the brainstem through the caudal pons. Subsequent bilateral isoguvacine injections into the nucleus of the solitary tract further reduced vagal tone: remaining sources were untraced. We conclude that cardiac vagal tone depends on neurons in at least three sites of the pontomedullary brainstem, and much of it arises independently of respiratory sinus arrhythmia.
Introduction
From at least the middle of the 19th century onward, it has been known that the vagal supply to the heart tonically holds down heart rate (HR) (Weber, 1846; Bezold, 1858). Additionally, vagal activity provides control of atrioventricular conduction (Massari et al. 1995) and ventricular excitability and contractility (Machhada et al. 2015, 2016). The central origins of cardiac vagal tone remain essentially unknown. The principal preganglionic vagal motoneurons that regulate HR have myelinated axons (B fibres) and are located mainly in the external formation of the nucleus ambiguus (NA) (McAllen & Spyer, 1976; Nosaka et al. 1979, 1982; Hopkins et al. 1996). In slice preparations of the medulla in vitro, vagal motoneurons in the NA identified by retrograde transport of dye from the epicardium show little or no ongoing spike activity (Mendelowitz, 1996; Dergacheva et al. 2010). Tonic activity of vagal motoneurons in vivo therefore presumably arises from ongoing synaptic input, driven directly or indirectly from areas of the brain that are disconnected in that preparation. Numerous projections from the medulla, pons and hypothalamus to cardiac vagal motoneurons (or regions of the NA known to contain them) have been identified (Herbert et al. 1990; Standish et al. 1995; Neff et al. 1998; Piñol et al. 2012; Poon & Song, 2014), although it remains unclear which (if any) contribute to ongoing cardiac vagal tone.
Cells in the other vagal preganglionic nucleus (DMNV) have unmyelinated axons (Cheng et al. 2004). Although electrical stimulation of unmyelinated vagal efferents in cats and rats can cause modest slowing of the heart (Jones et al. 1995), electrical stimulation of the DMNV in cats was not found to slow the heart (Geis & Wurster, 1980). The main cardiac function of DMNV neurons is to exert inhibitory control of the excitability and inotropic state of the ventricles (Geis & Wurster, 1980). Accordingly, selective pharmacogenetic inhibition of DMNV neurons in vivo was found to alter ventricular function but not HR (Machhada et al. 2015, 2016), indicating that the DMNV does not provide tone to the cardiac pacemaker. Variations in vagal chronotropic drive give rise to the respiratory sinus arrhythmia (RSA), raising HR in inspiration and decreasing it during expiration (Japundzic et al. 1990). Sympathetic effects on HR are too sluggish to contribute significantly to the RSA (Warner & Cox, 1962). Hypotheses concerning the physiological significance of the RSA include optimization of ventilation-perfusion matching in the lung, minimization of cardiac energy expenditure and optimal maintenance of blood CO2 partial pressure (Yasuma & Hayano, 2004; Sin et al. 2010; Ben-Tal et al. 2012). Clinically, 'high frequency' or beat-to-beat HR variability, which are essentially measures of RSA, are regarded as a direct index of ongoing cardiac vagal tone (Chess et al. 1975; Akselrod et al. 1981; Pomeranz et al. 1985; Pagani et al. 1986). Reduced RSA is an independent predictor of mortality in chronic heart failure and after myocardial infarction (La Rovere et al. 1998). It is clear that both central and reflex mechanisms contribute to this waxing and waning of vagal tone over the respiratory cycle (Anrep et al. 1936a, b). The relative contributions of each are still a matter of debate (Taha et al. 1995; Eckberg, 2009; Karemaker, 2009), although it is agreed that there is an important central component linked to central respiratory drive (Anrep et al. 1936b; Eckberg et al. 1980; Gilbey et al. 1984; Taha et al. 1995; Simms et al. 2007).
It is accepted that the 'kernel' of the respiratory rhythm generator is within the ventral respiratory column of the medulla (Feldman & Del Negro, 2006; Ramirez et al. 2012; Smith et al. 2013). The generation of a three-phased, eupnoeic respiratory motor 'pattern', however, depends upon the integrity of connections between the medulla and pons (Markwald, 1887; Lumsden, 1923; Rybak et al. 2004, 2007; Dutschmann & Dick, 2012; Poon & Song, 2014). Transection and lesion studies using the in situ perfused working heart and brainstem preparation of the rat (WHBP) have demonstrated that pontine nuclei are essential to the expression of the post-inspiratory phase (Dutschmann & Herbert, 2006; Smith et al. 2007). In the absence of these nuclei, synaptic mechanisms leading to the termination of the inspiratory phase (the 'inspiratory off-switch') are delayed: spontaneous inspiratory phrenic nerve discharge takes the form of an apneustic 'square-wave' and sequential activation of the motor outputs that drive secondary respiratory muscles (e.g. the laryngeal adductor muscles) is abolished. Focal inhibition of the pontine Kölliker-Fuse nucleus (KF) also abolishes post-inspiration, suggesting that this is a crucial locus of pontine post-inspiratory control (Rybak et al. 2004; Dutschmann & Herbert, 2006; Smith et al. 2007; Dutschmann & Dick, 2012; Bautista & Dutschmann, 2014).
Studies on vagal tone have been significantly hampered by the use of general anaesthesia and invasive investigations. To a greater or lesser degree, anaesthetics suppress vagal tone (Inoue & Arndt, 1982), making its origins difficult to study. The WHBP, which is decerebrate but unanaesthetized, offers a simplified preparation that shows abundant cardiac vagal tone with a physiological pattern of RSA (Paton, 1996;Simms et al. 2007;Pickering et al. 2008). Under the conditions of the WHBP, cardiac vagal branch activity (CVBA) peaks, and HR is lowest, in the post-inspiratory period (Simms et al. 2007;Pickering et al. 2008). Furthermore, transection of the pons, which separates the respiratory kernel from the KF, abolishes RSA (Baekey et al. 2008). Based on these data, we hypothesized that respiratory modulation of cardiac vagal tone and HR is intrinsically linked to the generation of post-inspiration.
We predicted that focal inhibition of the KF would abolish respiratory-linked fluctuations in cardiac vagal activity, as well as reduce, or perhaps abolish, cardiac vagal tone. We have investigated the ongoing contributions of selected pontine and medullary nuclei, revealing their common and distinct influences on RSA and on chronotropic vagal drive.
Methods
All experimental procedures were performed in accordance with the Australian code of practice for the care and use of animals for scientific purposes and conform with the principles of international regulations. This study was approved by, and carried out in accordance with guidelines put in place by the ethics committee of the Florey Institute of Neuroscience and Mental Health, Melbourne, Australia.
WHBP
Experiments were performed using the arterially perfused in situ brainstem preparation (Paton, 1996) and used juvenile Sprague-Dawley rats of either sex, aged 17-29 days. All basic procedures were conducted in accordance with the protocols described previously (Dutschmann & Herbert, 2006; Farmer et al. 2014). Briefly, juvenile rats were anaesthetized with isoflurane, bisected below the diaphragm and immersed in ice-cold Ringer solution (in mM: 125 NaCl, 24 NaHCO3, 2.5 CaCl2, 1.25 MgSO4, 4 KCl, 1.25 KH2PO4 and 10 D-glucose, with 1.25% Ficoll; Sigma, St Louis, MO, USA). Animals were decerebrated at the level of the superior colliculus and cerebellectomized to expose the brainstem. The lungs were excised, leaving a small amount of parenchymal tissue around the cut left bronchus. The thymus was resected to gain access to the left cardiac vagal branch. Taking care to preserve the left phrenic nerve, the diaphragm was resected, except for a small part around the oesophagus, which was tied to a length of silk thread. The left phrenic nerve was isolated, ready for recording. Preparations were transferred to a recording chamber and perfused via the descending aorta with Ringer solution bubbled with 95% O2 and 5% CO2 (carbogen) at 31°C. Phrenic nerve activity was recorded with a suction electrode and this signal was used when determining the appropriate perfusion flow and pressure values to obtain a eupnoeic pattern of central respiratory drive. The eupnoeic pattern consisted of spontaneous rhythmic discharges lasting ≤1 s, showing a ramped onset and a rapid termination (Paton, 1996; Pickering & Paton, 2006). In different preparations, this required final flow rates of 18-22 ml min⁻¹. In some experiments, a bolus of sodium cyanide (0.1 ml, 0.1%) was added to the perfusate to stimulate arterial chemoreceptors. Similarly, arterial baroreceptors were activated by transiently (2-3 s) setting the perfusion pump to its maximum setting (Paton & Butcher, 1998).
Nerve recording
Activity was recorded from the cut proximal ends of isolated nerves using suction electrodes. In all experiments, phrenic nerve activity (PNA) was recorded monopolarly with respect to the bath ground electrode. The ECG was recorded from the ear bars used to immobilize the animal or from an electrode placed directly on the surface of the myocardium.
Quantification of cardiac vagal tone
Nerve recordings. To record from the cardiac vagal branch, the animal was positioned semi-prone and the oesophagus gently retracted caudally by its attached thread. The left thoracic vagus nerve could be observed parallel to the oesophagus by manipulation of the left bronchial root. The thoracic vagus was carefully dissected away from the surrounding tissue to expose its branches. The left cardiac vagus branch was usually found just caudal to the left recurrent laryngeal nerve, projecting between the left precava and pulmonary artery to innervate the heart (Burkholder et al. 1992;Simms et al. 2007). When necessary, these vessels were removed to improve access. The cardiac vagal branch was then separated from the surrounding tissue and cut as distally as possible. Cardiac vagal branch activity (CVBA) was recorded differentially between a fine silver wire placed at the origin of the cardiac branch and a suction electrode distally. Suction electrode tips were broken to size and, once in the electrode, the cardiac vagal branch was lifted from the surrounding circulating fluid into the air. Nerve signals were amplified (differential amplifier DP-311; Warner Instruments, Hamden, USA; Gain: 1000-10 000×), band-pass filtered (100 Hz to 5 kHz), digitized at 2 or 5 kHz (PowerLab/16SP; ADInstruments, Sydney, Australia) and recorded using LabChart, version 7/8 (ADInstruments, Sydney, Australia).
The viability of the cardiac vagal branch recording was tested with an excitatory baroreceptor challenge by briefly (2-3 s) increasing the perfusion pump setting to its maximum, taking care that rising fluid levels caused no artefact. Preparations that failed this test were either repositioned until the test was successful or discarded. At the end of each experiment, the perfusion pump or the carbogen gas was switched off and recordings continued until nerve activity ceased. The residual noise level was used to determine the threshold for subsequent analysis of discriminated nerve activity.
HR.
In some experiments, the cardiac vagal branches were left intact and fluctuations in cardiac vagal tone were assessed indirectly as changes in HR. In these experiments, the thoracic spinal cord was destroyed by pithing with a 16-G needle repeatedly inserted into the spinal cavity via the distal end of the severed spinal cord. In each case, the required length of needle was determined using the spinal processes of the proximal thoracic vertebrae as a measure prior to insertion. Thus, the influence of the sympathetic nervous system was effectively removed at the same time as ensuring that the phrenic motoneurons, located in the caudal cervical spinal cord, were not damaged (as indicated by preserved phrenic nerve activity).
Microinjections, transections and denervations
Local microinjection of drugs (50-70 nl) was performed using three-barrel borosilicate glass micropipettes, broken to a tip diameter of ∼20 µm. The individual barrels were filled with L-glutamate (10 mM in saline), the GABA A receptor agonist isoguvacine (10 mM in saline) or Chicago sky blue (2% in glucose-free Ringer solution). The volume of each microinjection was monitored microscopically by movement of the liquid meniscus in relation to a calibrated scale attached to the pipette. Landmarks on the dorsal surface of the brainstem were used to identify the rostrocaudal and mediolateral co-ordinates of the KF. As described previously (Dutschmann & Herbert, 2006), unilateral stimulation of the KF with L-glutamate produces an extension of the post-inspiratory phase, and this effect was used to select sites for subsequent inhibition with isoguvacine. If bilateral isoguvacine injections failed to prolong inspiration by > 200%, they were regarded to have missed the critical region (Dutschmann & Herbert, 2006) and the data were excluded. In several control experiments, isoguvacine was injected at a pontine site 1 mm medial to the KF after its identification by L-glutamate injection. In all cases, isoguvacine injection sites were marked by pressure-injecting ∼100 nl Chicago sky blue.
Caudal pontine transections were performed using the caudal extent of the cerebellar paraflocculi as a reference point. A scalpel blade was inserted downwards with the cutting edge at a rostral angle of ∼10°, such that the brain stem was transected at a level near the rostral pole of the facial nucleus.
After pontine transection, nucleus tractus solitarii (NTS) neurons were inhibited by 2 × 100 nl bilateral microinjections of isoguvacine at two rostrocaudal levels: the first at the level of the calamus scriptorius, 300 µm lateral to midline and 500 µm ventral to the brain surface; and the second, 500 µm rostral to calamus scriptorius, 500 µm lateral and 500 µm ventral.
In six experiments, the contributions of peripheral chemoreceptor and/or baroreceptor afferents to ongoing cardiac vagal tone were assessed by cutting the carotid sinus and aortic depressor nerves (Pickering et al. 2008). The abolition of previously intact responses to raised perfusion pressure and to sodium cyanide was taken as confirmation of successful denervation. It was often necessary to sever the superior laryngeal nerves to complete the aortic baroreceptor denervation.
Histological assessment of microinjection sites and transections
Brainstems were immersion-fixed in 4% paraformaldehyde for several days, transferred to 30% sucrose in PBS for at least 24 h and then serially sectioned at 50 µm using a freezing microtome and counterstained with neutral red. The locations of microinjections, as indicated by Chicago sky blue, were documented on schematic drawings of coronal sections containing the KF. The level and completeness of transections were confirmed by examination of sagittal sections.
Statistical analysis
Data were exported from LabChart as text files and imported into Spike2 (Cambridge Electronic Design, Cambridge, UK). Instantaneous HR (beats min⁻¹) was calculated using events triggered by the P wave of the atrial ECG because this provides the least confounded measure of chronotropic vagal tone. To quantify CVBA, action potentials rising above the noise threshold (established at the end of the experiment as described above) were counted. Mean values for HR and CVBA frequency were generated over representative sections of the experimental trace encompassing at least 10 respiratory cycles. Over the same period, event-triggered means (triggered by the rapid fall in phrenic activity which occurs at the end of inspiration) of HR and CVBA were generated and displayed as multiples of the mean frequency over the baseline averaging period. The magnitude of the RSA was defined as the difference between the maximum and minimum HR as calculated from the event-triggered mean. Similarly, the magnitude of respiratory-linked fluctuations in CVBA was calculated as the difference between the maximum and minimum frequencies of firing.
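The quantification described above can be sketched in a few lines of R. This is only an illustration of the analysis steps, not the original scripts: the function names and the input vectors (p_times for P-wave event times in seconds, trig_times for the inspiratory-offset triggers) are ours.

```r
# Instantaneous HR (beats/min) from P-wave event times (seconds)
hr_from_p <- function(p_times) {
  data.frame(t = p_times[-1], hr = 60 / diff(p_times))
}

# Event-triggered mean of HR around each trigger (end of inspiration),
# expressed as a multiple of the mean HR over the analysis period
triggered_mean_hr <- function(hr, trig_times, window = c(-1, 3), dt = 0.05) {
  lags <- seq(window[1], window[2], by = dt)
  mat <- sapply(trig_times, function(t0)
    approx(hr$t, hr$hr, xout = t0 + lags, rule = 2)$y)
  data.frame(lag = lags, hr_rel = rowMeans(mat) / mean(hr$hr))
}

# RSA magnitude: difference between the maximum and minimum of the
# event-triggered mean, expressed back in beats/min
rsa_magnitude <- function(hr, trig_times) {
  m <- triggered_mean_hr(hr, trig_times)
  diff(range(m$hr_rel)) * mean(hr$hr)
}
```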
In experiments where CVBA was recorded, the effects of baroreceptor activation and unilateral microinjection of glutamate into KF on CVBA and HR were quantified. Peak CVBA (time constant = 0.25 s) was averaged over five respiratory cycles prior to baroreceptor activation or injection of glutamate, and then compared with peak CVBA immediately following the stimulus. The magnitude of bradycardia was assessed by comparison of the mean HR over the same five respiratory cycles with the minimum value observed after the stimulus. In the case of KF glutamate microinjections, the total duration of the respiratory cycle (i.e. the time between onset of a spontaneous phrenic nerve burst and the onset of the next) immediately following microinjection was compared with the mean duration of the five preceding respiratory cycles.
Chemoreceptor activation of the cardiac vagal branch was assessed by comparing mean peak CVB firing frequency (over five respiratory cycles prior to the administration of 0.1 ml of 0.1% sodium cyanide) with peak CVB firing frequency over the four or five respiratory cycles that followed.
Statistical analyses were carried out using the R statistical programming environment (RStudio, version 0.99.902; RStudio Inc., Boston, MA, USA). The normality of each data set was established using the Shapiro-Wilk test for normality. Where more than two treatments were compared, data were analysed using a repeated measures ANOVA before post hoc analyses. The statistical significance of changes in mean HR, magnitude of RSA, respiratory cycle duration, mean CVBA and respiratory fluctuations in CVBA was assessed by a paired t test. Where appropriate, P values were adjusted for multiple comparisons using the Bonferroni method.
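A minimal R sketch of the paired-comparison part of this workflow, assuming hypothetical vectors hr_baseline, hr_kf and hr_atropine that hold one mean HR per preparation for each treatment:

```r
# Normality of the paired differences (Shapiro-Wilk)
shapiro.test(hr_baseline - hr_kf)

# Paired t tests for two comparisons of interest,
# with Bonferroni adjustment for multiple comparisons
p_raw <- c(
  baseline_vs_kf = t.test(hr_baseline, hr_kf, paired = TRUE)$p.value,
  kf_vs_atropine = t.test(hr_kf, hr_atropine, paired = TRUE)$p.value
)
p.adjust(p_raw, method = "bonferroni")
```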
Results
Experiments were performed on three variations of the rat WHBP (Fig. 1).
Cardiac vagal branch recording
Cardiac vagal branch recordings were made in 12 preparations (Fig. 2). Even though the left cardiac vagal branch had been cut, these preparations displayed a clear RSA (mean maximum − minimum HR difference over one respiratory cycle of 8.9 ± 2.1 beats min⁻¹). CVBA showed marked respiratory modulation, increasing during the inspiratory period with minimal discharge in late expiration. Respiratory cycle-triggered means of CVBA and HR confirmed in 11 of 12 preparations that CVBA increased during inspiration and peaked immediately after the termination of phrenic discharge, in the post-inspiratory phase. Peak cardiac vagal branch activity preceded minimum HR by 527 ± 43 ms.
[Figure 1 legend: Red crosses indicate sites of microinjections. In the first preparation (A), a cardiac branch of the left thoracic vagus was isolated and severed. This allowed direct, differential recordings of the electrical activity of the neurons that project to the heart within this branch. Because making these recordings required unilateral vagal denervation of the heart, these experiments were repeated in preparations with cardiac vagal branches left intact (B). In this case, the thoracic spinal cord was destroyed, removing the confounding chronotropic influence of the cardiac sympathetic nerves. Finally (C), the relative contribution of pontine and medullary sources of cardiac vagal tone was assessed by transection of the brainstem at the level of the caudal pons. In each case, the atrial ECG was recorded from the surface of the atrium and HR was calculated using the interval between adjacent P waves. CVB, cardiac vagal branch; KF, Kölliker-Fuse; NA, nucleus ambiguus; NTS, nucleus tractus solitarii; SN, cardiac sympathetic nerve.]

As described previously (Dutschmann & Herbert, 2006), unilateral microinjection of L-glutamate into the KF (n = 10; ipsilateral to the remaining, intact cardiac vagal branch) produced an increase in the respiratory period (control: 4.4 ± 0.4 s; glutamate: 13.7 ± 2.5 s; P = 0.002) (Fig. 3C). Cardiac vagal branch activity was maintained throughout the extended post-inspiratory period and fired at rates that were slightly higher than those observed during post-inspiration prior to microinjection (control: 100.3 ± 10.6 Hz; glutamate: 114.3 ± 10.3 Hz; P = 0.015). This was also associated with a bradycardia (control: 270.3 ± 11.7 beats min⁻¹; KF glutamate: 243.0 ± 17.7 beats min⁻¹; P = 0.040).
Bilateral inhibition of the KF with isoguvacine altered the pattern of phrenic nerve discharge to one of apneusis (n = 8) (Figs 4 and 5), as described previously (Dutschmann & Herbert, 2006). The post-inspiratory peak in CVBA disappeared, although brief dips in activity occurring in expiration sometimes persisted. These dips, when present, were not sufficient to induce any measurable tachycardia (Figs 5 and 6). Group mean data are shown in Figs 5 and 6.
Cardiac vagal tone in WHBP with intact vagi (and without sympathetic drive)
Because CVBA may include action potentials of neurons that are not involved in chronotropic control, and because recording CVBA requires unilateral vagal denervation of the heart, parallel experiments were performed by measuring HR in WHBP with intact cardiac vagal connections. To avoid any confounding action of sympathetic nerves, the sympathetic outflow was disabled by destruction of the thoracic spinal cord (Fig. 7). Evidence for an effect of the following treatments upon both the magnitude of RSA and HR was found (P = 0.0001 and 0.0069, respectively; one-way repeated measures ANOVA).
In seven of these preparations, bilateral inhibition of KF neurons with isoguvacine reduced RSA from 20.7 ± 8.1 beats min⁻¹ to 2.5 ± 1.0 beats min⁻¹ (P = 0.001) and increased HR (baseline: 214.6 ± 9.6 beats min⁻¹; after isoguvacine: 256.1 ± 20.8 beats min⁻¹; P = 0.025). Subsequent systemic administration of atropine (0.5 µM) further reduced RSA to 0.3 ± 0.1 beats min⁻¹ (P = 0.0044 compared to KF isoguvacine). This resulted in a further increase in HR to 294.7 ± 16.7 beats min⁻¹ (P = 0.065 compared to KF isoguvacine). Thus, inhibition of the KF bilaterally removed ∼52% of the vagal chronotropic tone. Examination of histological samples confirmed that all injections were in the immediate vicinity of the KF (Fig. 7C), with two exceptions: in animal 2, the ipsilateral injection was found to be in the intertrigeminal region, caudal and ventral to the KF; the contralateral injection site for animal 4 could not be recovered. These results were included because the injections evidently spread sufficiently to the KF to fulfil the physiological criteria for a successful injection (see Methods).
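For clarity, the ∼52% figure follows directly from the group means quoted above, with the total vagal chronotropic tone taken as the difference between the post-atropine and baseline HR:

\[
\frac{256.1 - 214.6}{294.7 - 214.6} = \frac{41.5}{80.1} \approx 0.52.
\]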
To track down remaining sources of cardiac vagal tone, the caudal pons was transected in six other sympathetically-disabled preparations before inhibition of the NTS. Evidence for an effect of the following treatments upon HR was found (P < 0.0001; repeated measures ANOVA). Pontine transection produced HR changes of variable magnitude and direction (baseline: 205.6 ± 10.2 beats min⁻¹; pontine transection: 233.9 ± 8.2 beats min⁻¹, P = 0.307) (Fig. 8A), although substantial vagal tone remained (final HR after atropine: 296.1 ± 7.7 beats min⁻¹; P = 0.0041 compared to pontine transection) (Fig. 8A). Approximately half of that tone was removed by inhibition of the NTS (HR after pontine transection and inhibition of the NTS: 261.5 ± 8.0 beats min⁻¹; P = 0.0012 compared to HR after atropine) (Fig. 8A).
To test whether any of the NTS-dependent component of vagal tone could be attributed to ongoing afferent inputs from arterial chemoreceptors or baroreceptors, we performed denervation experiments in six further sympathetically-disabled preparations. Bilateral section of the carotid sinus and aortic depressor nerves caused no increase in HR (baseline: 206.1 ± 20.6 beats min⁻¹; denervated: 198.2 ± 23.0 beats min⁻¹; P = 0.4573). In all cases, effective denervation was confirmed by abolition of baroreceptor and chemoreceptor reflexes (not shown).
Finally, and in these same preparations, the possibility that some vagal tone might arise from within the cardiac ganglion itself was addressed. The cervical vagi were cut bilaterally and, once the new baseline HR had stabilized, atropine was added to the perfusate. This did not increase HR (vagotomized: 264.0 ± 11.1 beats min⁻¹; atropine: 259.1 ± 10.7 beats min⁻¹; P = 0.162).
Discussion
The WHBP shows a consistent pattern of RSA, with minimum HR occurring in post-inspiration (Potts et al. 2000; Pickering et al. 2003; Simms et al. 2007; Baekey et al. 2008). As expected, this was maintained in preparations where the sympathetic drive to the heart had been removed, and was abolished by atropine, confirming the vagal origin of the RSA. RSA was still apparent after unilateral denervation of the heart to permit cardiac vagal branch recordings. This may be attributed to the fact that the cardiac pacemaker receives bilateral vagal inputs in the rat (Sampaio et al. 2014). As described previously (Potts et al. 2000; Simms et al. 2007), direct recordings from the cardiac vagal branch showed a corresponding peak in activity at the onset of post-inspiration; we found this to occur 0.53 s before the nadir in HR on average. This may be compared with the vagal neuroeffector delay measured at 37°C in adult rats in vivo of 200-300 ms (Cividjian et al. 2011), although a longer delay is expected in the WHBP because of its lower temperature. Interestingly, although peak cardiac vagal branch firing occurred in post-inspiration, its activity had already begun to build up during late inspiration. This pattern closely matches that of the vagal inputs to cardiac ganglion cells in the WHBP, supporting the view that their functional target was indeed the heart. The inspiratory build-up was also observed in cardiac preganglionic vagal neurons of the nucleus ambiguus in vivo, although without the post-inspiratory surge, which was perhaps suppressed by anaesthesia (Rentero et al. 2002). As might be predicted, the post-inspiratory peak in cardiac vagal activity of the WHBP disappeared when post-inspiration was abolished by KF inhibition (Dutschmann & Herbert, 2006) and RSA was suppressed. Yet mean cardiac vagal branch activity was not much changed, and substantial chronotropic vagal tone remained.
Neurons in the intermediate part of the KF are essential for the expression of the post-inspiratory phase (Dutschmann & Herbert, 2006) and their descending respiratory actions are excitatory (Dutschmann & Dick, 2012). These excitatory (glutamatergic) neurons project directly to the region of the nucleus ambiguus (Yokota et al. 2015). The simplest interpretation of our findings would thus be that KF post-inspiratory neurons also send direct excitatory connections to the nucleus ambiguus, driving vagal motoneurons in parallel with post-inspiration. Tracing studies have identified KF neurons with direct projections to the NA (Song et al. 2012), including to subregions that were shown to produce bradycardia on stimulation (Stuesse & Fish, 1984). However, it has not been unequivocally shown that these KF projections innervate cardioinhibitory motoneurons. Indeed, injection of a retrograde transynaptic viral tracer into the cardiac ganglia produced labelling of KF neurons only with comparatively long survival times, suggestive of an intervening synapse (Standish et al. 1995). Therefore, we cannot exclude the possibility that interneurons with a post-inspiratory firing pattern normally mediate the functional link from KF to cardiac vagal motoneurons. Pontine transection probably removes both inhibitory and excitatory descending inputs to medullary cardiorespiratory nuclei. This may be why pontine transections had quite variable effects on chronotropic vagal tone. The key new finding, however, is that a large component of cardiac vagal tone does not require connections from the pons, and is independent of central respiratory drive. Part of that medullary component depends on neurons in the NTS.
Cardiac vagal tone in mammals has been recognized ever since the 19th century, and is a general feature among vertebrates (Taylor et al. 1999). The reflex factors that increase or decrease vagal tone (baroreceptors, chemoreceptors, pulmonary afferents), as well as its modulation by central respiratory drive, have been well studied in mammals, including humans (Anrep et al. 1936a; Jewett, 1964; Kunze, 1972; McAllen & Spyer, 1978; Taha et al. 1995; Nosaka et al. 2000). By contrast, remarkably little is known about the central nervous origin of this ongoing activity. Two main reasons may account for this. First, invasive studies on animals in vivo typically use general anaesthetics that strongly suppress vagal tone, making it hard to study (Inoue & Arndt, 1982). Second, in isolated in vitro preparations such as the transverse brainstem slice, putative vagal preganglionic neurons typically do not generate action potentials (i.e. show no vagal tone) (Mendelowitz, 1996; Dergacheva et al. 2010). Decerebrate, unanaesthetized preparations such as the WHBP avoid these drawbacks. The WHBP has a blood-free environment, which makes interventions such as brainstem transection relatively straightforward. It has robust vagal tone with a physiological pattern. This information provided the starting point for the present study: the sources of cardiac vagal tone appear to be extrinsic to the preganglionic motoneurons but, at least in large part, are intrinsic to the brain stem.
The WHBP is simplified, in that the known reflex sources of cardiac vagal modulation are removed or disabled. Arterial chemoreceptor activity is suppressed by the hyperoxic perfusion medium and the low, non-pulsatile perfusion pressure provides only minimal drive to arterial baroreceptors, below the cardiac vagal baroreflex threshold (Simms et al. 2007). The lack of contribution of these afferents to vagal tone was formally demonstrated here by denervation. There is no need to confirm the absence of lung stretch receptor feedback in the absence of lungs. Additionally, we found that no vagal tone was generated by neurons of the cardiac ganglia. Under these conditions, it is fair to conclude that cardiac vagal tone, and its respiratory modulation, originate in the brainstem.
In the WHBP, the principal pattern of vagal respiratory modulation is a post-inspiratory peak in activity. In this and a previous study where the pattern of cardiac vagal activity was studied in detail, we found that it was minimal around the expiratory-inspiratory transition but was followed by a build-up of activity during the latter part of inspiration. In other species, both similar and different patterns have been reported. In the cat, Gilbey et al. (1984) found that cardiac vagal motoneurones were subject to a wave of IPSPs that augmented during inspiration. Inspiratory inhibition also appeared to underlie the central component of RSA studied in the dog heart-lung preparation by Anrep et al. (1936b). On the other hand, a progressive rise in vagal excitability during inspiration (measured by the bradycardia following single shocks to the sinus nerve) was observed in anaesthetized dogs by Koepchen et al. (1961), and the same pattern was seen in humans in response to brief carotid baroreceptor stimuli applied by neck suction (Eckberg et al. 1980). Both excitatory and inhibitory inputs to cardiac vagal motoneurones can evidently contribute to RSA, with similar end results on HR. The excitatory mechanism seen here is present in rats, and may also be present in humans, although it is clearly not the only mechanism of RSA in all species.
Limitations
The WHBP lacks structures anterior to the midbrain, which can have both excitatory and inhibitory tonic influences on the cardiac vagus (Gellhorn et al. 1956; Mauck & Hockman, 1967). It runs at lower than normal body temperature, which serves to amplify vagal actions on HR (Potter et al. 1985), although there is no reason to assume that the underlying neural drive is affected very much, or that the lower temperature would somehow recruit de novo sources of vagal tone. As proposed elsewhere (Dutschmann et al. 2000; St-John & Paton, 2003), the eupnoeic, reproducible and stable patterns of respiratory drive in the WHBP, as well as a pronounced pattern of RSA, suggest that brainstem cardiorespiratory generator circuits are behaving normally.
Multifibre activity recorded from the cardiac vagal branch does not simply consist of fibres that control pacemaker function. Those are principally myelinated axons (B fibres) of neurons in the nucleus ambiguus (Cheng et al. 2004) and are excited by baroreceptors (Fig. 3). However, recordings almost always include an admixture of fibres probably destined for different regions of the heart or for other, unknown targets (O'Leary & Jones, 2003). This factor may explain the imperfect concordance found between the experimental effects on CVBA and HR. Unmyelinated efferent fibres supplying the heart come from neurons in the DMNV (Jones et al. 1998); these are generally not barosensitive (Jones et al. 1998; O'Leary & Jones, 2003) and synapse with their own distinct subpopulation of cardiac ganglion cells (Cheng et al. 2004). Recent studies have shown that the principal function of this pathway is to regulate ventricular function (Machhada et al. 2015, 2016): selective inhibition of DMNV neurons in anaesthetized rats caused an increase in ventricular contractility and excitability but no change in HR (Machhada et al. 2015, 2016). Presumably, it provides vagal tone to the ventricles but not to the pacemaker, and so any spread to the DMNV of isoguvacine injected into the NTS probably did not confound our conclusions on the vagal control of HR.
To circumvent uncertainties about conclusions from cardiac vagal branch recording, we repeated the experimental series at the same time as measuring chronotropic vagal tone from HR. The results obtained using the two methods were broadly similar, indicating that a substantial component of chronotropic cardiac vagal tone arises independently of RSA. After the influence of pontine structures has been removed, a significant level of chronotropic vagal tone remains and evidently originates within the medulla from sites that include the NTS.
Finally, we can make no firm predictions about the nature of the generators of cardiac vagal tone. For sympathetic nerves, the intrinsic rhythmicity of their signals has been taken as a 'signature' of activity generated by central oscillator circuits (Barman & Gebber, 2000). Apart from respiratory modulation, cardiac vagal nerve activity in the WHBP shows no obvious intrinsic rhythmicity. As discussed above, it has been reported that cardiac vagal motoneurons do not appear to possess any intrinsic autoactivity, although the possibility remains that vagal tone is generated by neurons that do. In line with this idea, a select subpopulation of neurons in the medial ('cardiovascular') subregion of NTS was found to generate spontaneous pacemaker-like spike activity in vitro (Paton et al. 1991). Neurons of this subregion also possess projections to cardiac NA motorneurons (Standish et al. 1995;Neff et al., 1998). Whether the NTS neurons that reflexly drive cardiac vagal motoneurons in response to baroreceptor and/or chemoreceptor afferent stimulation also show autoactivity is currently unknown, although this would provide a parsimonious neural circuit.
In the clinical setting, cardiac vagal tone is measured non-invasively by beat-to-beat or 'high frequency' HR variability. This essentially measures RSA. Although RSA has been found to correlate with vagal tone, it is worth noting that the two measures are not identical and, as reported in the present study, may have different origins.
"Biology",
"Medicine"
] |
Estimating Similarity of Dose–Response Relationships in Phase I Clinical Trials—Case Study in Bridging Data Package
Bridging studies are designed to fill the gap between two populations in terms of clinical trial data, such as toxicity, efficacy, comorbidities and doses. According to ICH-E5 guidelines, clinical data can be extrapolated from one region to another if dose–response curves are similar between the two populations. For instance, in Japan, Phase I clinical trials are often repeated because of a physiological/metabolic paradigm: the maximum tolerated dose (MTD) for Japanese patients is assumed to be lower than that for Caucasian patients, but this does not necessarily hold for all molecules. Therefore, proposing a statistical tool for evaluating the similarity between two populations' dose–response curves is of great interest. The aim of our work is to propose several indicators to evaluate the distance and the similarity of dose–toxicity curves and MTD distributions at the end of some of the Phase I trials, conducted on two populations or regions. For this purpose, we extended and adapted the commensurability criterion, initially proposed by Ollier et al. (2019), to the setting of completed Phase I clinical trials. We evaluated their performance using three synthetic sets, built as examples, and six case studies found in the literature. Visualization plots and guidelines on the way to interpret the results are proposed.
Introduction
Bridging studies are designed to fill the gap between two populations in terms of clinical trial data, such as toxicity, efficacy, comorbidities and doses. A bridging data package consists of selected data from the Clinical Data Package of the population in the new region, including pharmacokinetic, any pharmacodynamic, dose-toxicity or dose-efficacy data, and, if appropriate, a bridging study to extrapolate the foreign dose-response data to the new region [1].
According to the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use E5 (ICH-E5) guidelines, data can be extrapolated from one region to another if "a bridging study [...] indicates that a different dose in the new region results in a safety and efficacy profile that is not substantially different from the one derived from the original region; it will often be possible to extrapolate the foreign data to the new region, with an appropriate dose adjustment, if this can be adequately justified (e.g., by pharmacokinetic and/or pharmacodynamic data)" [1]. This is the reason why proposing a statistical tool evaluating the similarity between two foreign dose-response curves is of great interest. If this is proven, then, other clinical trials data can be used and extrapolated for the new region.
In Japan, the Pharmaceuticals and Medical Devices Agency (PMDA) recommends the re-evaluation of a drug if there are insufficient data from Japanese patients [2]. Indeed, Phase I clinical trials in oncology, which aim to estimate the maximum tolerated dose (MTD), are often repeated. Ogura et al. [3] pointed out that MTD differences between populations could be due to the different distribution of genetic polymorphisms in enzymes involved in drug metabolism or of biomarker incidences in different populations. In particular, in Japan, Phase I trials are repeated based on a physiological/metabolic paradigm: MTDs for Japanese patients are often lower than those for Caucasian patients [4]. Based on this assumption, Maeda and Kurokawa [5] performed an intensive study comparing the MTD of 21 molecularly targeted cancer drugs in Japanese versus Caucasian populations. They found that this assumption does not hold well: in their study, the MTD was lower for Japanese patients in only two cases, there were no differences between the two populations for 10 drugs, and the MTDs were incommensurable, because the evaluated dose ranges differed, for nine drugs. Moreover, Mizugaki et al. [6] analyzed data from single-agent Phase I trials at the National Cancer Center Hospital between 1995 and 2012, comparing the dose-limiting toxicity (DLT) profiles and MTDs of Japanese trials with trials from Caucasian populations.
Recently, methods for bridging dose-finding designs have been proposed in which previous population data were used either to calibrate the prior distribution of the Bayesian model parameter(s) or to choose the "working model" of the design for prospective trials [7]. Liu et al. [8] proposed a Bayesian model averaging dose-finding method where the previous trial data were used to build three different skeletons which would then be averaged during the study. Moreover, Takeda and Morita recently defined an "historical-to-current" parameter that could describe the degree of borrowing from one population to the other [9]. Ollier et al. [10] proposed a bridging method where a borrowing parameter was estimated sequentially in a response-adaptive design, quantifying the amount of reasonable borrowing according to the similarity between the two populations' estimates. Usually, the proposed methods focus on one parameter, strictly related to the MTD, and not on the full dose-toxicity response curve. All these methods were proposed with the purpose of using the foreign data to plan and conduct the future Phase I trial in the new region. Indeed, at this stage, the idea is to use the foreign data to calibrate model-based priors to be used in the new region trial. However, in most cases, the trial in the new region will not be planned this way, but rather by using the MTD information from the foreign region only, if available. The sophisticated statistical approach will not be used.
Another option is to compare the two dose-response curves estimated from each region and to evaluate how similar they are. In this case, the overall purpose is different from before; if the curves prove to be similar (under the estimation uncertainty), the new purpose will be to extrapolate other trial data, such as that of Phase II, to the new region and to avoid further repetition of clinical investigations. For dose-response curves, Bretz et al. [11] introduced an asymptotic test to evaluate the difference in the minimum effective dose among several groups of subjects, according to a threshold. However, this method was built for later clinical phases and presents weaknesses when applied to small sample sizes. By contrast, Bayesian methods can mitigate the issue of estimation in a small sample size setting, since they do not rely on asymptotic approximations and prior distributions can be used to ensure more stability in computation. The degree of similarity can then be considered directly at the level of the posterior distributions. Therefore, methods for estimating the similarity between dose-toxicity curves are needed when the question is whether safety data can be extrapolated or not.
The aim of our work is to propose some Bayesian indicators that evaluate the distance and the similarity of (1) the dose-toxicity curves, taking into account their variability, and (2) the MTD posterior distributions, by extending and adapting the commensurability criterion initially proposed by Ollier et al. [10]. These indicators were applied to several Phase I trials presented in Maeda and Kurokawa [5] and Mizugaki et al. [6], evaluating the similarity between Western and Eastern dose-toxicity data. The proposed tools should be used by trial stakeholders in order to decide whether other trial data can be extrapolated to the new region and, if so, to avoid the repetition of multiple clinical trials. In the next section, the original commensurability parameter is summarized along with the proposed extensions and the dose-toxicity model used. The case studies are described in Section 3, while Section 4 details the computational settings. The results are given in Section 5, followed by a Discussion section.
Methods
In this section, we briefly recall the Bayesian commensurability measure used in Ollier et al. [10], which was originally adopted into a power prior setting [12]; we then propose extensions and modifications to this measure to be applied at the end of the study. We also introduce the Bayesian dose-toxicity model, which will be used for retrospective data analyses.
Let D_c denote the Caucasian data, D_c = {(y_j, x_j), j = 1, ..., n_c}, where n_c is the sample size of D_c and y_j the binary outcome of the j-th patient, who received dose x_j. In a similar way, we can define D_a, the Japanese data, and its associated quantities. Let us also set a model for the probability of toxicity versus dose, p_T(x) = f(x, β), where f(.) denotes a convenient monotonic link function parametrized by β. The likelihood function for each population can be written as

\[
L(\beta \mid D_m) = \prod_{j=1}^{n_m} f(x_j, \beta)^{y_j}\,\big(1 - f(x_j, \beta)\big)^{1 - y_j}, \qquad m = c, a.
\]
Commensurability Distances
Ollier et al. [10] suggested considering the likelihood function, divided by a normalization constant, as a distribution. This type of normalized likelihood can also be seen as the resulting Bayesian posterior distribution when constant (possibly improper) priors are used for the analysis. The authors then defined a measure of "commensurability" between the two datasets through a distance d(D_c, D_a), the Hellinger distance, in the parameter space, via the following relation:

\[
d(D_c, D_a)^2 = \frac{1}{2} \int \Big( \sqrt{\tilde{L}_c(\beta)} - \sqrt{\tilde{L}_a(\beta)} \Big)^2 \, d\beta, \qquad (1)
\]

where \tilde{L}_m(\beta) denotes the likelihood L(β | D_m) raised to the variance-matching power described below and divided by its normalization constant. The commensurability measure, denoted by γ, is then defined as γ = d^q(D_c, D_a), with q ∈ R+. Values of q higher than 1 will reduce the computed distance, while values lower than 1 will lead to a more conservative method, increasing the computed distance. In the case of sequential trials, the authors proved that, when coupled with the power prior approach, a conservative value of γ leads to better results in terms of operating characteristics, as a percentage of correct MTD selection. However, at the end of the trial, we are interested in comparing the achieved results, without any discount in the resulting distance. Therefore, in this paper, we focus on the original Hellinger distance, that is, q = 1. The computed distance is a positive number between 0 and 1; it tends towards the maximum value when the two datasets are quite different, and towards zero when they are close to each other. Each likelihood is divided by a normalization constant in order to ensure that it can be viewed as a probability distribution. The variance of the likelihood density depends on the sample size of the trial. To make the two likelihoods comparable in terms of precision (variance), if n_c > n_a, L(β|D_c) is raised to a power of less than 1; otherwise, L(β|D_a) is raised to a power of less than 1. Following this method, the variance of the likelihood density of the trial with more patients is increased to almost match that of the trial with fewer patients. Practical examples are given in Ollier et al. [10].
A straightforward modification of the distance in Equation (1) was performed by changing the underlying flat prior into a proper one. The posterior distribution obtained with the weighted likelihood is then used in the Hellinger formula. Thus, denoting by π_post,c(β | D_c) ∝ L(β | D_c)^{min(1, n_a/n_c)} π_prior(β) and by π_post,a(β | D_a) ∝ L(β | D_a)^{min(1, n_c/n_a)} π_prior(β) the posterior distributions of β given D_c and D_a, respectively, we have

\[
d_{\mathrm{mod}}(D_c, D_a)^2 = \frac{1}{2} \int \Big( \sqrt{\pi_{\mathrm{post},c}(\beta \mid D_c)} - \sqrt{\pi_{\mathrm{post},a}(\beta \mid D_a)} \Big)^2 \, d\beta. \qquad (2)
\]

This modification ensures more stability in computation when the likelihoods involve more than one parameter. When flat/constant priors are used for π_prior(β), Equation (2) is equivalent to Equation (1). Even if, theoretically, two different priors can be chosen for the two trials, we suggest using a single one for the sake of comparability.
Both previous distances work at the parameter level: they check whether the whole dose-toxicity curve is similar or not. With a single-parameter model for the dose-toxicity relationship, such as the one-parameter logistic model used in the continual reassessment method (CRM) [13], this is also equivalent to checking the MTD distance. However, in models with more parameters, such as the Bayesian Logistic Regression Model (BLRM) [14], where we have two parameters, intercept and slope, we check whether the bivariate distribution of β is the same.
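As an illustration, Equation (2) can be approximated in a few lines of R for a single-parameter model evaluated on a grid (the two-parameter BLRM would instead use Monte Carlo integration, as in Section 4). The link function, doses, outcomes and flat prior below are illustrative assumptions, not data from the case studies.

```r
# Bernoulli dose-toxicity likelihood evaluated on a grid of parameter values
lik <- function(beta_grid, x, y, f) {
  sapply(beta_grid, function(b) prod(f(x, b)^y * (1 - f(x, b))^(1 - y)))
}

# Hellinger distance between the weighted, normalized posteriors (Eq. 2);
# with a flat prior this reduces to Eq. (1)
hellinger_mod <- function(beta_grid, x_c, y_c, x_a, y_a, f,
                          prior = function(b) rep(1, length(b))) {
  w_c <- min(1, length(y_a) / length(y_c))   # variance-matching exponents
  w_a <- min(1, length(y_c) / length(y_a))
  db  <- diff(beta_grid)[1]                  # assumes an equally spaced grid
  post_c <- lik(beta_grid, x_c, y_c, f)^w_c * prior(beta_grid)
  post_a <- lik(beta_grid, x_a, y_a, f)^w_a * prior(beta_grid)
  post_c <- post_c / sum(post_c * db)        # normalize to densities
  post_a <- post_a / sum(post_a * db)
  sqrt(0.5 * sum((sqrt(post_c) - sqrt(post_a))^2 * db))
}

# Toy example with a one-parameter logistic-type link
f    <- function(x, b) plogis(-3 + exp(b) * x)
grid <- seq(-5, 5, by = 0.01)
hellinger_mod(grid, x_c = c(1, 1, 2, 2, 3), y_c = c(0, 0, 0, 1, 1),
                    x_a = c(1, 2, 2, 3, 3), y_a = c(0, 0, 1, 1, 1), f = f)
```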
Since the distance is difficult to interpret in the case of a multidimensional parameter space, we propose a summary distance using the resulting posterior MTD distribution. In our setting, the MTD, x*, is estimated as the dose linked to a pre-specified toxicity target τ, that is, the dose x* satisfying f(x*, β) = τ. Its posterior distribution, π_post,m(x* | D_m), is obtained by evaluating x* through the posterior distribution of the parameter, π_post,m(β | D_m), for m = c, a. Therefore, we can define

\[
d_{\mathrm{MTD}}(D_c, D_a)^2 = \frac{1}{2} \int \Big( \sqrt{\pi_{\mathrm{post},c}(x^* \mid D_c)} - \sqrt{\pi_{\mathrm{post},a}(x^* \mid D_a)} \Big)^2 \, dx^*. \qquad (3)
\]

Note that this distance always involves a one-dimensional integral. The previous distances focused on understanding the similarity of the whole dose-toxicity curve between the two populations. However, even with different slopes and intercepts, two populations can still have the same MTD. Such differences should generally indicate a difference in responsiveness to a drug, and it is important to know when the MTDs are similar but the underlying curves are not. Therefore, we propose to couple the distances described previously with a measure denoting the difference between MTD point estimates. We can build this measure as a percentage using the medians of the posterior MTD distributions, such as

\[
d_{p1} = \left( \frac{\mathrm{med}_c}{\mathrm{med}_a} \right)^{1 - 2 I(\mathrm{med}_c < \mathrm{med}_a)} - 1, \qquad (4)
\]

where I(.) is the indicator function, which takes the value 1 if the statement in parentheses is true and zero otherwise, and med_i, with i = c, a, is the median of the posterior MTD distribution of Caucasians and Japanese, respectively. This formulation was chosen for its easy interpretation: we check by what percentage the highest MTD estimate differs from the lowest one. For this reason, the formula includes the exponent 1 − 2I(med_c < med_a), which ensures that the highest estimate is always in the numerator, and the −1 term. Like the three previous measures, Equation (4) tends to zero when the two MTDs are very similar; however, this measure has no upper bound. We propose the use of the median since it is less affected by outliers than the mean. The maximum a posteriori is another possible candidate, that is,

\[
d_{p2} = \left( \frac{\mathrm{MAP}_c}{\mathrm{MAP}_a} \right)^{1 - 2 I(\mathrm{MAP}_c < \mathrm{MAP}_a)} - 1, \qquad (5)
\]

where MAP_i denotes the maximum a posteriori of the posterior MTD distribution of population i. To summarize, the first three measures, d, d_mod and d_MTD, are bounded between 0 and 1; even if they are not built as percentages, their interpretation can be closely linked to a percentage. The last two measures, d_p1 and d_p2, are ratio-like measures, bounded below by 0. In practice, they indicate how many times the maximum MTD is higher than the lowest one.
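A short R sketch of how Equations (3)-(5) could be computed from posterior samples of the MTD; mtd_c and mtd_a are placeholders for posterior draws of x* in the two populations (obtained, for example, by applying the MTD formula of the next section to posterior draws of β).

```r
# Hellinger distance between the two posterior MTD distributions (Eq. 3),
# approximated with kernel density estimates on a common grid
d_mtd <- function(mtd_c, mtd_a, n_grid = 512) {
  rng <- range(c(mtd_c, mtd_a))
  dc  <- density(mtd_c, from = rng[1], to = rng[2], n = n_grid)
  da  <- density(mtd_a, from = rng[1], to = rng[2], n = n_grid)
  dx  <- diff(dc$x)[1]
  sqrt(0.5 * sum((sqrt(dc$y) - sqrt(da$y))^2 * dx))
}

# Ratio-type difference of MTD point estimates (Eq. 4 with the median;
# supply a MAP-type estimator through `point` to obtain Eq. 5)
d_ratio <- function(mtd_c, mtd_a, point = median) {
  m_c <- point(mtd_c); m_a <- point(mtd_a)
  max(m_c, m_a) / min(m_c, m_a) - 1   # highest estimate always in the numerator
}
```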
Dose-Toxicity Model
In this section, we describe the model selected for the link function f(.). Instead of the CRM, originally used in Ollier et al. [10], which is better suited to prospective trials than to retrospective analyses (a retrospective CRM requires special techniques), we opted for the more flexible BLRM, with two parameters, the intercept β_0 and the (logarithm of the) slope β_1 [14]. The dose-toxicity relationship is represented by

\[
\mathrm{logit}\, p_T(x) = \beta_0 + \exp(\beta_1)\,\log\!\left(\frac{x}{x_r}\right),
\]

where β = (β_0, β_1) ∈ R², x_r denotes a reference dose and exp(β_1) ensures a positive slope in the model. In this case, f^{-1}(.) is equal to the logit function and the BLRM formulation is similar to that of Zheng and Hampson [15]. To close the Bayesian model, we suggest a bivariate normal distribution as prior for (β_0, β_1).
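In R, the link above could be written as follows; this is a sketch of the model equation only, and the parameter values and reference dose in the example call are placeholders.

```r
# BLRM dose-toxicity link: logit(p_T(x)) = beta0 + exp(beta1) * log(x / x_r)
p_tox <- function(x, beta0, beta1, x_r) {
  plogis(beta0 + exp(beta1) * log(x / x_r))
}

# Example: probability of toxicity at 600 mg/day with x_r = 400 mg/day
p_tox(600, beta0 = -2.2, beta1 = 0, x_r = 400)
```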
Following the described model, the final MTD is estimated as

\[
x^* = x_r \exp\!\left( \frac{\mathrm{logit}(\tau) - \beta_0}{\exp(\beta_1)} \right).
\]

In order to minimize the overdispersion generated by this formula, we compared the distributions of the log ratio of the MTD to the reference dose, x** = log(x*/x_r), instead of the MTD itself. Therefore, we also changed Equations (4) and (5) accordingly to the new formulation (x**), in order to preserve the original distance meaning, that is,

\[
d_{p1} = \exp\big( \lvert \mathrm{med}_c(x^{**}) - \mathrm{med}_a(x^{**}) \rvert \big) - 1, \qquad
d_{p2} = \exp\big( \lvert \mathrm{MAP}_c(x^{**}) - \mathrm{MAP}_a(x^{**}) \rvert \big) - 1.
\]

Finally, in a previous sensitivity analysis (not shown), even when comparing the distributions of the log ratio of the MTD to the reference dose, we faced computational instability due to outliers. We found that truncating the posterior distribution of x** between the 10th and 90th percentiles gives a good compromise between preserving trial information and computational stability.
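A brief R sketch of this step, assuming beta0 and beta1 are vectors of posterior draws from whichever sampler is used (the names are placeholders):

```r
# Posterior draws of x** = log(x*/x_r) = (logit(tau) - beta0) / exp(beta1)
log_mtd_ratio <- function(beta0, beta1, tau = 0.25) {
  (qlogis(tau) - beta0) / exp(beta1)
}

# Truncate the x** draws to their 10th-90th percentiles before computing
# the MTD-based distances, to limit the influence of extreme outliers
truncate_tails <- function(xss, lower = 0.10, upper = 0.90) {
  q <- quantile(xss, probs = c(lower, upper))
  xss[xss >= q[1] & xss <= q[2]]
}
```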
Case Studies
To show the results and the interpretation of the proposed measures, we first introduce four different synthetic datasets (one Caucasian and three Japanese), to check the results when two datasets are similar or not. We fixed the Caucasian dataset first, setting τ equal to 0.3 and the MTD at 600 mg/day. The same setting was used for the Japanese synthetic-1 set; moreover, these two datasets were generated to have the same dose-toxicity shape. The Japanese synthetic-2 set shares the same MTD as the Caucasian set, but has a different dose-toxicity shape: the Japanese dose-toxicity curve is steeper at the MTD than the Caucasian one. The Japanese synthetic-3 set has a different dose-toxicity curve and a different MTD (200 mg/day). The data are summarized in Table 1.
Then, we applied our methods to eight examples found in the literature. Our research started by looking at the drugs presented in Maeda and Kurokawa [5] and Mizugaki et al. [6]. We selected only drugs for which both Caucasian and Japanese trial data were available. We then extracted the number of toxicities and the number of patients allocated to the administered doses in each trial. All these data are shown in Table 2, each time with the reference article. The MTD declared at the end of the trial is shown in a box. As can be seen from Table 2, the Caucasian and Japanese trials did not usually use the same set of doses.

Table 1. Number of dose-limiting toxicities and total number of patients accrued at each dose for the Caucasian trial and the three Japanese synthetic trials. In the first column, the trial population is specified. A dash (-) means that the dose was not tested in the specified population. A box denotes the dose that has been defined as the maximum tolerated dose (MTD).
Settings
We chose τ, the target toxicity probability used to define the MTD, to be 0.3 for the three synthetic examples and 0.25 for the real case studies. Most of the real case studies followed an algorithm-based allocation; therefore, it seemed more natural to have a threshold lower than 0.3, the value more frequently used when model-based designs are adopted in oncology.
A non-informative bivariate normal prior distribution, commonly used in this setting, was chosen for the BLRM model. The hyperprior parameters of this bivariate prior were chosen after a preliminary sensitivity analysis (not shown) in order to ensure computational stability. In detail, this prior choice implies a mean prior probability of toxicity at the reference dose, x_r, of 0.1 and a slope with its prior median centred at zero. Accordingly, x_r was chosen in the first half of the total dose panel for each example: 400 mg/day for the three synthetic examples, 1 mg/m2 for Erilubin, 900 mg/day for Lapatinib, 200 mg/day for Sorafenib, 30 mg/m2 for Ixabepilone, 8 mg/m2 for Edotecarin and 700 mg/m2 for E7070.
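For illustration, prior draws can be generated as in the following R sketch; the prior means reflect the stated choices (toxicity probability 0.1 at x_r and a slope prior centred at zero), whereas the variances are hypothetical and not those used here (the MASS package is assumed to be available).

```r
library(MASS)   # for mvrnorm()

prior_mean  <- c(qlogis(0.1), 0)       # prior means of (beta0, beta1)
prior_cov   <- diag(c(2^2, 1^2))       # hypothetical prior variances
prior_draws <- mvrnorm(5000, mu = prior_mean, Sigma = prior_cov)
colnames(prior_draws) <- c("beta0", "beta1")

# Implied prior on the toxicity probability at the reference dose
summary(plogis(prior_draws[, "beta0"]))
```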
All distances were computed with q = 1, which is why we focus on the square root of Equations (1)-(3) and on the original values for Equations (4) and (5). The reference doses selected are reported along with the results in Table 3. All computations were performed in R, version 3.5.2. Monte Carlo approximations were adopted for all the integrals involved, and uniform prior distributions on compact supports were set to approximate the weighted likelihoods (as posterior distributions) in Equation (4). Details can be found in the R scripts in the Supplementary Materials.
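As a generic illustration of how a distance between two dose-toxicity curves can be approximated by Monte Carlo (this is not the exact Equation (1), and all parameter values are hypothetical), doses can be drawn over the dose range and the absolute difference between the curves averaged, using the blrm_tox_prob function sketched above.

```r
set.seed(2)
doses <- runif(10000, min = 100, max = 1000)   # doses sampled over the dose range

p_pop1 <- blrm_tox_prob(doses, beta0 = qlogis(0.10), beta1 = 0.0, x_ref = 400)
p_pop2 <- blrm_tox_prob(doses, beta0 = qlogis(0.15), beta1 = 0.2, x_ref = 400)

mean(abs(p_pop1 - p_pop2))   # Monte Carlo estimate of the average absolute difference
```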
Results
The distances computed under all the proposed methods are shown in Table 3. When the MTDs and the dose-toxicity curves are similar, as in the synthetic-1 data, d, d_mod and d_MTD are lower than 0.23 and d_p1 = d_p2 = 0. When only the MTDs are similar (synthetic-2 data) but not the dose-toxicity curves, d_p1 = d_p2 = 0.02 but d, d_mod and d_MTD are higher than 0.37. Finally, when both the curves and the MTDs differ (synthetic-3 data), d_p1 = 1.50, d_p2 = 1.27 and d, d_mod and d_MTD are higher than 0.83.
Taking these case studies as a reference, we then analysed the data from published papers with Caucasian and Japanese datasets. Erilubin has the highest values of d, d_mod and d_MTD, greater than 0.80, which suggests differences between the dose-toxicity curves; this is also shown in Figure 1. Its values of d_p1 and d_p2 are around 0.45. Ixabepilone and E7070 have quite large d, d_mod and d_MTD, greater than 0.56, and they also have similar results in terms of d_p2. The value of d_p1 differs between these two examples and reflects the presence of unbalanced heavy tails in the E7070 case. The heavy-tail issue is observed, in at least one population, in all examples except Erilubin, and the results in Table 3 show that d_p1 is directly affected by this phenomenon. For example, Lapatinib and Sorafenib have very high values of d_p1, greater than 7.29, whereas the measure based on the maximum a posteriori, d_p2, gives more stable and more usual results. Edotecarin has close values of d, d_mod and d_MTD, around 0.3, indicating similar dose-toxicity curves. Figure 2 and Figure A1, in Appendix A, show how the Caucasian posterior distribution differs across the three synthetic examples even though it comes from the same Caucasian dataset. This behaviour is due to the variance adjustment given by the exponent min(1, n_a/n_c): in general, the posterior peak is preserved and the variance increases when the exponent is less than 1 (as in the synthetic-3 example). Figure 3 plots the distance between the dose-toxicity curves, d_mod, against the distance between the maxima of the posterior MTD distributions, d_p2. For the sake of interpretability, we divided each axis equally into three parts, denoting a small, moderate or high distance, respectively. In this plot, Sorafenib has a moderate distance between curves and a high difference between MTDs; the opposite holds for Erilubin, with a moderate difference between MTDs and a large distance between curves. When the MTDs are similar or close (first column of the gradient), Edotecarin has similar dose-toxicity curves, while the distance between the curves of Ixabepilone and E7070 is moderate. Lapatinib shows a moderate distance for both the dose-toxicity curves and the estimated MTDs. A small dose-toxicity distance combined with a high MTD distance is incoherent; it is therefore plotted in a darker colour.
Discussion
The aim of our work was to propose several Bayesian indicators to support further decisions when using a bridging data package [1]. Bayesian methods permit the definition of a degree of similarity based on posterior distributions, which does not rely on asymptotic approximations and can also be used in small sample size settings. Specifically, we proposed Bayesian indicators that evaluate the distance and the similarity of dose-toxicity curves and MTDs. When evaluating a drug in different populations, assessing the similarity of the dose-response curves is of utmost importance since, if it is established, other clinical trial data can be used, as well as extrapolation from one population to the other. Maeda and Kurokawa [5] pointed out the difficulty of defining a commensurability measure for different populations.
We presented and studied five criteria: three of them, d, d_mod and d_MTD, measure the similarity between dose-toxicity curves, and two of them, d_p1 and d_p2, measure the distance between the medians and between the maxima a posteriori of the MTD posterior distributions, respectively. The first three measures are bounded between 0 and 1 and their interpretation can be linked to a proportion. The other two, d_p1 and d_p2, take ratio-like values with a lower bound at 0; in practice, they represent a relative risk-type measure.
Our approach allows for the identification and discussion of similarities and differences between dose-toxicity curves and MTDs. However, as small samples were used in these studies, estimating the entire dose-toxicity curve when only part of the doses in the panel were evaluated is complex and leads to estimates with high variability. This is reflected in the values of d, d_mod and d_MTD, which in our real case studies were above 0.2. When large differences between d and d_mod are observed, this is probably due to computational difficulties in Equation (1), especially in computing the weighted likelihood without a stabilization term. In general, d_mod is lower than d_MTD. This could be expected for two reasons: (i) d_MTD introduces, via the transformation, more variability (further increased in the density estimation step); and (ii) d_MTD is computed after truncating the posterior induced distribution of the MTD. Moreover, we showed that d_p2, based on the maximum a posteriori, is more stable than d_p1, which is based on the median, in the presence of unbalanced heavy tails; therefore, d_p2 can be suggested as the more reliable measure in this setting. We repeated the analysis while varying the covariance matrix of the bivariate normal prior distribution, and d_p1 was again the less stable measure (results not shown).
The MTD definition can vary according to the trial and to the population. Therefore, even if the same MTD is claimed in both the Caucasian and Japanese populations, our analysis can identify differences. For instance, in the Japanese trial of Sorafenib, 400 mg/day is defined in the clinical trial as the MTD, but at the closest higher dose level, 600 mg/day, only one patient experienced toxicities (16.7%). In contrast, in the Caucasian trial, three patients out of seven experienced toxicity at 600 mg/day (42.9%). Even if the two trials find the same MTD, the toxicity probability associated with it differs, which is why our results indicate a difference. Indeed, in the published clinical trials there is sometimes a discrepancy between the MTD definition given in the methods section and the MTD actually declared at the end of the trial. Our methods are based on the data only and allow the actual similarity to be evaluated.
We decided to present the plots of the posterior densities (of the parameters and of the MTD) because they show whether or not the information is superposed. Plotting the one-dimensional dose-response curves directly could instead be misleading and lead to hazardous interpretations.
A first limitation of our work is that we used published data, in which the reporting of DLTs and doses can sometimes be incomplete. For instance, in the paper of Burris et al. [18], we had to reconstruct the DLT table and the dose-allocation sequence; therefore, some interpretation discrepancies may be found in our Table 2. The issue of poor reporting in cancer trials was already raised by Zohar et al. [28] and Comets and Zohar [29]. A second limitation is that we did not provide fixed cut-offs for each criterion. In our opinion, the choice of the cut-offs depends on the application and on the quantity of information in the two trials: the more information we have, the more stringent the cut-offs can be. Figure 3 only represents one proposal for displaying the results.
The criteria proposed in this manuscript may be extended to other settings. For example, when several trials are available, a meta-analysis of the dose-toxicity curves or of the MTDs can be considered [30-32]. In this case, pairwise distances can first be estimated, in an empirical Bayes approach, and then used to model the heterogeneity parameter(s) or to set the prior distribution(s). Other extensions, which do not necessarily involve Phase I studies, could be considered: (i) extrapolation from adults to children; (ii) joint evaluation of efficacy and toxicity [33]; (iii) comparison of outcomes (efficacy or toxicity) of the same drug in different indications; (iv) assessment of similarities in subgroups; and (v) comparison of historical control data with the current trial in randomized Phase III trials.
Being able to quantify the distance and the bridging between two populations at the end of early Phase I trials can be useful to better characterize the dose-toxicity relationship and its differences. In the case of small or acceptable differences, the extrapolation process can be considered, as suggested in ICH E5.
Conflicts of Interest:
The authors declare no conflict of interest.
Evaluation of three methods for preservation of Azotobacter chroococcum and Azotobacter vinelandii
Plant growth-promoting bacteria –PGPB– are microorganisms that can grow in, on, or around plant tissues and stimulate plant growth by numerous mechanisms (Vessey 2003). Within this group, nitrogen-fixing bacteria play a remarkable role in plant nutrition: they can take N2 from the atmosphere and make it available for plant uptake (Halbleib and Ludden 2000). Azotobacter, a nitrogen-fixing genus, is able to fix nitrogen aerobically and has the particularity of forming cysts (Becking 2006, Garrity et al. 2005). This bacterial genus has been shown to promote plant growth and is therefore usually included in the PGPB group. For example, Kisilkaya (2008) showed that inoculation of wheat with
A. chroococcum RK49 resulted in an enhancement in grain yield compared to the control. Similarly, Rojas-Tapias et al. (2012) demonstrated the role of A. chroococcum C5 and C9 in preventing saline stress in maize.
Preserving microorganisms gathered from various sources is critical for many fields of research, and maintaining their genetic consistency is crucial for elaborating biological products, which depend on the authenticity and viability of the strains (Don and Pemberton 1981, Malik and Claus 1987). To date, freeze-drying and ultra-freezing are considered the most efficient methods for the preservation of microorganisms (Sorokulova et al. 2012). However, these techniques require the use of specialized or expensive equipment to preserve and maintain bacteria in a stable state. While many researchers have focused on improving these methods by working on process parameters, others have worked on the development of new techniques that do not require specialized equipment or controlled environmental conditions (e.g. preservation in dry natural biopolymers at room temperature) (Sorokulova et al. 2012). This is because, under some circumstances, ultra-freezing may be impracticable and/or the availability of specialized equipment may be restrictive. Some techniques used to preserve bacteria, such as freeze-drying and cryopreservation, may maintain cells viable for many years; however, these techniques may also cause severe damage to bacterial cells (Miyamoto-Shinohara et al. 2000). Even under optimal conditions, cells may be affected by the processes used for preservation. Consequences of using these techniques include damage to the cell wall, cell membrane, DNA, proteins, etc. (Leslie et al. 1995, Miyamoto-Shinohara et al. 2000). These side effects are undesirable because recovery of viable and non-mutated bacteria is critical (Krumnow et al. 2009). Cell damage is caused by the very methods used to preserve microorganisms: for example, freeze-drying involves the use of extremely low temperatures and vacuum, cryopreservation the use of extremely low temperatures, and the majority of spray-drying techniques the use of extremely high temperatures.
Furthermore, desiccated bacteria may also lose viability upon rehydration, which may alter protein structures (Krumnow et al. 2009). For this reason, the use of preservation techniques is usually accompanied by the use of protective agents, which can increase the effectiveness of the technique by preventing cell damage. Accordingly, the selection of the protective agent depends on the preservation method and the type of bacteria. Some examples of protective agents include glycerol, trehalose, DMSO, glycine betaine, skim milk, glutamate and sucrose. A preservation technique is said to be useful if bacteria can revive, maintain cellular functions and propagate after dehydration, storage and rehydration (Malik and Claus 1987).
To our knowledge, few reports have focused on the preservation of Azotobacter cells. Earlier reports suggested the successful preservation of Azotobacter cysts in dry soil at room temperature (Vela 1974) and of vegetative cells in liquid nitrogen (Thompson 1987); both reported that the viability of Azotobacter strains could be maintained for more than 10 years. Other techniques such as freeze-drying were tried and found not to be entirely satisfactory for this genus, as viability could not be maintained for long (Lapage et al. 1970, Antheunisse 1973, Thompson 1987). Hence, standardization of different preservation methods that are available to most laboratories for the storage of Azotobacter strains remains a priority in order to maintain their genetic and metabolic characteristics. The goal of this study was to evaluate the effectiveness of freeze-drying, cryopreservation and immobilization in dry polymers as preservation methods for Azotobacter (A. chroococcum and A. vinelandii), and likewise to study the effect of protective agents on the viability and activity of the bacteria under storage. To our knowledge, this is the first report in which several preservation methods are evaluated for the maintenance of this bacterial genus.
Preservation methods: To find a suitable method for the preservation of A. chroococcum and A. vinelandii, we tested three preservation methods and several protective agents. Selection of these methods and their respective protective agents was based on an exhaustive review of the literature. For cryopreservation, we used glycerol and DMSO as reported by Garrity et al. (2005), while TSA was chosen at the suggestion of Dr. Joseph Kloepper. For freeze-drying, we used S/BSA (Cleland et al. 2004), 10% skim milk + 1% sodium glutamate (Miyamoto-Shinohara et al. 2006) and 10% skim milk (Cody et al. 2008, and the routine method in our laboratory), since these have been shown to be highly effective in preserving the viability of many Gram-negative bacteria. Finally, for immobilization in dried polymers we used alginate (Bashan and Gonzales 1999), acacia gum (Krumnow et al. 2009), carrageenan (Denkova et al. 2004) and Polyox®, which was first evaluated in this study.
Cryopreservation:
We evaluated three protective agents at two concentrations each. Hence, bacterial suspensions were prepared in the following sterile protective agents: 10% and 30% glycerol (Merck, USA), 10% and 20% dimethyl sulfoxide -DMSO- (Fisher Scientific, USA), and 0.5X and 1.0X trypticase soy broth -TSA- (Merck, USA). One-ml aliquots of the cell suspensions with each protective agent were dispensed into labeled sterile 2-ml polypropylene screw-capped tubes, incubated at room temperature for 1.0 h and then frozen at -25ºC. Estimation of bacterial survival was performed 0, 5, 15, 30 and 60 days after preservation.
Immobilization in dry polymers:
Cell suspensions were prepared in the following four sterile protective agents: 1% sodium alginate, 15% acacia gum, 2.5% Polyox® and 1.5% carrageenan. Sodium alginate and carrageenan were purchased from FMC BioPolymers (Ewing, USA), Polyox® from Colorcon (Harleysville, USA) and acacia gum from Sigma-Aldrich (St. Louis, USA). Concentrated solutions of the polymers were maintained at 4ºC when required. One-ml mixtures were dispensed into 5.0-ml vials and dried at 37ºC for 48 h. Immobilized cells were maintained at 15ºC and 40±2% relative humidity, and bacterial survival was measured after 15, 30 and 60 days. The final volumes of the cell suspensions and the drying times had been studied previously in order to obtain mixtures (cell suspension and protective agent) with as little water as possible while still maintaining cell viability (data not shown).
Estimation of viability and activity of bacterial samples:
We estimated both bacterial titers and the bacterial ability to fix nitrogen to assess the efficiency of the methods. For cryopreservation, samples were thawed for 3 min in a serological bath at 33ºC. For freeze-drying and immobilization in polymers, the samples were rehydrated using 200 μl and 1000 μl of sterile deionized water, respectively. Vials were then vortexed for 3 min and incubated at 33ºC for 30 min. Bacterial viability was estimated by preparing serial dilutions and plating 20 μl of each dilution on LG solid medium. Plates were incubated under aerobic conditions at 30°C for 72 h, and plates containing between 30 and 300 bacterial colonies were counted. Data were expressed as log CFU ml-1. All experiments were performed in triplicate. The bacterial capacity to fix nitrogen was estimated after 60 days for cryopreservation, 15 days for freeze-drying and 60 days for immobilization in dry polymers. Nitrogen fixation was assessed using the acetylene reduction assay (ARA). The bacterial survival ratio (BSR) was reported as the ratio of the log of the number of bacterial cells present in the suspension after preservation (AP) to the log of the number of viable cells before preservation (BP), multiplied by 100, i.e. BSR = (log AP / log BP) x 100 (Muñoz-Rojas et al. 2006).
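As a simple numerical illustration of this formula (assuming base-10 logarithms of the plate counts; the CFU values below are hypothetical):

```r
# Bacterial survival ratio: BSR = (log AP / log BP) x 100
cfu_before <- 2.0e9    # viable cells before preservation (BP), hypothetical
cfu_after  <- 5.0e8    # viable cells after preservation (AP), hypothetical

bsr <- (log10(cfu_after) / log10(cfu_before)) * 100
bsr   # about 93.5%
```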
Analytical method; acetylene reduction assay (ARA): 200-ml flasks containing 20 ml of Ashby medium were inoculated with 25 μl of a bacterial suspension adjusted to OD600 = 0.500 and incubated for 48 h at 30°C. Both strains were collected for ARA from cultures roughly in exponential phase or at the start of stationary phase, so the absorbance corresponded mainly to live cells. We carried out independent comparisons for each strain. Acetylene reduction was measured using a gas chromatograph (Perkin Elmer, USA) with a flame ionization detector and a Porapak N 200/300 mesh column of 6.0 ft length and 3.0 mm diameter (Eckert et al. 2001). The calibration curve was constructed using pure ethylene (chromatographic grade) as the standard. Statistical analysis: Our hypotheses were: 1) to study whether there were differences among the methods in maintaining the viability and activity of the Azotobacter strains, and 2) to determine whether the different protective agents studied had an effect on the survival and activity of the Azotobacter strains. For cryopreservation, freeze-drying and immobilization in dried polymers, we used a one-factor design with six, three and four levels, respectively (each protective agent at one specific concentration; see 'Preservation methods' in Materials and Methods). Plate counts and nitrogen fixation were used as dependent variables. Normality of these two variables was tested using the Shapiro-Wilk test, and homogeneity of variances was assessed using Levene's test. Means were compared using ANOVA and Tukey's HSD test. All analyses were performed at the 95% confidence level, using the statistical package SPSS 17.0. P-values less than 0.05 were considered statistically significant.
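For illustration, an equivalent workflow can be sketched in R (the study itself used SPSS 17.0); the log CFU values below are hypothetical, and Levene's test would additionally require the car package.

```r
surv <- data.frame(
  agent  = factor(rep(c("glycerol10", "glycerol30", "DMSO10"), each = 3)),
  logCFU = c(8.9, 9.0, 8.8, 8.6, 8.7, 8.5, 9.1, 9.2, 9.0)   # hypothetical values
)

shapiro.test(surv$logCFU)                       # Shapiro-Wilk normality test
# car::leveneTest(logCFU ~ agent, data = surv)  # homogeneity of variances

fit <- aov(logCFU ~ agent, data = surv)         # one-way ANOVA
summary(fit)
TukeyHSD(fit)                                   # Tukey HSD pairwise comparisons
```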
Results
We assessed three techniques for the preservation of Azotobacter in this study: cryopreservation at -25ºC, freeze-drying and immobilization in polymers at room temperature. The results showed that the bacterial survival rate decreased over time for both strains, and that the rate of decrease depended on the strain, the preservation technique and the protective agent used.
Azotobacter cells subjected to cryopreservation showed a rapid rate of bacterial death regardless of the protective agent (Table 1). The BSR for C26 and C27 decreased by 25-30% and 15-20%, respectively, after 60 days; however, it had already diminished by more than 20% and 10%, respectively, between days 0 and 5. The viability of C26 under freezing depended on the protective agent used, and the best results were obtained with DMSO > glycerol > TSA (P < 0.05). For A. vinelandii C27, all protective agents maintained viability between 81-85% (P > 0.05). Regarding bacterial activity, the decrease in viability of C26 was accompanied by a dramatic reduction of bacterial activity (P < 0.05); after 60 days, nitrogen fixation decreased by more than 50% (Figure 1). In contrast, nitrogen fixation was usually maintained for C27 and was only slightly reduced when DMSO and 0.5X TSA were used as protective agents (P < 0.05).
Freeze-drying exhibited the best results regarding bacterial activity. After 15 days at 37ºC (i.e., 20 simulated years at 4ºC), nitrogen fixation activity remained stable for C27 (P > 0.05) and was even increased for C26 (P < 0.05) compared to the control (Figure 1). The observed BSR after 15 days was reduced by 26.2%, 15.1% and 16.4% for C26 when SM, SM/Glu and S/BSA were used, respectively. The C27 viability was decreased by 28.1% and 9.1% in SM/Glu and S/BSA, respectively (Table 2). Although bacterial viability was dramatically reduced after 30 days (i.e., 40 simulated years at 4ºC), we were able to recover both strains regardless of the protective agent used. Interestingly, when S/BSA was used, the BSR decreased by only 20% after 30 days.

Table 1. Effect of cryopreservation on survival of the Azotobacter strains. Letters indicate sub-homogeneous groups obtained using the HSD Tukey test. We compared separately the effect of the protective agents on survival of each strain at each time. Values in parentheses are ± standard deviation. The BSR was calculated as described in Materials and Methods.

Table 2. Effect of freeze-drying on survival of the Azotobacter strains. Letters indicate sub-homogeneous groups obtained using the HSD Tukey test. We compared separately the effect of the protective agents on survival of each strain at each time. Values in parentheses are ± standard deviation. The BSR was calculated as described in Materials and Methods. The + symbol indicates that the strains could be directly recovered from vials, and nd indicates not determined.

In general, bacterial cells were well preserved by using polymers as carriers. The best results after 60 days were displayed by carrageenan > Polyox® > alginate > acacia gum for C26, and by Polyox® > acacia gum > carrageenan > alginate for C27 (Table 3). The reduction in the BSR was generally not higher than 20% within 60 days. The efficiency of immobilization in maintaining viability depended on the polymer used (Table 3). Using this method, we usually maintained the BSR above 85% for both strains. We also observed a significant reduction in nitrogen fixation, but this was not greater than 30%, except for Polyox® on C26 (Figure 1).

Table 3. Effect of immobilization in polymers on survival of the Azotobacter strains. Letters indicate sub-homogeneous groups obtained using the HSD Tukey test. We compared separately the effect of the protective agents on survival of each strain at each time. Values in parentheses are ± standard deviation. The BSR was calculated as described in Materials and Methods.
Discussion
Preservation of Azotobacter is a main concern because this bacterial genus represents an important resource for several biotechnological applications (e.g. plant growth promotion, biopolymer synthesis, hydrocarbon bioremediation), which require the maintenance of its physiological and genetic properties. Our results show that bacterial viability and activity depend on the preservation technique, the protective agent and the bacterial species.
The viability of bacteria after cryopreservation at -25ºC significantly decreased during the evaluation period. Interestingly, we observed that the highest rate of bacterial death occurred within the first five days of preservation, which could suggest that the bacteria died during the initial stages of the process. Dumont et al. (2004) reported that, under cryopreservation, cell survival depends on the cooling rate. They demonstrated that the lowest viabilities are observed at intermediate cooling rates (i.e. ~100-1,000 °C min-1), whereas the highest ones are observed at very low or very high rates (i.e. ~10 and 30,000 °C min-1, respectively). Fonseca et al. (2006) also reported that high cooling rates significantly improve survival rates for lactic acid bacteria. It is worth noting that, although freeze-drying usually causes more damage to cells than cryopreservation, we observed a smaller loss of viability with freeze-drying (Table 2, Table 3). Likely, the higher cooling rates used for freeze-drying (i.e. freezing using liquid nitrogen) resulted in greater Azotobacter survival. Thompson (1987) indicated that Azotobacter cells frozen with liquid nitrogen usually show higher survival rates. Both results would support the hypothesis that the freezing rate could affect bacterial survival.
The protective agents reduced bacterial death compared with 0.85% NaCl (data not shown); however, we usually observed no differences in bacterial survival when each protective agent was studied at the two concentrations (Table 1). Possibly, the concentrations at which the protectants are no longer useful were outside the range studied. Additionally, we observed that the viability of each strain depended specifically on the protective agent (Table 1); hence, we concluded that the efficiency of the cryoprotective agents was species-dependent. The suspending medium is considered to be a major factor in determining the ability of microorganisms to survive stress (Safronova and Novikova 1996). Leslie et al. (1995) showed how some carbohydrates protect membranes during the rehydration process, decreasing the rate of transition of membranes from the gel to the liquid crystalline phase. Other agents, such as glycerol or DMSO, reduce the eutectic point of water, preventing the formation of ice crystals (Fonseca et al. 2006). Therefore, it is critical to select the appropriate protective agent to improve the storage conditions for Azotobacter cells. It is important to note that our cryopreservation method was possibly not entirely suitable for preserving Azotobacter cells because of the temperature used. There is consistent evidence that the lower the temperature, the higher the efficiency of cryopreservation (Trummer et al. 1998, Zhao and Zhang 2005), and hence studying additional temperatures might reveal further advantages of this method for Azotobacter cells.
Although freeze-drying has been one of the most frequently used techniques for bacterial preservation, it causes undesirable side effects, including bacterial membrane damage, protein denaturation, water crystallization and decreased viability of many cell types (Giulio et al. 2005). As a consequence, freeze-drying is carried out using protective agents to prevent or reduce these adverse effects, since cells suspended in water or saline solution do not generally survive (Diniz-Mendes et al. 1999). Previous reports showed BSR values of 68.8% for A. chroococcum and 72.2% for A. vinelandii when they were stored at 37°C for 15 days using 5% skim milk + 0.1% actocol (Sakane and Kuroshima 1997). These same authors compared both simulated and natural survival rates of 60 freeze-dried bacteria and found that storage of vials at 37°C for 15 days is useful to simulate the die-off caused by storage at 4°C for about 20 years. Under these conditions, C26 and C27 survived for 30 days regardless of the lyoprotectant used, but we observed some differences associated with the strains and the protective agent (Table 2). These differences indicate that certain additives are more effective than others in protecting Azotobacter. Interestingly, the results of our study showed greater bacterial survival than previous reports on Azotobacter preservation. Antheunisse et al. (1981) studied Azotobacter survival after drying in dextran and found that most strains did not survive after 48 months. Moreover, Kupletskaya and Netrusov (2011) demonstrated that, under freeze-drying using 1% gelatin - 10% sucrose and skim milk - 7% glucose, the A. chroococcum strains had BSR values between 43-87%. Although SM and SM/Glu maintained a high number of viable Azotobacter cells, S/BSA maintained the highest Azotobacter viability. Cleland et al. (2004) showed that using S/BSA for freeze-drying also resulted in high bacterial survival when it was used to preserve Silicibacter, Psychromonas, Staphylococcus and Neisseria.
Under some circumstances refrigeration and freezing may be impracticable (Cody et al. 2008); hence, the ability to maintain microbial cultures at room temperature is paramount. Earlier studies showed that Escherichia coli and Bacillus subtilis are well preserved using dried acacia gum and pullulan as carriers at room temperature (Krumnow et al. 2009). Similarly, Bashan and Gonzalez (1999) showed, under these conditions, the survival of two PGPB strains in dry alginate. We evaluated four dried polymers for the preservation of Azotobacter at room temperature. At room temperature, viability was maintained for 60 days with a decrease in the BSR of about 20% in most cases. Earlier reports showed that Azotobacter could be stored for more than twenty years at room temperature in a dry carrier (Moreno et al. 1986, Vela 1974), which supports our findings. Similarly, prior reports showed that gel entrapment with dehydration has potential for storage (Cassidy et al. 1997).
We observed that freeze-drying and immobilization in dry polymers were useful techniques to preserve Azotobacter viability (Table 2, Table 3). Muñoz-Rojas et al. (2006) reported that cells of Pseudomonas putida KT2440 in late stationary phase have more rigid cell membranes and survive freeze-drying better than those in exponential phase; the authors elucidated that a greater proportion of C17 cyclopropane fatty acid mediates this resistance. A. vinelandii also exhibits the same pattern, and the proportion of C17:Δ fatty acid is larger in old cells (Su et al. 1979). In addition, the formation of cysts confers high resistance to deleterious physical conditions such as desiccation (Sadoff 1975). It is therefore likely that particular physiological properties of Azotobacter could influence its survival.
An essential feature required of a preservation method is preservation of the biological activity and genetic stability of the bacterial cultures (Safronova and Novikova 1996). Therefore, we analyzed the nitrogen fixation activity of Azotobacter cells subjected or not to preservation, as this bacterial activity requires complex transcriptional and post-transcriptional regulation systems (Halbleib and Ludden 2000). Hamilton et al. (2011) showed that in A. vinelandii about 30% of genes can be differentially expressed under diazotrophic growth. Even though both strains fixed nitrogen after preservation using the three methods and the different protective agents, the two strains exhibited distinctive responses to the preservation methods (Figure 1). In general, the methods that best retained the stability of the bacteria were freeze-drying followed by immobilization in polymers. Freeze-drying did not affect the capacity of C27 to fix nitrogen, regardless of the lyoprotectant. Furthermore, we observed that strain C26 showed increased nitrogenase activity after freeze-drying. Similar results were reported for Bradyrhizobium after freeze-drying with SGA (Safronova and Novikova 1996).
Conclusion
In this study, we evaluated three methods for the preservation of Azotobacter cells. Our findings show that the efficiency of the methods depends on the Azotobacter species and the protective agents used. We also observed that preservation using freeze-drying and immobilization in dry polymers are the most suitable methods for maintaining Azotobacter viability. Surprisingly, although cryopreservation is considered one of the best techniques for bacterial preservation, the cryopreserved cells of C26 exhibited a great loss of viability, suggesting either that this method is not useful for all Azotobacter strains or that the temperature used is not suitable for their preservation. Concerning bacterial activity, the freeze-dried cells maintained their ability to fix nitrogen. Conversely, the immobilized and cryopreserved cells were affected by storage, but the extent of the effect was strain-dependent. Overall, we found that the best technique for storing Azotobacter cells is freeze-drying accompanied with S/BSA.
"Biology",
"Environmental Science"
] |
Gene Selection and Classification in Microarray Datasets using a Hybrid Approach of PCC-BPSO/GA with Multi Classifiers
In this study, a three-phase hybrid approach is proposed for the selection and classification of high-dimensional microarray data. The method uses Pearson's Correlation Coefficient (PCC) in combination with Binary Particle Swarm Optimization (BPSO) or a Genetic Algorithm (GA) along with various classifiers, thereby forming a PCC-BPSO/GA-multi classifiers approach. Five different classifiers are employed in the final stage of classification. The PCC filter showed a remarkable improvement in classification accuracy when it was combined with BPSO or GA, and this positive impact varied across datasets depending on the final applied classifier. The performance of the various combinations of the hybrid technique was compared in terms of accuracy and number of selected genes. In addition to running faster than GA, BPSO showed better performance than GA when combined with PCC feature selection.
Introduction
Advances in microarray technology and the need to analyze gene expression have stimulated an active line of research in bioinformatics, biotechnology, cancer informatics and similar fields (Bolón-Canedo et al., 2014). Microarray data hold information about how genes are expressed. By analyzing these data, one can identify the altered genes, thereby facilitating the diagnosis and classification of genetic-related diseases. Consequently, biologists can perform cost-effective and efficient studies of the altered genes when only a small number of selected genes is targeted (Cosma et al., 2017). Prediction and classification of cancer types is a great challenge in the medical sector.
Gene expression profiles play a vital role in this regard. However, because of the small number of samples compared with the large number of genes, many computational methods fail to identify a small subset of important genes in microarray data, which ultimately increases the challenge of microarray analysis (Singh and Sivabalakrishnan, 2015). Furthermore, microarray data usually contain redundant and irrelevant features (genes). These features can significantly increase the computational burden (Wang, 2012). Redundant features do not contribute to modeling a better predictor because the information they provide is essentially already presented by other feature(s) (Song et al., 2013).
Redundant features negatively affect the performance of a model; hence, in order to achieve better performance, it is desirable to perform feature selection. Feature selection, whose purpose is to find a subset of discriminative/altered features, is therefore essential and is widely recognized as one of the centrally important topics in biomedicine, bioinformatics and data mining (Conilione and Wang, 2005). Three main techniques are used in feature selection: filter-based, wrapper-based and hybrid-based methods (Bolón-Canedo et al., 2013; Hira and Gillies, 2015; Singh and Sivabalakrishnan, 2015). These methods are categorized based on how they use the learning algorithm. The filter selection method chooses variables regardless of the model used and works by suppressing the least interesting variables. The non-suppressed variables then become part of a regression or classification model used for the classification or prediction of data (Hira and Gillies, 2015). As filter techniques are not applied to build predictors (Lazar et al., 2012), the classifier accuracy becomes lower if the results of these filters are directly given to the learning algorithm (Hira and Gillies, 2015). With respect to the assumed data distribution, filters are divided into parametric and non-parametric methods (Hameed et al., 2018).
Parametric filters assume an equal distribution of samples across the different classes; examples include ANOVA, chi-squared and Bayesian filters (Saeys et al., 2007). However, this assumption cannot be guaranteed in most datasets. Therefore, the use of non-parametric methods may yield better results when there is uncertainty regarding the dataset distribution. Examples of non-parametric filters are Relief-F, information gain, the correlation coefficient (Pearson) and gain ratio. The Pearson Correlation Coefficient (PCC) is utilized to determine the interrelation between the features and to investigate the correlation between features and classes (Hall, 1999). In wrapper-based feature selection, the evaluation is performed on subsets of variables, through which possible interactions between the variables can be observed. This is achieved by using the classifier accuracy (Saeys et al., 2007). Wrappers choose the best subset of features that gives the highest accuracy to the model. The result of this selection usually consists of a smaller number of features with robust discriminative power (Xiong et al., 2001). In addition, wrappers are classifier dependent and hence the same result is not guaranteed when another classifier is applied (Lazar et al., 2012; Santana and de Paula Canuto, 2014). Furthermore, the overall performance of wrappers decreases, and may lead to overfitting, if they are applied directly to the data without any pre-processing step (Bolón-Canedo et al., 2014). Hybrid approaches are built on the useful combination of filter and wrapper algorithms (Alba et al., 2007; Hameed et al., 2017; Lu et al., 2017); hence, the disadvantages of filters and wrappers can be overcome through a hybrid technique. Conventional optimization algorithms do not work efficiently for feature selection in large-scale problems (Chen et al., 2012).
Problems in high-dimensional data analysis have motivated researchers to search for possible solutions and propose viable algorithms. A novel Markov Blanket-Embedded Genetic Algorithm (MBEGA) was proposed for the gene selection problem (Zhu et al., 2007); the embedded Markov blanket-based memetic operators add or delete features (genes) from a Genetic Algorithm (GA) solution so as to quickly improve the solution and fine-tune the search. A modified Support Vector Machine (SVM) was also suggested to select the minimum possible number of genes (Ghaddar and Naoum-Sawaya, 2018). A multi-objective version of the bat algorithm for binary feature selection (Dashtban et al., 2018) and the Genetic Bee Colony (GBC) algorithm (Alshamlan et al., 2015) were successfully utilized on high-dimensional datasets. Moreover, a hybrid feature selection algorithm was proposed that combines Mutual Information Maximization (MIM) and the Adaptive Genetic Algorithm (AGA) (Lu et al., 2017); the reduced gene expression dataset presented higher classification accuracy compared with conventional feature selection algorithms. In order to improve classification accuracy, a further study utilized a hybrid form of filter and wrapper, consisting of information gain and a standard genetic algorithm (Maldonado et al., 2014). Besides, a binary version of the Black Hole Algorithm, called BBHA, was proposed for solving the feature selection problem in biological data; however, the tested classifiers belonged to the tree family and other kinds of classifiers were not assessed (Pashaei and Aydin, 2017). Along this line, different classifiers such as the Artificial Neural Network (ANN) (Aziz et al., 2017) and a fuzzy decision tree algorithm (Ludwig et al., 2018) have been assessed on microarray data.
The two evolutionary algorithms PSO and GA are usually used in wrapper form (Alba et al., 2007; Chen et al., 2012). PSO is a memory-enabled algorithm compared with other algorithms; it requires few parameters to be adjusted, so it is simple and efficient (Chandra Sekhara Rao Annavarapu and Banka, 2016; Hameed et al., 2017). Kar et al. (2015) proposed a PSO-adaptive K-Nearest Neighbor (KNN) based gene selection method in which a heuristic was used to select the optimal values of K, while the classification accuracies were tested using the SVM algorithm. We have previously reported a hybrid method which combines three filters with geometric binary PSO and SVM for effective gene selection and classification in high-dimensional autism data (Hameed et al., 2017). Very recently, Jain et al. (2018) reported a two-phase hybrid model for cancer classification, integrating Correlation-based Feature Selection (CFS) with improved Binary Particle Swarm Optimization (iBPSO) using Naive Bayes as the only classifier. In the current research work, a three-phase hybrid form of filter-wrappers-multi classifiers is proposed, aiming at effective selection and classification in high-dimensional microarray data. Pearson Correlation Coefficient (PCC) in combination with the binary form of PSO (BPSO) or a Genetic Algorithm (GA) is utilized in the feature selection process, while five different classifiers are employed in the final stage of classification. As such, the proposed PCC-BPSO-multi classifier and PCC-GA-multi classifier approaches are applied to eleven microarray datasets and their results are compared with each other.

The brain tumor dataset includes primitive neuroectodermal tumors (PNETs), 10 non-embryonal brain tumors and 4 normal human cerebella. The initial oligonucleotide microarrays contain 6817 genes. They were pre-processed with thresholding (Dettling and Bühlmann, 2002); hence, 5597 genes remain for the complete dataset, which has five different sample classes. The Leukemia cancer dataset was generated from a gene expression study of two types of acute leukemia: Acute Myeloid Leukemia (AML) and Acute Lymphoblastic Leukemia (ALL). The levels of gene expression were measured using Affymetrix high-density oligonucleotide arrays consisting of 6817 genes, although this was reduced to 3051 genes and further analyzed by Golub et al. (1999). The dataset consists of 25 cases of AML and 47 cases of ALL (38 B-cell ALL and 9 T-cell ALL), and was further pre-processed by Dudoit et al. (2002). The Lymphoma microarray dataset was obtained from Dettling and Bühlmann (2002). It has 4026 genes and 62 samples. The samples come mainly from 3 different adult lymphoid malignancies: 42 samples represent diffuse large B-cell lymphoma (DLBCL), 9 follicular lymphoma (FL) and 11 chronic lymphocytic leukemia (CLL). The colon cancer microarray dataset was originally analyzed by Alon et al. (1999). The original authors of the dataset processed the raw data from the Affymetrix oligonucleotide arrays. The dataset consists of normal and tumor tissue samples; the total number of samples is 62 and the total number of genes after the pre-processing given by the previous authors is 2000. The prostate cancer dataset consists of 102 patterns of gene expression, where 50 of the samples are normal prostate specimens and the other 52 are tumors. This dataset is based on oligonucleotide microarrays and consists of approximately 12600 genes.
After pre-processing, the number of genes remaining in the dataset is 6033 (Díaz-Uriarte and De Andres, 2006). The Small Round Blue-Cell Tumor (SRBCT) microarray dataset has four different classes and originally had 6567 genes and 63 samples, of which 23 samples are from EWS, 20 from RMS, 12 from NB and 8 from NHL. After pre-processing, the genes are reduced to 2308. This dataset was obtained from Díaz-Uriarte and De Andres (2006). The rest of the datasets (Breast, CNS, Lung and MLL) were obtained from Zhu et al. (2007). The main characteristics of the datasets are given in Table 1.
Pearson Correlation Coefficient (PCC)
The Pearson correlation coefficient, also known as r, R or Pearson's r, measures the strength and direction of the linear dependency (correlation) between two features. It is defined as the covariance of the variables divided by the product of their standard deviations (Benesty et al., 2009). PCC requires all features to be of the same type; hence, a discretization pre-processing step is required (Hall, 1999; Huertas and Juárez-Ramírez, 2014). It was originally developed by Karl Pearson based on an idea of Francis Galton, who introduced it in 1888 (Stigler, 1989).
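For illustration, a PCC-based ranking of genes against a class label can be sketched in R as follows; the expression matrix and labels are simulated, not a real microarray dataset, and the choice of 100 genes merely mirrors the setting used later in this work.

```r
set.seed(42)
X <- matrix(rnorm(60 * 500), nrow = 60, ncol = 500)   # 60 samples x 500 genes (simulated)
y <- rep(c(0, 1), each = 30)                           # binary class labels

# |Pearson r| between each gene and the class label
pcc <- abs(apply(X, 2, function(gene) cor(gene, y)))

top100 <- order(pcc, decreasing = TRUE)[1:100]         # indices of the top-ranked genes
```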
Particle Swarm Optimization (PSO)
Particle Swarm Optimization (PSO) is a stochastic population-based optimization technique. It was first suggested by Kennedy and Eberhart (1997) and took its inspiration from the social behaviour of bird flocking and fish schooling. The PSO algorithm is implemented through three simple steps: generating the positions and velocities of the particles, updating their velocities and then updating their positions.
In PSO, individual particles move in the search space and communicate with each other over iterations in order to search for optimal solutions (Tran et al., 2014). If a D-dimensional search space is assumed, the ith swarm particle has a D-dimensional position vector Xi = [x_i1, x_i2, ..., x_iD]. Likewise, the velocity of the ith particle is denoted by Vi = [v_i1, v_i2, ..., v_iD]. The best position visited by the particle, i.e. the one that produced its best fitness value, is PBi = [pb_i1, pb_i2, ..., pb_iD], while the best position explored by the swarm so far is GB = [gb_1, gb_2, ..., gb_D]. The velocity of each particle is updated by the following equation (Equation 1):

v_id = w * v_id + c1 * rand1 * (pb_id − x_id) + c2 * rand2 * (gb_d − x_id),

where d = 1, 2, ..., D, c1 is the cognitive learning factor, c2 is the social learning factor, and c1 and c2 are positive constants with values ranging from 0 to 4. The inertia weight w in Equation 1 acts to gradually reduce the particle velocities and hence to control the swarm. The value of w is usually located between 0.4 and 0.9, whereas the random variables rand1 and rand2 are uniformly distributed between 0 and 1 (Tran et al., 2014). The particle velocities are bounded within [v_min, v_max]; the velocity vector is held within these bounds to avoid excessively sharp movements of the particles in the search space. The particle positions are updated by (Equation 2):

x_id = x_id + v_id,

where d = 1, 2, ..., D, i = 1, 2, ..., N and N is the size of the swarm.

A modified version of the standard PSO, known as binary PSO (BPSO), was also introduced by Kennedy and Eberhart (1997) in order to handle discrete variables. When BPSO is applied for feature selection, a feature subset is represented by a string vector of n binary bits Xi = (x_1, x_2, ..., x_n) comprising '0's and '1's. If x_id is '0', the dth feature is not selected in this subset, whereas an x_id of '1' means the feature is chosen; in this way, each binary string vector Xi defines a particle position in BPSO. When BPSO is utilized for gene selection, the genes are therefore represented by a binary vector: a selected gene is denoted by 1, while a non-selected gene is encoded by 0. For instance, a particle with seven features encoded as '0100010' implies that the second and sixth features are selected. Initially, the length of each particle is therefore the same as the number of genes in the dataset. Moreover, in the traditional BPSO the dimensions of each particle are updated through a sigmoid transfer function of the velocity, S(v_id) = 1 / (1 + exp(−v_id)): the bit x_id is set to 1 if a uniform random number is smaller than S(v_id), and to 0 otherwise [21, 45]. The fitness function in BPSO is employed as an evaluator to choose the best feature subsets. The particles giving the best fitness values are recorded to maintain a better solution in a given population; consequently, the best subset of genes, which provides better accuracy, can be recalled. This process is applied in 10-fold cross-validation, such that the whole training set is used in the determination of the best genes. The inclusion of each gene in the best set is based on the number of times that gene is selected over the folds. Here, the maximum repeatability number is set to 10, so only a few genes with high accuracy are likely to be imported into the selected set of genes.
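The following R sketch illustrates a single BPSO velocity and position update for one particle; the parameter values are illustrative, and the classifier-based fitness evaluation that drives the personal and global bests is omitted.

```r
n_genes <- 10
w <- 0.7; c1 <- 2; c2 <- 2; v_max <- 4                 # illustrative settings

x  <- rbinom(n_genes, 1, 0.5)     # current binary position (1 = gene selected)
v  <- runif(n_genes, -1, 1)       # current velocity
pb <- x                            # personal best position (placeholder)
gb <- rbinom(n_genes, 1, 0.5)      # global best position (placeholder)

# Velocity update (Equation 1), clamped to [-v_max, v_max]
v <- w * v + c1 * runif(n_genes) * (pb - x) + c2 * runif(n_genes) * (gb - x)
v <- pmin(pmax(v, -v_max), v_max)

# Binary position update via the sigmoid transfer function
s <- 1 / (1 + exp(-v))
x <- as.integer(runif(n_genes) < s)
```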
Genetic Algorithm
Genetic Algorithm (GA) is a metaheuristic inspired by the process of natural selection and belongs to the larger class of Evolutionary Algorithms (EA). It first generates a random initial population; the individual chromosomes are then evaluated by a fitness function. A detailed description of GA can be found in Goldberg and Holland (1988). In this technique, the GA operators of selection, crossover and mutation are used by the individuals to search for the best solutions. From the current population, the chromosomes with high fitness values are chosen by the selection operator. The crossover operator is applied to combine two chromosomes, thereby generating two new chromosomes known as offspring. The mutation operator modifies the value of one or more genes in a chromosome from its initial state. This process is repeated until a satisfactory fitness is obtained or the last generation is reached. During the evaluation step, a fitness function is utilized to estimate the quality of each chromosome. A binary coding system is used to represent the chromosomes, and each chromosome bit denotes a gene mask: a bit value of '1' implies that the gene is chosen, while '0' indicates that the gene is discarded. In this way, the genes with value '1' are selected and combined into a subset of candidate genes. In this work, the fitness of each chromosome (gene subset) is evaluated by the classification accuracy of SVM; the 10-CV classification accuracy is computed with the gene subset on the training samples. The higher the 10-CV classification accuracy, the better the gene subset; ultimately, the gene subset with the highest 10-CV classification accuracy is considered the optimal gene subset.
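A minimal R sketch of the binary crossover and mutation operators on gene-mask chromosomes is given below; selection and the SVM-based fitness evaluation are omitted, and all values are illustrative.

```r
# One-point crossover of two binary chromosomes
crossover <- function(p1, p2) {
  cut <- sample(seq_len(length(p1) - 1), 1)            # random cut point
  list(c(p1[1:cut], p2[(cut + 1):length(p2)]),
       c(p2[1:cut], p1[(cut + 1):length(p1)]))
}

# Bit-flip mutation with a small per-gene probability
mutate <- function(chrom, rate = 0.01) {
  flip <- runif(length(chrom)) < rate
  chrom[flip] <- 1 - chrom[flip]
  chrom
}

parent1 <- rbinom(20, 1, 0.5)
parent2 <- rbinom(20, 1, 0.5)
offspring <- lapply(crossover(parent1, parent2), mutate)
```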
Classifiers
In this study, a group of well-known classifiers is applied. Various classifiers are used because no single algorithm works perfectly for all datasets, and not all algorithms behave in the same way on a given dataset. The applied classifiers are Bayes Net (BN), K-Nearest Neighbor (KNN), Naïve Bayes (NB), Random Forest (RF) and Support Vector Machine (SVM). The accuracy of all classifiers is measured using 10-fold cross-validation, to make sure that every part of each dataset participates in both the training and the testing process. Figure 1 shows the complete methodology that was carried out to implement the current work, while a detailed description of the experimental procedures is given below:
Experimental Design
• In the first step of the analysis, the datasets were filtered using the Pearson Correlation Coefficient (PCC) method in 10 runs, to ensure that the whole dataset passed through this phase and that the reduction result is accurate enough at this stage. Different thresholds were tested for the number of filtered genes: manually, by setting the number of genes to 100 and 200 alternatively, and automatically by the method itself based on the most attributed genes. A selection of 100 genes was retained, as the accuracy and performance of the classifiers were better than with 200 selected genes
• After the data filtration, the reduced datasets were tested against the applied classifiers, in order to compare their performance with that obtained before filtration
• The reduced/filtered datasets were then further purified by another step of feature selection. BPSO and GA were used comparably as hybrid methods with different classifiers, in which the fitness function was derived from the classification algorithms. This was performed with 10-fold cross-validation in order to confirm that the whole dataset is used in the training and testing phases. After this step, the further reduced datasets were tested by the same classifiers
• The classification results are compared with each other as well as with the results of the previous step. It is fair to mention that the same fitness function was used for each of the BPSO and GA algorithms
Results and Discussion
In the first stage of the analysis, the accuracy of the classifiers applied to the original datasets was evaluated. For each classifier, 10-fold cross-validation was applied on the training and testing partitions. Table 2 illustrates the results obtained in this stage. It can be seen that the Support Vector Machine presented the highest classification accuracy among all classifiers.
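The kind of cross-validated accuracy reported here can be sketched in R as follows, using a KNN classifier on the simulated X and y from the PCC sketch above; the class package is assumed, and this illustrates the evaluation protocol rather than the exact implementation used in the study. The same procedure applies to the filtered and wrapper-reduced datasets.

```r
library(class)   # for knn()

set.seed(7)
folds <- sample(rep(1:10, length.out = nrow(X)))   # random 10-fold assignment

acc <- sapply(1:10, function(k) {
  test <- folds == k
  pred <- knn(train = X[!test, ], test = X[test, ],
              cl = factor(y[!test]), k = 3)
  mean(as.character(pred) == as.character(y[test]))
})
mean(acc)   # average accuracy over the 10 folds
```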
The features of the datasets were ranked using the PCC filter feature selection method with 10-fold cross-validation, so the features are ordered based on the ranking results. To avoid overfitting in the next feature selection step (the wrapper), possibly due to the low number of samples, the first 100 most important attributes were selected. Subsequently, these features were used by the five classifiers. The results of the classifiers' performance are tabulated in Table 3. Results in bold indicate the best-performing classifier for each specific dataset, the methods highlighted in grey show the best approach for each dataset and the dashed cells indicate that the method is not applicable. The results show that the classifiers generally achieved better accuracy on the filtered datasets than when applied directly to the original datasets, although there are some cases, with a few classifiers, in which the accuracy on the original dataset is better. It was noticed that the Bayes Net classifier did not work on some of the original datasets, whereas it showed no problem on the filtered datasets; this is because those original datasets have properties that Bayes Net is unable to handle. This highlights one of the differences between our proposed method and others: in other works, only one classifier is applied, while in the current work multiple classifiers are utilized to show the quality of each of them, following the rule that not all classifiers are best for the same dataset and no single classifier is best for all datasets. Moreover, in our work, 11 different high-dimensional datasets are tested against the method, which shows the applicability of the proposed methods and again confirms their viability. In the next step, wrapper feature selection was applied to all eleven datasets. It was applied to the reduced datasets with the 100 attributes selected by the PCC filter method, which makes it a hybrid method (BPSO-Classifier and GA-Classifier). In this step, all the datasets are further reduced by BPSO-Classifier and GA-Classifier, and this is repeated for all classifiers because the wrapper is classifier dependent: it is not a good idea to apply a classifier to a reduced dataset whose features were selected using another classifier. We therefore took this into account and performed the feature selection using each classifier separately.
After feature selection by BPSO-Classifier in this phase, the datasets are further reduced to the selected genes. Table 4 shows the improved performance of the hybrid feature-selection method (BPSO-Classifier) on the reduced high-dimensional datasets. The accuracy of all classifiers improves compared with their accuracy on the filtered datasets in Table 3, indicating that feature selection by BPSO enhances not only the efficiency of the classification process but also its accuracy.
To assess GA as a feature selector, it was applied to the same filtered datasets with the same fitness function as used for BPSO. The datasets are reduced to the features (genes) selected by each GA-Classifier, again using every classifier separately, and the classifiers are then applied to the reduced datasets to see the effect of this phase. As Table 5 shows, the classifiers' accuracy improves compared with that obtained on the filtered datasets.
As illustrated in Table 6, BPSO was generally better than GA in terms of classifier accuracy after the selection process, in agreement with previous reports that PSO can outperform GA for feature selection (Hameed et al., 2017; Hassan et al., 2005). Bold classification accuracies indicate the better performance for the same classifier and dataset under different selection methods, and the grey-highlighted method marks the best selection approach.
Furthermore, the number of genes selected by each method was compared; note that in this study more attention is given to achieving high accuracy than to selecting the fewest genes. The numbers of selected genes are tabulated in Table 7, which shows that BPSO generally selected fewer genes than GA. Moreover, BPSO ran faster than GA. The final datasets generated by BPSO and GA are illustrated as scatter plots of two representative genes of the Leukemia dataset in Fig. 2 and 3, respectively. For further demonstration, Andrews plots of all genes selected by BPSO and GA are shown in Fig. 4 and 5 for the Breast dataset, the worst-performing dataset, to show the quality of the applied methods even in the worst case. The scatter plots for two representative genes of the final Breast dataset selected by BPSO and GA are given in Fig. 6 and 7, respectively. In terms of accuracy and efficiency, the performance of the proposed method was found to be better than that of other methods reported in the literature (Dash, 2018; Gonzalez-Navarro and Belanche-Muñoz, 2014).
Conclusion
High-dimensional datasets such as gene expression datasets are characterized by a large number of genes (features) and few samples, and therefore require special and careful analysis. Bio-inspired and evolutionary algorithms such as BPSO and GA are widely used in machine learning and data mining in various forms. In this study, these two methods were successfully applied in a hybrid wrapper form after filter feature selection. The proposed method is a three-phase hybrid of filter, wrappers, and multiple classifiers, in which the Pearson correlation coefficient (PCC) combined with the binary form of PSO (BPSO) or the Genetic Algorithm (GA) performs the feature selection, while five different classifiers are employed in the final classification stage. Filter feature selection was found to have a remarkable positive impact on classification accuracy, and this impact improved further when the filtered datasets were reduced by BPSO or GA with the different classifiers. The two optimizers were then compared in terms of accuracy and number of selected genes: in addition to running faster, BPSO performed better than GA.
Ethics
There are no ethical issues that may arise after the publication of this manuscript. | 6,015.8 | 2018-06-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Keywords-To-Text Synthesis Using Recurrent Neural Network
This paper concerns an application of Recurrent Neural Networks to text synthesis at the word level, with the help of keywords. First, a Parts Of Speech tagging library is employed to extract verbs and nouns from the texts used in our work, a part of which are then considered, after automatic eliminations, as the aforementioned keywords. Our ultimate aim is to train a Recurrent Neural Network to map the keyword sequence of a text to the entire text. Successive reformulations of the keyword and full-text word sequences are performed, so that they can serve as the input and target of the network as efficiently as possible. The predicted texts are understandable enough, and their quality depends on the problem difficulty, determined by the percentage of full-text words that are considered as keywords (ranging from 1/3 to 1/2), and on the training memory cost, mainly affected by the network architecture.
Introduction
Keywords-to-text synthesis belongs to the general field of Natural Language Processing (NLP). NLP concerns the understanding of a human language by the computer and, conversely, the ability of the computer to synthesize text or speech. Examples of applications include speech recognition, machine translation, text-to-speech synthesis, Parts Of Speech (POS) tagging and text summarization [1]. NLP research generally started in the 1950s, but machine learning techniques for NLP were first employed in the 1980s, when the tasks began to be solved by statistical inference instead of the earlier, less flexible handwritten rules. Deep learning has been involved only very recently [1,2]. Most approaches are supervised, but semi-supervised and unsupervised approaches have recently been investigated as well. In this paper the supervised option is preferred, because it requires less data to achieve the desired performance.
In this work the problem of keywords-to-text synthesis is addressed. Specifically, our main goal is to provide a tool which, taking a keyword sequence as input, is able to produce a full text containing the keywords, or synonyms of them, in the same order. Such synthesis facilitates text composition, since it requires less typing by the user. The Recurrent Neural Network (RNN) is chosen as the tool, since RNNs are particularly suitable for sequence modeling problems and have many applications related to NLP, knowledge representation, reasoning and question answering [1,3]. This paper follows a supervised learning approach, and a collection of full texts is used as the data set. The keywords are defined here as the fewest words from which a unique full text can be inferred. Since such words are almost always verbs and nouns, the verbs and nouns are extracted from the texts using an available POS tagging algorithm. Finally, for automatic synonym matching, a library containing a dictionary that includes synonyms for each of its words is used.
In this paper a relatively simple family (among those presented in [1,2]) of RNN is eventually employed. It is empirically confirmed that our text synthesis goal does not require gated RNNs [e.g. Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU)], which are designed for long-term dependencies; such dependencies are largely irrelevant here, because producing a word only requires the current and a few neighboring input keywords and previously predicted full-text words, as the results also show.
The remainder of the paper is organized as follows. In section 2 related work is cited and the uniqueness of ours is briefly described. In section 3 the preprocessing of our data set before training is presented, and in section 4 our experimental results from several training approaches are shown and discussed. Finally, section 5 summarizes our work and proposes possible next steps.
Previous Work and Motivation
In this section previous work is presented, related to the two coarse phases of our approach: keyword extraction from texts and the reproduction of the texts (text synthesis) from these keywords. Considerable work on keyword extraction and text summarization can be found in the bibliography, e.g. in [4][5][6][7][8]. The criteria for defining/selecting keywords, and even the goal, differ among references, so the methodologies are not directly comparable with each other or with ours. The work of [4] is somewhat relevant to ours, because in that paper the keywords are taken to be the nouns (as identified by a POS tagging algorithm) that imply many other nouns in the same sentence. However, the final aim of that work is not text synthesis from the extracted keywords, which constitute too small a fraction of the total number of words to serve such a goal. Thus, despite our inspiration by the cited keyword extraction methodology, it had to be modified in our work. In [5] the purpose of keyword extraction is the classification (annotation) of texts, so words are evaluated by the number of families in which they appear and their mean frequency in them. In [6] keywords are defined according to their frequency and extracted from abstracts and titles. Finally, in [7,8] keywords [7] or whole key-sentences [8] are extracted from texts according to several criteria, with the purpose of summarization; in particular, [7] introduces a supervised and an unsupervised approach (both graph-based) for the extraction.
References on text synthesis mainly concern text-to-speech, speech-to-text and text-to-image synthesis. Regarding keywords-to-text synthesis, some work exists on text generators based on keywords [9][10][11], where the meaning of the keywords, or a categorization of them, is necessary for their appropriate mapping to the full texts. Also, in those works the input of the text generator is either too complex for manual assignment [9,10] or domain-specific [11]. In this paper the mapping model is learned by training a neural network (specifically an RNN), so that manual specification of the meaning, category or part of speech of the input keywords is not necessary. In our methodology the input used for text synthesis is very simple: just a sequence of keywords, from the domain used in training, which may be any desired one. A user may therefore benefit from the provided tool even when assigning the input manually. It is also experimentally observed that our goal cannot be addressed by a standard encoder-decoder sequence-to-sequence architecture [1,2,12], which is too complex for our problem and not well suited to it (this is discussed more extensively in section 3.5). Other pieces of work on RNNs (some of which concern NLP) appear in [1].
Preprocessing Methodology
The first step of the experiment is the definition of the data set. It should consist of texts of similar content, for the sake of sufficient training and appropriate evaluation. A family of steak recipes was constructed according to [13] and used as the data set, although any other content could have been selected. There are 15 texts in total, with a combined length of 761 words; the first 9 are treated as the training set and the remaining 6 as the test set. Two examples are shown below; note their similarities and dissimilarities.
"Mix garlic and oil in a bowl.Pour marinade into a resealable plastic bag over the steaks.Later squeeze excess air and seal bag.Afterwards marinate beef in the refrigerator for 4 hours.Preheat grill for medium-high heat and bribe grate.Then remove meat from the marinade and shake off excess.Later discard marinade and afterwards cook steaks on preheated oven to desired degree for about 8 minutes." "In a bowl combine garlic, oil, sauce and sugar.Pour marinade into a resealable plastic bag over the beef.Later in the fridge marinate meat for 8 hours.Preheat grill for high heat and then bake steaks to desired degree for about 60 minutes."In the following, the dot and the comma are considered as separate words and all letters are treated as capitals.
In the rest of this section all preprocessing steps, also summarized in Figure 1, are described in detail, as executed for our recipes data set. As mentioned above, first the verbs and nouns of all texts are extracted with POS tagging [14]. Afterwards, synonyms are detected and unified using an appropriate library containing a dictionary [15], and then some of the remaining verbs/nouns are automatically selected as keywords. Next, according to the extracted keywords, the texts are (also automatically) separated into chunks. Finally, the data are reformulated into a form acceptable by the network, which is trained using the training set. Our trained model is evaluated mainly on the test set.
Detecting Verbs and Nouns with Parts Of Speech Tagging
In each of the employed texts, the nouns and verbs predicted by the POS tagging algorithm (already available in [14]), plus the adjectives "high" and "medium-high" and the numerical adjectives, are initially treated as keywords. Participles with suffix "-ing" or "-ed" and the indefinite article "a" are excluded from being keywords, because it has been observed that they do not play a key role in a sentence. Consequently, no stemming by unifying words with the same prefix has been performed.
Unifying Synonyms
Reducing the vocabulary used in the RNN not only may reduce the number of its parameters (weights), but also decreases the size of the training set required for satisfactory results. Furthermore, by reducing the percentage of full-text words that are considered as keywords, the mapping tool becomes smarter, since fewer words suffice for the full-text synthesis. Thus, only one word is used for every group of synonyms, both in the input and in the output/target. (The output is the prediction of the target.) The automatic identification of synonyms for training relies on a library [15], which consists of a language dictionary providing meanings, translations, synonyms and antonyms of words. Within the training scope, two words are initially defined as synonyms when at least one of them is proposed by the library among the (up to 5) synonyms of the other. However, this procedure leads to several wrong matches, so the user is then asked to confirm which of the suggested pairs of words (s)he wishes to be treated as synonyms (something subjective and context-dependent, which is not taken into account by the library). This manual verification is practical only for a small training text corpus. There are also pairs of synonyms that are not identified, but this is not handled manually for training. For each finally accepted group of synonyms, a representative is automatically selected to be used in training.
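As an illustration of the synonym-pairing rule (a word is proposed as one of up to five synonyms of the other, followed by manual confirmation), the following sketch uses NLTK's WordNet only as a stand-in for the dictionary library of [15]; the function names and the confirmation callback are hypothetical.

```python
# Requires the WordNet corpus, e.g. nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def proposed_synonyms(word, limit=5):
    """Return up to `limit` candidate synonyms proposed for `word`."""
    lemmas = []
    for synset in wn.synsets(word):
        for lemma in synset.lemma_names():
            lemma = lemma.lower().replace("_", " ")
            if lemma != word and lemma not in lemmas:
                lemmas.append(lemma)
    return lemmas[:limit]

def are_synonyms(w1, w2, confirm=lambda a, b: True):
    """Two words are (provisionally) synonyms when at least one proposes
    the other; the user then confirms or rejects the suggested pair."""
    proposed = w2 in proposed_synonyms(w1) or w1 in proposed_synonyms(w2)
    return proposed and confirm(w1, w2)
```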
Other Keyword Elimination Measures
The preprocessing steps of the current and the following sections are programmed mainly from scratch, i.e. without relying on an external package.
To further decrease the number of keywords relative to the sizes of the full texts, additional elimination measures are applied. In particular, it is assumed that when two or more verbs or nouns always coexist in the same sentence, only one of them is needed to imply the existence of the others (an example is visible in Figure 2), so only the first one is kept as a keyword. Also, when the presence of a verb/noun always implies the presence of another verb/noun, even without the reverse holding, and the pair of these two words appears at least 5 times (to avoid cases of random coexistence), then optionally the second word is not considered as a keyword (see also the term "degree of inclusion" in [4]); a sketch of this rule is given after this paragraph. After this last optional measure, the non-distinct keywords constitute 31.8% of the total number of non-distinct words of the full texts; if it is not applied, the keyword rate is 47.7%. Of course, the above implications depend on the data set. The remaining keywords are inserted into a single list (which will generate the RNN input during the data reformulation), and the extra keyword "." is introduced to separate adjacent recipes.
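A minimal sketch of the one-directional implication rule is given below (the always-coexisting case, where only the first keyword of the group is kept, would be handled analogously); the sentence representation, function name, and threshold handling are illustrative.

```python
from collections import defaultdict

def redundant_keywords(sentences, keywords, min_pairs=5):
    """Return keywords that can optionally be dropped because another keyword's
    presence always implies them (one-directional implication, >= min_pairs pairs)."""
    appears_in = defaultdict(set)          # keyword -> set of sentence ids
    for i, sent in enumerate(sentences):   # each sentence is a list of words
        for w in sent:
            if w in keywords:
                appears_in[w].add(i)
    droppable = set()
    for a in keywords:
        for b in keywords:
            if a == b or not appears_in[a]:
                continue
            # a's presence always implies b's presence, but not the reverse
            implies = appears_in[a] <= appears_in[b]
            reverse = appears_in[b] <= appears_in[a]
            if implies and not reverse and len(appears_in[a]) >= min_pairs:
                droppable.add(b)           # b is predictable from a, so drop it
    return droppable
```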
Segmentation of Texts
In the scope of preprocessing and training all words are replaced by their synonym group representative, and they appear in this way in the next figures.
Definition 1. A "chunk" is defined in this paper as a sequence of words (text segment) ending at a keyword and starting at the word after the previous keyword (or, if there is none, at the first word).
According to this definition, a chunk may be seen as a part of a full text which is generated by a single keyword.
Fig. 2. Segmentation of a part of the first recipe shown at the beginning of the section into chunks according to the extracted keywords (colored). The noun "air" is not considered a keyword, because it always coexists with the verb "squeeze" in the same sentence. The word "excess" is an adjective in this context, but it has been mistakenly recognized as a noun.
The full texts are divided into chunks; an example is depicted in Figure 2. Our goal is to teach the RNN to map each input keyword to its chunk (with the additional help of the neighboring keywords and the predicted context), so that the full texts are automatically produced by concatenating the learned chunks.
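A minimal chunking sketch following Definition 1 is shown below (the function name and the treatment of trailing words are illustrative):

```python
def split_into_chunks(words, keywords):
    """Split a text (list of words) into chunks, each ending at a keyword."""
    chunks, current = [], []
    for w in words:
        current.append(w)
        if w in keywords:
            chunks.append(current)
            current = []
    if current:                  # trailing words after the last keyword, if any
        chunks.append(current)
    return chunks

# e.g. split_into_chunks("mix garlic and oil in a bowl .".split(),
#                        {"mix", "oil", "bowl", "."})
# -> [['mix'], ['garlic', 'and', 'oil'], ['in', 'a', 'bowl'], ['.']]
```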
Data Reformulation
Another interesting phase is to reformulate the data so that they suit an RNN.
Fig. 3. Data reformulation corresponding to the first part of the first recipe shown at the beginning of the section. First, auxiliary words are added to both the target and the first input column, so that they have the same length (empty words are represented as "nan"). Then, the serial numbers of the positions of the data segments defined by the chunks are appended as the second input column, and finally two other input columns (3, 4) are created, giving information about the preceding and the upcoming keyword.
The keywords and the corresponding full texts have to serve as input and output, respectively, in this work. Sequence-to-sequence models with and without attention [1,12] were initially employed, but the result was fully unsatisfactory (worse predicted texts than the empty text, according to the measures of section 4.2). A major failing of such a model was that it predicted many words multiple times, even though these words were related to the respective chunks. This indicates that the sequence-to-sequence model is unsuitable for word-to-word mappings, possibly because of the intervention of the context variable between the input and output. Therefore, a more appropriate, tailored idea has been implemented to solve our problem. The model family used in this paper requires the input and output data sequences to have the same size. However, the keywords of each text are obviously fewer than all of its words, and the chunks do not have a fixed number of words. Treating a chunk as an indivisible entity would not be a solution, because then the fact that the same words appear in different chunks would not be taken into account; that is, the output needs to be predicted at the word level and not at the chunk level. These issues are overcome by reformulating the keyword and text sequences, according to the first of the following steps. The second step appends the previous and the next keyword to each keyword, and the third converts the words to vectors, so that they can be fed into the network. All steps are executed automatically, and the effect of the first two is visible in Figure 3 (a minimal sketch of the first and third steps is given after the list).

1. All training chunks are stored in groups according to their keyword, and empty words are inserted where needed so that all chunks of the same keyword have the same length and identical words sit at the same position of the text segment (so that their alignment with the input is more appropriate). The reformulated chunks then compose the extended full texts (with the empty words inserted), which are concatenated into a single sequence that will be called the "verbal target", since it is the verbal form of the network target sequence. In the RNN input the keywords are replicated to match their chunk length. In our application the texts are artificial, so it was possible to construct a training set containing all the words of the test set. Auxiliary ordinal keywords are also created, indicating the serial numbers of the positions within each chunk, so that the prediction of every word of a chunk at the correct position is facilitated. Optionally, these numbers are bonded with the keywords, forming new words [e.g. the pair (oil, 2) becomes oil2].

2. Once the extended keyword list(s) resulting from the previous step has/have been constructed, two further auxiliary lists of the same length are created for the network's input, giving the keyword of the previous and of the next chunk. This step is based on our observations about the dependence between the current chunk and the neighboring keywords; its benefit has also been confirmed by trials.

3. The words are converted to vectors, so that they can be used by the network. Two approaches are examined and compared [1,2] for the input and/or target words: (a) a one-hot conversion and (b) a word embedding conversion. (a) In the first one, the words are converted to one-hot vectors (i.e. vectors with all entries equal to 0 except one, which equals 1). This is applied to every column (i.e. list) separately; only the empty word is represented as the zero vector instead of a one-hot vector. (b) In the second approach, the words are represented as vectors of floats instead of one-hot vectors, as obtained by a word embedding (word-to-vector) algorithm [16]. In this way, not only can the dimension of the vectors be reduced compared with the one-hot conversion, but the distance between the representations of any pair of words is also determined by the degree of their contextual relevance.
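A minimal sketch of the chunk padding of step 1 (trailing empty words only, without the finer word alignment described above) and the one-hot conversion of step 3(a) might look as follows; the names are illustrative.

```python
import numpy as np

def pad_chunks_per_keyword(chunks_by_keyword, empty="nan"):
    """Pad all chunks of the same keyword with empty words so that they share
    the same length (a simplified version of step 1 of the reformulation)."""
    padded = {}
    for kw, chunks in chunks_by_keyword.items():
        max_len = max(len(c) for c in chunks)
        padded[kw] = [c + [empty] * (max_len - len(c)) for c in chunks]
    return padded

def one_hot(word, vocabulary, empty="nan"):
    """Step 3a: one-hot vector for `word`; the empty word maps to all zeros.
    `vocabulary` is the ordered list of non-empty words of the column."""
    vec = np.zeros(len(vocabulary))
    if word != empty:
        vec[vocabulary.index(word)] = 1.0
    return vec
```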
For the training of the word embedding model, the chunks of the training recipes have been used (since the training set contains all the words of the test set). Another option was to use a pre-trained word embedding model. This was examined with GloVe, which is considered the most preferred by NLP practitioners [16]; as expected, the text synthesis results were noticeably worse, since the word-to-vector training was not based on the domain context. In case the keywords of the first input columns are not bonded with the serial numbers, every input word is also an output word, so word embedding may be applied both to the whole input and to the output/target. With the above reformulation steps, the sequences of the RNN input x(t) and target y(t) are created. That is, y(t) is the vector representation of the (t + 1)-th word of the verbal target sequence (assuming that t starts from 0), and x(t) is the concatenation of the representations of the (t + 1)-th words of the input sequences. Although the argument t does not really stand for time in our case, it will be referred to as such, because the data are sequential. The examined RNN predictive models belong to the family ŷ(t) = f({x(t − i)}_{i ∈ I}, {y(t − j)}_{j ∈ J}), where I, J are finite subsets of {0, 1, …} and {1, 2, …}, respectively, and particular assumptions are made about the multidimensional function f [17,18].
Recurrent Neural Network Training and Results
After the preprocessing of the data so that they suit the network architecture, training is ready to start.
Training Details
The Mean Square Error is used as the cost function. It is proportional to the Sum of Square Errors, i.e.

SSE = Σ_{t ∈ T_train} ‖ŷ(t) − y(t)‖²,

where T_train denotes the set of "time points" of the training set. A 2nd-order iterative optimization algorithm (e.g. Levenberg-Marquardt, Broyden-Fletcher-Goldfarb-Shanno) is practical in our problem only in the case of sufficient word embedding, because of its usually extreme memory cost (resulting from the Hessian approximations and the large input and output dimensions). Otherwise, a 1st-order algorithm is employed.
Better results are achieved by initializing the model parameters to 0 than with random initialization. In the one-hot approach this is explained by the sparsity (large percentage of zeros) of the optimal model, which is expected because of the one-hot (or zero) encoding of the input and target.
Evaluation
As follows from the above, this work addresses a classification problem with a regression model, so the continuous output has to be converted to a class (a word of the target vocabulary in our case). In the one-hot approach the output is converted to a one-hot vector by setting the maximum value of the output vector to 1 if the output vector's sum exceeds 0.5 (which indicates that a non-empty word probably corresponds to that position), and to the zero vector otherwise. In the word embedding approach the output is converted to the closest vector representation of the target vocabulary. The modified output is then compared to the target for evaluation and text production.
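Both conversions can be sketched as follows, assuming the vocabulary and (for the embedding case) the matrix of word vectors are available; names are illustrative.

```python
import numpy as np

def decode_one_hot(output_vec, vocabulary, threshold=0.5, empty="nan"):
    """One-hot approach: predict a non-empty word only if the output mass
    exceeds the threshold; otherwise the empty word is predicted."""
    if output_vec.sum() <= threshold:
        return empty
    return vocabulary[int(np.argmax(output_vec))]

def decode_embedding(output_vec, embedding_matrix, vocabulary):
    """Embedding approach: nearest vocabulary vector in Euclidean distance."""
    dists = np.linalg.norm(embedding_matrix - output_vec, axis=1)
    return vocabulary[int(np.argmin(dists))]
```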
For vocabulary diversity in the predictions, each predicted word is randomly mapped to one of its synonyms or to itself. For the sake of evaluation, the pairs of synonyms that were not identified by the library have been manually defined as synonyms.
For the evaluation of the model's performance, the following measures are used for each of the training and test sets. Both aim at quantifying the quality/correctness of the predicted texts, which are compared with the ground-truth (i.e. actual) ones. However, these metrics do not account for the possibility of multiple correct phrasings of a sentence, so they may report slightly worse quality than the real one. A sketch of both measures is given after their definitions.
• Mean Word Error Rate (MWER): the widely known Word Error Rate (WER), common in NLP problems, is computed for each text (recipe) separately and the mean is taken as the evaluation measure. The WER equals the well-known Levenshtein distance between the ground-truth and the predicted text divided by the number of non-empty ground-truth words.
• Mistake Rate (MR): a simple, heuristic measure similar to WER, which, however, never rewards the prediction of the correct word at the wrong position, even if the non-empty words are predicted in the correct order. It is the number of wrong predictions (in terms of time) over all texts divided by the number of non-empty ground-truth words of all texts. A mistake counts as single when an empty word is predicted as non-empty or vice versa, and as double when a non-empty word is predicted as another non-empty word. As in the WER case, the empty predicted text corresponds to an error of 1.
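A sketch of the two measures, assuming word sequences with the empty word written as "nan" and (for MR) prediction and target sequences of equal length as produced by the reformulation, is given below; the implementation details are illustrative.

```python
def levenshtein(a, b):
    """Edit distance between two word sequences."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (wa != wb)))   # substitution
        prev = curr
    return prev[-1]

def word_error_rate(truth, prediction, empty="nan"):
    """WER of one text: Levenshtein distance over the number of non-empty
    ground-truth words (empty words are removed before comparison)."""
    truth = [w for w in truth if w != empty]
    prediction = [w for w in prediction if w != empty]
    return levenshtein(truth, prediction) / len(truth)

def mistake_rate(truth, prediction, empty="nan"):
    """MR: position-wise mistakes (double when a non-empty word is predicted
    as another non-empty word) over the number of non-empty truth words."""
    mistakes = 0
    for t, p in zip(truth, prediction):   # sequences assumed equally long
        if t == p:
            continue
        mistakes += 2 if (t != empty and p != empty) else 1
    n_truth = sum(1 for w in truth if w != empty)
    return mistakes / n_truth
```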
Selecting Recurrent Neural Network Mapping Model
With intuitive trial and error and after a search over several shallow and deep architectures (including LSTM and GRU [17]), the general conclusion is that one of the best models is the linear simple RNN with input delay 0 and output (direct) delays from 1 to 5, i.e. the shallow model

ŷ(t) = U x(t) + Σ_{j=1}^{5} V_j y(t − j) + b,   (3)

where U, V_j, b contain the weights. (The first 5 predictions are set equal to the target values.) This choice reflects the fact that a word of a full text depends on the current and neighboring keywords, as well as on a few previous full-text words. Using even one hidden layer proved completely unhelpful, since it leads to training and test errors comparable with those of the empty text; consequently, the best architecture is found very easily. As for shallow gated RNNs, LSTM has comparable but often slightly poorer performance, whereas GRU is even worse. LSTM might become the best choice if the average chunk length were larger (or, in other words, if the keyword rate were smaller). In Table 1 the results of training the selected simple RNN (3) [18] are summarized and compared with the best LSTM [17] (a shallow, stateful network with dropout and recurrent dropout equal to 0.01 and no feedforward activation), according to all the approaches discussed above. In each case, one of the best numbers of iterations has been chosen. The optimization time of these runs ranges from a few seconds to a few minutes, and depends on the RNN type and dimensionality, the optimizer and the chosen number of iterations. It is worth mentioning that there are 66 distinct non-empty full-text words after the unification of synonyms, out of the initial 73 distinct non-empty full-text words. Word embedding, especially on the output/target, reduces the number of weights significantly: for example, all shallow simple RNNs have O(n² + nm) parameters, where m and n are the input and output/target dimensions, respectively. This would become particularly important if more (and more complex) texts were used, where the need to save memory and computational time would arise. A general remark is that the results worsen as the problem requirements (in terms of the keyword rate resulting from section 3.3 and the number of weights) become stricter. It can be subjectively inferred that when the keyword rate is 31.8% (the most useful and challenging scenario), the best choice is to apply embedding only to the output and not to the input, so that the first two input columns can be bonded (6th row of the table).
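Under our reading of Equation (3), inference with the selected model can be sketched as follows; the matrix shapes and the feedback of the model's own past predictions as the delayed outputs are assumptions consistent with the description above, not the authors' code.

```python
import numpy as np

def predict_sequence(x, y_init, U, V, b):
    """Run the shallow linear RNN of (3):
        y_hat(t) = U x(t) + sum_{j=1..5} V[j-1] y_hat(t - j) + b,
    feeding its own past predictions back as the delayed outputs.
    x: (T, m) input sequence; y_init: (5, n) first five target vectors,
    which the text fixes to the ground truth; U: (n, m); V: 5 x (n, n); b: (n,)."""
    T, n = x.shape[0], b.shape[0]
    y_hat = np.zeros((T, n))
    y_hat[:5] = y_init
    for t in range(5, T):
        y_hat[t] = U @ x[t] + sum(V[j] @ y_hat[t - j - 1] for j in range(5)) + b
    return y_hat
```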
Results and Discussion
Word embedding training with the skip-gram instead of Continuous-Bag-Of-Words algorithm and use of hierarchical softmax yield the best results here.
The results with a Mean WER of about 0.3 can be characterized as quite satisfactory, because the meaning of the text is almost understandable.For an illustrative example, the prediction ".Preheated oven bake meat to desired extent for about 30 minutes." of the text "Preheat oven and then bake beef to desired extent for about 30 minutes." has WER=0.29.
The variation in the keywords' order within the input sequences across recipes not only causes no significant problem in training, but also results in an aesthetically pleasing variety in the chunk order, which changes correspondingly.
Conclusion and Future Work
In this work a tailored and fairly simple algorithm has been developed which, given a sequence of known keywords as input, is able to produce a full text containing these keywords, or synonyms of them, in the same order. This automatic mapping relies on an RNN model trained on a set of texts, from which the keywords are automatically extracted with the help of a POS tagging algorithm and a dictionary library.
The results, which depend on the problem difficulty, are rather satisfactory, because the meaning of the predicted text is almost clear. So far the proposed methodology appears applicable to any domain/vocabulary, provided that the texts used are quite similar (possibly after deliberate editing), especially if they are few. A basic limitation is that when the test set contains words that are neither included in the training set nor synonymous with a training word, the word embedding can only rely on a pre-trained word-to-vector model, and the chunk length of a new keyword must then be taken to be that of a neighboring training keyword. The new words also need to belong to the domain of the training texts for effective training.
The text synthesis work done for steak recipes will be repeated on a data set from another domain, containing many more distinct and non-distinct words. By increasing the amount and diversity of the texts, it will be interesting to examine the word embedding size necessary to obtain fairly good results. Furthermore, we hope that our program will prove useful for everyday applications, such as news production.
Table 1. Evaluation of the best RNNs with test errors. IE/OE = Input/Output Embedding (+size), ICB = Input Columns 1 & 2 bonded, SRNN = Simple RNN with direct delays of orders from 1 to 5. The percentages in the headings denote the keyword rate. | 6,225.8 | 2018-05-25T00:00:00.000 | [
"Computer Science"
] |
Achieving Solar‐Thermal‐Electro Integration Evaporator Nine‐Grid Array with Asymmetric Strategy for Simultaneous Harvesting Clean Water and Electricity
Abstract Water evaporation is a ubiquitous and spontaneous phase transition process. The utilization of solar‐driven interface water evaporation that simultaneously obtains clean water and power generation can effectively alleviate people's concerns about fresh water and energy shortages. However, it remains a great challenge to efficiently integrate the required functions into the same device to reduce the complexity of the system and alleviate its dependence on solar energy to achieve full‐time operation. In this work, a multifunctional device based on reduced graphene oxide (RGO)/Mn3O4/Al2O3 composite nanomaterials is realized by an asymmetric strategy for effective solar‐thermal‐electro integration that can induce power generation by water evaporation in the presence/absence of light. Under one sun irradiation, the solar‐driven evaporation rate and output voltage are 1.74 kg m−2 h−1 and 0.778 V, respectively. More strikingly, the nine‐grid evaporation/power generation array integrated with multiple devices in series has the advantages of small volume, large evaporation area, and high power generation, and can light up light‐emitting diodes (LEDs), providing the possibility for large‐scale production and application. Based on the high photothermal conversion efficiency and power production capacity of the RGO/Mn3O4/Al2O3 composite evaporation/generator, it will be a promising energy conversion device for future sustainable energy development and applications.
Introduction
[3][4] Solar water evaporation technology based on interfacial heating can effectively concentrate thermal energy at the air-water interface. By heating a thin layer of water at the interface, clean water can be obtained through efficient photothermal conversion, realizing the utilization of renewable solar energy and abundant seawater. [7][8] Nevertheless, integrated desalination devices still suffered from low energy conversion efficiency and unsatisfactory freshwater productivity in the initial research phase. To solve these problems, photothermal utilization is a route to ensuring an adequate freshwater supply, and high energy conversion efficiency is achieved through suitable material selection and structural optimization. Moreover, clean water and electricity can be generated simultaneously, with a high energy conversion rate, by combining solar thermal evaporation systems with energy collection technology. [9] For example, Ho's team innovatively proposed the parallel collection of condensed water and frictional electricity during solar water evaporation: an Au nanoparticle hydrogel was prepared as the solar absorber, showing an evaporation rate of 1.356 kg m−2 h−1 under one sun irradiation, and a small device was designed to generate frictional electrical energy from condensed water flowing over a PTFE surface, with a maximum voltage of ≈3 V. [10] Zhou's group adopted a comprehensive energy utilization technology that uses solar energy for simultaneous desalination and extraction of electrical energy from evaporation-induced salinity gradients, and designed an integrated system based on carbon nanotube (CNT)-modified filter paper and a Nafion membrane; the system achieves an evaporation rate of 1.15 kg m−2 h−1 and a voltage output of ≈0.062 V under 1 kW m−2 light intensity. [11] Li's group simultaneously obtained water evaporation and power generation by converting food waste (FW) into a highly porous carbon-based photothermal material for low-cost solar desalination; FW combined with a thermoelectric module achieved a stable water evaporation rate of 1.26 kg m−2 h−1 and a continuous voltage of 0.095 V under one sun. [12] However, on the one hand, the problem of operation stagnating in the absence of solar illumination has not been fundamentally solved for most solar evaporation devices. On the other hand, most reported integrated systems tend to be assembled from independent working modules with complex and inflexible designs, which harms the efficiency of both solar evaporation and power generation. Recent theoretical and experimental studies have demonstrated that fluid flow in micro/nanochannels of surface-charged materials (including CNTs, carbon black, graphene, metal oxide nanowires, layered double hydroxides, and biological fibers) may induce unique electrokinetic charge transfer for power generation by forming an electric double layer (EDL) at the interface between the fluid and the channel wall. [13] Nevertheless, a prerequisite for achieving power generation is the existence of an ion gradient in the device. Unfortunately, the fluid in a solar interfacial evaporation system is uniformly distributed and transported on the surface of the light-absorbing layer. [14]
Therefore, the distribution of photothermal materials in the evaporation system should be regulated to promote the formation of an ion gradient distribution/transport in the nanochannels, which is a necessary means to achieve continuous and efficient power generation. In this way, simultaneous water evaporation and power generation can be carried out in a well-integrated manner without reducing the solar evaporation conversion efficiency.
In this study, we report a multifunctional device based on reduced graphene oxide (RGO)/Mn3O4/Al2O3 (RMA) composite nanomaterials for effective solar-thermal-electro integration through an asymmetric strategy. RGO has been successfully applied as a highly efficient solar photothermal material in solar thermal conversion because of its inherent light stability and low toxicity; however, RGO has poor compatibility as a light-absorbing material. [15,16] Mn3O4 has attracted much attention because of its low cost, environmental friendliness, high stability, and theoretical specific capacitance, but its poor conductivity limits its application. [17,18] The composite material RMA, composed of RGO, Mn3O4, and Al2O3 with high thermal conductivity, can form a synergistic effect that improves the conductivity, structural stability, and thermal conductivity of the composite, [19,20] providing the possibility of efficient simultaneous solar water evaporation and power generation. Solar evaporators composed of RMA composites uniformly loaded on air-laid paper and sodium alginate (SA) aerogels can therefore achieve an efficient evaporation rate of 1.88 kg m−2 h−1 with a solar photothermal conversion efficiency of up to 94.3%, and the devices are easy to prepare (room-temperature preparation), environmentally friendly, and low-cost. Based on the excellent photothermal performance of the RMA composites, a sandwich-film strategy was adopted to load RMA and GO in non-uniform layers on air-laid paper to form an RMA/GO/RMA layout, which increased the asymmetry of the film in different respects. As a result, the asymmetric RMA/GO@air-laid paper (A-RMA/GO@P-SA) evaporator/generator can generate power efficiently under ambient conditions and under solar illumination, and has the ability to work full-time without relying entirely on solar energy. Notably, the power generation capacity is markedly enhanced, while the synchronous solar evaporation rate remains very high, under solar irradiation. In addition, the assembly of integrated equipment is realized for the first time: the nine-grid evaporation/power generation array constructed from nine devices in series occupies little space, is flexible and portable, and can operate under illumination both in the laboratory and outdoors. Furthermore, this integration mode not only increases the evaporation area but also improves the power production capacity (Figure 1). The nine-grid evaporation/power generation array can output voltages up to 6.917 V and successfully lights an LED, which offers the possibility of large-scale seawater desalination and power generation.
Results and Discussion
The process of spontaneous reduction and assembly of the RGO/Mn3O4-based composite films at room temperature is shown in Figure 2a. MnCl2·4H2O was added to the GO suspension containing NH3·H2O, and Mn3O4 nanoparticles grew on the GO sheets. RGO/Mn3O4/Al2O3 composites were prepared by adding aluminum powder to this solution. The microstructure and morphology of the RGO/Mn3O4-based composites were characterized by scanning electron microscopy (SEM) (Figure 2b,d). Owing to the strong interaction between the surface functional groups of GO and the Mn3O4 nanoparticles, Mn3O4 nanoparticles with a block structure adsorbed tightly onto GO. [21] In Figure 2c,e, the ripples and wrinkles on the surface of RGO can be observed, demonstrating that GO was reduced to RGO, which provides more active sites; Mn3O4 nanoparticles were therefore induced to adhere to the surface of the RGO sheets. [18] Moreover, Al2O3 nanoparticles were also distributed on the RGO in the RMA composites. The consistent characteristics of the nanocomposites are attributed to the fact that graphene sheets can effectively inhibit the aggregation of metal oxides, resulting in a better dispersion of Mn3O4 nanoparticles on the surfaces. [22] In addition, elemental mapping of the selected area (Figure 2f-j) shows that C, Mn, O, and Al are distributed over the surface of RMA, and the elemental mapping of RM is shown in Figure S1 (Supporting Information). Figure S2 (Supporting Information) shows the X-ray energy-dispersive spectroscopy (EDS) spectrum, which further confirms the elemental composition of the synthesized composite. To investigate the structure and morphology of the samples, transmission electron microscopy (TEM) images of RM and RMA were examined. Figure S3a,b (Supporting Information) shows the close contact between the RGO sheets and the Mn3O4 nanoparticles, the uniform distribution of Mn3O4 on the RGO sheets, and Mn3O4 particle sizes in the range of 10-50 nm; the presence of Al2O3 is also demonstrated.
Subsequently, the air-laid paper was immersed in a uniformly dispersed RMA suspension; the uniform state of the suspension facilitated obtaining a uniform RMA composite film during the assembly process. Figure S4a (Supporting Information) shows an optical image of the blank air-laid paper, which is white. SEM images of the blank film and the RMA film are shown in Figure 2k,l. The blank film presents a paper-fibrous structure with a smooth surface. After loading the RMA composite material, the white blank film turned black, as shown in Figure S4b (Supporting Information). From the SEM image (Figure 2l), the surface of the RMA film is rougher, the paper fiber nanowires are covered with RMA composites, and the RMA nanosheets are tightly attached to the surface of the nanowires.
To ensure thermal management and water transport in the solar photothermal conversion system, 3D aerogels were prepared from SA and calcium carbonate to serve as the thermal insulation, support layer, and water channel of the system. SA is environmentally friendly, is an edible binder, and easily forms ionic cross-links with Ca2+ to give water-insoluble porous gel-like structures. [23] Figure S5 (Supporting Information) shows the synthesis process of the SA porous system. SA and CaCO3 were uniformly mixed in deionized water, with CaCO3 providing the source of Ca2+ ions. Gluconolactone was then added to the mixed solution, where it reacted with CaCO3 to release Ca2+. After vertical freezing and freeze-drying, 3D aerogels with water-insoluble porous gel-like structures were obtained (Figure S6, Supporting Information). SEM shows that the 3D aerogel has abundant pores and cells with sizes between the micron and submicron range. The 3D porous structure ensures that water can be continuously delivered to the evaporator and promotes salt-ion exchange, thereby preventing salt from seawater accumulating in the support layer (Figure 2m; Figure S7, Supporting Information). When the evaporator was placed on the water surface, water was immediately transported to the top of the evaporator through its interconnected, continuous pores. The water absorption of the SA aerogels was studied: the aerogels can absorb 8.25 g of water in 1 min (Figure S8, Supporting Information), and this strong wicking effect ensures that the SA-based 3D aerogel possesses excellent water-transport capability. In Figure S9 (Supporting Information), MO placed on the surface of the evaporator quickly dissolved and entered the liquid below; after 2 min the water below started turning yellow, and the color gradually deepened with time, fully demonstrating the outstanding water supply and ion-exchange capabilities of the aerogel. For the thermally insulating water-supply layer, besides ensuring a sufficient water supply, heat loss should be reduced as much as possible. The thermal conductivity of the SA aerogel is 0.025 W m−1 K−1, significantly lower than that of water (0.59 W m−1 K−1). [24] Therefore, the light-absorbing layer can maintain continuous and stable localized heating during evaporation, minimizing heat dissipation and enabling a high evaporation rate. Meanwhile, the 3D aerogels have a very low density (0.06 g cm−3), indicating great application potential where light weight matters. Because the 3D aerogel provides continuous water supply and effective salt-ion exchange, SA 3D aerogels with these excellent properties can serve as an ideal heat-insulation layer and water channel in solar photothermal evaporation devices.
To determine the chemical composition and chemical bonding states of the RMA composites, the samples were investigated by X-ray photoelectron spectroscopy (XPS). The survey spectrum of the composite in Figure 3a confirms that Mn, O, C, and Al exist in the RMA composites without other impurities; the Mn 2p peak appears at ≈640.4 eV, together with the corresponding O 1s, C 1s, Al 2p, and Al 2s peaks. [27] The XPS spectra of Mn 2p in RMA are shown in Figure 3b. The two peaks located at 640.6 and 652.2 eV are consistent with the Mn 2p3/2 and Mn 2p1/2 spin-orbit peaks of Mn3O4, [17] and the energy gap of 11.6 eV between the two levels is consistent with previous reports on Mn3O4, [28] indicating its successful preparation. [29] Meanwhile, the Mn 2p peaks were deconvoluted into four peaks located at 640.6, 652.1, 643.9, and 655.1 eV, attributed to Mn3+ and Mn2+, respectively. Note that the corresponding peak-area ratio of Mn3+ to Mn2+ was close to 1:2, which confirms the formation of Mn3O4. [30] Figure 3c shows the XPS spectrum of O 1s, which can be decomposed into three components, namely the O-Mn bond at 529.9 eV, the O-Al bond at 531 eV, and the O-H bond at 531.7 eV; [25] combined with the Al 2p peak at 73.7 eV and the Al 2s peak at 119.5 eV, this indicates the presence of Al2O3 in the RMA composite. The conversion of GO to RGO can be observed by comparing the C 1s binding-energy spectrum of RMA with that of GO. In the C 1s spectrum of GO, three peaks located at 283.8, 285.8, and 287.4 eV represent C-C/C=C, C-O, and C=O, respectively (Figure 3d). [26] The intensities of the C-O and C=O peaks in the C 1s spectrum of the RMA composite both decrease significantly, indicating that some of the oxygen-containing groups on the GO surface were removed and that GO was successfully reduced to RGO (Figure 3e). [24] To analyze the chemical structure of the GO, RM, and RMA composites, FT-IR patterns are shown in Figure 3f. The FT-IR spectrum of GO reveals the presence of O-H (3411 cm−1), C=O (1718 cm−1), C-C/C=C (1612 cm−1), and C-O (1035 cm−1). [31,32] In the FT-IR spectra of RM and RMA, the stretching vibration peaks of the hydroxyl groups on the GO surface are removed; RM and RMA also show reduced C=O content, indicating the reduction of GO. [20] The characteristic peaks of the oxygen-containing functional groups of GO are significantly weakened or completely eliminated, indicating that GO was successfully reduced to RGO during the reaction. The peak at 606 cm−1 is caused by the vibration of the Al-O-Al bond, which further proves the successful formation of Al2O3 in RMA. [33] The properties of carbon-containing materials can be effectively characterized by Raman spectroscopy to verify the GO-to-RGO conversion. [34]
The Raman spectra of GO, RGO, and RMA are shown in Figure 3g. The bands located at 1342 and 1597 cm−1 in the GO Raman spectrum correspond to the vibrational modes of the dangling-bonded carbon atoms (A1g vibration) in the disordered band (D band) and of all sp2-bonded carbon atoms (E2g vibration) in the ordered graphitic band (G band), respectively. [37] The ID/IG values of the RGO and RMA composites are 0.87 and 0.77, respectively, higher than the value of 0.72 for GO. This indicates that a reduction reaction occurred during the assembly process and that RGO has a higher degree of graphitization, which is beneficial for improving the electrical conductivity of the RMA nanocomposites. [18] Note that the main scattering peak at 650 cm−1 and the secondary scattering peaks at 317 and 359 cm−1 in RMA are characteristic of the Mn-O bond vibration mode, revealing the successful synthesis of Mn3O4 in the composites. [38] To study the crystal structures of the different materials, X-ray diffraction measurements were performed, as shown in Figure 3h. In RM and RMA, the XRD patterns of the as-synthesized Mn3O4 show strong diffraction peaks, indicating a high degree of crystallinity. The diffraction peaks indicate that Mn3O4 has a tetragonal structure (JCPDS No. 24-0734) with lattice constants a = b = 5.76 Å, c = 9.47 Å and space group I41/amd. [39] The positions of the main diffraction peaks at ≈18.11°, 28.92°, 31.02°, 32.32°, 36.12°, 38.02°, 44.47°, 50.77°, 59.91°, and 64.5° correspond well to the (101), (112), (200), (103), (211), (004), (220), (105), (224), and (400) crystal planes of the Mn3O4 phase, respectively. The characteristic peaks of Al2O3 are also detected. Furthermore, comparing the XRD patterns of GO and RGO, the peak appearing near 2θ ≈ 20.41° matches the characteristic (002) peak of graphene; thus, a large number of oxygen-containing functional groups in GO were removed during the chemical reduction process, implying the reduction of GO to RGO. [40] Therefore, it can be clearly concluded that the RMA composites are composed of RGO, Mn3O4, and Al2O3. Effective light-harvesting performance is a prerequisite for achieving efficient solar evaporation. The light absorbance of the different samples was measured by UV-vis-NIR spectrophotometry, as shown in Figure 3i. RMA showed a light absorption of more than 90% over the wavelength range from 200 to 2500 nm. The high optical absorption of RMA is primarily due to the synergistic effect between the composite materials. Compared with the Raman and FT-IR spectra of RM (Figure 3f; Figure S10, Supporting Information), RMA exhibits a higher degree of RGO reduction. Additionally, the SEM images of RM and RMA are shown in Figure S11 (Supporting Information): the surface of RM is smoother, with a lower degree of RGO reduction, while the surface of RMA exhibits a very obvious hierarchical layered structure. This structure enables internal refraction and reflection of light, increasing the propagation path of light and reducing its reflection, so that RMA shows the higher optical absorption. With such good light-absorption capacity, RMA is expected to achieve efficient solar photothermal conversion.
A schematic diagram of the RMA@P-SA solar evaporation system is shown in Figure 4a. The photothermal evaporation performance was tested under one sun with a real-time photothermal evaporation measurement system. In the combination of RMA@P and the 3D aerogel, the photothermal material RMA effectively absorbs the incident solar energy and converts it into thermal energy, creating a localized heating zone. Because of its rich pore structure, the adjacent pores of the 3D aerogel are connected to form well-interpenetrated, low-curvature pore channels; it can be regarded as a micro water reservoir, which favors water transport and salt-ion exchange. Additionally, the 3D aerogels have extremely low thermal conductivity and low density, which not only effectively suppresses heat loss from interfacial thermal conduction to the bulk water below, but also ensures that the entire system floats on the water, so that the evaporation process proceeds continuously and stably. To investigate the water evaporation capacity of solar evaporators with different structures, the evaporation curves of RMA@P-SA, RM@P-SA, GO@P-SA, blank air-laid paper-SA, SA, and pure water over time under one sun irradiation were plotted, as shown in Figure 4b. RMA@P-SA evaporated the most water per unit area compared with the other evaporators. The calculated evaporation rates of the different photothermal devices are shown in Figure 4c. As blank controls, the SA and air-laid paper solar evaporation systems without photothermal materials had evaporation rates of 0.43 and 0.47 kg m−2 h−1 under 1 sun, respectively. The pure-water evaporation rate was also measured to exclude its effect. The results show that the RMA-based solar evaporator had an excellent water evaporation rate of 1.88 kg m−2 h−1, superior to the other two evaporators and about nine times the pure-water evaporation rate (0.22 kg m−2 h−1) (Figure S12, Supporting Information).
The corresponding solar-to-thermal conversion efficiency η can be estimated from

η = ṁ · H_LV / q_solar,

where η is the solar-to-vapor photothermal conversion efficiency, ṁ is the net evaporation rate (the evaporation rate under light minus the evaporation rate in the dark), H_LV is the total enthalpy change from water to vapor (including latent heat and sensible heat), expressed in kJ kg−1 and determined by Calculation S1 (Supporting Information), and q_solar is the simulated solar energy intensity (1 kW m−2). The calculated solar photothermal conversion efficiency is therefore up to 94.3% under one sun. Since heat loss during evaporation is unavoidable, the radiation, transmission, and convection losses of the RMA@P-SA evaporation system were calculated as shown in Calculation S2 (Supporting Information). Temperature changes were recorded to test the photothermal behavior of solar evaporators composed of different materials under one sun irradiation.
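As an illustration of this formula, the following short sketch reproduces an efficiency close to the reported 94.3%; the dark-condition rate and the enthalpy value are assumed for the example only and are not values taken from the paper.

```python
# Illustrative check of eta = m_net * H_LV / q_solar (assumed numbers)
m_light = 1.88          # kg m^-2 h^-1, evaporation rate under one sun (reported)
m_dark = 0.45           # kg m^-2 h^-1, assumed dark-condition rate
H_LV = 2390.0           # kJ kg^-1, assumed total enthalpy of vaporization
q_solar = 3600.0        # kJ m^-2 h^-1 (1 kW m^-2 expressed per hour)

m_net = m_light - m_dark
eta = m_net * H_LV / q_solar
print(f"eta = {eta:.1%}")   # roughly 95% with these assumed inputs
```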
As shown in Figure 4d, the surface temperature of the evaporator rose rapidly during the first 500 s; as the evaporation time was prolonged, the temperature gradually stabilized without significant fluctuation. The surface temperature of RMA@P remained constant at 45.3 °C, higher than the steady-state temperature of the other solar absorbers over the same irradiation time, indicating that RMA@P exhibits a rapid thermal response. When the light source was turned off, the surface temperature of the solar evaporator gradually decreased. In addition, the surface temperature of the RMA@P-SA photothermal device increased rapidly and reached a steady-state temperature of 45.3 °C after 30 min of illumination (Figure 4e), again showing the rapid photothermal response. These results show that the RMA@P-SA evaporator achieves excellent photothermal conversion, timely water supply, and effective thermal management. The improvement in evaporation performance is attributed to the synergistic effect of introducing Al2O3 and to the excellent thermal management of SA. The formation of Al2O3 significantly increases the thermal conductivity of the RMA composites: the thermal conductivity of RM is 0.139 W m−1 K−1, while that of RMA (1.425 W m−1 K−1) is about ten times higher (Figure S13, Supporting Information). The high thermal conductivity of the RMA composites enhances heat transfer from the RMA@P surface to the interfacial water, which promotes water evaporation and improves the photothermal conversion efficiency. Moreover, the SA aerogel, with its low thermal conductivity, reduces the heat loss caused by heat transfer from the interfacial heating zone to the bulk water below.
In general, the RMA@P-SA evaporator achieves significant and comprehensive advantages over other evaporation systems (Figure 4f). Figure 4g,h shows the evaporation rate and water mass change for two to nine RMA@P-SA evaporators within 1800 s under one sun irradiation, respectively. The evaporators were tested for 20 cycles and always maintained a stable evaporation rate (Figure S14, Supporting Information), which fully demonstrates the photothermal stability, reusability, and durability of the RMA@P-SA evaporators. In addition, the salt tolerance of the RMA@P-SA evaporator was also evaluated. The evaporator was placed in high-concentration brine (10 wt.%) and subjected to continuous photothermal evaporation testing for 24 h. As shown in Figures S15 and S16 (Supporting Information), the evaporation rate of the system remained stable during evaporation in high-concentration brine, with an average evaporation rate of 1.8 kg m −2 h −1 . No salt crystallization was observed on the surface of the evaporator, ensuring that the evaporation rate was not affected (Figure S17, Supporting Information).
As a green process, the operability of solar photothermal evaporation technology in seawater desalination and wastewater treatment is of great importance for practical applications.The concentrations of sodium (Na + ), magnesium (Mg 2+ ), potassium (K + ), and calcium (Ca 2+ ) in real seawater and desalinated samples were measured by inductively coupled plasma-optical emission spectrometry (ICP-OES).As shown in Figure 5a, the ion concentrations (Na + , Mg 2+ , K + , and Ca 2+ ) in the water during desalination significantly decreased from 24 970, 1491, 801, and 450.3 ppm to 13.79, 1.892, 1.486, and 2.334 ppm, respectively.The ion concentrations in the desalted water were much lower than the drinking water standards set by the World Health Organization (WHO), indicating that the photothermal evaporation of RMA@P-SA could effectively remove four ions from the original solution.The practical application of freshwater production can be realized through the ion rejection of the RMA@P-SA evaporator.Figure 5b,c shows the absorption spectra of methylene blue and methyl orange solutions before and after purification.The water collected after evaporation did not show the characteristic absorption peaks of methylene blue and methyl orange, which confirmed that solar evaporation could effectively remove the dyes from the wastewater.All these results demonstrate that the RMA@P-SA evaporator can be applied to seawater desalination and wastewater treatment.
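The ion-rejection performance implied by these ICP-OES values can be expressed as a rejection ratio, as in the short sketch below, which uses only the feed and condensate concentrations quoted above.

```python
# Sketch: ion-rejection ratios from the ICP-OES concentrations in Figure 5a.
feed_ppm = {"Na+": 24970, "Mg2+": 1491, "K+": 801, "Ca2+": 450.3}
condensate_ppm = {"Na+": 13.79, "Mg2+": 1.892, "K+": 1.486, "Ca2+": 2.334}

for ion, c0 in feed_ppm.items():
    c1 = condensate_ppm[ion]
    rejection = 1.0 - c1 / c0
    print(f"{ion}: {c0} -> {c1} ppm, rejection {rejection:.2%}")
```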
To verify the practical feasibility of the hybrid solar evaporation system, it was placed in a natural light environment to test its continuous evaporation performance. We designed a portable interfacial solar evaporator prototype, as shown in Figure 5d, which can be used to collect pure water on land. The RMA@P-SA-based nine-grid portable prototype was placed in an outdoor environment, and the mass changes during evaporation were recorded (Figure 5e,f). Real-time monitoring of the outdoor solar intensity was carried out, and the corresponding ambient temperature, relative humidity, and evaporation rate, as well as the evaporation curve over time, were recorded at the same time (Figure 5j,k). The evaporation performance of the solar evaporator was enhanced with increasing light intensity. The evaporation rate of the RMA@P-SA-based evaporation system peaked at 1.85 kg m −2 h −1 between 12:00 p.m. and 2:00 p.m. During the cloudy period, the evaporation rate decreased slightly due to the weak sunlight intensity. With the recovery of the sunlight intensity, the evaporation rate increased rapidly because of the rapid, localized interfacial heating of RMA@P. Under sunlight irradiation, a large amount of water evaporated rapidly, and subsequently the vapor condensed on the surface of the condenser wall to collect clean water (Figure 5f). The infrared images show the apparent thermal localization of RMA@P, which efficiently generates steam (Figure 5g-i). In practical applications of the solar-driven interfacial evaporator, the ultimate performance of the device depends on the solar intensity and other environmental factors. From 8:00 a.m. to 5:00 p.m. on 17 September 2022, a full-day outdoor test was conducted in Beijing to evaluate the photothermal evaporative performance of the solar thermal evaporation system based on the RMA membrane array. The net water collection rate (total water yield) of the evaporation system was ≈7.39 kg m −2 . The results indicate that the device can provide enough daily water for two adults using only 1 square meter of the system in one day. This demonstrates the effective condensation of the thermal vapor and highlights the extensive commercial potential of the device.
With the development of solar water evaporation, it has been proposed to combine the solar water evaporation process with power generation to simultaneously address clean-water and energy shortages. [41] Therefore, the water evaporation-induced power generation performance of the RMA@P-based solar evaporator was investigated. GO@P, RM@P, and RMA@P were placed in deionized water and used to test the water-induced power generation performance, as shown in Figure 6a. In Figure S18 (Supporting Information), the GO, RM, and RMA devices uniformly loaded on air-laid paper generated only 0.05, 0.13, and 0.135 V, respectively, at room temperature. The stable open-circuit voltages of the asymmetrically loaded GO@P (A-GO@P), RM@P (A-RM@P), and RMA@P (A-RMA@P) devices were 0.09, 0.245, and 0.253 V, respectively (Figure 6b). They were then placed in NaCl solution (3.5 wt.%). The improvement in the output voltage of the three generators can be observed in Figure 6c, where A-RMA@P produced the largest voltage, reaching up to 0.352 V. According to previous experiments and reports, water-induced power generation is based on the electrokinetic effect. [42] As shown in Figure S19 (Supporting Information), the zeta potentials of GO, RM, and RMA are all negative, indicating that the surfaces of the materials are negatively charged. The negatively charged surfaces repel ions with the same charge polarity (OH − or Cl − ) and attract ions of opposite polarity (H 3 O + or Na + ), implying that the main charge carriers are H 3 O + and Na + in deionized water and NaCl solution, respectively. According to EDL theory, a thin layer of counter ions (H 3 O + or Na + ) attaches to the negatively charged channel wall to form a Stern layer, and the ions farther away collectively form a diffusion layer; [43] that is, an overlapping EDL is formed on both sides of the nanochannel, as shown in Figure 6d. When water flows through a negatively charged channel, anions with the same charge polarity (OH − or Cl − ) are repelled, while counter ions (H 3 O + or Na + ) are attracted and preferentially pass through the channel, resulting in a higher potential downstream. Continuous evaporation can drive consecutive capillary flow in the channel, creating a successive potential difference based on the streaming potential effect. The level of the zeta potential directly determines the interaction between the material and the liquid, and hence the power output. A high zeta potential can attract more charges into the EDL.
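To make the streaming-potential argument concrete, the sketch below evaluates the classical Helmholtz-Smoluchowski relation ΔV = εζΔP/(μK); the solution conductivity and the capillary pressure difference are assumed values, and this simple relation only indicates the order of magnitude, since the measured device voltages also reflect the asymmetric loading and ion-concentration gradients described in the text.

```python
# Sketch: streaming potential via the Helmholtz-Smoluchowski relation.
# All numbers are illustrative assumptions, not fitted to this device.
eps0 = 8.854e-12          # vacuum permittivity, F m^-1
eps_r = 80.0              # relative permittivity of water
zeta = -52.9e-3           # zeta potential, V (value reported for the RMA/GO mixture)
mu = 8.9e-4               # water viscosity, Pa s
K = 0.05                  # solution conductivity, S m^-1 (assumed)
dP = 1.0e5                # evaporation-driven capillary pressure difference, Pa (assumed)

dV = eps0 * eps_r * zeta / (mu * K) * dP
print(f"streaming potential ~ {abs(dV) * 1000:.0f} mV")
```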
[14]Thus, A-RMA@P has a stronger power generation capacity.For materials with uniformly distributed surfaces, it is almost impossible to form ion concentration gradients, resulting in a smaller potential difference and a lower power generation capacity.Therefore, an asymmetric charge distribution was achieved by a nonuniformly loaded evaporator to improve the power generation capacity, as shown in Figure 6e.Due to the inhomogeneous load, the charges formed an asymmetric distribution in the film, which led to different ion adsorption behaviors in the EDL at different positions of the film.The side of the high-quality loaded RMA acted as an electron collector, and the strong ionic attraction made cations gather on this side and repel the anions.To maintain ion balance, the anions were distributed in the remaining regions with low loading mass.Thus, the load gradient in the film caused significant differences in ion adsorption capacity.The dominant charge carriers (H 3 O + or Na + ) moved upward through the negatively charged RMA channel walls and formed a unique gradient EDL and potential difference via the nanochannel network to induce the directional movement of charges, generating a flow current for the external output of electric energy.
Therefore, we tried to increase the load gradient of the film to expand its asymmetry. A GO layer was nonuniformly loaded on the A-RMA@P film to form an asymmetric film with an RMA/GO/RMA sandwich structure (A-RMA/GO@P). The introduction of the GO layer led to differences in the oxygen-containing functional groups between the different film layers, so there was also a certain concentration gradient of functional groups between the layers. [44,45] This further strengthened the asymmetry of the film, producing an expanded gradient EDL and potential difference and improving the power generation capability of the device. As shown in Figure S20 (Supporting Information), the zeta potential of the mixture of RMA and GO (mass ratio 2:1) reached −52.9 mV, indicating that the addition of GO played a significant role in improving the voltage. This could be confirmed from Figure S21 (Supporting Information). After the introduction of an asymmetric GO layer, the output voltage of the A-RMA/GO@P device with an asymmetric sandwich structure was significantly increased and stabilized at ≈0.544 V, indicating that the introduced GO layer significantly enhanced the power generation capability of the film. To validate that the continuous power output of the A-RMA/GO@P device is induced by water evaporation, the following experiment was conducted. First, the A-RMA/GO@P device was placed in a beaker containing a 3.5 wt.% NaCl solution. When the output voltage was stable at ≈0.55 V, the beaker was sealed with a polyethylene (PE) film covering the opening. Once sealed, the water vapor inside the beaker quickly reached saturation, and the water evaporation on the surface of the device gradually slowed down and eventually stopped, resulting in the stagnation of the evaporation-induced capillary flow within the film. As shown in Figure 6f, when the beaker was sealed, the output voltage rapidly decreased and gradually approached 0.
After opening the beaker, evaporation resumed, leading to an increase in the output voltage of the device. Notably, this process exhibited excellent reproducibility when the experiment was repeated, further confirming that the power generation of the A-RMA/GO@P device is indeed induced by water evaporation. From the perspective of practical application, the ability to operate continuously and stably is an important indicator of the reliability of the device. Therefore, the 6 h continuous power generation performance of the A-RMA/GO@P device was tested, as shown in Figure 6g. A single device stably output an open-circuit voltage of ≈0.544 V, and the output current was consistently maintained at ≈13.07 μA under ambient conditions. Ignoring the influence of other factors, the results demonstrate that the A-RMA/GO@P composite film can work stably over long periods of time. A six-day power generation test confirmed that the device could continuously output a relatively stable voltage (Figure S22, Supporting Information). Furthermore, the effects of different NaCl solution concentrations on the power generation performance were also studied, as shown in Figure S23 (Supporting Information). The dry A-RMA/GO@P device was almost incapable of generating electrical energy, while in deionized water the device could output an average voltage of 0.372 V. Moreover, the voltage increased gradually as the NaCl concentration in the solution increased, which was attributed to the monotonic increase in the number of charges and the charge-carrier (Na + ) transport gradient in the solution with increasing NaCl content. When the solution salinity increased from 0.1 to 3.5 wt.%, the continuous output voltage was significantly enhanced from 0.384 to 0.544 V. Therefore, the output voltage of the device depends on the NaCl concentration in the solution.
To investigate the effect of solar irradiation on the water evaporation-induced power generation capability of this asymmetric nanofluidic photothermal thin film, an experimental device was designed (Figure 7a). Interestingly, the power generation capacity in the presence of solar irradiation exhibited a significant improvement over that under ambient conditions without light. The voltage output performance was tested under sunlight. The voltages generated by the A-RMA@P and A-RMA/GO@P devices increased to 0.441 and 0.778 V, respectively (Figure S24, Supporting Information). Under one sun illumination, the voltage output of the single A-RMA/GO@P device was ≈0.234 V higher than that under ambient conditions, as shown in Figure 7b. Similarly, the voltage generated by nine devices in series under one sun illumination could reach ≈6.917 V, but only ≈4.731 V under ambient conditions. Figure 7c,d shows the change in the voltage output of one device and nine devices as the light was switched on and off. As the lamp was lit, the output voltage began to rise and finally reached a stable value. Once the lamp was turned off, the voltage output decreased. This may be because light helps to accelerate the evaporation and transport of water, which expands the gradient distribution of ions, thereby promoting ion transport in the micropores and causing high-performance energy conversion. [46,47] The output current of nine series-connected devices was tested under solar illumination (Figure 7e). After irradiation for ≈15 min, the current increased to 32.9 μA. When the illumination stopped, the current gradually decreased. The corresponding current changes for a single device are shown in Figure S25 (Supporting Information). The output voltage of multiple A-RMA/GO@P devices in series was studied to meet practical application requirements, as shown in Figure 7f. As the number of devices in series increased, the output voltage increased monotonically (Figure S26, Supporting Information). In addition, to further validate the operational stability of the nine-grid array device, it was placed in a 3.5 wt.% saltwater solution for cyclic testing. Under one sun illumination, the water evaporation-induced power generation of the array device was tested at intervals of 0.5 h, and the power generation within each 0.5 h period was recorded. As shown in Figure S27 (Supporting Information), the average output voltage of the array device was not significantly attenuated and remained above 6.88 V during ten cycles of testing. This demonstrates its excellent stability and durability, providing essential conditions for its wide application in simultaneous solar seawater desalination and power generation. Figure 7g shows a schematic diagram of the nine-grid integrated array formed by nine devices in series lighting an LED. Without any additional components, the LED could be successfully lit by the power output of the nine-grid integrated evaporator/generator under light conditions (Figure 7h). Under ambient conditions, the nine-grid array could also light the LED, as shown in Figure S28 (Supporting Information).
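The series-connection behaviour can be checked with simple arithmetic, as sketched below using the single-device and nine-device voltages quoted above; attributing the small shortfall to contact or device-to-device mismatch is our assumption.

```python
# Sketch: comparing the measured nine-device series voltage with the ideal 9x sum.
v_single_light = 0.778      # V, one A-RMA/GO@P device under one sun (from the text)
v_array_light = 6.917       # V, nine devices in series under one sun (from the text)

ideal = 9 * v_single_light
print(f"ideal series voltage = {ideal:.3f} V, measured = {v_array_light} V "
      f"({v_array_light / ideal:.1%} of ideal)")
```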
The performance of the A-RMA/GO@P devices for simultaneous evaporation and electricity production under sunlight was studied. Under one sun, the average output voltage of a single A-RMA/GO@P device was ≈0.778 V, while the output voltage of the nine devices could be as high as ≈6.918 V, as shown in Figure 8a,b. Moreover, the output current rose to 32.94 and 32.87 μA for single and nine devices, respectively, after 15 min of one sun irradiation (Figure 8c,d). Both single and nine A-RMA/GO@P devices generating high power output also maintained a stable evaporation rate of ≈1.74 kg m −2 h −1 , as shown in Figure 8e. The above results indicate that the A-RMA/GO@P device prepared by asymmetric loading can be used as an efficient solar evaporator, with electrical energy harvested from the water evaporation process under light/dark conditions around the clock. The overall design achieves effective utilization of the water evaporation process to generate both electrical energy and clean water, which maximizes the use of environmental resources and amplifies the electrical output through the series connection of multiple devices to meet practical application requirements. [50][51][52][53][54][55] More importantly, we demonstrate a scalable strategy for the practical application of solar evaporation and power generation. The electrical energy generated by evaporation is utilized more reasonably and effectively. The total evaporation area and power output are improved through the series connection of multiple devices. Moreover, a nine-grid device array containing nine devices was designed, which integrates solar water evaporation and full-time power generation into one device. The nine-grid device array achieves a competitively high evaporation rate and power output, showing the practical value of powering devices directly. Our RMA-based evaporator for synergistic photothermal evaporation and water evaporation-induced power generation has good application prospects and provides a new method for the conversion and comprehensive utilization of solar energy.
Conclusion
In summary, we have developed a multifunctional solar interfacial evaporator that exploits electrokinetic-effect-induced power generation, enabling the simultaneous production of fresh water and electricity. The device achieves a high evaporation rate of 1.88 kg m −2 h −1 and a solar photothermal conversion efficiency of 94.3% due to the good synergy of the RMA composites and the excellent thermal management properties of the 3D aerogel. More importantly, we designed and demonstrated an RMA-based asymmetric strategy to prepare a nonuniform sandwich-structured film by adding GO layers to expand the asymmetry of the film in multiple dimensions. The evaporation rate and average voltage output of the asymmetric RMA/GO@air-laid paper (A-RMA/GO@P-SA) evaporator, which can work around the clock, are maintained at 1.74 kg m −2 h −1 and 0.778 V, respectively, under one sun irradiation. The device continues to operate under ambient conditions with an electrical output of ≈0.544 V and 13.07 μA, showing excellent stability, durability, and versatility. In addition, the nine-grid integrated array assembled from nine devices in series increases the evaporation area and power production capacity; its output voltage is as high as 6.917 V, sufficient to light an LED lamp. The nine-grid array, with the advantages of small size, easy portability, and low environmental impact, has the potential for large-scale production and application. This work may provide a new route toward long-term sustainable freshwater harvesting and power generation with stable power output.
Experimental Section
Preparation of RMA and RMA-P: (3 mL) was added to the GO dispersion (6 mg mL −1 ) to obtain an alkaline dispersion under magnetic stirring. Then, MnCl 2 •4H 2 O (660 mg) was stirred in the dispersion (60 mL) for 2 h. Subsequently, aluminum powder (100 mg) was mixed into the solution and allowed to stand for 7 h at room temperature. The product was then washed and centrifuged five times and dried at 60 °C for 12 h. Finally, the composites (100 mg) were ultrasonically dispersed in deionized (DI) water (30 mL), and air-laid paper (3 cm × 3 cm) was immersed in the solution for 20 min and dried at 60 °C. After three repetitions, the RGO/Mn 3 O 4 /Al 2 O 3 -air-laid paper (RMA-P) membrane was obtained. The same process was used to synthesize the RGO/Mn 3 O 4 -air-laid paper (RM-P) film without the addition of aluminum powder.
Preparation of 3D Aerogel: Sodium alginate (SA 500 mg) and calcium carbonate (500 mg) were added to DI water (30 mL) and magnetically stirred for 1 h.After that, gluconate lactone (800 mg) was mixed and the mixture was left to stand for 5 min, followed by vertical freezing for 12 h.Finally, 3D aerogel could be obtained by drying it for 48 h in a freeze-dryer.
Characterization: Scanning electron microscopy (SEM; Regulus 8100, Hitachi) and transmission electron microscopy (TEM; HT7800, Hitachi) were used to observe the surface topography and microstructure of the RM, RMA, air-laid paper, RMA-P, and SA aerogel.X-ray photoelectron spectroscopy (XPS) experiments of GO and RMA were conducted on a PHI QUANTERA-II SXM.The functional groups of the composite materials were analyzed by a Fourier transform infrared (FT-IR) spectrometer (NICOLET iS50, USA).A LabRAM HR 800UV (HORIBA Jobin Yvon, France) with an operating wavelength of 532 nm was used for recording Raman spectra of composite materials.X-ray diffraction (XRD; Bruker, Germany) was used to reveal the crystal structure from 10°to 80°.The optical absorption performance was recorded by UV−vis−NIR spectrophotometry (UV-vis-NIR; LAMBDA 1050, Japan).The zeta potential was measured by a zeta potential instrument (Zetasizer Nano ZS ZEN3600, Malvern Instruments, UK).A multimeter (UNI-T-UT61E+) was used to measure the voltage and current generated by water evaporation.
Preparation of Water-Driven Energy Generation: RGO/Mn 3 O 4 /Al 2 O 3 -air-laid paper (RMA-P) was synthesized by the solution-loading method. The RMA composites (100 mg) were ultrasonically dispersed in 30 mL of deionized water. The air-laid paper was immersed vertically in the well-dispersed solution, so that less material was loaded on the upper part and more on the lower part, forming an asymmetric distribution, and then dried at 60 °C. After repeating the process three times, nonuniformly loaded RMA-P composite films were successfully acquired. GO@P and RM@P composite films were obtained by the same operation. To obtain nonuniform RMA/GO@P, the air-laid paper was vertically dipped in the RMA suspension, GO suspension (1.5 mg mL −1 ), and RMA suspension, successively. After three drying cycles, the asymmetric RMA/GO@P composite films were successfully prepared. Conductive silver adhesive coated on both ends of the film connected the copper wires to the film and acted as conductive electrodes. The copper wires and electrodes were sealed with epoxy resin to avoid unwanted corrosion and potential electrochemical reactions. A porous, hydrophilic, and insulating SA aerogel was placed under the evaporator to serve as the support layer and water transport channel.
Solar Evaporation Test: A solar simulator (Solar-500, Beijing NBeT Technology Co., Ltd., Beijing, China) with a standard spectrum filter (AM 1.5) was used for executing the solar evaporation test.The intensity of solar irradiation was calibrated to one sun illumination (1 kW m −2 ) at the sample level by a full-spectrum optical power meter (Beijing NBeT Technology Co., Ltd., Beijing, China).A computer connected to an electronic balance (PTX-FA210s) with an accuracy of 0.1 mg was used for recording the real-time weight changes of different samples during the overall duration of the steam generation experiments.The surface temperature distribution was measured by an infrared camera (Fluke Ti32, Fluke Electronic Instrument Co., Ltd., USA).The surface temperature data were investigated by a thermocouple (TASI, TA612C, China).The entire solar evaporation and electricity generation experiments were conducted at a room temperature of 26-28 °C and a humidity of 40-45 RH%.
Figure 1 .
Figure 1.Schematic of A-RMA/GO@P-SA evaporator/generator full-time operation under sunlight and environmental conditions.
Figure 2 .
Figure 2. a) The synthetic process of RMA at room temperature.b,c) SEM images of RM. d,e) SEM images of RMA.f-j) SEM image and corresponding elemental mapping images of C, Mn, O, and Al elements of RMA.k,l) SEM images of the blank film and the RMA@P film.m) SEM images of SA.
Figure 3 .
Figure 3. a) XPS spectra of GO and RMA, the illustration is the local amplification of Mn 2p peak in the RMA XPS spectrum.b) XPS spectra of Mn 2p.c) XPS spectra of O 1s. d) XPS C1s spectra of GO. e) XPS C1s spectra of RMA.f) FT-IR spectra of GO, RM, and RMA.g) Raman spectra of GO, RGO, and RMA.h) XRD patterns of GO, RGO, RM, and RMA.i) Light absorption capabilities of GO, RM, and RMA.
Figure 4 .
Figure 4. a) Schematic diagram of the photothermal evaporation performance test. b) Weight change of the photothermal evaporation system for different devices. c) Evaporation rate of the photothermal evaporation system for different devices. d) Surface temperature of different evaporation devices. e) Thermal image of RMA@P-SA. f) Comparison of solar-driven interfacial evaporation performance. g) Evaporation rate for different quantities of devices. h) Weight change of the photothermal evaporation system for different quantities of devices.
Figure 5 .
Figure 5. a) Main ion (Na + , Mg 2+ , K + , and Ca 2+ ) concentrations of seawater before and after desalination. b) Methylene blue and c) methyl orange solutions before and after purification. d) Schematic diagram of a solar-driven interfacial evaporator prototype. e,f) Photographs of solar steam generation and collection using the RMA membrane array under outdoor conditions. g-i) Thermal images of the solar-driven interfacial evaporator before and after sunlight illumination. j) Weight change of the photothermal evaporation system during different time periods under natural sunlight. k) The solar intensity, ambient air temperature, ambient relative humidity, and synchronous evaporation rate of the RMA membrane array-based solar still from 8:00 a.m. to 5:00 p.m. on 17 September 2022 at Beijing Institute of Technology.
Figure 6 .
Figure 6.a) Schematic diagram of water-induced energy generation of different devices under ambient conditions.b,c) Voltage output from the A-GO@P, A-RM@P, and A-RMA@P generators immersed in water and 3.5 wt.% NaCl under ambient conditions.d) Schematic of ion transport within the EDL channels.e) Schematic illustration of the voltage generation from the asymmetrical load device.f) The measured voltage output of the device when the beaker was periodically sealed and open.g) Long-term output measurement over 6 h with the A-RMA/GO@P device under ambient conditions.
Figure 7 .
Figure 7. a) Schematic illustration of the A-RMA/GO@P photothermal evaporator for simultaneous solar desalination and electric power generation under one sun irradiation.b) Comparison of the voltage output with one and nine A-RMA/GO@P devices under ambient conditions and one sun irradiation.c,d) Voltage output with one device and nine devices under light on and off.e) Current output of nine A-RMA/GO@P devices before and after light off.f) Voltage output with the A-RMA/GO@P device by connecting several devices in series (1 device-9 devices) under one sun irradiation.g) Schematic illustration of the LED lamp lighted up by the nine-grid integrated array formed by nine devices in series under one sun irradiation.h) Optical picture of the nine-grid integrated evaporation/generator array lighting LED under one sun irradiation.
Figure 8 .
Figure 8. a-e) Simultaneous performance of solar evaporation and energy generation of one and nine A-RMA/GO@P devices under one sun irradiation.f) Performance comparison of our device to some recent reports. | 11,236.4 | 2023-09-22T00:00:00.000 | [
"Engineering",
"Physics"
] |
Growth of Horizontal Nanopillars of CuO on NiO / ITO Surfaces
We have demonstrated hydrothermal synthesis of rectangular pillar-like CuO nanostructures at low temperature (∼60 °C) by selective growth on top of a porous NiO film deposited using the chemical bath deposition method at room temperature, with an indium tin oxide (ITO) coated glass plate as the substrate. The growth of CuO not only filled the NiO porous structures but also formed large nanopillars/nanowalls on top of the NiO surface. These nanopillars could have significant use in nanoelectronic devices or can also be used as p-type conducting wires. The present study is limited to the surface morphology of the thin nanostructured layers of NiO/CuO composite materials. Structural, morphological, and absorption properties of the CuO/NiO heterojunction were studied using state-of-the-art techniques such as X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), and UV spectroscopy. The CuO nanopillars/nanowalls have dimensions on the order of (5 ± 1.0) μm × (2.0 ± 0.3) μm; this will help to provide efficient charge transport between the different semiconducting layers. The energy band gaps of NiO and CuO were also calculated from the UV measurements and discussed.
Introduction
In the modern society, environmental and energy resource concerns have been increasing, and because of that, greater stress has been placed on development of renewable energy resources especially on solar energy based photovoltaic cells, whose economic feasibility relies on efficient collection, retention, and utilization of photons.Over the past decade, research on solar cell has become one of the hot topics within science and engineering [1][2][3].The need for higher solar cell efficiencies at lower cost has become apparent, and at the same time synthetic control of nanostructures using top-down/bottom-up approaches has improved such that the high performance electronic devices are becoming possible [4,5].Inorganic nanostructures [6] with tailored geometry over their organic counterparts are expected to play significant roles for the next-generation nanoscale electronic, optoelectronic, electrochemical, and electromechanical devices [7][8][9][10].
Copper oxides (CuO and Cu 2 O) are p-type semiconductor oxides, suitable materials for high efficiency solar cells due to their band gap of 1.3 and 2.0 eV, respectively, which are close to the ideal energy gap for solar cells, and well matched with the solar spectrum.CuO has been intensively studied for photovoltaic/sensing devices due to its rich family of nanostructures and promising electrochemical and catalytic properties it possesses at nanoscale level [11][12][13].From the literature, CuO nanostructures can be grown on a Cu substrate using a thermal oxidation process or synthesized through wet chemical routes especially using hydrothermal method on any supporting substrates, whereas CuO nanostructures (NS) grown by wet chemical method show poor adhesion to the substrates [13].Therefore, under present work, the substrate has been modified by predeposition of NiO porous layer on top of ITO/glass substrate, which is expected to enhance the adhesion of CuO nanostructure on top of NiO porous layer.
At the same time, synthesis of nanomaterials using hydrothermal method has come up as a cost effective method for producing the different CuO nanostructures on flexible substrates [14].By applying the hydrothermal approach one can be able to precisely gain control over the synthesis of the nanocorals (NCs) consisting of p-type CuO at low temperature.This is achieved by assembling hierarchical CuO NC from a single-precursor entity (copper nitrate) at 60 ∘ C. The important features of copper oxide semiconductors are high optical absorption coefficient and nontoxicity and low cost fabrication [15,16].
The present paper deals with the horizontal pillar-like nanostructured formation of CuO layer on top of NiO porous layer modified ITO/glass substrate.Reasons of using NiO layer could be firstly to provide better surface interface interaction with CuO and this will ultimately improve the adhesion property of CuO on modified substrate (NiO/ITO/glass) and secondly the device aspect as NiO layer can act as hole transporting layer too.The schematic side view layer structure has been shown in Figure 1.For any device application further, any n-type layer can be deposited to form the pnjunction.
Experimental
All chemicals were of analytical reagent grade and used without further purification. Deionized water was used in each synthesis and in the washing steps for both the NiO and CuO layer depositions. Nickel oxide (NiO) layers were deposited using the chemical bath deposition (CBD) technique on precleaned indium tin oxide (ITO) coated glass plates. The precursor solution for NiO was obtained by dissolving 28.08 g (0.1 mol) of nickel sulfate hexahydrate (NiSO 4 ⋅6H 2 O) and 5.42 g (0.02 mol) of potassium persulfate (K 2 S 2 O 8 ) in 180 mL of deionized water. Clean ITO/glass substrates were immersed vertically in the solution, after which 30 mL of 30% ammonia solution was added. The mixture was stirred at 300 rpm for approximately 30 minutes. A colour change was observed in the solution upon the addition of ammonia, indicating the precipitation of nickel hydroxide particles [17,18]. The thickness of the obtained films varied in the 150 ± 50 nm range. After deposition, the substrate colour varied from grey to black depending on the thickness of the film. The samples were annealed in a furnace at 400 °C for 1 hour to produce the desired NiO film and finally washed by sonication in ethanol for 2 minutes.
The synthesis of the nanostructured rectangular horizontal CuO pillar layer on top of the porous NiO layer was performed by the widely known hydrothermal method. For the growth of the CuO nanostructures, the freshly prepared NiO/ITO/glass substrate was submerged face up in 100 mL of a 5 mM aqueous solution of copper nitrate trihydrate [Cu(NO 3 ) 2 ⋅3H 2 O] and 1 mM hexamethylenetetramine (HMT, C 6 H 12 N 4 ) under constant stirring. A bluish solution formed due to the presence of copper ions. The solution was then heated on a hot plate at 60 °C for 4 hours. After the growth, the vessel was cooled down, and the resulting black product of CuO nanopillars (Figure 3) was collected and washed several times with deionized water.
Microstructures of the nickel oxide and copper oxide layers were investigated by state-of-the-art techniques, including X-ray diffraction (XRD, Bruker D2 Phaser system) with Cu Kα radiation (wavelength λ = 1.54 Å) operated at 30 kV and 10 mA. For the morphological analysis, scanning electron microscopy (SEM, Phenom Desktop SEM, Phenom World, Netherlands; 5 kV acceleration voltage) was carried out on the inorganic layers. The optical transmission spectra of the samples were obtained in the ultraviolet (UV)/visible/near-infrared (NIR) region up to 1100 nm using a Shimadzu UV-VIS spectrophotometer (Model: UV-3600).
Results and Discussion
Figure 2 shows the SEM micrographs of the NiO thin film deposited at room temperature on the ITO coated glass plate. AFM measurements (not shown here) indicate that the thicknesses of such layers are in the 150 ± 50 nm range. The film surface looks highly porous, with some overgrown clusters, and is composed of nanosized NiO crystallites. This overgrowth could be explained on the basis of a nucleation and coalescence process. Similar porous NiO structures have also been reported in the literature [19][20][21][22]. The porous NiO structure is uniformly distributed and grown over the entire ITO substrate, as clearly shown in Figure 2(a).
Similarly, Figure 3 shows top-view SEM micrographs of the CuO nanostructured layer on the NiO/ITO substrate. SEM images were taken at various magnifications. From both images (Figure 3), one can clearly see nanowalls or thick nanopillars of rectangular shape, mostly lying horizontally and having average dimensions of (5 ± 1.0) μm × (2.0 ± 0.3) μm. At the same time, one can clearly notice that the porous structures of NiO beneath the CuO layer have been filled after depositing CuO on top. All the nanopillars are randomly oriented and distributed over the whole NiO film, and they even penetrate into the NiO surface. Similar morphologies, such as nanosheets, nanobelts, and nanowires, have been reported [23][24][25][26].
Next, X-ray diffraction studies of the NiO thin film on top of the ITO layer were performed; in particular, a θ–2θ scan was used for all measurements, and because of this, buried substrate peaks are also visible in the scans. Figure 4 indicates the presence of NiO (111), NiO (200), NiO (311), and NiO (222) peaks, of which NiO (111) and NiO (200) are the pronounced peaks. Using the Debye-Scherrer formula (crystallite size D = 0.9λ/(β cos θ), where β is the peak width in radians) as well as the interplanar distance (d spacing = λ/(2 sin θ)), it was observed that both peaks give a similar crystallite size along the surface normal direction. The NiO (111) peak area is roughly double that of NiO (200), indicating that the NiO (111) orientation dominates in the NiO film (Table 1). At the same time, the number of NiO (111) crystallites is doubled compared to NiO (220) crystallites along the whole depth of the film.
Figure 5 shows the XRD curve of CuO film deposited on NiO/ITO substrate using hydrothermal method.One can observe multiple peaks which correspond to random orientation of CuO crystallites, of which (002) phase has the highest intensity indicating the maximum number of such crystallites present in CuO thin films along the surface normal direction, whereas CuO (200)/(111) is the second highest in intensity.
All the peak values calculated from the Debye-Scherrer formula, together with the interplanar distances, are summarized in Table 2. Because the CuO (−110) peak has the smallest full width at half maximum (0.281°), it gives the largest observed crystallite size, 29 nm. At the same time, CuO (002) has a crystallite size along the surface normal direction of 16 nm. CuO (002), CuO (200), and CuO (−202) have almost the same full width at half maximum and crystallite size. For the optical properties, UV spectroscopy was used to study the different layers (CuO/NiO). Figure 6 shows the transmittance spectrum of the nanostructured NiO thin film on the ITO substrate. The transmittance increases continuously with wavelength, with no additional peaks over the whole range. In the UV region the sample has an absorption peak at 280 nm and absorbs UV light, whereas in the visible region (400-750 nm) the sample becomes more transparent towards higher wavelengths.
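As an illustration of how such values follow from the Debye-Scherrer formula introduced earlier, the sketch below reproduces a crystallite size close to the 29 nm quoted for CuO (−110) from its 0.281° peak width; the assumed 2θ peak position is an illustrative value, not one taken from Table 2.

```python
# Sketch: Scherrer crystallite size and interplanar spacing for a single peak.
import math

wavelength_nm = 0.154            # Cu K-alpha wavelength
fwhm_deg = 0.281                 # peak width quoted for CuO (-110)
two_theta_deg = 32.5             # assumed peak position for CuO (-110)

theta = math.radians(two_theta_deg / 2.0)
beta = math.radians(fwhm_deg)

crystallite_nm = 0.9 * wavelength_nm / (beta * math.cos(theta))   # D = 0.9*lambda/(beta*cos(theta))
d_spacing_nm = wavelength_nm / (2.0 * math.sin(theta))            # d = lambda/(2*sin(theta))
print(f"D ~ {crystallite_nm:.0f} nm, d ~ {d_spacing_nm:.3f} nm")
```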
The CuO layer UV curve is shown in Figure 7. From Figure 7 it is clearly visible that except for low wavelengths (200-350 nm) the absorption is increasing toward higher wavelengths, which is opposite to NiO layer property.
In the visible range (400-750 nm), the maximum absorption is obtained around 700 nm, showing that the film is highly active towards higher wavelengths. The energy band gap (E g ) was estimated by assuming a direct transition between the valence and conduction bands from the expression αhν = A(hν − E g )^(1/2), where A is a constant; E g is determined by extrapolating the straight-line portion of the (αhν)^2 versus hν plot to (αhν)^2 = 0. Using this, the NiO and CuO band gaps were estimated as 2.78 eV and 1.80 eV, respectively.
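A small numerical sketch of this Tauc-type extrapolation is given below; the spectrum is synthetic and generated from an assumed gap near 1.8 eV purely to show the fitting step, not to reproduce the measured CuO data.

```python
# Sketch: extracting a direct band gap from a (synthetic) Tauc plot.
import numpy as np

h_nu = np.linspace(1.2, 3.0, 200)                       # photon energy (eV)
Eg_true = 1.8                                           # assumed gap (eV)
alpha_h_nu = np.where(h_nu > Eg_true,
                      np.sqrt(h_nu - Eg_true), 0.0)     # alpha*h*nu ~ (h*nu - Eg)^(1/2)

# Tauc plot for a direct transition: (alpha*h*nu)^2 vs h*nu is linear above Eg.
y = alpha_h_nu**2
mask = (y > 0.1) & (y < 0.8)                            # pick the straight-line region
slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
Eg_est = -intercept / slope                             # extrapolate to (alpha*h*nu)^2 = 0
print(f"estimated band gap ~ {Eg_est:.2f} eV")
```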
Conclusions
We have explored chemical bath deposition (CBD) and hydrothermal deposition techniques for growing nanostructures of the inorganic materials NiO and CuO. These copper oxide (CuO) and nickel oxide (NiO) nanostructures have great potential for applications in optoelectronic and sensor devices. The surface morphology of both films (NiO, CuO) has been studied by SEM, XRD, and UV spectroscopy. The NiO thin film reveals a porous structure, whereas the CuO layer shows a nanopillar/nanowall-like structure. By judiciously manipulating the deposition conditions, the mean ledge thickness of the NiO and CuO nanopillars can be controlled. The crystalline properties and growth direction of the as-synthesized NiO and CuO nanostructures were studied by XRD and SEM, which confirm the polycrystalline phase of the films and that the most preferred growth direction of the CuO nanopillars lies along the surface normal.
The present work primarily covers NiO and CuO nanostructures thin films formation, study of their low temperature growth.The layer combination ITO/NiO/CuO is completely open to put any n-type layer for application as a pnjunction like photovoltaics, sensors, and so forth.These big nanopillars of CuO films can make a difference in terms of device performance (charge transport), if used properly with suitable n-type materials.Further work of these structures towards solar cells and humidity sensors is in progress.
Figure 2 :
Figure 2: SEM images of NiO porous nanostructure deposited using chemical bath deposition on top of ITO coated glass plate.
Figure 3 :
Figure 3: Top-view SEM images of the CuO nanopillar/nanowall layer grown on the NiO/ITO substrate.
Figure 4 :
Figure 4: XRD scan of the NiO film deposited using the chemical bath deposition method on ITO glass; ITO peaks are also noticed.
Figure 5 :
Figure5: XRD scan of CuO thin film on top of NiO/ITO substrate deposited using hydrothermal method on ITO glass; some of ITO and NiO peaks are also noticed.
Figure 6 :Figure 7 :
Figure 6: UV transmission and absorptance curve of NiO film deposited on ITO substrate using chemical bath deposition process.
Table 1 :
Different calculated values corresponding to Ni (111) and Ni (220) pronounced peaks observed in XRD scan of NiO/ITO sample.
Table 2 :
Calculated values corresponding to some of the CuO pronounced peaks observed in XRD scan of CuO/NiO/ITO sample. | 3,008.6 | 2014-08-28T00:00:00.000 | [
"Materials Science"
] |
The Determinants of Technical Efficiency of Oil Palm Smallholders in Indonesia
The main objective of the present study is to analyze the technical efficiency (TE) and evaluate its determinant in smallholder oil palm farming in Indonesia. The stochastic frontier analysis was applied to 20,409 selected oil palm farmers from the results of the 2014 Estate Cultivation Household Survey (ST2013 SKB) conducted by BPS-Statistics Indonesia. The results showed that all the input variables had a positive influence on oil palm production along with existing inefficiencies among heterogeneous smallholder farmers. They also indicated that the mean level of the TE among oil palm smallholders was 0.6694. Furthermore, variables such as farmer age, education, type of farmer, and location of the farm had positive and significant effects on TE. Therefore, the development policies in the oil palm smallholder sector might focus on promoting education and facilitate the accessibility of farmers to extension services by giving guidance on farming management based on environmentally friendly principles to improve production regarding land expansion.
INTRODUCTION
Palm oil is a leading commodity in the Indonesian estate sector. It is one of the main sources of foreign exchange earnings, contributing 10% of the value of non-oil and gas exports in 2019. In the last three years, export volumes of Indonesian crude palm oil increased from 28.7 million tons in 2017 to 29.5 million tons in 2019. However, the drop in world palm oil prices decreased the value of exports from USD 20.3 billion in 2017 to USD 15.6 billion in 2019 (BPS, 2020). Although prices have fallen due to trade restrictions and a decline in global demand, the commodity remains the hope of more than 2.6 million farmer households operating, on average, 2.26 hectares per household (Directorate General of Estates, 2019). The sector is also an economic driver in some rural areas of Indonesia.
In 2018, smallholder plantation accounted for 55.09% of Indonesia's total oil palm acreage and 35.67% of national CPO production. In the last 20 years , the acreage increased 12 times (6 times in terms of production), but productivity has not much changed and tended to decrease (Figure 1). In 2018, the productivity of smallholder plantations was only 3.369 tons of CPO/hectare, far below that of large farms reaching 3.853 tons of CPO/hectares (Directorate General of Estates, 2019).
The increase in oil palm smallholder production is due more to land expansion than productivities. Most smallholders tended to be considerably less productive than a large plantation of commercial estates due to insufficient use and access to high-quality production inputs and adoption of poor management practices (Euler et al., 2016b;Jelsma et al., 2017). There is much room for improvement among farmers (Jelsma et al., 2019). Small farmers in developing countries face difficulties in exploiting the potential of new technologies and other agricultural resources, causing them to be inefficient in making decisions (Tijani et al., 2017). Meanwhile, the current level of technology and resources still allows farmers to reduce the production gap with large plantations through efficiency. Efficiency is an essential factor for productivity growth, where technical efficiency (TE) shows the ability of farmers to obtain maximum output on a certain amount of input and technology (Kumbhakar and Lovell, 2000).
The study is aimed to analyze the technical efficiency (TE) and evaluate its determinant in Indonesian smallholder oil palm farming. It is expected that there are socioeconomic and environmental factors that can affect the efficiency and sustainability of agricultural practices of farmers in the future.
LITERATURE REVIEW
Previous studies on the efficiency of oil palm production, covering at least technical efficiency and its inefficiency factors, are still limited compared to those on food crop commodities. No study on technical efficiency has covered almost all of the oil palm production areas in Indonesia. Technical efficiency (TE) approaches to investigating the determinants of performance and production efficiency of oil palm smallholders have been applied by authors such as Hasnah et al. (2004) in West Sumatra, Indonesia; Alwarritzi et al. (2015) in Riau, Indonesia; Juyjaeng et al. (2018) in Thailand; and Tijani et al. (2017) in Johor, Malaysia.
The results indicated various TE and determinants among the farmers in those countries. Hasnah et al. (2004) analyzed the performance of oil palm NES farmers in West Sumatra. They obtained an average TE of 0.66, indicating that farmers were not efficient in managing their farms. They also mentioned that there were opportunities to increase the outputs through the performance of extension services, informal education, and progressive farming. A higher TE value (0.83) was obtained by Alwarritzi et al. (2015), who used the form of the translog function in estimating TE of oil palm farmers in Riau. The determinants of technical efficiency are farmer groups, education, age of farmers, and farm diversification.
The study of Tijani et al. (2017) showed differences in technical efficiency across different plant age groups in Johor Malaysia. The results showed that extension services, household size, age of farmers, access to credit, land conservation, household income, experience, education level, farmer group membership, and government intervention affected the technical efficiency of farmers.
A study on the efficiency of oil palm production in Thailand by Juyjaeng et al. (2018) showed that the TE of farmer group members of the Large Agricultural Plot Scheme (LAPS) was higher (0.63) than that of non-members (0.52). The determinant of technical efficiency among LAPS members is the length of farming experience, while for non-LAPS farmers it is the age of the farmer.
METHODS
The data used in this study are cross-section data from the results of the 2014 Estate Cultivation Household Survey (ST2013 SKB) conducted by BPS-Statistics Indonesia. The analysis is carried out on 20,409 selected farmers who had a monoculture cropping system and cultivated at least 15 trees, the minimum threshold for an enterprise.
In the analysis of the production function, a stochastic frontier model in translog form is used. The stochastic production frontier model was developed by Aigner et al. (1977) and Meeusen and van Den Broeck (1977). This function differs from the traditional production function in that its error term has two components, as described below.
Assuming that f(X i ; β) takes the translog form, the stochastic production frontier model is as follows:
ln Y i = β 0 + Σ j β j ln X ji + ½ Σ j Σ k β jk ln X ji ln X ki + v i − u i , j, k = 1, …, 4,
where Y i is the fresh fruit bunch (FFB) oil palm production of the i-th farmer (kg), X 1 is the number of weighted trees (trees), X 2 is the total quantity of labor used (man-days), X 3 is the amount of chemical fertilizers used (kilograms), X 4 is the amount of pesticides used (liters), and (v i − u i ) is the composed error term, where v i is the noise effect that cannot be controlled by farmers, assumed to be iid and symmetric (v i ~ N(0, σ 2 v )), and u i is the technical inefficiency term, assumed to be iid and truncated (u i ~ N + (μ(Z i ), σ 2 u )). u i and v i are distributed independently of each other. Z i denotes the socioeconomic and environmental variables of the farmers.
This study defines a weighted-tree variable to capture the effect of the age of oil palm trees on the level of output. The variable is defined as follows:
WPT i = w 1 PT 1i + w 2 PT 2i + w 3 PT 3i ,
where WPT i is the weighted number of oil palm trees of the i-th farmer and PT 1 , PT 2 , PT 3 are the numbers of trees in the age categories of Euler et al. (2016a), namely PT 1 , trees 3-7 years after planting, when yield is increasing before reaching its peak; PT 2 , trees 8-16 years old, when yield is at its peak; and PT 3 , trees more than 16 years old, when yield has declined. The weights w i are estimated from the average productivity data (kg/tree) of the sample. The average productivities of PT 1 , PT 2 , and PT 3 are 92, 113, and 111 kg/tree, respectively, so that the weights are w 1 = 92/113, w 2 = 113/113, and w 3 = 111/113. This approach has been applied by several researchers to describe the effect of tree age on production: Alwarritzi et al. (2015) for oil palm in Riau, Indonesia; Ofori-Bah and Asafu-Adjaye (2011) for cocoa in Ghana; and Hung et al. (1993) for rubber in Vietnam.
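The construction of the weighted-tree variable can be illustrated with a short calculation, shown below for a hypothetical farm; the tree counts are invented example values, while the weights are those given above.

```python
# Sketch: computing WPT_i = w1*PT1 + w2*PT2 + w3*PT3 for one (hypothetical) farm.
w = {"young_3_7": 92 / 113, "peak_8_16": 113 / 113, "old_gt_16": 111 / 113}
trees = {"young_3_7": 120, "peak_8_16": 300, "old_gt_16": 80}   # assumed tree counts

wpt = sum(w[age] * n for age, n in trees.items())
print(f"weighted number of productive trees WPT = {wpt:.1f}")
```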
The technical efficiency (TE i ) was measured using the Battese and Coelli (1995) approach as TE i = exp(−u i ). The parameters of the stochastic frontier and the technical inefficiency model are estimated simultaneously. The technical inefficiency model uses the following equation:
μ i = δ 0 + δ 1 Z 1 + δ 2 Z 2 + δ 3 Z 3 + δ 4 Z 4 + δ 5 Z 5 ,
where u i is the technical inefficiency effect with mean μ i , Z 1 is farmer age (years), Z 2 is farmer education (years), Z 3 is a dummy for receiving extension services (1 = yes, 0 = no), Z 4 is a dummy for the type of farmer (1 = supported farmer, 0 = independent farmer), and Z 5 is a dummy for farm location (1 = mineral soil, 0 = peat soil).
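A minimal sketch of how the inefficiency model translates farmer characteristics into a technical efficiency score is given below; the δ coefficients and the example farmer are invented for illustration (the estimated coefficients are those reported in Table 3), and the deterministic mean μ(Z i ) is used as a point proxy for u i.

```python
# Sketch: Battese-Coelli technical efficiency TE_i = exp(-u_i), using the
# deterministic part mu(Z_i) of the inefficiency term as a point proxy for u_i.
# The delta coefficients and the example farmer are purely illustrative.
import math

# assumed coefficients: intercept, age, education, extension, farmer type, location
delta = [1.20, -0.004, -0.030, -0.150, -0.120, -0.080]
farmer_z = [1.0, 45, 9, 0, 1, 1]   # example: 45 yrs old, 9 yrs schooling,
                                   # no extension, supported farmer, mineral soil

u_i = max(0.0, sum(d * z for d, z in zip(delta, farmer_z)))   # truncated at zero
te_i = math.exp(-u_i)
print(f"u_i = {u_i:.3f}, TE_i = {te_i:.3f}")
```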
RESULTS AND DISCUSSION
Before proceeding with the results, the present study conducted a hypotheses test to determine the functional form models and inefficiency in the model by calculating the likelihood ratio (LR) test (Table 1). First, it tested that the second-order parameter of the translog model is zero. The null hypothesis was rejected at a significant level of 1%. This result indicated that the translog model was more consistent representing the dataset than the Cobb-Douglas model. Second, it tested for inefficiencies present in the model, where the null hypothesis states that the inefficiency effect is 0. This null hypothesis was rejected at a significant level of 1%, so there was an effect of technical inefficiency on smallholder oil palm production. From these results, it can be concluded that the form of the translog model and the maximum likelihood estimation (MLE) method is suitable for this study. The results of the MLE estimation are presented in Table 2.
The estimated signs of input parameters in the form of output elasticity of input indicated that all the input variables had a positive influence on oil palm production. All inputs in the production function, i.e., weighted trees, labor, fertilizers, and pesticides, were inelastic, implying that an increase of 1% in every input will lead to a rise in FFB output of <1%. Of all the five input variables in the model, the weighted trees variable was the most crucial factor, with the most significant effect on the output with production elasticity equaled to 0.7038. The sum of all partial output elasticities equaled to 0.9692, indicating a decreasing return to scale, a proportional increase in all inputs results in a less than proportional increase in FFB output.
The TE values of oil palm farmers ranged from 0.0457 to 0.9582, with an average TE of 0.6694. However, most farmers (56.9%) were relatively efficient, with technical efficiency values >0.70. Given the average TE value, there was room for farmers to increase production by 33.0% with existing technology and resources. Table 3 shows the estimation results for the determinants of technical inefficiency. Due to the inverse relationship between technical inefficiency and technical efficiency, the parameter estimates are interpreted in terms of their impacts on TE; Table 4 summarizes the corresponding percentages of farmers and mean TEs by group. Furthermore, the significant negative coefficient on the education variable implies that farmers with more years of formal education were more efficient. This finding is consistent with Alwarritzi et al. (2015), Tijani et al. (2017), and Ngango and Kim (2019).
The largest proportion of formal education of oil palm farmers in Indonesia was primary education (60.6%). Farmers who did not have a formal education had the lowest average TEs (0.6564), while highly educated farmers had higher TEs (0.6734). Educated farmers have a more structured mindset and broader insight that tended to absorb the information more accessible and more responsive in adopting new technologies and innovations.
As expected, the coefficient of extension services is negative and significant. It meant that farmers who receive advisory services tended to have higher levels of TEs. Farmers who had received advisory services are relatively more efficient (TE 0.7124) than those who did not (TE 0.6662). This aspect should be noted because only 7.9% of farmers received extension services, while the remaining 92.1% did not. Through extension services, farmers are introduced to best practices in oil palm cultivation, new technology, and field guidance so that they can improve their efficiency. The results of previous studies identified the significant effect of extension services on technical efficiency were Onumah et al. (2013), Ngango and Kim (2019).
The type of farmer had a significant effect on technical efficiency. Supported farmers were more efficient (TE 0.7093) than independent farmers (TE 0.6630). This aspect should be noted because the largest proportion of oil palm farmers was independent farmers, reaching 85.2%. Supported farmers were involved in oil palm farming through partnerships with large plantation companies. The relationship between the two is through a contract in which the companies are responsible for technical assistance and marketing (International Finance Corporation, 2013). Meanwhile, independent farmers adopt technology independently, that is, without any direct support and government involvement (Euler et al., 2016b), and they are free to sell to any buyer (International Finance Corporation, 2013). This result can be a justification that supporting farmers have taken advantage of technology transfer and guidance from companies. A similar result was also obtained from Alwarritzi et al. (2015).
The location of the farm had a significant effect on technical efficiency. Farmers who cultivated oil palm in mineral soil land might be more efficient (TE 0.6718) than peat soil (TE 0.6556). This result was consistent with the study of Alwarritzi et al. (2015). Farmers on peat soil spend more on production costs, considering that peat soil is fragile, relatively infertile, and irreversible (Ritung and Sukarman, 2016). Marginal and agronomically inappropriate land use implies high production costs with potential environmental severe impacts. To support food and energy securities, the utilization of peat soil remains one option in supporting long-term agricultural development. Still, it's only limited to degraded or abandoned (bushes or grasslands) peat soil (Las et al., 2016).
Average yields (in kg of FFB per tree) and TEs varied across the age groups of oil palm trees (Table 5). Before the peak period (3-7 years after plantation establishment), when yields are still rising, average productivity was 92.33 and TE was 0.6399. During the peak period (8-16 years), average productivity and TE were 113.24 and 0.6962, respectively. At the economic age (up to 25 years), average productivity and TE remained high, at 114.50 and 0.7126, respectively. Average productivity and TE began to decline after the economic age was passed, to 90.62 and 0.6475, respectively. Farmers therefore need to pay particular attention to plantations that have passed the economic age, as summarised in the sketch below.
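A minimal tabulation of the age-group averages quoted in the paragraph above, with the post-economic-age productivity decline worked out; the group labels are paraphrased and the pandas layout is an assumption, not the paper's Table 5.

```python
import pandas as pd

# Age-group averages quoted in the text (FFB kg per tree and mean TE).
summary = pd.DataFrame(
    {
        "yield_kg_per_tree": [92.33, 113.24, 114.50, 90.62],
        "mean_te": [0.6399, 0.6962, 0.7126, 0.6475],
    },
    index=["3-7 yrs (pre-peak)", "8-16 yrs (peak)",
           "up to 25 yrs (economic age)", "past economic age"],
)
print(summary)

# Relative drop in productivity once the economic age is passed
drop = 1 - summary.loc["past economic age", "yield_kg_per_tree"] \
         / summary.loc["up to 25 yrs (economic age)", "yield_kg_per_tree"]
print(f"yield decline past economic age: {drop:.1%}")  # ~20.9%
```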
CONCLUSIONS AND POLICY RECOMMENDATIONS
The analysis of the output elasticities of inputs and returns to scale reveals that all inputs used in the production function are inelastic, so that a proportional increase in all inputs results in a less-than-proportional rise in FFB output. The mean TE among oil palm smallholders is estimated at 0.6694, and more than 56% of farmers have TE above 0.70. The results further reveal that farmer age, education, extension services, type of farmer, and farm location significantly influenced farmers' TE.
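The returns-to-scale statement above is the sum of the input elasticities being below one. The sketch below shows that arithmetic with illustrative elasticities; the input names and values are assumptions, not the paper's estimates.

```python
# Illustrative (not the paper's) output elasticities from a production
# frontier; returns to scale (RTS) is their sum.
elasticities = {"land": 0.35, "labour": 0.15, "fertilizer": 0.20, "pesticide": 0.05}
rts = sum(elasticities.values())

print(f"returns to scale: {rts:.2f}")
if rts < 1:
    print(f"decreasing returns: a 1% rise in all inputs raises FFB output by about {rts:.2f}%")
```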
Based on the factors found to contribute to efficiency in this study, the following policy implications are proposed. The government should promote education in rural areas and improve the knowledge and skills of young farmers through training programs and internships. It is also necessary to strengthen the role of extension services in improving farmers' technical practices and in disseminating information on the latest technology to all farmers in all locations. Farmers who cultivate on peat soil should receive guidance to manage their farms according to environmentally friendly principles and sustainable agricultural systems, with the minimum possible ecological risk.
ACKNOWLEDGMENT
This study is part of a doctoral dissertation in agricultural economics at IPB University. We gratefully acknowledge the financial support provided by the Indonesian Endowment Fund for Education (LPDP). | 3,714.2 | 2020-11-04T00:00:00.000 | [
"Economics"
] |
Property Market Efficiency: Developed or Vacant Property
Property market efficiency, in terms of an adequate supply of land for appropriate use, has a strong impact on the value of urban real estate. Land supply plays an important role in providing space for housing and in meeting the demands of the commercial and industrial sectors. Hence, the distribution of urban land for development affects the structural development of a city, while the existence of vacant urban land can lead to unbalanced real estate development. This paper therefore aims to examine property market efficiency in terms of the relationship between the availability of real estate and the development process. A review of previous studies suggests that it is possible to examine property market efficiency, although the term 'efficiency' has been viewed from different theoretical perspectives by various researchers. Interestingly, many researchers have examined property efficiency issues from the conventional approach up to the emergence of an alternative institutional economics approach. Institutionalism considers rules, policies and organisations and the way these may govern agents' social relations and their attitudes in society; a major role of institutions in a society is to reduce uncertainty by establishing a stable structure for human interaction.
Introduction
Land has been identified as a key determinant of economic and social development. Generally, land supply plays an important role in providing housing, which includes a variety of options to meet the needs of the commercial and industrial sectors. Hence, the distribution of urban land for development will affect the structural development of a city. The existence of vacant urban land can lead to an unbalanced real estate development. As such, the government has to consider the adequacy of land supply, which is a key input affecting the interest and benefit of urban residents. Therefore, it is not surprising if land supply strategy becomes an important part of the social and economic development strategy. This is because land supply strategy tends to provide guidance to property owners, potential owners, property investors, organizations, and the industrial sector. In a concrete sense, land supply strategy can be viewed as a long-term solution to land use and property issues in urban areas.
The strategy of property development is a key tool in formulating government policy. As such, a land supply strategy for property development may have the following benefits: (1) helping to enhance social and economic development; (2) achieving optimum benefits for community purposes; (3) providing various choices of land and housing development opportunities; (4) enabling the development of land by the private sector to encourage greater competition; and (5) ensuring future investments have a level of certainty in terms of the availability of land supply and its pricing. This paper aims to examine property efficiency in terms of the availability of real estate land in the course of the development process. This is because the relationship between the availability of land and the development process has a major impact on the issues of property efficiency, which are considered to be very complex.
Land Supply Issues and Efficiency
The phenomenon of vacant urban land can lead to unbalanced real estate development. In addition, vacant land areas can have negative implications for urban economic development strategy, for instance in terms of efficiency and responsiveness (Michael & Bowman, 2000). Burrows (1978) interprets vacant land in a city as unused land, usually overgrown with wild vegetation, covered with a rocky surface, or occupied by abandoned buildings. Michael and Bowman (2000) explain that abandoned or vacant urban land includes both public and private property that is not in use or has been abandoned, with or without infrastructure or access to infrastructure, where the land has been left undeveloped. This classification focuses on land parcels characterised by particular types of activity (e.g. under-use, non-use, temporary use) in combination with particular types of physical structure (e.g. complete buildings, part-demolished buildings, cleared sites, tips). Several "forms" of vacancy have been identified. A fairly full classification would distinguish between: 1) land and building under-use; 2) building vacancy; 3) dereliction of buildings and other operations, e.g. tips and disused railway sidings; 4) vacant land (buildings demolished and removed); 5) temporarily used but otherwise vacant land; and 6) other categories of land and building under-use (prior to full use and occupancy), e.g. grassing over of previously unused land and some public open space. Studies on 'vacant urban land' mostly focus on the factors that led to its existence in cities, taking into account the concept of urban vacant land introduced by Burrows (1978) and Michael and Bowman (2000). Studies conducted by Evans (1985), Gore and Nicholson (1985), Burrows (1978), MacGregor (1985), Chisholm and Kivell (1987), as well as Cameron et al.
(1988) have focused on the main causes of vacant urban land, which are often attributed to lifestyles, technology and demographic factors. Similarly, studies by local scholars such as Abd Lateh @ Latiff (2005), Tan Hui Lian (2001), Ishak (1991) and Wan Abd. Aziz (1986) have examined the causes of urban vacant land in Malaysia in the context of economic, political and social factors. Greenstein and Sungu-Eryilmaz (2004) explain that several factors can be attributed to the rise of 'vacant land' in urban areas, namely businesses moving out, residential areas located in the downtown area, and discrimination in investment opportunities. In addition, studies of urban vacant land involving large blocks of land holdings have also contributed to the understanding of urban wasteland or vacant land. A study by Ismail Omar (1999) on the Malay Agricultural Settlement (MAS) scheme in Kampong Baru, Kuala Lumpur found that the MAS land was designated as agricultural reserve land but, since it was not developed, ultimately became 'vacant land' in the urban area. Most of these studies have concentrated on the causes of urban vacant land. The urban vacant land concept denotes the idea of urban wasteland, where an area has been abandoned and left undeveloped and generally has dilapidated buildings on it. Greenstein and Sungu-Eryilmaz (2004), for example, classified vacant land as urban wasteland; however, they did not emphasize the function of urban vacant land. Past studies on vacant land have thus failed to give adequate attention to issues of identification, classification and function of vacant land, and these gaps have contributed to the persistence of the vacant land problem. City administrators need to take these gaps into account in order to facilitate economic development that can ensure an adequate land and property development process. A documented source from DBKL (2000) asserts that 'vacant land' in urban areas has not only led to imbalance in urban development, but has also caused ineffective land uses such as slum areas and illegal parking areas. As a consequence, the loss of council tax collection has reduced the revenue of local authorities, making the redevelopment process difficult because of increasing costs. Although the authorities intervened in preparing the planning zones, they were not able to solve the problem of urban 'vacant land' in the city.
One implication of the emergence of vacant land is the loss of land tax collection for the state government: in 2000 this was reported to be about RM15 million a year, with some 5,756 hectares of urban land left undeveloped, amounting to about 23 percent of the overall land use in Kuala Lumpur (Mutazar, 2000). In fact, by law and by institutional protection, vacant land should not be a phenomenon at all, because Sections 115, 116 and 117 of the National Land Code 1965 explicitly state that land left vacant for a period of time (i.e. 2 years for land and buildings, and 3 years for agricultural and industrial use) is liable to seizure, as provided for under Section 129(4)(c) of the National Land Code 1965. Section 129(4)(c) provides that the Administrator may take temporary rights over the land as directed by the State Authority or, in the absence of such direction, make an order declaring the land confiscated by the State Authority.
Having considered all of this, the question that arises is how land in urban areas can be used more optimally. In a competitive environment, vacant land in urban areas is better described as urban wasteland, and this situation threatens the efficiency of the land and property development process in the city.
Literature Review
The relationship between planning policy and land use is complex, as it is difficult to ensure that demand is met by supply when market changes are drastic and uncertain. A study by Jackson et al. (1994) in Fenland, South Cambridgeshire and North Hertfordshire found that the three regions face different constraints in the land development process and adopt different development plans. In terms of output, constraint in North Hertfordshire appeared to "iron out" some of the boom-slump variations in completions and outstanding permissions observed elsewhere. South Cambridgeshire reacted more to changing market conditions. But it was in Fenland, the least constrained area, that the most dramatic changes could be seen, with both output and prices rising rapidly in the boom only to be followed by an equally rapid decline. The researchers conclude that the release of housing land in one area cannot fully substitute for constraint in another, nor can the planning system as a whole absorb sudden changes in demand. Additionally, Krabben et al. (2007) discussed the understanding of the spatial efficiency of industrial land, explaining a concept of space efficiency that is closely related to the productivity of space. According to Andrea Sarzyshi and Alice, L (2010), spatial efficiency means using land such that the most output possible is produced; productivity efficiency therefore refers to the output produced from a total amount of land (value added per square foot). In comparison, space efficiency generally involves quantities, concerning the effectiveness of the production process: for example, an industry or firm measures space-efficient output per acre of land used, so firms will use more land to produce more output. An empirical case in the Netherlands used regression analysis of land productivity, taking into account several variables such as rates of urbanization, the ratio of supply to increased demand for industrial land, and indicators such as the ratio of manufacturing workers and the ratio of employment in transport. The authors suggested that industrial development strategy should focus on improving space efficiency and space productivity for the return on land area, using variables for the industry sector, supply of land and regional economic growth. They also suggested the need to assess property rights theory in order to examine the efficiency of industrial land in a more systematic manner.
Besides space efficiency, the notion of market efficiency was emphasized by Akee (2006). In his article on the role of transaction costs, he postulates the outcome of market efficiency. Using the Coase Theorem, the researcher tried to identify the optimal efficiency of market returns and found that the market and property rights alone cannot ensure optimal market outcomes. Accordingly, the theorem attempts to identify barriers to the market. Among the data used were a time-series analysis and a cross-tabulation of two variables (i.e. value against land tax) to determine the effect of reducing transaction costs on the efficiency yielded in the land market.
Jun Iritani and Yeoun Tae Lee (2007) used the Coase Theorem in their research and divided it into two sub-theorems. They looked at the allocation of resource efficiency that can be achieved from the supply of land when a variety of agents are involved; in both cases the allocation of resources did not depend only on the state. The early writing by Coase (1960) was based on bargaining, which involves internal and external social costs, and does not necessarily need to be based on market forces. Therefore, market efficiency cannot be obtained if transaction costs become too high or there is no complete information. Turnbull (2006) used a rate-of-return regulation model for land efficiency and its implications for land use. His paper described the direct result of giving power to a private firm in decision-making relating to land development. The research attempted to examine in greater detail the relationship between regulation and firms which utilize land in the market, which can eventually lead to land efficiency. Regulation theory suggests that by giving appropriate incentives, more capital will be generated, leading to a positive effect on land efficiency. The results showed that firms following Pareto guidelines will provide an institutional efficiency orientation for the government in regulating the use of land for development purposes; privatization is an example of an institutional efficiency mechanism used in the past. Gregory Clark (1998) examined the evolution and growth of agricultural land in Europe over the last 600 years, in which the privatization of this sector has yielded large profits since about 1830. However, as a consequence of this historical process, land ownership has created 15 percent of wasteland that could have served as potential income for several generations. Clark (1998) therefore suggested that changes in relative prices will lead to marginal gains for the parties involved, resulting in greater land efficiency.
Besides market efficiency, the idea of institutional efficiency has also been explored in detail. Keogh and D'Arcy (1999) highlighted the debate on the efficiency of the property market using the institutional approach. They considered the efficiency of the property market as having specific criteria related to the property market itself, including the development process. They argue that the institutional dimension can bring about changes to existing notions of land efficiency. As such, the institutional approach views the development process of the property market as a key entity within the property market efficiency framework. They also incorporated the time dimension as a key element in the process of social and economic change when examining the property market's capabilities.
In examining the institutional approach, one of the main institutions that warrants attention is economic institutions. Arvanitidis (2006) raised the issue of land market efficiency from the perspective of institutional economics, in which the analysis focuses on the performance of key actors. He argues that most of the efficiency concepts used in the property market are conventional views, such as productive efficiency, space efficiency and the efficient market hypothesis. These conventional approaches refer to a contemporary ideal benchmark without taking into account the nature and dynamics of the development process that occurs in the property market. In his study, he suggested a review of property efficiency for land valuation in the market, which involves finding a vision of sustainable economic development. To substantiate the idea of property market efficiency, he suggested two theoretical notions: the first is institutional uncertainty and the second is institutional diversity. Institutional uncertainty concerns the impact of the existing institutional structure on the interpretation of urban quality and the extent to which it is effective in enhancing the socioeconomic conditions of urban communities and, ultimately, providing a better economic environment. Institutional diversity, meanwhile, assesses the operational details of the diverse institutions, organizations and products present in the property market. As such, there is a need for micro- and macro-level analysis involving time-series data to capture the dynamics of these economic institutions.
From the previous studies, it can be seen that different authors view the term efficiency in the property market from different theoretical perspectives and in different contexts. Interestingly, many authors have examined the issue of property efficiency from the conventional notion of efficiency up to the emergence of the institutional economics approach. The review also suggests that, in order to examine the efficiency of the property market, researchers need to use time-series analysis to capture the dynamics of the real estate sector so as to identify the market potential in terms of economic and social change. It is apparent from the reviews by Keogh and D'Arcy (1999) and Arvanitidis (2006), which focus on theorizing economic and social change, that the institutional economics analysis framework provides an alternative tool for explaining the nature of the land development process.
Development Process - Considering Constraints on Land Supply for Development
Land development is a complex process in the built environment. This complexity is mainly due to spatial and temporal variation between one development project and another (Ratcliffe & Stubbs, 1996; van der Krabben, 1995). Although various theoretical approaches attempt to consider fully the whole range of issues in the land process, they contain both strengths and weaknesses in representing the complex nature of built environment production. In general, the various theoretical approaches consider the actors and the wider external forces which affect their decisions in the land development process (Gore & Nicholson, 1991; Healey & Barret, 1990; Healey, 1992).
Neoclassical economics emphasizes the importance of the price mechanism and resource allocation but tends to neglect the effect of economic change and market instability in the land process (Healey, 1991). Moreover, with its general assumption of economic rationality, the neoclassical approach fails to address the co-ordination needed in dealing with human social relations (Van der Krabben, 1995). According to Hodgson (1988), by relying on the device of rational economic man, neoclassical theories tend to ignore the entire manner in which the human agent behaves, thus neglecting the dynamic and evolutionary nature of market processes. With reference to constraints on land supply for development, these models consider selective distortions which range from individual decisions to government intervention.
The empiricist approach focuses on a limited scope of events and agents' activities, but the breakdown of events is too broad and fails to highlight adequately their sequential overlapping. Furthermore, it fails to explain comprehensively the important role of key actors and their interests (Healey, 1991). Nevertheless, the event-sequence models are able to show the occurrence of supply blockages in the development process, which play an important role in determining the feasibility of the development process. However, the empiricist approach lacks dynamic concepts and is inadequate in considering the driving forces of land development and the way these may affect actors' decisions on land development.
The humanist approach considers the individual actor's behaviour as playing a vital role in restricting the supply of land for development. A passive or active landowner may influence development decisions, especially at the early stages of the land development process. The behaviourist approach, however, tends to over-emphasize the managers' role and neglect consumer-related roles (Monk et al., 1991).
Structural approaches concentrate on the broader social, economic and political forces, including land policy, planning, funding, taxation and the essential provision of land development. This means that structural approaches emphasize the general setting of the broader driving forces rather than the elements that constitute human activities (Backhouse, 1987). Structural approaches, however, hardly offer ways to link events and actors' behaviour to broader social, economic and political processes, thus providing little insight into the role of landowning and capital accumulation. Since a limited role is given to the whole range of development difficulties, these approaches are inadequate in addressing the complex nature of the development process, in particular the constraints on land supply for development.
The structure and agency model introduces the linkage between rules, resources and ideas, and the roles, interests, strategies and actions of actors, through land rights, labour, finance, information and expertise. Although Healey (1992) cited the importance of formal and informal rules in the linkage between structure and agency, she did not elaborate explicitly on the ways in which these rules affect the supply of land for development. Furthermore, Healey's (1992) model was weak in terms of the imperfect dichotomy between structure and agency (Ball, 1998) and the problems related to the definition of institutions (Hooper, 1992). As a result of these weaknesses, Healey's (1992) model is inadequate in explaining the constraints on land supply for development purposes.
Institutional analysis models consider the various institutions affecting human interactions, which are responsible for the direction of institutional change in society. Institutional change underlies the various rules, interactions and institutional constraints affecting the economic performance of human agents. This means that institutional economic models tend to explain the land development process extensively, providing a much richer insight into the linkage between formal and informal rules or institutions through the exercise of agency relations by agents in the land process (North, 1996; van der Krabben, 1995). The formal and informal rules may lead to formal and informal constraints which govern actors' decisions and the way they interact, and hence restrict the supply of land for development. According to institutionalists, there are formal written rules and policies in the land development process, in particular planning and development regulations, which restrict the supply of land for development purposes. On the other hand, there are informal unwritten rules such as custom, tradition, perspectives, values and collective behaviour which may constrain agents' decisions and affect their social interaction in the land development process.
Conclusion
The relationship between the property market and efficiency can be examined in the context of land supply. Limited land supply, in particular for residential use, becomes an acute problem when a land area in the city has been identified as having development potential but remains undeveloped, leading the area to be considered under-utilised and vacant. These vacant, under-utilised and undeveloped lands are subject to various land supply constraints. Past research has identified several indicators that can enhance land efficiency, namely a safe, attractive and healthy environment that can at the same time offer educational opportunities, employment, entertainment and social attraction. Other indicators include physical ownership, marketability, infrastructure, land use, financial deficit and pollution. All of these indicators were found to influence urban land efficiency in the context of providing housing for city residents. The institutional economics analysis is therefore usually carried out in a descriptive way: the broad nature of formal and informal institutions, rules and constraints leads to a descriptive identification of formal and informal institutions, agency relations and agents' decisions which constrain the supply of land for development. In particular, the analytical tool of the institutional economics analysis framework was utilised to investigate the existence, importance and implications of land supply constraints on the land development process.
"Economics"
] |
Temperature-dependent yield stress and wall slip behaviour of thermoresponsive Pluronic F127 hydrogels
This study explores the temperature-dependent dynamic yield stress of a triblock thermoresponsive polymer, Pluronic F127, with chemical structure (PEO)100(PPO)65(PEO)100, during the sol–gel transition. The yield stress can be defined as static, dynamic, or elastic, depending on the experimental protocol. We examine the dynamic yield stress estimation for this study, which usually entails utilizing non-Newtonian models like the Herschel–Bulkley (HB) or Bingham models to extrapolate the flow curve (shear rate against shear stress). Initially, we determine the yield stress using the HB model. However, apparent wall slip makes it difficult to calculate yield stress using conventional methods, which could lead to underestimates. To validate the existence of apparent wall slip in our trials, we carry out meticulous experiments in a range of rheometric geometries. To determine the true yield stress corrected for slip, we first use the traditional Mooney method, which requires labor-intensive steps and large sample sizes over various gaps in the parallel plate (PP) design. To overcome these drawbacks, we use a different strategy. We modify the Windhab model equation by adding slip boundary conditions to the HB equation, which allowed us to calculate the slip yield stress in addition to the true yield stress. In contrast to other typical thermoresponsive polymers like poly(N-isopropyl acrylamide) (PNIPAM), our findings demonstrate that PF127's yield stress obeys the Boltzmann equation and increases with temperature.
Introduction
Hydrogels have become a highly adaptable class of materials with a wide range of uses in many different domains, such as biomedical engineering, tissue engineering, and drug delivery.1 Because of their exceptional capacity to undergo sol-gel transitions in response to temperature fluctuations, thermoresponsive hydrogels have been used in biomedical applications for over two to three decades.2,3 Poloxamers are the second most commonly used thermoresponsive polymer after poly(N-isopropyl acrylamide) (PNIPAM). Poloxamers are typically nonionic tri-block copolymers of the form PEO-PPO-PEO, based on a hydrophilic block of poly(ethylene oxide) (PEO) and a hydrophobic block of poly(propylene oxide) (PPO).3 Commercially, poloxamers are also known as Pluronics. Depending on molecular weight and physical state, Pluronic is available in different grades such as L31, P104, P85 and F127, where the first letters L, P and F indicate the physical form at room temperature (liquid, paste and flaked solid, respectively) and the numbers encode the PPO molecular mass and PEO content.4 F127 has been used for various biomedical applications because its sol-to-gel transition temperature is close to body temperature, and its rheological properties play an essential role in different biomedical and drug delivery applications.5
Hydrogels are intricate materials that exhibit neither simple liquid nor perfectly elastic solid behavior. They exhibit viscoelastic behavior, with both elastic and viscous components, and their mechanical behavior differs from that of both solids and liquids. As the lowest stress necessary for the material to flow, the yield stress is a critical parameter in describing the mechanical behavior of hydrogels.6-8 Different methods have been proposed to determine the yield stress. The yield stress can be defined as static, dynamic, or elastic, depending on the experimental protocol.9,10 The static and elastic yield stresses can be found directly from creep and oscillatory shear experiments, respectively, while the dynamic yield stress can be calculated by extrapolating the flow curve (shear rate versus shear stress) through fitting with non-Newtonian models such as the Bingham or HB model; the yield stress can also be viewed as the minimum stress at which, if one can wait a sufficient time, the sample reaches a final steady flow.
Here, we aim to investigate the temperature-dependent dynamic yield stress of PF127 during the sol-gel transition. Surprisingly, the traditional method of calculating the dynamic yield stress by fitting the HB model over the whole range of data did not work at all temperatures. Following a comprehensive review of the literature, we found that apparent wall slip can lead to underestimates of the true yield stress and is one of the difficulties in computing the dynamic yield stress.11 Therefore, accurately determining the yield stress of hydrogels is crucial for optimizing their performance and understanding their rheological behavior.12 The origin and history of apparent wall slip are described below.
The rheological characterization of complex materials, such as colloidal gels or soft microgels, changes when they are bounded by smooth surfaces rather than rough or serrated surfaces, owing to wall effects.13 The effects of walls in complex fluids are often interpreted as apparent wall slip. Wall slip can be understood from the typical example of a solid block held between two parallel plates with smooth polished surfaces, one of which is in motion; slip arises from the inadequate friction at the smooth surface.14 Similar situations can also arise in various complex fluids when they exhibit solid-like behavior under external conditions such as time, temperature or shear rate. For example, some complex materials behave like liquids at a low shear rate and exhibit solid-like behavior at a higher shear rate.14 In some cases, solid-like behavior is observed after a specific time, keeping the temperature and other parameters constant.10,15-21 Apparent slip is generally associated with a thin layer adjacent to the wall; the layer may consist of pure solvent or of deficient particle concentrations compared with the bulk dispersion. The literature shows a high velocity gradient near the wall due to apparent slip. The velocity profile within the rheometric plate gap can be measured directly by several advanced techniques combined with conventional rheometers, such as particle image velocimetry (PIV),22 nuclear magnetic resonance (NMR) velocimetry,23 ultrasonic speckle velocimetry (USV)24 and dynamic light scattering (DLS).25 One can also indirectly observe wall slip from flow curves (shear rate versus shear stress).13 Over the years, a large number of studies have investigated wall slip in complex fluids. While it is difficult to include them all, a few are documented in Table 1.27-29,37-40,47 Most of these papers did not discuss the temperature-dependent behavior of thermoresponsive materials. However, Aral and Kalyon studied the time- and temperature-dependent behavior of poly(butadiene-acrylonitrile-acrylic acid) terpolymers and found that the temperature-dependent yield stress decreases with increasing temperature.36 Jalaal et al. performed a systematic study to understand the rheological phase behavior of PF127 by analyzing the flow curves at different temperatures and found that the yield stress increases with temperature.49 They used sandblasted parallel plates to avoid the wall-slip effect and calculated the yield stress by fitting with the Herschel-Bulkley (HB) model.49 However, they calculated the yield stress of PF127 at different concentrations only at 5 °C intervals, whereas for thermoresponsive hydrogels such as PF127 short intervals, such as 1 °C, are crucial. Therefore, we investigate the temperature-dependent yield stress of PF127 gels at 1 °C intervals during the sol-gel transition. Careful observation confirms the presence of apparent slip during the experiments.
Traditionally, there are three approaches to dealing with apparent slip when calculating the yield stress. First, slip can be avoided by surface modification, serrated plates, or alternative geometries such as vane-in-cup and helix tools; unfortunately, we could not avoid slip in our experiments because of experimental limitations. The second approach is to calculate the true yield stress using Mooney's analysis, and we have extensively calculated the true yield stress following Mooney's method. The third approach is to use a suitable model or theory that includes slip and yields the true yield stress. In this article, we modified the HB model by including the slip boundary condition and were able to calculate the true yield stress at different temperatures. Finally, we plot the dynamic yield stress as a function of temperature obtained using the three different methods (the HB model, Mooney's plot, and our approach) and find that the yield stress of PF127 increases as a function of temperature and follows Boltzmann's equation for sigmoid curves. We also compare the yield stress behavior of PF127 with that of other thermoresponsive materials such as poly(N-isopropyl acrylamide) (PNIPAM).
Methods
Rheology. Rheological measurements of PF127 samples were performed on an MCR 302 rheometer (Anton Paar) with stainless steel cone-plate (CP) and parallel plate (PP) geometries of different diameters (CP-25, CP-60, PP-40, PP-50) in rotational and oscillatory modes. Oscillatory frequency sweep experiments were performed in the linear viscoelastic region (LVR) at a constant shear strain amplitude γ = 0.1%, over an angular frequency (ω) range from 0.1 to 100 rad s⁻¹, at different temperatures (20-26 °C) during the sol-gel transition, to study the linear viscoelastic properties of PF127. Flow curves were obtained from rotational tests at temperatures varying from 10 to 40 °C. The flow curves were measured by ramping the shear rate up on a logarithmic scale over two input ranges, one from 0.1 to 100 s⁻¹ and another from 0.01 to 1000 s⁻¹. The measuring-point duration was varied on a logarithmic scale from an initial t = 100 s to a final t = 5 s. This approach prolongs the measuring points at lower shear rates while shortening them at higher rates, ensuring that transient effects in the sample are minimized or have fully decayed by the end of each prolonged measuring point within the low-shear range. All experiments were carried out with PP-40 and a plate gap of 1 mm unless otherwise mentioned. For calculating the true yield stress following Mooney's method, we also performed measurements at gap sizes of 0.1, 0.25, 0.5 and 1 mm. To confirm the wall slip phenomenon in our system, we also analyzed the steady-state flow curves in different geometries (CP-25, CP-60, PP-40, PP-50) at different temperatures.
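A small sketch of how the variable measuring-point durations described above could be generated, pairing logarithmically spaced shear rates with logarithmically decreasing dwell times (100 s down to 5 s). The number of points and the pairing scheme are assumptions for illustration, not instrument settings taken from the source.

```python
import numpy as np

# Shear rates from 0.1 to 100 1/s on a log scale, with measuring-point
# durations decreasing logarithmically from 100 s to 5 s.
n_points = 31
shear_rates = np.logspace(np.log10(0.1), np.log10(100.0), n_points)
durations = np.logspace(np.log10(100.0), np.log10(5.0), n_points)

for g, t in zip(shear_rates[:3], durations[:3]):
    print(f"shear rate {g:8.3f} 1/s  ->  measuring time {t:6.1f} s")
```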
Linear viscoelasticity of PF127 at different temperatures
The linear viscoelastic properties of 20% PF127 gels were investigated by frequency sweep experiments at a minimal shear strain amplitude, i.e. γ = 0.1%. The frequency sweep data for 20% PF127 at different temperatures are plotted in Fig. 1. At 20 and 21 °C, the loss modulus (G″) is dominant over the storage modulus (G′) over almost the entire frequency range, indicating liquid-like (viscous) behavior of PF127. At low temperatures, the PF127 solution thus behaves like a liquid, while at 22 °C and above the storage modulus (G′) dominates over the loss modulus, indicating viscoelastic behavior. At 22 °C, the storage modulus shows a plateau at G′ = 200.2 ± 20.2 Pa, known as the elastic plateau in linear rheology. As the temperature increases, the elastic plateau value increases and the sample behaves as a viscoelastic solid, i.e. a gel. We found that these samples exhibit a yield stress in steady shear rheology (flow curves), and we calculate the dynamic yield stress in the following section.
Determination of dynamic yield stress from steady-state flow curves
The flow curves, i.e. shear stress (σ) and viscosity (η) as a function of shear rate (γ̇), of 20% PF127 at 23 °C, 24 °C and 25 °C are shown in Fig. 2. As shown in Fig. 2(a), at 23 °C the shear stress increases linearly with shear rate, which is typical Newtonian behavior; this is also reflected in Fig. 2(b), where the viscosity is independent of the shear rate. The samples exhibit a yield stress at 24 °C and 25 °C, which can be calculated using a non-Newtonian model. We define this temperature as the sol-gel transition temperature, using the same convention as Jalaal et al.49 (here we have used a 20% PF127 sample; other PF127 concentrations are shown in ESI Fig. S1 and S2†).
To calculate the yield stress in this work, we have used the well-known Herschel-Bulkley (HB) model, whose constitutive equation is σ = σ_y + kγ̇^n, where σ is the shear stress, γ̇ is the shear rate, σ_y is the yield stress, k is the consistency index, and n is the flow index. The HB model was fitted to the flow curves at different temperatures, and from Fig. 3 we observed that the HB model explains the data only for 24 °C and 25 °C (Fig. 3(a) and (b)). For 26 °C and 27 °C it fails to explain the data (the yield stress values in the fitting parameter tables shown in the figure insets are 0 ± 49.5 and 0 ± 32.9 for 26 °C and 27 °C, respectively, which is physically meaningless), and we observe a slope change in the mid-shear range in both data sets. To understand these results we turned to the literature and, in this context, found an interesting work by Georgios C. Georgiou,46 in which flow curves were generated numerically for a parallel-plate configuration by considering various aspects of wall slip.
They showed that the following slip law must be employed to calculate the true yield stress when wall slip occurs during the measurements.
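The slip law itself (the paper's eqn (2)) does not appear above. A plausible form, consistent with the symbols defined in the next sentence and with the units quoted for the slip coefficient in the ESI caption (Pa s^s/m^s), would be the following; the exact algebraic form and the rendering of the symbols (σ for stress, β for the slip coefficient) are assumptions on the editor's part, not taken verbatim from the source.

```latex
% Assumed reconstruction of the slip law (eqn (2)); not verified against the source.
\begin{equation}
  v_{w} =
  \begin{cases}
    0, & \sigma_{w} \le \sigma_{s},\\[4pt]
    \left(\dfrac{\sigma_{w} - \sigma_{s}}{\beta}\right)^{1/s}, & \sigma_{w} > \sigma_{s},
  \end{cases}
\end{equation}
```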
where v_w is the relative velocity of the fluid with respect to the wall, σ_w is the wall shear stress, σ_s is the slip yield stress, β is the slip coefficient, and s is the slip exponent. Fig. S3† summarizes their results (see Fig. S3(a), (c) and (d)). Inspired by the above discussion, we realized that the HB model fails to explain our data because of wall slip. We therefore performed more careful experiments to confirm the presence of wall slip in our system, as discussed in the following section.
Confirmation of the presence of wall slip in the system
To verify whether apparent wall slip exists in our system, we conducted additional tests using various rheometer geometries, including parallel plate (PP) and cone-and-plate (CP). The literature states that rheological measurements should not depend on the measuring geometry; if the values do vary, wall slip is present.13,50 As seen in Fig. 4, we observed a significant change in the flow curves between the different geometries, confirming that the sample experiences wall slip during rheological measurement. One possible reason is that in cone-and-plate geometry the slip tends to occur primarily at the outer edge of the cone, where the sample is in contact with the stationary plate.51 This can distort the flow profile, affecting measured rheological properties such as viscosity and shear stress. In parallel-plate geometry, wall slip typically occurs at both surfaces; depending on its severity, it can significantly influence the flow behavior, especially at lower shear rates where slip effects are more pronounced.51 As discussed, wall slip usually occurs in solids due to inadequate friction on a smooth surface. Above 24 °C, a sol-gel phase transition occurs for 20% PF127, and at higher temperatures the sample shows solid-like behavior, as confirmed by Fig. 4. The signature of wall slip is therefore seen at higher temperatures, and as a result the sample deviates from the HB model. Using a rough surface to increase friction and suppress slip would be the proper way to deal with it; however, because of experimental constraints, no other geometry was available to us to prevent slip. Consequently, we used Mooney's traditional method to calculate the true yield stress corrected for slip, using measurements at different parallel plate (PP) gaps.
Measurement of true yield stress corrected from slip by Mooney's plot using parallel plates
According to the literature, in the presence of wall slip the shear rate measured by the rheometer differs from the true shear rate and is defined as the apparent shear rate,23,30,31,47,52 γ̇_app = γ̇ + 2V_S/h, where γ̇_app is the apparent shear rate, γ̇ is the true shear rate corrected for slip, V_S is the apparent wall-slip velocity, and h is the separation gap between the parallel plates. The true yield stress (σ_true) can be calculated from the plot of γ̇_app against (1/h) using eqn (3): a straight-line fit gives the slope (2V_S) and the intercept (γ̇). This is typically known as Mooney's plot. As discussed in the previous literature, we first measured steady-state flow curves at gap sizes of 0.1, 0.25, 0.5 and 1 mm. Then the shear stress and corresponding shear rate values were taken from each plot and averaged (the error bars represent the mean and standard deviation of three measurements at each gap size). As seen in Fig. 5 and S4,† we have plotted these curves for various temperatures and shear stresses. The fitted line has a slope of 2V_S, so the slip velocity is half the slope; the shear stress for which the intercept approaches zero (γ̇ ≈ 0) is the dynamic yield stress.23,30,31,47,52 Although this method has traditionally been used to calculate the true yield stress of a material while eliminating slip from the measurements, it is labor-intensive, and a large amount of sample is required for the repeated experiments at different gaps of the rheometric geometry. To overcome these drawbacks, we followed a second approach, in which we included the slip boundary condition in the HB equation, modified the HB equation into the form of the well-known Windhab model equation, and calculated the true yield stress together with the slip yield stress.
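A minimal numerical sketch of the Mooney analysis described above, using hypothetical apparent-shear-rate values at the four plate gaps rather than measured data; the straight-line fit gives a slope of 2V_S and an intercept equal to the slip-corrected shear rate.

```python
import numpy as np

# Mooney analysis sketch (illustrative numbers, not measured data):
# at a fixed shear stress, the apparent shear rate at several gaps h
# follows  gamma_app = gamma_true + 2*V_s*(1/h).
gaps_mm = np.array([0.1, 0.25, 0.5, 1.0])
inv_h = 1.0 / (gaps_mm * 1e-3)                    # 1/m
gamma_app = np.array([210.0, 95.0, 55.0, 35.0])   # 1/s, hypothetical

slope, intercept = np.polyfit(inv_h, gamma_app, 1)
v_slip = slope / 2.0                              # m/s
print(f"slip velocity V_s ≈ {v_slip*1e3:.3f} mm/s")
print(f"true (slip-corrected) shear rate ≈ {intercept:.1f} 1/s")
# Repeating this at several stresses and finding the stress whose intercept
# approaches zero gives the dynamic yield stress.
```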
Calculation of true yield stress by including slip boundary condition in HB model
Consider a simple plane Couette shear flow between two parallel plates with a gap h, as shown in Fig. 6(a). The lower plate is fixed, and the upper plate moves at a constant velocity V along the x-direction. If the density of the fluid is ρ and the fluid velocity vector is u = (u₁, u₂, u₃), where bold letters denote vectors and u₁, u₂ and u₃ are the x, y and z components of u, then the equation of continuity (conservation of mass)14,53 is ∂ρ/∂t + ∇·(ρu) = 0. For steady-state conditions and an incompressible fluid (ρ constant), this reduces to ∇·u = 0. Similarly, the equation of motion (conservation of momentum), famously known as the Navier-Stokes equation,53 is ρ(∂u/∂t + (u·∇)u) = −∇p + μ∇²u + F, where ρ is the fluid density, u is the fluid velocity, t is time, p is the pressure, μ is the coefficient of viscosity, and F represents the body forces per unit volume (i.e. external forces such as gravity or electromagnetic forces). Since the flow is between two plates, the fluid moves only along the x-direction, with no motion in the y and z directions, so the velocity can be written as u = (u₁, 0, 0); for simple plane Couette shear flow there is no pressure gradient (∇p = 0), and we assume there are no external forces (F = 0). Using these conditions, eqns (5) and (6) reduce as follows: eqn (5) becomes ∂u₁/∂x = 0, and eqn (6) reduces to the x-component momentum equation only, i.e. d²u₁/dy² = 0.
For simplicity, we drop the subscript and write u(y) instead of u₁. Eqn (7) then gives the velocity of the fluid between the parallel plates, u(y) = C₁y + C₂, with integration constants C₁ and C₂. If we apply the no-slip boundary condition (the velocities of the fluid and the plate are the same), i.e. u(0) = 0 at the fixed lower plate and u(h) = V at the upper plate moving with velocity V, we arrive at u(y) = Vy/h. This is the linear velocity profile for the no-slip boundary condition shown in Fig. 6(a); since the profile is linear, the shear rate is γ̇ = V/h. If we assume a monotonic slip law (i.e. the slip velocity V_S is the same at both plates), the boundary conditions become u(0) = V_S and u(h) = V − V_S. Applying this slip boundary condition, eqn (7) takes the form u(y) = V_S + (V − 2V_S)y/h.
The velocity profile is still linear but slightly shifted, as shown in Fig. 6(b), and the true shear rate can now be defined as γ̇ = (V − 2V_S)/h. Experimentally, the shear rate is measured as the apparent shear rate, γ̇_app = ΩR/h, where Ω is the rotational speed of the rheometric geometry, R is the radius of the rheometric plate, and h is the gap between the parallel plates. Using eqn (11) in the HB model, the modified HB model including the slip boundary condition takes the form of eqn (15) (obtained by multiplying and dividing by γ̇*). This equation is well known in the form of the Windhab model with additional power terms m and n (when m = n = 1, eqn (15) reduces to the Windhab equation); we therefore refer to it as a modified Windhab equation. To avoid confusion, we write γ̇_app simply as γ̇, and if we rewrite eqn (15) with some redefined constants inspired by the Windhab model, the equation becomes eqn (16), where k = η_N and k₂ = (σ₁ − σ₀), and eqn (16) fits our data very well. From eqn (16) we obtain two yield stresses (σ₀, σ₁), two exponents (n and m), and a normalization constant γ̇*. We have drawn on arguments from the literature to understand the physical meaning of each term and the generalization of the approach to any system.
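Eqns (15) and (16) are not written out above. A form consistent with the surrounding description (two stress parameters σ₀ and σ₁, exponents m and n, a normalization shear rate γ̇*, and reduction to the classical Windhab equation when m = n = 1) would be the following; the exact algebraic form and the assignment of m to the exponential term and n to the power-law term are assumptions, not taken from the source.

```latex
% Assumed reconstruction of the modified Windhab form (eqn (16)); not verified against the source.
\begin{equation}
  \sigma \;=\; \sigma_{0}
  \;+\; \left(\sigma_{1}-\sigma_{0}\right)
        \left[1-\exp\!\left(-\left(\frac{\dot{\gamma}}{\dot{\gamma}^{*}}\right)^{m}\right)\right]
  \;+\; k\,\dot{\gamma}^{\,n}
\end{equation}
```

With m = n = 1 this reduces to the classical Windhab equation, σ = σ₀ + (σ₁ − σ₀)(1 − e^(−γ̇/γ̇*)) + kγ̇; in eqn (17), as described in the next paragraph, σ₀ is relabelled as the slip yield stress σ_s and σ₁ as the true yield stress σ_y.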
Consequently, to check that our approach generalizes to other systems, we drew on the arguments discussed by Moud et al.,47 who analysed wall slip in kaolin clays, as shown in Fig. 7(a), and defined different regimes and their physical significance. We then fitted eqn (16) to the data from Moud et al.,47 as shown in Fig. 7(b), and found that the equation fits the data very well. Here the σ₀ value lies in Regime II, as defined by Moud et al.47 in Fig. 7(a), which is nothing but the slip yield stress, while σ₁ lies in Regime III, which is the true yield stress. Therefore, from the fitting parameters, σ₀ is the slip yield stress, σ₁ is the true yield stress, and γ̇* is the transition shear rate. When observing the flow curves, we see the slope change from the low-shear to the high-shear branch at a particular shear rate; γ̇* represents this shear rate and is defined as the transition shear rate (n and m are the slip and power-law exponents, respectively). Therefore, we write eqn (15) as eqn (17), where σ_s is the slip yield stress and σ_y is the true yield stress, and all other terms are as defined previously. In all the flow curves discussed so far, the data range from 0.1 to 100 s⁻¹; to verify that our approach is not limited to any particular range, we also fitted data over a larger range, from 0.01 to 1000 s⁻¹, as shown in Fig. S5.† To verify the true yield stress estimated from our method, we compared the values with those from the conventional Mooney method in Table 2.
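A minimal fitting sketch for the procedure described above, using the modified-Windhab-type form reconstructed earlier (itself an assumption) and the plain HB model, applied to synthetic data; the parameter values, noise level and starting guesses are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gdot, sigma_y, k, n):
    # Plain HB model: sigma = sigma_y + k * gdot**n
    return sigma_y + k * gdot**n

def modified_windhab(gdot, sigma_s, sigma_y, gdot_star, m, k, n):
    # Assumed slip-including form: slip yield stress sigma_s, true yield
    # stress sigma_y, transition shear rate gdot_star, exponents m and n.
    return sigma_s + (sigma_y - sigma_s) * (1.0 - np.exp(-(gdot / gdot_star)**m)) + k * gdot**n

# Synthetic flow curve (0.1 to 100 1/s) with 2% multiplicative noise.
gdot = np.logspace(-1, 2, 40)
stress = modified_windhab(gdot, 20.0, 150.0, 10.0, 1.2, 5.0, 0.5)
stress += 0.02 * stress * np.random.default_rng(1).standard_normal(gdot.size)

p_hb, _ = curve_fit(herschel_bulkley, gdot, stress, p0=[10, 1, 0.5], maxfev=20000)
p_mw, _ = curve_fit(modified_windhab, gdot, stress,
                    p0=[10, 100, 5, 1, 1, 0.5], maxfev=20000)

print("HB fit (sigma_y, k, n):", np.round(p_hb, 2))
print("modified Windhab fit (sigma_s, sigma_y, gdot*, m, k, n):", np.round(p_mw, 2))
```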
Temperature-dependent dynamic yield stress
Fig. 9(a) shows the yield stress obtained from the various methods plotted as a function of temperature. We find that the yield stress increases as a function of temperature, and the experimental data follow a sigmoid curve that can be fitted by an empirical equation of the Boltzmann form,54 where σ_y″ is the yield stress plateau at low temperatures, σ_y′ is the yield stress plateau at high temperatures, and T_c is the critical temperature at which the yield stress increases exponentially. In this case, the yield stress values increase gradually between the two plateaus. The increasing yield stress of PF127 is consistent with the study of Jalaal et al.,49 who measured at different temperatures at 5 °C intervals but did not formally report a trend; as discussed earlier, near the sol-gel transition temperature each 1 °C interval is important, and by carefully calculating the yield stress we were able to identify the trend. The increasing trend can be understood through the mechanism suggested by Suman et al.: the individual micelle size may increase and a glassy state may form at higher temperatures, which can account for the exponential increase in yield stress at higher temperatures.55 In the current study, we also compared the results with another thermoresponsive polymer, poly(N-isopropyl acrylamide) (PNIPAM), and found some interesting, contrasting results.
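A minimal sketch of the Boltzmann-type sigmoid fit described above. Only the low- and high-temperature plateaus and T_c are named in the text; the exact algebraic form, the width parameter dT, and all numerical values below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, sy_low, sy_high, T_c, dT):
    # Sigmoid rising from the low-T plateau sy_low to the high-T plateau
    # sy_high around the critical temperature T_c, with width dT.
    return sy_high + (sy_low - sy_high) / (1.0 + np.exp((T - T_c) / dT))

# Hypothetical (temperature, yield stress) pairs for a 20% PF127-like gel.
T = np.array([23, 24, 25, 26, 27, 28, 29, 30], dtype=float)       # deg C
sigma_y = np.array([2, 15, 80, 150, 190, 210, 215, 218], float)   # Pa

popt, _ = curve_fit(boltzmann, T, sigma_y, p0=[0, 220, 25, 1])
sy_low, sy_high, T_c, dT = popt
print(f"low-T plateau ≈ {sy_low:.1f} Pa, high-T plateau ≈ {sy_high:.1f} Pa, "
      f"T_c ≈ {T_c:.2f} °C, width ≈ {dT:.2f} °C")
```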
When comparing the yield stress of other conventional thermoresponsive polymers such as PNIPAM, a noticeable decrease in yield stress with increasing temperature is seen in Fig. 9(b) (data taken from Divoux et al.43 and replotted). This phenomenon can be attributed to the coil-to-globule transition experienced by PNIPAM, where hydrophobic interactions cause the polymer chains to collapse from extended coils to compact globules.57,58 In contrast, Pluronic PF127 exhibited an increased yield stress with increasing temperature. Above its critical micelle temperature T_CMT (around 15-30 °C), Pluronic PF127 undergoes micelle formation, in which the hydrophobic PPO blocks assemble into micellar cores surrounded by hydrophilic PEO shells. Increasing the temperature leads to enhanced micelle aggregation and packing, resulting in the formation of a denser gel network. This densification of the gel network corresponds to an increase in yield stress, as the gel becomes more resistant to deformation. The contrasting temperature-dependent yield stress behaviors of PNIPAM and Pluronic PF127 highlight their distinct gelation mechanisms: while PNIPAM's coil-to-globule transition decreases the yield stress, Pluronic PF127's micelle formation leads to an increase in yield stress with increasing temperature.58 For two to three decades, the biomedical field has used body temperature as the target transition temperature, and examining the yield stress as a function of temperature can provide additional understanding of the rheological characteristics of PF127 in diverse drug delivery applications and transport phenomena.
Conclusion
This study thoroughly investigated the yield stress behavior of PF127 at various temperatures using the Herschel-Bulkley model and the modified Windhab model, shedding light on the rheological characteristics of this thermoresponsive polymer. The HB model effectively described the data at lower temperatures (23 °C to 25 °C) but failed at higher temperatures (26 °C and 27 °C) due to wall slip; this failure was indicated by a slope change in the mid-shear range, with a kink around a shear rate of 10 s⁻¹, and by unrealistic yield stress values at these temperatures. Further tests, using flow curves measured in different rheometric geometries, proved the presence of wall slip, which led to the adoption of the modified Windhab model to address the limitations of the HB model. The flow curves at higher temperatures were fitted more accurately (with higher R-squared values) by this slip-accounting model. The temperature-dependent yield stress increased significantly, following a sigmoid curve that could be described by an empirical Boltzmann-type equation. Comparative analysis with other thermoresponsive polymers, such as PNIPAM, highlighted the distinct gelation mechanisms and temperature-dependent yield stress behaviors of PF127. Unlike PNIPAM, which shows a decrease in yield stress with increasing temperature due to the coil-to-globule transition, PF127 exhibited an increase in yield stress attributed to enhanced micelle aggregation and network densification at higher temperatures.
This research emphasizes the importance of considering wall slip effects in rheological measurements and provides a robust framework for understanding the temperature-dependent rheological properties of PF127. The findings have significant implications for the use of PF127 in biomedical applications, particularly in drug delivery systems where temperature-sensitive gelation is crucial. Future research should explore applying these models to other complex fluids and further investigate the mechanisms underlying the observed temperature-dependent behaviors.
Fig. S3† Flow curves, shear stress (σ) vs. shear rate (γ̇), generated numerically for HB fluids with yield stress σ_y = 2 Pa, slip exponent s = 1, slip coefficient β = 1 × 10⁴ Pa s^s/m^s, flow index n = 1, and consistency index k = 8 × 10⁻³ Pa s^n, for different slip yield stresses σ_s: (a) σ_s = 0; (c) σ_s = 1 Pa. The finite slip yield stress case shown in Fig. S3(c)† is very similar to our 26 °C and 27 °C data shown in Fig. 3(c and d). Panels (b) and (d) show the influence of the slip exponent s on the flow curve with σ_s = 0.5 Pa, σ_y = 2 Pa, β = 1 × 10⁴ Pa s^s/m^s, n = 1 and k = 8 × 10⁻³ Pa s^n: (b) s = 1; (d) s = 2. For s = 1 (Fig. S3(b)†), the flow curves show a similar trend to Fig. 3(c and d).
Fig. 3 (a-d) HB model fitting of flow curves for 20% PF127 at temperatures varying from 23 to 27 °C, respectively.
Fig. 5(a) and (b) represent Mooney's plots for 25 °C and 28 °C for different shear stress values. We observed that in Fig. 5(a) and (b), at shear stress values of (140.9 ± 2.1) Pa and (213.7 ± 1.6) Pa, respectively, the intercept almost approaches zero; therefore, we define these values as the yield stress at those temperatures. In Fig. S4,† we show Mooney's plots for 26-30 °C and calculate the corresponding yield stress values.
Fig. 5 (a and b) Mooney's plots for PP-40 geometry between the apparent shear rate (γ̇_app) and the reciprocal of the gap between the two parallel plates (1/h) for different shear stresses at temperatures from 25 to 28 °C, respectively. The coloured lines represent straight-line fits to each data set for the different shear stress values; the symbols and error bars represent the mean and standard deviation of three sets of measurements at each gap size.
Fig. 6 Schematic representation of the velocity profile between two parallel plates with a gap h for different boundary conditions: (a) no slip; (b) slip boundary condition.
Fig. 7 (a) Flow curves of kaolin clay in a parallel plate geometry with different gaps (adapted and reproduced with permission from ref. 47) and (b) the data from (a) plotted with eqn (16). The red and pink lines are extrapolations of the yield stress values found from the fitting; the vertical dotted line marks the transition shear rate obtained from fitting eqn (16).
Fig. 9 (a) Comparison of dynamic yield stress values obtained from different methods for 20% PF127; the data are fitted with Boltzmann's equation (eqn (18)). (b) The temperature-dependent yield stress reported for 20% PF127, taken from Jalaal et al.,49 fitted with eqn (18); the PNIPAM gel data are from ref. 43.
Table 1 List of articles published on wall slip behaviour and their significance in different systems
However, we were inspired by recent work by Moud et al., which evaluated apparent slip in the colloidal suspensions of kaolinite clay and proposed a generalized slip model that can be applied to many other colloidal systems.
"Materials Science"
] |
Methionine controlled impediment of secondary nucleation leading to nonclassical growth within self-assembled de novo gold nanoparticles
Jitendra K. Sahu, Shahbaz A. Lone and Kalyan K. Sadhu*

ABSTRACT: The conventional key steps for seed-mediated growth of noble metal nanostructures involve classical and nonclassical nucleation. Furthermore, the surface of the seed catalytically enhances the secondary nucleation involving Au+ to Au0 reduction, thus providing in-plane growth of the seed. In contrast to this well-established growth mechanism, herein we report the unique case of a methionine (Met)-controlled seed-mediated growth reaction, which instead proceeds via impeding secondary nucleation in the presence of citrate-stabilized gold nanoparticles (AuNPs). The interaction between the freshly generated Au+ and the thioether group of Met in the medium restricts the secondary nucleation process of further seed-catalyzed Au+ reduction to Au0. This incomplete conversion of Au+, as confirmed by X-ray photoelectron spectroscopy (XPS), results in a significant enhancement of the zeta (ζ) potential even at low Met concentration. Nucleation of in situ generated small-sized particles (nAuNPs) takes place on the parent seed surface, followed by their segregation from the seed. The self-assembly of these nAuNPs arises from the aurophilic interaction among the Au+. Furthermore, the time-dependent growth of smaller particles into larger particles through assembly and merging within the same self-assembly validates the nonclassical growth. This strategy has been successfully extended to the seed-mediated growth reaction of AuNPs in the presence of three bio-inspired decameric peptides having varying numbers of Met residues. The study confirms the nucleation strategy even in the presence of a single Met residue in the peptide, as well as the self-assembly of nucleated particles with increasing Met residues within the peptide. Alanine in these peptides shows non-invasive behavior during the growth reaction of AuNPs.
INTRODUCTION
Secondary nucleation is one of the crucial steps in growth reactions involving gold nanoparticles as seeds, or in the case of biomolecules such as amyloid fibrils.1,2 Small molecules have the ability to suppress secondary nucleation in the growth medium for amyloid beta peptides.3 On the contrary, noble metals follow secondary nucleation in their growth process in the presence of small molecules, which are rather used as capping agents, leading to a variety of final shapes.4 Weak or strong binding efficiencies of such small molecules play an important role in the growth process in general, including the growth mechanism of anisotropic gold nanoarchitectures.5 A recent report suggests that, unlike classical nucleation proceeding by addition of atoms to the crystal lattice via a single high energy barrier, the growth mechanism in certain cases follows an oriented attachment among nanoparticles involving a low defect-free energy pathway for nonclassical growth.6 Intermediate synthesis at the molecular level has been traced for time-dependent seed-mediated gold nanocluster growth.7 Fluctuations of surface atoms successfully demonstrated the coalescence behavior of gold nanoparticles at 873 K within a 1 hour time span on a silicon surface.8 The concentration and functional groups present in the ligand play a crucial role in the prenucleation stage during gold nanoparticle synthesis, thereby influencing their architectures.9 A wide range of shape evolution from spherical gold nanoparticle seeds has been reported in the presence of Ag+ and halides through kinetic and surface-controlled growth.10,11 The combination of icosahedral gold seeds and alkylamines shows growth into highly symmetric gold nanostars.12 [13–21] The growth reaction of octahedral and cubic gold nanoparticle seeds in the presence of amino acids and peptides containing cysteine has ended up in asymmetric evolution.22,23 [29–33] In this work we have demonstrated experimental evidence showing the unprecedented role of methionine (Met) in impeding the secondary nucleation step involved in the seed-mediated growth of Au nanoparticles. The seed-catalyzed Au+ to Au0 conversion in the secondary nucleation step1 is significantly inhibited through the stabilization of the Au+–S (thioether, Met) interaction.34 XPS analysis after the Met-controlled growth reaction confirms the stabilization of Au+ species.
The transmission electron microscopy (TEM) images after the growth reaction with Met variation show the stepwise formation of smaller-sized nucleated gold nanoparticles (nAuNPs) on the parent AuNP seed surface, followed by their detachment from the seed (Scheme 1). The inhibition of secondary nucleation results in the self-assembly of these in situ generated nanoparticles (nAuNPs) through the aurophilic interaction between Au+.35 The self-assembled structures show time-dependent nonclassical growth of individual small particles into larger particles by assembly and merging (Scheme 2), analogous to that observed for other noble metal nanocomposites.36 Growth reactions with a handful of sulfur-containing molecules confirm the selectivity of Met for the inhibition of secondary nucleation. The unique behavior of Met in the secondary nucleation and self-assembly processes has been explored in seed-mediated growth reactions with three bio-inspired peptides having variable numbers of Met residues.
RESULTS AND DISCUSSION
Role of spectator seed in growth reaction. Citrate-stabilized AuNPs (15.0 ± 2.0 nm, Figure S1), showing a surface plasmon resonance (SPR) peak at 522 nm, were synthesized as seeds for the growth reactions. Incubation of 9 mM Met with the AuNP solution (1.20 nM) changes the color of the solution to orange-red, and an additional small hump around 610 nm is observed in the absorbance spectra (Figure S2), which is absent in the case of AuNP incubation with 0.5 mM Met.
The presence of a less intense peak of the Met carboxylate stretching37 frequency at 2116 cm−1 in the infrared spectrum (Fig. S3) confirms the weak interaction prevailing between Au0 and the Met carboxylate after 30 min incubation of Met with the parent AuNP seed. Addition of 300 µM Au3+ salt for the growth reaction in the presence of excess hydroxylamine as reducing agent immediately turns the color of both solutions (0.5 mM and 9 mM Met) blue, and the SPR peaks show a red shift to near 550 nm along with the appearance of new peaks at 688 nm and 696 nm, respectively (Figure 1A). Interestingly, the TEM and high-resolution TEM (HRTEM) images from the growth reaction in the presence of 0.5 mM Met clearly suggest the generation of small nucleated particles of size ~6 nm connected to the parent AuNP seed through the (1 1 1) plane (Figure 1B and 1C). This undoubtedly supports the role of the seed surface in the nucleation of the smaller AuNPs. Time-dependent absorption spectra up to 30 min of growth reaction in the presence of 9 mM Met reveal continual red shifts of the SPR and of the newly generated peak (Figure S4). This growth reaction reveals spherical assemblies of average diameter 170 nm after 30 min in the TEM images (Figure 1D). Focusing closely on one such assembly, highly dense self-assembled nAuNPs have been observed (inset, Figure 1D). Interestingly, the TEM image in the presence of 9 mM Met reveals a maximum dimension (~11 nm) of the individual particles within the self-assembly, noticeably smaller than the 15 nm parent seed and considerably larger than the 6 nm nucleated particles developed on the parent seed surface after the growth reaction with 0.5 mM Met. In addition to the self-assembled nAuNPs, the presence of nanoparticles of ~15 nm diameter as spectator seeds (Figure S5) has been observed on the same TEM grid. It is important to note that the same growth reaction without the parent seed shows no generation of an SPR peak, which endorses the important role of the parent seed in the Met-controlled nAuNP synthesis.
We have followed the self-assembly of nAuNPs through XPS measurements of the mixture containing spectator seeds and self-assembled nAuNPs obtained from the growth reactions in the presence of 9 mM Met (Figure S6). Deconvolution of the binding energies for Au 4f7/2 and 4f5/2 shows the presence of Au+ in addition to Au0 after the growth reaction (Figure 1E). A negative-control XPS study of the binding energies for the parent AuNP seed shows no formation of Au+ (Figure S7). At the initial nucleation step, hydroxylamine, as a mild reducing agent, reduces the externally added Au3+ ions to Au+ ions. In the standard secondary nucleation process, the seed AuNP participates in the reduction of Au+ to Au0, followed by growth of the parent seed.1 In this Met-controlled growth, the secondary nucleation process is partially inhibited due to the stabilization of the freshly generated Au+ by the available Met in the solution.34 The XPS study further confirms a 3:7 ratio of Au+:Au0 after the growth reaction, whereas a 4:6 ratio of Au3+:Au0 was used in the growth reaction; in other words, of the 0.4 mole fraction of gold added as Au3+, 0.3 remained as Au+, so only a quarter was reduced to Au0. The secondary nucleation step involving Au+ conversion to Au0 has thus been restricted by 75% in the presence of Met. The self-assembly of nAuNPs is observed due to the aurophilic interaction35 between the Au+ species present in these nucleated particles.
Variation of methionine concentration for segregation and self-assembly of nucleated particles. The difference between the TEM images in the presence of 0.5 mM and 9 mM Met prompted us to vary the Met concentration stepwise (0.1 mM to 9 mM) in order to understand its role in the growth medium. The absorbance spectrum after the growth reaction in the presence of 0.1 mM Met displays a small hump around 700 nm. In the case of 0.3 mM Met, a clear additional peak is observed at 696 nm, which shows a further 30 nm blue shift with increasing concentration of Met up to 1 mM (Figure S8a). Notably, above the 1 mM Met critical concentration, the absorbance measurements show a continuous red shift, reaching 696 nm, with increasing Met concentration up to 9 mM (Figure S8b). The TEM image after 30 min of the growth reaction in the presence of 1 mM Met shows segregated nAuNPs with a dimension of ~6.5 nm (Figure S9), comparable to the nucleated particles of ~6 nm size connected to the parent AuNP seed in the case of 0.5 mM Met (Figure 1B). Further increasing the Met concentration to 3 mM produces small self-assemblies of nAuNPs (Figure S10), along with the existence of the spectator seeds in the TEM images. This result is in sharp contrast to the anticipated classical growth of the seed, where growth takes place on the surface of the parent seed. The remaining amino acids behave differently under similar growth reaction conditions.38 It is important to mention that variation of the Au3+ salt in the growth reaction without Met incubation shows only an enhancement in the SPR peak intensity (Figure S11).
Formation and stabilization of Au+ after the growth reaction are prominently reflected in the change of the ζ potential (Figure 1F). In the absence of Met, the ζ potential after the growth reaction is found to be almost the same as that of the parent AuNP, which has a negative ζ potential (−40.2 mV) due to the presence of capping citrate anions. Incubation of 0.1 mM to 1 mM Met with AuNP for 30 min before the growth reaction results in a steady enhancement of the ζ potential value after the growth reaction, which reaches an almost neutral value. Interestingly, when the concentration of Met increases from 1 mM to 15 mM, the ζ potential values enhance slightly, up to +6.5 mV. These findings hint toward two different origins for the ζ potential and mark 1 mM Met as the critical concentration. The formation of Au+ within the nAuNPs is responsible for the significant change in the ζ potential measurements at low concentrations (0.1 to 1 mM) of Met. The overall trend of ζ potential with Met concentration (Figure 1F) follows equation (1), where the two components are attributed to the individual interactions of Au+ and Au0 with Met.
In this equation, ζ1 and ζ2 are coefficients in mV, a and b are constants, and m is the mole fraction of Met. After fitting equation (1) to the data in Figure 1F, the values of ζ1, ζ2, a, b and m are found to be −42.3, 2.4, 11.3, −0.1 and 0.32, respectively. The high positive constant a (11.3) confirms the role of the Au+–thioether interaction in the sharp enhancement of the ζ potential up to 1 mM Met, whereas the negligible negative constant b (−0.1), arising from the Au0–amine interaction,39 accounts for the gradual enhancement beyond 1 mM Met. Considering the first part of equation (1) exclusively, the ζ potential results in saturation within the 0.5 mM to 15 mM Met concentration range (Figure S12). XPS measurements conducted after the growth reaction in the presence of 9 mM Met clearly indicate that a 0.3 mole fraction of the total sulfur is engaged in stabilizing Au+ interactions, while the remaining 0.7 mole fraction is similar to free methionine, as seen from the deconvoluted sulfur 2p1/2 and 2p3/2 spectra (Figure S13). In coherence with the sulfur XPS, the 0.32 mole fraction (m) of total Met in equation (1) further confirms the role of Met in Au+ stabilization.
The selectivity of Met in the secondary nucleation inhibition process followed by self-assembly has been investigated (Figure S14) with a few other sulfur-containing molecules and inorganic salts, namely 6-mercaptohexanoic acid (1), 3-mercaptopropionic acid (2), ethyl 4-amino-2-(methylthio)pyrimidine-5-carboxylate (3), lipoic acid (4), oxidized and reduced glutathione (5, 6), sodium sulfate (7) and sodium thiosulfate (8). Classical growth of the parent AuNP seed, producing spherical gold nanoparticles of 16–18 nm diameter (Figures S15 and S16), has been observed after the growth reactions in the presence of 1–8, without any positive ζ potential values. In the cases of oxidized and reduced glutathione, aggregation of the particles after growth has been observed without any self-assembled geometry.
Variation of Au 3+ salt and its effect on luminescence from nAuNPs.In order to find the origin of the self-assembly after the growth reaction with 300 µM Au 3+ in presence of 9 mM Met, the external reagent Au 3+ amount has been varied during the growth reactions.The dual absorbance peaks, obtained after the growth reaction in presence of 10 µM of Au 3+ salt (Figure S17), have been blue shifted in comparison to the absorbance obtained for 300 µM of Au 3+ salt (Figure S3).
With the increase of Au3+ concentration in the different growth reactions, a continuous red shift of the absorbance spectra has been observed (Figure S18). Variation of the Au3+ concentration not only shows the red shift trend in the absorbance, but also results in luminescence enhancement at 430 nm (Figure 2A) upon exciting the solution at 412 nm. The luminescence has been followed during the 30 min growth reactions in the presence of 10 µM and 300 µM Au3+ (Figure 2B). The emission intensity at 430 nm increases initially, up to 5 min after the growth reaction, in the presence of 10 µM Au3+, whereas the intensity does not alter in the presence of 300 µM Au3+. The excitation spectrum confirms the position of the excitation wavelength at 412 nm (Figure S19). The TEM image taken after 5 min of the growth reaction with 10 µM Au3+ salt shows the formation of smaller, non-aggregated nAuNPs of average diameter ~2.8 nm (Figure 2C), which are responsible for the higher luminescence intensity. The HRTEM image (Figure 2D) after 5 min of growth reaction suggests the presence of both (1 1 1) and (2 0 0) planes in the instantly generated nucleated nanoparticles. With the increase in Au3+ concentration, the observed quenching of the emission is likely attributable to aggregation-induced quenching within the self-assembly and/or to the larger dimension (~11 nm) of the particles compared to reported emissive gold nanoclusters.40 The weak emission from this Au+/Au0 combination in water is similar to a previous report on a weakly emissive Au0@Au+-thiolate core-shell nanocluster in 75% ethanol.41 No emission has been observed in control experiments with the parent AuNP seed, the Met solution, or the growth reaction of the parent AuNP without Met incubation. The trend in the ζ potential data (Figure S20) follows equation (2) during the variation of the externally added Au3+ concentration for the growth reaction of AuNP in the presence of 9 mM Met.
where ζ′ (−39.5 mV) is the ζ potential before the addition of Au3+, c (4.5) is a coefficient, and k (0.40) is the Met-dependent conversion factor from Au3+ to Au+ and Au0.

Time-dependent nonclassical crystal growth within self-assembled nAuNPs. The effect of the reaction time on nAuNP formation and size has been monitored by independently varying the incubation time of Met with AuNP (Figures S21 and S22) and the growth reaction time after the addition of the Au3+ salt, keeping the other time parameter constant. Performing the growth reaction for 30 min immediately after the addition of Met to AuNP shows aggregation of the seed followed by small-scale formation of nAuNPs (Figure S22A). Formation of smaller self-assembled nAuNPs has been observed in the case of 5 min incubation of Met (Figure S22B). The maximum size of the self-assembled nAuNPs has been achieved within 10 min incubation of Met with AuNP (Figure S22C). Setting the incubation time of Met with AuNP to 30 min, the growth reaction immediately after the addition of Au3+ shows the formation of spherical self-assemblies in TEM (Figure 3A). The size and ζ potential of the self-assembled nAuNPs remain unchanged even after 30 min of the growth reaction (Scheme 2).
However, the average size of each nAuNP increases from ~3.5 nm to ~11 nm during the 30 min growth reaction (Figure 3A–C).[43,44] The growth reaction in the presence of 300 µM M1 peptide shows a broad absorbance spectrum (Figure 4A and S24), whereas the growth reactions after treatment with the same concentration of M3 and M5 result in the development of additional peaks around 680 nm and 710 nm, respectively. The luminescence measurements at 430 nm after the growth reactions with these peptides confirm a decreasing trend of emission intensities from the M1 to the M5 peptide (Figure 4B).
The increasing trend of the ζ potential (M1: −17.6 mV, M3: −0.5 mV and M5: +4.1 mV, Figure 4C) after the growth reactions in the presence of these three peptides also follows equation (1), depending upon the total Met residues in the solutions. TEM images after the growth reactions with the M1 peptide show the formation of nAuNPs with an average size of 2.4 nm (Figure 4D). Similar studies with the M3 and M5 peptides display nAuNPs of 4.7 nm and 7.1 nm, respectively (Figure 4E,F). In addition, small self-assemblies of nAuNPs have been observed only after the growth in the presence of the M5 peptide. Formation of self-assemblies of nAuNPs is restricted for the M1 and M3 peptides due to the additional Ala residues in these peptides. Alanine in these peptide sequences shows non-invasive behavior during the growth reaction of AuNPs. The weak emission has been attributed to the enhanced size of the nAuNPs in the case of M3 and/or the formation of self-assemblies of nAuNPs in the presence of M5. This Met-controlled strategy may prove useful, for example, in developing anisotropic shapes within nucleated particles of smaller dimension compared to the parent seed. We are currently investigating along this direction in our laboratory.
METHODS
Synthesis of AuNP seed stock solution 20 mg (0.05 mmol) of HAuCl 4 was dissolved in 90 mL of deionized water and refluxed at 90 ℃. 10 mL of 1% (w/v) trisodium citrate dihydrate (88 mg, 0.3 mmol) was added to the above solution.
After a few minutes the color of the solution changed to dark violet and then immediately changed to wine red. The reaction was continued for another 30 minutes under refluxing conditions and finally stopped. The AuNP solution was cooled to room temperature and was characterized with the help of absorption spectroscopy and TEM imaging. The stock solution was stored at 4 ℃ until further use. For further experiments the seed solution was diluted and the final concentration of the solution was measured as per our previous report.38

Growth reaction of AuNP with Met and high concentration of gold salt

300 µL of gold nanoparticle seed stock solution was incubated for 30 min with 30 µL of 100 mM Met. During the incubation period the color of the solution gradually changed towards reddish violet. Thereafter, 3 µL of 200 mM NH2OH (pH 5, maintained with addition of NaOH) was added to the above solution and stirred vigorously for 10 min, followed by the addition of 5 µL of 0.8% (w/v) HAuCl4 to induce the reduction reaction. In each case, the final volume of the reaction was adjusted to 340 μL. After addition of the gold salt, the reddish violet color immediately changed to blue. The solution was analyzed up to 30 min by different characterization techniques.
Growth reaction of AuNP with Met with variable concentration of gold salt
Different sets of 900 µL gold nanoparticles seed solution were incubated separately for 30 min with 90 µL of 100 mM Met.During the incubation period the color of the solutions gradually changed towards reddish violet.Thereafter, 9 µL of 200 mM NH 2 OH (pH 5 maintained with addition of NaOH) was added to the above solutions and stirred vigorously for 10 min followed by the addition of variable amount (0.5-50 µL) of 0.8% (w/v) HAuCl 4 to induce the reduction reaction.In these cases, the final volume of the reactions was adjusted to 1020 μL by addition of deionized water.After addition of gold salt, the reddish violet color immediately changed to blue.
The solution was analyzed up to 30 min by different characterization techniques.
Growth reactions of AuNP with variable concentration of Met
Different sets of 900 µL gold nanoparticle seed solution were incubated separately for 30 min with 1 µL, 3 µL, 5 µL, 10 µL, 30 µL, 60 µL and 90 µL of 100 mM Met, and with 60 µL and 75 µL of 200 mM Met, such that the final Met concentrations were 0.1 mM, 0.3 mM, 0.5 mM, 1 mM, 3 mM, 6 mM, 9 mM, 12 mM and 15 mM, respectively. During the incubation period the color of the solution gradually changed towards reddish violet at high concentrations (3 mM to 15 mM), whereas at low concentrations (0.1 mM to 1 mM) there was no change in color. Thereafter, 9 µL of 200 mM NH2OH (pH 5, maintained with addition of NaOH) was added to the above solutions and stirred vigorously for 10 min, followed by the addition of 15 µL of 0.8% (w/v) HAuCl4 to induce the reduction reaction. In these cases, the final volume of each reaction was adjusted to 1020 μL by addition of deionized water. After addition of the gold salt, the reddish violet or red color immediately changed to blue. The solutions were analyzed up to 30 min by different characterization techniques.
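The final concentrations quoted above follow from simple dilution arithmetic, C_final = C_stock × V_added / V_final, which the short check below reproduces. It also recovers the ~300 µM Au3+ value, assuming the gold salt is the trihydrate HAuCl4·3H2O (≈393.8 g/mol, consistent with "20 mg = 0.05 mmol" in the seed synthesis); that molar mass is an assumption made only for this illustration.

```r
# Dilution check for the Met series: C_final = C_stock * V_added / V_final
v_met <- c(1, 3, 5, 10, 30, 60, 90)        # uL of 100 mM Met stock
round(100 * v_met / 1020, 2)               # ~0.10 0.29 0.49 0.98 2.94 5.88 8.82 mM
round(200 * c(60, 75) / 1020, 1)           # 200 mM stock: ~11.8 and ~14.7 mM

# Au3+ from 15 uL of 0.8% (w/v) HAuCl4 in a 1020 uL final volume,
# assuming the trihydrate salt (HAuCl4.3H2O, ~393.8 g/mol)
mass_g <- 8e-3 * 0.015                     # 0.8% w/v = 8 mg/mL; 15 uL = 0.015 mL
mol    <- mass_g / 393.8                   # moles of gold salt
round(1e6 * mol / 1020e-6)                 # ~299 uM, i.e. the nominal 300 uM
```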
Growth reactions of AuNP with variable incubation time of Met
Different sets of 300 µL gold nanoparticle seed solution were incubated for 0 min, 5 min and 10 min separately with 30 µL of 100 mM Met. During the incubation period the color of the solution remained unchanged up to 10 min; a gradual change in color towards reddish violet occurred after 10 min. Thereafter, 3 µL of 200 mM NH2OH (pH 5, maintained with addition of NaOH) was added to the above solutions and stirred vigorously for 10 min, followed by the addition of 5 µL of 0.8% (w/v) HAuCl4 to induce the reduction reaction. In each case, the final volume of the reaction was adjusted to 340 μL. After addition of the gold salt, the red color immediately changed to blue. The solutions were analyzed up to 30 min by different characterization techniques.
Growth reactions of AuNP with Met containing peptides (M 1 , M 3 and M 5 ) 5 mM stock solutions of M 1 , M 3 and M 5 were prepared separately in DMSO due to hydrophobic nature of Met.Three sets of 300 µL gold nanoparticles seed solution were incubated for 30 min with 20 µL of 5 mM M 1 , M 3 and M 5 peptides.During the incubation period the color of the solutions gradually changed towards reddish violet for M 3 and M 5 peptides with high Met content.Thereafter, 3 µL of 200 mM NH 2 OH (pH 5 maintained with addition of NaOH) was added to the above solutions and stirred vigorously for 10 min followed by the addition of 5 µL of 0.8% (w/v) HAuCl 4 to induce the reduction reaction.In each case, the final volume of the reaction was adjusted to 340 μL.After addition of gold salt, the reddish violet color immediately changes to blue.The solutions were analyzed up to 30 min by different characterization techniques.
In each case, the final volume of the reaction was adjusted to 340 μL. After addition of the gold salt, the color of each solution immediately changed to blue, except for the solutions containing 4, 7 and 8. The solutions were analyzed up to 30 min by different characterization techniques. In the case of 4, the SPR peak intensity was enhanced significantly with broadening. In the case of 7, the SPR peak intensity was enhanced slightly. However, in the case of 8, there was no color change.
Scheme 1: Schematic illustration of nAuNP development from the parent AuNP seed with Met variation through impediment of secondary nucleation. aYellow spheres represent only the presence of Au+ within the nAuNPs.
Figure 1: (A) Absorption spectra after the growth reaction of AuNP seed incubated with 0.5 mM, 1 mM and 9 mM Met; (B) TEM and (C) HRTEM images of nAuNPs obtained after the growth reactions from AuNP seed incubated with 0.5 mM Met; (D) TEM image (inset: enlarged image) of the self-assembled nucleated particles obtained after the growth reactions from AuNP seed incubated with 9 mM Met; (E) deconvoluted XPS spectra of Au 4f7/2 and 4f5/2 showing the presence of Au0 (red) and Au+ (olive); (F) Met-dependent ζ potential variation after 30 min growth reaction of AuNP in the presence of 300 μM Au3+. Scale bars: 50 nm (B), 5 nm (C), 2 µm (D) and 100 nm (D, inset).
Figure 2: (A) Immediate emission spectra and (B) time-dependent emission intensity after the growth reaction with 300 μM (red) and 10 μM Au3+ (black) for AuNP seed incubated with 9 mM Met; (C) TEM and (D) HRTEM images of the nAuNPs obtained after the growth reaction with 10 μM Au3+ from AuNP seed incubated with 9 mM Met. Scale bars: 20 nm (C) and 1 nm (D).
Figure 3: Met-controlled nonclassical growth, (A–C) TEM and (D–F) HRTEM images of the Met-incubated self-assembled nAuNPs at different time intervals after the growth reaction; the average sizes of the nAuNPs are (A, D) ~3.5 nm at 0 min, (B, E) ~7.0 nm at 10 min and (C, F) ~11 nm at 30 min after the growth reaction with 300 µM Au3+ from AuNP seed incubated with 9 mM Met; (D–F) display (1 1 1) planes (white lines) in the self-assembled nAuNPs, responsible for the nonclassical growth through assembly and merging, along with (2 0 0) planes (blue lines).
Figure 4: Formation of nAuNPs after the growth reactions of AuNP seed incubated with Met containing peptides M 1 , M 3 , M 5 ; (A) absorbance spectra, (B) emission spectra, (C) fitting of z potential values with the proposed equation (1) and (D-F) TEM images (scale bar 50 nm) of nAuNPs after 30 min growth reaction with 300 µM Au 3+ from AuNP seed in presence of 300 µM M 1 , M 3 , M 5 respectively. | 6,675.2 | 2021-11-29T00:00:00.000 | [
"Materials Science"
] |
Prognosis Stratification Tools in Early-Stage Endometrial Cancer: Could We Improve Their Accuracy?
Simple Summary Endometrial cancer is the most common gynaecological malignancy in developed countries. Most cases are diagnosed at a localized stage, overall with a good prognosis, although approximately 15% of them will recur. The identification of patients with an increased risk of relapse remains a challenge for clinicians. There are well-defined clinicopathological characteristics associated with prognosis. These variables have been integrated in multiple classifiers to stratify the prognosis, and more recently, molecular features have also been considered. The aim of our retrospective study was to compare the three available prognostic stratification tools for endometrial cancer and determine if additional biomarkers could improve their accuracy. We confirmed that the incorporation of molecular classification in risk stratification resulted in better discriminatory capability, which was improved even further with the addition of CTNNB1 mutational evaluation. Abstract There are three prognostic stratification tools used for endometrial cancer: ESMO-ESGO-ESTRO 2016, ProMisE, and ESGO-ESTRO-ESP 2020. However, these methods are not sufficiently accurate to address prognosis. The aim of this study was to investigate whether the integration of molecular classification and other biomarkers could be used to improve the prognosis stratification in early-stage endometrial cancer. Relapse-free and overall survival of each classifier were analyzed, and the c-index was employed to assess accuracy. Other biomarkers were explored to improve the precision of risk classifiers. We analyzed 293 patients. A comparison between the three classifiers showed an improved accuracy in ESGO-ESTRO-ESP 2020 when RFS was evaluated (c-index = 0.78), although we did not find broad differences between intermediate prognostic groups. Prognosis of these patients was better stratified with the incorporation of CTNNB1 status to the 2020 classifier (c-index 0.81), with statistically significant and clinically relevant differences in 5-year RFS: 93.9% for low risk, 79.1% for intermediate merged group/CTNNB1 wild type, and 42.7% for high risk (including patients with CTNNB1 mutation). The incorporation of molecular classification in risk stratification resulted in better discriminatory capability, which could be improved even further with the addition of CTNNB1 mutational evaluation.
Introduction
Endometrial cancer (EC) is the most common gynaecological malignancy in developed countries. Most cases are diagnosed at a localised stage, reaching 5-year survival rates of over 95% in some series [1,2]. Despite such a good prognosis, approximately 15% of patients with early stages (I and II) of EC will recur [3]. Therefore, the identification of patients with an increased risk of relapse remains a challenge for clinicians.
There are well-defined characteristics associated with prognosis, including age, lymphovascular space invasion (LVSI), myometrial infiltration, differentiation grade and International Federation of Gynecology and Obstetrics (FIGO) stage [4]. During the past 2 decades, these variables were integrated in multiple classifiers to stratify the prognosis. In 2016, the European Society of Medical Oncology (ESMO)-European Society of Gynecologic Oncology (ESGO)-European Society for Radiotherapy and Oncology (ESTRO) Consensus established a four-group classification (low, intermediate, high-intermediate and high risk) based on clinicopathological features, with the aim of prognosis stratification, but also to help with the indication for adjuvant therapy [2].
The Cancer Genome Atlas (TCGA) performed a comprehensive genomic profiling of over 300 EC samples, resulting in a molecular classification with prognosis implications [5]. In terms of a more cost-effective and applicable method for group assignment in routine practice, the Leiden/PORTEC and the Vancouver/Proactive Molecular Risk Classifier for Endometrial Cancer (ProMisE) groups reproduced the TCGA molecular classification using surrogate biomarkers by targeted sequencing and immunohistochemistry (IHC) on formalin-fixed paraffin-embedded (FFPE) tumour samples [1,6–8]. The group named POLE, composed of cases with mutations in the exonuclease domain (EDM) of the polymerase-ε gene, has an excellent prognosis. In contrast, patients with the poorest prognosis harbour tumour mutations in the TP53 gene. This group is named p53 abnormal (p53abn) due to aberrant immunohistochemical p53 expression. The other two groups with intermediate risk were also established. The first encompasses mismatch repair deficient (MMRd) cases, defined by loss of expression of at least one of the mismatch repair proteins (MLH1, PMS2, MSH2 and MSH6). The remaining cases are included in the group named p53 wild type (p53wt) or non-specific molecular profile (NSMP).
Furthermore, other potential prognostic biomarkers have been described in EC, although most of them remain on lab setting. For example, it is reported that oestrogen and progesterone receptors (ER and PR) play a significant role in endometrial carcinogenesis. Their expressions are associated with well-differentiated tumours and correlate with earlier tumour stages and better survival [9]. L1-cell adhesion molecule (L1CAM) overexpression has been associated with a poorer outcome [10]. Amplification and increased expression of human epidermal growth factor receptor 2 (HER2) has been correlated with poor prognosis and more aggressive tumour behaviour [11]. Those with EC harbouring catenin beta 1 (CTNNB1) mutation encompass a more aggressive subset within low-grade early-stage endometrioid EC [12,13]. Other biomarkers such as phosphatase and tensin homolog (PTEN), AT-rich interactive domain-containing protein 1A (ARID1A) or E-cadherin (ECAD) have also had a possible impact on prognosis in some studies [14].
The integration of clinicopathological features and molecular subgroups is currently a reality based on the recent publication of ESGO-ESTRO-European Society of Pathology (ESP) 2020 guidelines. These guidelines still recommend a four-risk group classification, incorporating ProMisE molecular markers with clinical characteristics and suggesting a possible improvement in the accuracy of the risk prognosis stratification [15].
Our aim with this study was to analyse and compare the three above-mentioned risk stratification tools in the same cohort of early-stage EC, and to identify additional biomarkers with an impact on prognosis that could improve the precision of these classifiers.
Study Cohort
A retrospective cohort was collected including patients diagnosed with early-stage (I and II by FIGO) EC between 2003 and 2015 at La Paz University Hospital (Madrid, Spain), with a minimum follow-up of 5 years. Patients were consecutive. The study was approved by the local Ethics Committee (HULP#PI3778) and was conducted in accordance with ethical standards of the Helsinki Declaration of the World Medical Association.
All patients underwent surgery, which consisted of a total hysterectomy and bilateral salpingo-oophorectomy. This procedure was performed initially via laparotomy, until 2006, and then by a laparoscopic approach. The lymph node assessment was performed by lymphadenectomy. We analysed clinical and pathological variables, such as age, histological subtype, FIGO stage (updating to FIGO 2009 staging system for older samples), tumour size, LVSI, grade of differentiation, and myometrial infiltration. Clinical data on treatment and follow-up were obtained from the electronic medical records database and were subsequently updated, allowing for an evaluation of relapse-free survival (RFS) and disease-specific overall survival (OS).
Sample Selection
Optimal tissue blocks were selected by an expert gynaecological pathologist on haematoxylin and eosin (H&E) slides. DNA was extracted from selected tumour rich regions with the Qiamp DNA FFPE Tissue Kit (Qiagen, Hilden, Germany) and used for polymerase chain reaction (PCR) purposes. Additionally, representative tumour non-necrotic areas of each case were selected for tissue microarray (TMA) construction. Two representative cores of 1.2 mm in diameter were taken and arrayed into a receptor block using a TMA workstation (Beecher Instruments, Silver Spring, MD, USA), as previously described [16].
Risk Stratification Tools
The ESMO-ESGO-ESTRO 2016 risk stratification groups were established as follows: low, intermediate, high-intermediate, and high risk. For simplicity, hereafter this classifier will be referred to as the '2016 Classifier' [2].
We also stratified patients by the ProMisE risk groups: POLE, MMRd, p53wt/NSMP, and p53abn [7]. First, specific PCR and Sanger sequencing was performed to identify mutations in exons 9, 13 and 14 of POLE. These exons code for part of the EDM and account for most of the described mutations [17,18]. As a modification of the original ProMisE classification, for the POLE-mutated cases, we have only taken into account the pathogenic variants selected in the study by Leon-Castillo et al. [18]. Second, we used 4 µm sections of the TMA for IHC purposes. The expression of MLH1, PMS2, MSH2, MSH6 and p53 was evaluated with specific antibodies (p53, #IR616; MLH1, #IR079; PMS2, #IR087; MSH2, #IR084 and MSH6, #IR086 respectively), all from Agilent (Santa Clara, CA, USA), as previously described [19].
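The surrogate-marker assignment described above can be summarised as a small rule-based function, sketched below. The hierarchical order (POLE first, then MMR status, then p53) and the input flags are illustrative assumptions and do not reproduce the authors' exact decision rules.

```r
# Minimal sketch of ProMisE-style molecular subgroup assignment from surrogate markers.
# Inputs are logical flags per tumour; the hierarchy below is an illustrative assumption.
assign_promise <- function(pole_pathogenic, mmr_deficient, p53_abnormal) {
  if (isTRUE(pole_pathogenic)) return("POLE")        # pathogenic POLE EDM variant
  if (isTRUE(mmr_deficient))   return("MMRd")        # loss of MLH1/PMS2/MSH2/MSH6 expression
  if (isTRUE(p53_abnormal))    return("p53abn")      # aberrant p53 IHC
  "p53wt/NSMP"                                       # none of the above
}

assign_promise(pole_pathogenic = FALSE, mmr_deficient = TRUE, p53_abnormal = FALSE)
# [1] "MMRd"
```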
Lastly, a combination of the clinicopathological and molecular variables employed in the previous risk stratification tools was used, following the ESGO-ESTRO-ESP 2020 guidelines [15].
Biomarker Analysis
Additionally, other molecular markers previously studied in EC were explored. Expressions ER, PR, ECAD, HER2, ARID1A, PTEN, and L1CAM were evaluated. Specific antibodies and cut-off categories were applied to each marker to simplify their evaluation as much as possible. A detailed description can be found in Supplementary Table S1. PCR and Sanger sequencing were also performed to explore CTNNB1 exon 3, which contains key protein phosphorylation sites.
Statistical Analysis
Descriptive statistics included clinicopathological and biomarker frequencies. Qualitative variables are presented as number of cases and frequency percentages. Continuous variables are presented as median value and range. Missing values in the ProMisE and 2020 Classifier groups were imputed, taking the most frequent values from a total of 1000 runs of the predictive mean matching method provided in the mice R package [20].
The primary endpoint was to evaluate RFS, defined as the time from surgery to the time of first recurrence or death from disease. As a secondary endpoint, disease-specific OS was analysed, defined as time from the surgery to death related to disease. All relapses and deaths were considered as events. Differences in RFS and OS were compared using Kaplan-Meier (K-M) curves.
The Goodman-Kruskal concordance index (c-index) is used as a metric to assess the models' performance. It ranges between 0 and 1; however, a value of 0.5 indicates that a model does not perform better than random. The c-index is designed to estimate the concordance probability of independent and identically distributed data comparing the rankings of 2 independent survival times and hazard values [21,22]. Therefore, this index indicates the discriminatory properties and stratification accuracy. The precision of each risk classifier for RFS and OS (censored data) was evaluated using the Cox Proportional Hazards (PH) Model. The statistical analysis was based on Student's t-test and the Mann-Whitney test for parametric and nonparametric continuous variables, respectively, and the chi-squared or Fisher's exact test, as appropriate, for categorical variables. Statistical significance was considered when p < 0.05. Also, patients' shifts between risk groups of different classification systems were illustrated by a Sankey diagram using Google Chart for developers (Google LLC, Menlo Park, CA, USA). Data were managed with an Excel database (Microsoft, Redmond, WA, USA) and statistical analyses were performed using R 4.0.3 software, available online at https://cran.r-project.org/ (accessed on 28 December 2021).
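A minimal sketch of this analysis pipeline in R is given below, assuming a hypothetical data frame ec with columns for follow-up time, event indicator and risk group. The column names, the use of a single completed data set here instead of the 1000 predictive-mean-matching runs described above, and reading the c-index from summary()$concordance are illustrative choices rather than the authors' actual code.

```r
library(mice)       # predictive mean matching imputation
library(survival)   # Cox proportional hazards, Kaplan-Meier, concordance

# 'ec' is a hypothetical data frame: rfs_months, relapse (0/1), risk_2020 (ordered factor), ...
imp    <- mice(ec, method = "pmm", m = 5, seed = 1, printFlag = FALSE)  # paper used 1000 runs
ec_imp <- complete(imp, 1)                                              # one completed data set

fit <- coxph(Surv(rfs_months, relapse) ~ as.numeric(risk_2020), data = ec_imp)
summary(fit)               # hazard ratio per risk-group step, 95% CI, p-value
summary(fit)$concordance   # c-index (discriminative accuracy) and its standard error

km <- survfit(Surv(rfs_months, relapse) ~ risk_2020, data = ec_imp)
plot(km, xlab = "Months", ylab = "Relapse-free survival")               # K-M curves by group
```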
Description of Clinical Characteristics
A total of 293 patients were included, with a median follow-up of 75 months. The clinicopathological characteristics of the entire cohort and their univariate analysis for RFS and OS are summarised in Table 1.
The majority of patients had tumours with endometrioid histological subtype (88.4%), low grade (80.2%) and FIGO stage Ia (69.3%). Lymphadenectomy was performed in 67.6% of patients (48.2% only pelvic; 19.4% pelvic and paraaortic). Adjuvant radiation therapy and chemotherapy were administered to 36.9% and 5.1% patients, respectively, but they did not show any significant impact in RFS or OS (data not shown). Relapse was identified in 43 (14.71%) patients, with a location pattern divided into locoregional (34.9%) and distant metastases (65.1%). Twenty-six (8.8%) deaths due to EC were recorded. All clinicopathological variables had a statistically significant correlation with RFS and OS (with the exception of LVSI in OS).
Prognosis Features and Accuracy of Stratification Tools
K-M curves for RFS and OS of each classifier are shown in Figure 1. The distribution of prognosis risk groups, 5-year survival rate, Cox regression and c-index analysis for each classifier are detailed in Table 2. Regarding the 2016 Classifier, the low-risk group is the most represented, accounting for half of the patients. The K-M curves showed a clear differentiation between low- and high-risk groups, with an early overlap of the intermediate groups' curves.
The ProMisE classification found that p53/NSMP followed by MMRd groups represented the majority of cases. According to the selection of pathogenic variants proposed by León-Castillo et al. [18], in our series we identified five POLE patients that constitute two percent of total cases. Another seven patients presented additional alterations in POLE EDM, which were not used for classification purposes. The K-M curves confirmed that POLE and p53abn were the extreme prognosis groups. The MMRd group showed a poorer 5-year survival rate than p53wt/NSMP, but without significant differences.
Lastly, regarding the 2020 Classifier, the low-risk group was the most frequent, with a similar proportion as that of the 2016 Classifier. However, there was a redistribution of the other three groups, with a decrease in the percentage of high-risk cases, and a redistribution of the intermediate and high-intermediate risk groups. Figure 2 illustrates shifts between the three stratification systems analysed.
Relapse survival analysis over intermediate and high-intermediate risk groups showed better differentiation between K-M curves but still narrow separation and late overlapping between these intermediate groups.
The Cox regression model for RFS found statistically significant differences for both the 2016 and 2020 Classifiers (p < 0.01), but not for ProMisE. Discriminative metrics in the entire cohort showed that the 2020 Classifier reached the highest c-index (0.78), closely followed by the 2016 Classifier (0.76). Despite the slight improvement in the c-index value, the estimated 5-year survival rates showed that the redistribution among groups under the 2020 Classifier achieved a better RFS stratification than the 2016 Classifier (Table 2).
The Cox regression model was also performed for OS, finding again statistical significance for risk assessment in the 2016 and 2020 Classifiers: HR 1.53 (95% CI 1.25-1.87) and 1.79 (95% CI 1.44-2.23), respectively; p < 0.01 for both. In contrast, there was still an absence of significant differences for ProMisE (p = 0.57, for both outcomes).
Other Biomarker Assessments
The univariate statistics of other biomarkers for RFS and OS are provided in Table 3. ER and ECAD expression were the only biomarkers significantly correlated with a longer RFS and OS. We also performed a subgroup analysis by histology and differentiation grade. Considering only the endometrioid histology subgroup, the CTNNB1 mutation was associated with a significantly poorer RFS, whereas ER expression was correlated with a better OS and a trend towards a longer RFS (Supplementary Table S2). In the non-endometrioid subgroup, L1CAM expression had a trend to a longer RFS and ECAD to a longer OS (Supplementary Table S3). In the low-grade (histological differentiation grade 1 and 2) subgroup, there was a trend to a shorter RFS and OS with PTEN expression (Supplementary Table S4). None of the biomarkers showed a correlation with RFS or OS in the high-grade subgroup (Supplementary Table S5).
A descriptive analysis of these biomarkers regarding their distribution by the risk classifier categories is summarised in Supplementary Table S6. As we explained before, our results showed that the 2020 Classifier was a slightly better stratification tool than the 2016 and ProMisE Classifiers in our series. However, the intermediate groups (intermediate and high-intermediate) still overlapped in RFS (Figures 1c and 3a). Therefore, we merged these intermediate groups and performed a Cox regression analysis to explore the impact of the selected biomarkers (Supplementary Table S7). Among them, CTNNB1 mutational status was the only one significantly associated with a shorter RFS (HR 2.62; 95% CI 1.14-6.02), and it also showed a trend towards a worse OS (HR 2.17; 95% CI 0.81-5.78).
The K-M plots of the merged intermediate groups after categorization by CTNNB1 mutation status showed an improved stratification (Figure 3b). Therefore, we substituted the two original intermediate 2020 Classifier groups for these new ones, while maintaining the original low- and high-risk groups (Figure 3c). Subsequently, we observed that patients with tumours harbouring the CTNNB1 mutation showed a poor prognosis, with a similar RFS to the high-risk group (late curves overlapping). Thus, we proposed a novel stratification model consisting of three categories instead of four, by merging the 2020 Classifier high-risk group with CTNNB1-mutated tumours. The intermediate group was redefined as the CTNNB1 non-mutated cases from the previous intermediate risk groups (Figure 3d). A decision-tree model based on this proposal is shown in Figure 3e.
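The proposed regrouping can also be written as a short decision function, sketched below. It assumes that the 2020 Classifier group and the CTNNB1 exon 3 status are already known for each patient; the labels and encoding are illustrative, not the authors' implementation.

```r
# Minimal sketch of the proposed three-category stratification:
# low risk unchanged; high risk, or CTNNB1-mutated cases from the merged
# intermediate groups, go to the high-risk category; the remaining
# intermediate cases with wild-type CTNNB1 form the new intermediate group.
reclassify <- function(risk_2020, ctnnb1_mutated) {
  if (risk_2020 == "low") return("low")
  if (risk_2020 == "high" || isTRUE(ctnnb1_mutated)) return("high (incl. CTNNB1-mutated)")
  "intermediate (CTNNB1 wild type)"
}

reclassify("high-intermediate", ctnnb1_mutated = TRUE)   # -> high-risk category
reclassify("intermediate",      ctnnb1_mutated = FALSE)  # -> intermediate, CTNNB1 wild type
```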
Discussion
In this study, the three main risk classifiers described in the last decade (ESMO-ESGO-ESTRO 2016, ProMisE and ESGO-ESTRO-ESP 2020) were evaluated in a large early-stage EC cohort. The results showed that all of these classifiers differentiate RFS between high-and low-risk groups, but there was an overlap between the intermediate-and high-intermediate risk groups. Similar findings have been observed in other studies. For example, regarding the 2016 Classifier, two retrospective cohorts reported no differences between the intermediate and high-intermediate group, one of them with overlapping K-M OS curves [23,24]. In terms of the ProMisE Classifier, there are other publications that also showed no significant differences between the two intermediate molecular subtypes, although it performed well on the two extreme groups: the POLE group, with an excellent prognosis and a very low incidence of relapses, and the p53abn group, with the worst prognosis and a high risk of recurrence [25,26].
The distribution of cases by ProMisE groups in our series is lower for POLE, MMRd and NSMP than the originally described distribution. The main explanation for this is that TCGA group frequencies may vary according to clinicopathological characteristics, as previously described [27,28]. Specifically for the POLE group, it can also be explained by technical modifications. In the ProMisE study, mutations were determined covering the EDM domain and including all pathogenic variants within it. We have modified this classification for POLE status with the list of mutations recently proposed by Leon-Castillo et al., which reduces the number of variants to be taken into account to 11 [18]. Different publications overall support that POLE-mutated cases have better prognostic outcomes but, to our knowledge, considering isolated molecular features alone provides incomplete information for prognosis stratification, and more homogeneous studies are needed to clearly define this group [18,29-31].
The recently published 2020 Classifier has incorporated the molecular profile of the ProMisE classification into the prognostic stratification carried out in the 2016 Classifier, with the aim of improving its accuracy and thus making better therapeutic recommendations. In this new classification, stage I-II POLE mutated tumours are included in the low-risk group, for which adjuvant treatment is not recommended, whereas most of the p53abn tumours (except those without myometrial invasion) have been incorporated into the high-risk group, for which adjuvant chemotherapy is strongly recommended.
In this study, we have provided one of the first evaluations of this new risk classification in a cohort of patients and, to our knowledge, the first comparison of the three classifiers focused on early-stage EC. Two recent publications have evaluated the 2020 Classifier in two large patient cohorts, including those with advanced disease [32,33]. Similar to our results, Ortoft et al. described fewer patients allocated to the high-risk group using the 2020 Classifier and reported a poorer RFS for this group than that achieved with the 2016 Classifier [32]. These findings suggest that the 2020 Classifier achieves a better redistribution of the four risk groups that impact the 5-year survival rates. However, in terms of c-index values, we found only a slight improvement over the 2016 Classifier, associated to a small increase in the HR value. Furthermore, in our experience this classifier is still not good enough to separate the two intermediate groups, and following this classification, different adjuvant treatments would be recommended to patients with a similar prognosis (intermediate and high-intermediate groups). In the same way, Imboden et al. found significant differences in RFS using the 2020 Classifier, but with an overlap of K-M curves of both intermediate-risk groups [33]. These results reaffirm the unmet need for an accurate stratification system and motivate us to explore the potential of other biomarkers that could improve the current options.
To improve the precision of the 2020 Classifier, we focused on the molecular biomarkers previously explored in EC, with potential prognostic value but not yet included in the main risk classifiers. We first evaluated their association with prognosis in our entire cohort. Among them, only ER and ECAD showed a significant correlation with RFS and OS. These results are in agreement with previous publications [34,35]. There are several reports on HER2 amplification, specifically in non-endometrioid histologies and a subset of highgrade endometrioid tumours. We had almost no HER2 overexpression, so no correlations with the prognosis could be established [36]. Loss of ARID1A has been linked to shorter progression-free survival in EC, and loss of PTEN might be a good prognostic factor [37,38]. Our results are similar in terms of the positive proportion of cases for both biomarkers, but we did not find any statistical significance related to survival.
Among the remaining analysed markers, probably the most intriguing results concern L1CAM, which has frequently been associated with distant recurrence and OS. We have used a previously established cut-off for IHC to achieve the best correlation with prognosis [39]. Our results are similar regarding positivity rates to those published for the PORTEC-1 trial samples, but do not reach significance, probably because of the lower positivity of the marker and the smaller size of our cohort [40]. The other biomarker frequently associated with prognosis is CTNNB1 [13,41]. In our cohort, it showed significance only when intermediate risk groups were merged, and for this reason it was subsequently considered for their inclusion in the risk classifier.
The impact of the CTNNB1 mutation and other biomarkers (like POLE, MMRd, p53, L1CAM, or LVSI) prompted the design of the PORTEC-4 trial. In this phase III study, patients with high-intermediate risk EC are randomised between a standard arm with adjuvant vaginal brachytherapy and an experimental arm with adjuvant radiation therapy tailored by a molecular-integrated risk profile. In this trial, patients with p53wt/NSMP and no mutation in CTNNB1 are considered to be in the same low-risk group as those with the POLE mutation [42]. However, in our study, patients initially classified in the intermediate or high-intermediate groups with no mutation in CTNNB1 have a poorer prognosis than those of the low-risk group (which included patients with the POLE mutation).
The CTNNB1 mutation leads to the overactivation of beta catenin, which results in the aberrant signalling of the Wnt pathway, contributing to tumour progression [43]. The poorer prognosis associated with the CTNNB1 mutation in exon 3 has been shown in other studies, mainly in grade 1-2 endometrioid or NSMP cohorts [8,44], suggesting that this mutation is more likely to be functional, and not a passenger event [41]. Another study showed how the identification of CTNNB1 alterations, along with ARID1A mutations, could represent an effective way to characterize the tumour aggressiveness of the heterogeneous NSMP group [45]. However, although the ESGO-ESTRO-ESP 2020 guidelines mention that the CTNNB1 mutation might be potentially useful in the group of low-grade p53wt/NSMP EC, they did not include it in the risk stratification proposal. In our study, the CTNNB1 status was significantly associated with RFS in the intermediate and high-intermediate risk groups. The main limitation of our study is related to its retrospective design and the absence of a validation cohort. Therefore, our proposal of a risk classifier needs to be validated in other external series, preferably from different countries and including a variety of ethnic groups, in order to confirm that the inclusion of CTNNB1 status in the 2020 classifier improves its accuracy. Second, the study is based on TMA and not on whole tissue sections, which might not completely reflect the heterogeneity of some tumours. On the other hand, as strengths, the large number of patients with a long follow-up and the high homogeneity of the series should be highlighted, given that it encompasses only early stages (FIGO I-II). Furthermore, it is the first study to evaluate and compare the three most important risk classifiers in EC, including the recent ESGO-ESTRO-ESP classification, focused on early-stage disease.
Conclusions
None of the main published risk classifiers developed in EC achieved a significant difference in RFS between their intermediate groups. The 2020 ESGO-ESTRO-ESP classification showed a slightly better discriminatory capacity than the other classifications. The incorporation of additional biomarkers, such as CTNNB1, into the 2020 Classifier could improve the accuracy of the stratification, especially in terms of redefining the intermediate prognostic groups. This proposal warrants validation in an external series, preferably from different countries and including a variety of ethnic groups.
"Biology",
"Medicine",
"Psychology"
] |
Significance of Image Features in Camera-LiDAR Based Object Detection
In autonomous cars, accurate and reliable detection of objects near the vehicle is necessary for the safety-critical actions that depend on it. Many detectors have been developed in recent years, but there is still demand for more reliable and more robust detectors. Some detectors rely on a single sensor, while others are based on the fusion of data from multiple sources. The main aim of this paper is to show how image features can contribute to the performance of detectors that rely on pointcloud data only. In addition, it is shown how lidar reflectance data can be substituted by low-level image features without degrading detector performance; this can be important when the same pretrained model is to be used on data generated by a lidar with a different reflectance encoding scheme and retraining is not possible because of a lack of training data. Three different approaches are proposed to fuse image features with pointcloud data. The extended networks are compared with the original network and tested on a well-known dataset as well as on our own data. Different augmentation techniques have been proposed and tested on the KITTI dataset and on data acquired by a different lidar sensor. The networks augmented with image features achieved a recall increase of a few percent for occluded objects.
I. INTRODUCTION
In the field of autonomous driving and intelligent infrastructure, environment perception is a safety-critical task in which different types of static and dynamic objects must be detected and localized reliably and robustly under various circumstances, such as different weather conditions, limited sensing resolution of the applied sensors, partial occlusions, etc.
Efficient sensing under different weather conditions can be handled by jointly utilizing different types of sensors (lidar, radar, camera, thermal vision). Most often, camera and lidar sensors are used in sensor fusion algorithms [1], [2]. For various applications, camera and radar pairing is also common [3], [4], while there are also cases where radar and lidar data are fused [5].
Lidar sensors are not affected by day and night lighting conditions, and they can also operate reliably under various limited-visibility conditions. Radar sensors are likewise unaffected by light and by weather conditions such as fog and dust; they can sense longer distances than lidars, but their resolution is considerably lower. Cameras perform poorly under limited visibility, although thermal imaging cameras can compensate for such limitations [6]. Individual application of certain sensors might be strongly limited by their low spatial resolution, as in the case of lidars (depending on the placement, the number of channels and the field of view, different sparse patterns can be observed in the generated pointcloud). Even the most advanced lidars are not able to capture objects at longer distances (> 150 m) with sufficient resolution, which makes the detection task in such cases even more difficult. At distances of more than 150 m, the number of rays crossing the body of an average-sized vehicle is too low for reliable detection (even for lidars with the highest available resolution).
There are numerous cases in which long-range detection of vehicles is required to perform a given task efficiently, for instance the prediction of potentially dangerous traffic situations in order to avoid accidents and increase road safety [7], or the generation of digital twins of longer road sections, where the range of detectability also influences the minimal number of sensors that must be deployed to cover a given road section. The latter factor plays a significant role, above all for cost, energy consumption and maintenance reasons, in future intelligent infrastructure and road networks [8].
Multi-modal approaches are a promising way to handle, at least to some extent, the problems caused by the sparsity of lidar pointclouds. For example, by combining information from camera images (which have much higher resolution than lidars) with pointcloud data (which, on the other hand, have good depth resolution), the detection performance as well as the reliability might be improved compared to camera-only or lidar-only approaches.
The main contribution of this paper consists of the proposed pointcloud augmentation techniques, incorporated into a selected baseline lidar-based detector model, and of the evaluation and analysis of the impact of certain image features on 3D object detection performance, compared to the lidar-only case in which both training and inference are performed solely on pointclouds. It is also examined how certain types of image features contribute, under different conditions (scenarios including partially occluded, close and distant vehicles, and data acquired by various lidar types), to the performance improvement of lidar-only solutions by transforming the pointcloud into a "clever" pointcloud through the proposed augmentation techniques.
The paper is organized as follows:
• Section II reviews the related work, including a brief overview of state-of-the-art sensor fusion solutions;
• Section III summarises the problem addressed by the presented point cloud augmentation algorithms;
• Section IV presents the proposed point cloud augmentation algorithms;
• Section V analyses the results achieved by the augmented networks;
• Finally, Section VI reports the conclusions.
II. RELATED WORKS
Many methods, above all machine learning based approaches, have appeared in the literature in the last few years to tackle the detection problem, especially in lidar pointclouds. We categorize the methods developed for object detection into two classes: approaches that operate on lidar pointclouds only, and approaches that utilize camera images together with lidar pointclouds.
A. LIDAR ONLY APPROACHES
Lidar-only approaches are efficient for short-range detection; however, at longer distances the density of lidar points is significantly reduced, which makes it difficult to detect objects reliably. By utilizing lidars, the vehicle or pedestrian detection task can be performed efficiently under various weather conditions. Building on the PointNet design developed by Qi et al. [9], VoxelNet [10] was one of the first methods to perform true end-to-end learning in this area. VoxelNet creates voxels, applies a PointNet to each voxel, followed by a 3D convolutional middle network to consolidate the vertical axis, after which a 2D convolutional detection architecture is applied. While the performance of VoxelNet is robust, its inference time is too slow for real-time deployment. More recently, SECOND [11] improved the inference speed of VoxelNet, but 3D convolutions remain a bottleneck. This bottleneck was removed by PointPillars [12], which is still one of the most computationally efficient architectures (according to the KITTI benchmark site [13]) designed for the 3D object detection task in lidar pointclouds. In PointPillars, the 3D points are organized into columns (pillars) and transformed into a sparse tensor of learnt abstract features, which are then processed by further convolutional layers to produce detections in the form of 3D bounding boxes. A different concept for object detection in pointclouds is proposed by the authors of the Self-Ensembling Single-Stage object Detector (SE-SSD) [14], where they focus on exploiting both soft and hard targets by introducing two Single-Stage object Detector (SSD) networks in a "student"-"teacher" relation. The Semantic Point Generation (SPG) method proposed in [15] aims to recover missing parts of foreground objects by generating semantic points that can be used directly by pointcloud-based object detectors to enhance detection.
B. CAMERA AND LIDAR BASED APPROACHES
In order to extend the range at which objects can be detected and to increase reliability, the joint application of different sensor types is highly beneficial. The authors in [1] proposed a multimodal approach by fusing information from lidar pointclouds and semantic-rich stereo images; they bridge the resolution gap between the lidar and the camera by introducing so-called virtual points. Another multi-modal approach is proposed in [2], where the lidar points are augmented with semantic information extracted from images in the form of pixel categories resulting from semantic segmentation. In the so-called EPFNet [16], the authors enhance lidar points with semantic image features in a point-wise manner without any image annotations. In [17], the pointcloud of occluded objects is handled by learning object shape priors, based on which the shape of the complete object can be estimated. The authors in [18] consider geometric consistency between detections in the image and in the pointcloud, meaning that the 2D bounding boxes and the projected 3D bounding boxes of the detections must be consistent, as well as so-called semantic consistency, which relates to the category of the objects. The RPN model proposed in [19] performs multimodal fusion on high-resolution feature maps in order to generate more reliable 3D object proposals for multiple object classes.
In this paper, a camera-lidar fusion approach for object detection is proposed, based on augmenting the lidar points with corresponding image patterns as well as with individual pixel data. We show how fusion strategies of this kind affect the performance of the baseline detector. The proposed fusion models enable real-time application (the frame rate of the detector is kept at 20 fps, which is the frame rate of lidar sensors available today). The effectiveness of the proposed augmentation techniques is evaluated on the KITTI dataset as well as on further real-world data collected by the authors. It is also shown how such augmentations may improve the performance of the selected baseline network when inference is performed on pointclouds generated by different lidar sensors.
C. FUSION OF RADAR WITH CAMERAS OR LIDAR
In sensor fusion, radars are also well-established sensors, above all due to their longer range, cost efficiency and applicability even under limited-visibility conditions. The so-called AssociationNet [3] generates a pseudo-image from radar pins, 2D bounding boxes and the original RGB camera image, which is fed into a neural network to learn high-level semantic representations. Camera-radar fusion can also be applied at the object level [4].
Thermal imaging cameras operate well even under limited-visibility conditions, and they can be utilized jointly with lidars to achieve more accurate object detection. In [20], for instance, the authors use such a combination of sensors, while in [21] a radar sensor is also included. Researchers at the University of Berlin have presented a solution in which radar and lidar detections are fused, aimed at highway applications [22].
III. PROBLEM DESCRIPTION
Many object detectors operating on various types of sensory data (lidar pointclouds, camera images, radar pointclouds, etc.) have been proposed during the last few years. Our main goal here is to show how the performance of an object detector operating on pointclouds only can be further improved by low-level fusion with camera images. We will also show how a network trained on data acquired by a specific sensor performs on pointcloud data acquired by comparable sensors from other vendors, and what improvement in detection performance can be expected when fusion is applied. By fusion we mean camera-lidar fusion, i.e. fusing image pixels with pointcloud data.
We would like to point out the impact of sensor-specific data patterns, produced by different lidar sensors, on detector performance (beam angles, resolution and sensitivity vary from sensor to sensor). Some performance degradation of detectors can be expected due to the sensor-specific pointcloud patterns and reflectivity profiles representing the objects. We would like to show how camera-lidar fusion performed at a lower level of abstraction may help reduce this performance degradation. Since there are many types of lidar sensors and many setups (each causing different pointcloud patterns to appear on object surfaces), collecting training data for each specific setup and sensor type individually is energy- and time-consuming. Instead of retraining the network on sensor- or setup-specific training datasets, we aim at improving its robustness by applying lower-level fusion of pointclouds with image pixel data.
As the baseline model we have chosen the PointPillars [12] object detector (operating on lidar pointclouds), which has remarkable performance considering its speed and detection precision (according to the KITTI 3D object detection benchmark). Although some newer detectors achieve higher precision, they still have a much lower frame rate than PointPillars. We have trained the baseline model as well as the fusion-capable networks on the KITTI training dataset [23].
A. DIFFERENCE IN POINTCLOUD PATTERNS
The pointcloud patterns formed on object surfaces differ between lidar manufacturers (considering the same scenario and sensor placement), which may strongly influence the performance of networks trained for a specific lidar sensor but applied to data acquired by a different one. Figure 1 shows two pointcloud patterns corresponding to two different lidar sensors (both sensors were modeled in accordance with their specification sheets using the dSpace SensorSim sensor simulator). Let us call these sensors sensor-A and sensor-B. In Figs. 1a and 1b, vehicles located 25 m from the sensor origin can be seen, while in Figs. 1c and 1d the vehicles were set to be 15 m away from the sensor origin. The height of the lidar sensor for both scenarios was set to 1.73 m (according to the test vehicle of the Karlsruhe Institute of Technology [13], [23]). The orientation of the vehicles was 45° with respect to the longitudinal axis of the lidar. The aim of this simulation is to point out the differences between pointcloud patterns: one may observe that both the density of points and the formed patterns differ between the two sensors. The performance of a trained detector therefore degrades when it is run on data acquired by a different lidar sensor. Another important aspect is the intensity (reflectivity) profile of lidars, which may also differ from vendor to vendor and is thus an additional factor limiting the usability of neural networks pretrained on data from a specific lidar when applied to different lidars. Each manufacturer handles the reflection of the laser beam differently, and the reflectivity value reported by the sensor is calculated from it.
In the upcoming sections we will show how the lidar reflectivity information influences the performance of the baseline model and how image pixel information may contribute to the performance improvement of detectors compared to lidar reflectivity values.
IV. PROPOSED POINTCLOUD AUGMENTATION
In order to combine data from different sensors and generate higher-level features that enhance detector performance, we propose a data-driven sensor fusion approach in which the fusion itself is performed by neural network architectures, as in the IMF-DNN architecture [24]. The data acquired by the lidar is combined with image features (described later in this section), and this is applied during training as well as during inference. The other class of fusion algorithms, in which mathematical models are used to generate detections, is called model-based fusion [25].
A. THE SELECTED BASELINE MODEL
We selected the PointPillars convolutional neural network proposed in [12] as the baseline model in order to apply and evaluate the impact of our proposed augmentation techniques on detection performance. The main components of PointPillars are the so-called Pillar Feature Network, the Backbone, and the Single Shot Detector (SSD) head [26]. It converts the raw pointcloud into a stacked pillar tensor and a pillar index tensor. A feature encoder then uses the stacked pillars to learn a set of features that form a 2D pseudo-image, which serves as input for the Backbone convolutional neural network. Based on the generated features, the detection head predicts 3D bounding boxes of the objects present in the scene [12]. Starting from this baseline model, our aim was to include image pixel information in the process of pseudo-image creation in order to force the network to learn higher-level features from pointcloud and image data jointly. For transforming the augmented input into a higher-level feature vector (see Fig. 2), a fully connected layer has been applied, similarly to [9], [10]. The next section (IV-B) gives a deeper insight into the extended architecture as well as the alternatives used for image-pointcloud fusion.
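The processing stages just described can be summarized with the following schematic pseudocode. It is only an illustration of the data flow; the helper functions are placeholders and do not correspond to the reference implementation.

```python
# Schematic data flow of a PointPillars-style detector (illustrative only;
# pillarize, pillar_feature_net, etc. are placeholder names, not a real API).
def pointpillars_forward(pointcloud):
    pillars, indices = pillarize(pointcloud)           # stack points into vertical columns
    features = pillar_feature_net(pillars)             # learned per-pillar feature vectors
    pseudo_image = scatter_to_grid(features, indices)  # 2D "pseudo-image" over the x-y plane
    feature_maps = backbone_2d(pseudo_image)           # 2D convolutional backbone
    return ssd_head(feature_maps)                      # 3D bounding-box predictions
```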
B. EXTENDED PILLAR FEATURE NET
The idea of using image features comes from the fact that when using different brands of lidars, we cannot fully align the reflectivity values. Another problem that arises is that pretrained models may be sensitive to internal sensor parameters, such as the angle or pitch of the beams. As extensions to the original baseline model, three different architectures have been proposed to increase the robustness against the influence of varying sensor parameters.
Let p_i = [u_i, v_i]^T stand for the pixel coordinates of the projection of a 3D point P_i = [X_i, Y_i, Z_i]^T from the lidar pointcloud onto the camera image plane using the pinhole camera model as follows:

$\tilde{p}_i \sim K\,[R \mid t]\,\tilde{P}_i,$

where $\tilde{p}_i$ and $\tilde{P}_i$ stand for the homogeneous coordinates of p_i and P_i, respectively, K denotes the camera matrix (which contains the focal lengths f_x and f_y expressed in terms of pixel width and height, respectively, the principal point coordinates x_0, y_0 and the axis skew s), R the rotation matrix and t the translation vector corresponding to the transformation from the lidar frame to the camera frame. Let I^L_i and I^cam_i stand for the reflected laser beam reflectivity and the image pixel intensity of P_i and its projection p_i, respectively.
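As a concrete illustration of this projection, the following minimal sketch (assuming an already calibrated camera; the matrix values below are placeholders, not the calibration used in the paper) maps a 3D point into pixel coordinates:

```python
import numpy as np

def project_point(P, K, R, t):
    """Project a 3D point P (in the lidar frame) onto the image plane."""
    P_cam = R @ P + t                      # lidar frame -> camera frame
    p_hom = K @ P_cam                      # homogeneous pixel coordinates
    return p_hom[:2] / p_hom[2]            # perspective division -> (u, v)

# Placeholder intrinsics/extrinsics for illustration only
K = np.array([[700.0,   0.0, 620.0],
              [  0.0, 700.0, 190.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                              # in practice: lidar-to-camera rotation
t = np.zeros(3)                            # in practice: lidar-to-camera translation
print(project_point(np.array([2.0, 0.5, 10.0]), K, R, t))
```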
Let us point out that in the baseline model [12], each lidar point P_i is augmented within the pillar that contains it as

$P_i^* = [\,x_i,\ y_i,\ z_i,\ r_i,\ x_i - M^j_x,\ y_i - M^j_y,\ z_i - M^j_z,\ x_i - C^j_x,\ y_i - C^j_y\,]^T,$

where M_j = [M^j_x, M^j_y, M^j_z] and C_j = [C^j_x, C^j_y] denote the mean of the points falling in the jth pillar and the x-y centre of that pillar, respectively, and r_i is the reflectance of P_i. Starting from this original augmentation, we have incorporated image pixel information into P*_i. Let P**_i denote the reduced version of the augmented point P*_i in which r_i is not included. The following cases have been considered:
1) each P**_i is augmented by the single pixel intensity of p_i (1P1P);
2) each P**_i is augmented by the intensity vector formed from an N × N neighborhood of p_i (1P25P);
3) each P**_i is augmented by the normalized intensity vector formed from an N × N neighborhood of p_i (1P25PN);
4) each P*_i is augmented by r_i and the single pixel intensity of p_i (1P1P);
5) each P*_i is augmented by r_i and the intensity vector formed from an N × N neighborhood of p_i (1P25P);
6) each P*_i is augmented by r_i and the normalized intensity vector formed from an N × N neighborhood of p_i (1P25PN).
During our experiments we set N = 5. Together with the original baseline models (with and without r_i), eight networks corresponding to the above cases were trained, evaluated and tested. Each of these networks was trained and tested on the same splits of the KITTI [13] dataset. The original training data (7481 snapshots) was split by random selection into 3212 training, 3269 validation and 1000 test samples. After evaluating the networks on the test set, we tested their performance on KITTI RAW [23] data as well as on data collected by us using a lidar different from the one used in KITTI. Unfortunately, there is no ground truth for the raw and custom datasets, so we cannot determine the accuracy of the detections in those cases, but we can draw useful conclusions from the numbers of true/false detections. The following subsections describe the structure of the extended feature network in detail.
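A compact sketch of how one decorated point could be assembled for the variants listed above is given below. This is only an illustration under the stated assumptions; the feature ordering and helper names are ours, not the paper's implementation.

```python
import numpy as np

def augment_point(P, r, pillar_mean, pillar_center_xy, pixel_feats=None,
                  use_reflectance=True):
    """Build one decorated point: spatial offsets, optional reflectance r,
    and optional image-derived features (single intensity or a flattened patch)."""
    feats = [P,                                   # x, y, z
             P - pillar_mean,                     # offset from pillar mean
             P[:2] - pillar_center_xy]            # offset from pillar x-y centre
    if use_reflectance:
        feats.append(np.array([r]))               # lidar reflectance r_i
    if pixel_feats is not None:
        feats.append(np.atleast_1d(pixel_feats))  # 1 value (1P1P) or N*N values (1P25P)
    return np.concatenate(feats)
```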
The modified architectures extend the Pillar Feature Network (PFN) of the baseline model. The modified network is of size (9 + K, 64), where the 9 features of the original input are augmented by K = 1 or K = 25 image features, while the output size remains 64. The augmentation is performed as follows:
1) The 1P1P Network
First, the original network was modified by attaching to each point P_i in the pointcloud the intensity value (taken from the HSV color space) of the pixel corresponding to the projection of P_i in the camera image (see Fig. 3). In order to project a 3D point onto the camera image plane, the camera and the lidar must be calibrated first, i.e. the intrinsics and extrinsics must be estimated. For this purpose the calibration approaches in [14], [27] have been used.
2) The 1P25P(N) Network
In the second approach, a vector of 25 pixel intensities is attached to each lidar point P_i. Let us denote this vector by v_i; it contains the intensity values of the 5 × 5 neighborhood of the projection p_i. Let us denote this neighborhood by U_i, so that v_i can be expressed as v_i = Vec(U_i). In order to ensure meaningful comparison across different U_i, we normalized the elements of U_i to zero mean and unit variance (see Fig. 5). However, we have also tested the case in which non-normalized neighborhood intensities are used for augmentation (see Fig. 4). By including neighborhood-related information in the features of each 3D point, the network may also "utilize" spatial image information during training.
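A minimal sketch of extracting and standardizing such a neighborhood is shown below (boundary handling at the image border is omitted here; the epsilon term is our addition to guard against flat patches):

```python
import numpy as np

def patch_features(image, u, v, N=5, normalize=True):
    """Flatten the NxN intensity neighborhood U_i of projection (u, v);
    optionally standardize it to zero mean / unit variance (1P25PN variant)."""
    h = N // 2
    patch = image[v - h:v + h + 1, u - h:u + h + 1].astype(np.float64)
    vec = patch.reshape(-1)                              # v_i = Vec(U_i)
    if normalize:
        vec = (vec - vec.mean()) / (vec.std() + 1e-8)    # zero mean, unit variance
    return vec
```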
A. EVALUATION OF RESULTS ON A SEPARATED TEST SET
The performance of the detectors was tested on a separate test set containing 1000 images held out from the KITTI 3D benchmark training data. The metrics used for comparison are precision, recall and mean Average Precision (mAP). The latter is calculated by averaging AP values over the multiple Intersection over Union (IoU) thresholds used by COCO [28], [29].
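A minimal sketch of this COCO-style averaging is shown below, assuming the per-threshold AP values have already been computed from the precision-recall curves (the numbers are placeholders):

```python
import numpy as np

def coco_style_map(ap_per_iou):
    """Average AP over a set of IoU thresholds (COCO-style mAP for one class)."""
    return float(np.mean(list(ap_per_iou.values())))

# Hypothetical per-threshold AP values for illustration only
ap_per_iou = {0.50: 0.82, 0.55: 0.80, 0.60: 0.77, 0.65: 0.73, 0.70: 0.68,
              0.75: 0.61, 0.80: 0.52, 0.85: 0.40, 0.90: 0.24, 0.95: 0.07}
print(coco_style_map(ap_per_iou))
```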
Two groups of detectors (each using the same baseline model but a different augmentation) were compared. The first group uses all data from the lidar sensor, i.e. the pointcloud as well as the reflectance value of each lidar point. In the second group of networks, the reflectance was omitted in order to eliminate the influence of the different reflectance encoding schemes used across lidar manufacturers. Obviously, in this case the network is forced to learn from reduced data; however, our goal here is to substitute the reflectance value with image pixel intensities and show their impact on the performance of the trained detectors.
During training, the weights of all considered networks were saved every 5000 steps, and the mAP metric was calculated on the test set (with available ground truth) for each category (easy, moderate, and hard) according to the KITTI benchmark site [13] (see Fig. 6). One can see that when the reflectance is included, the image-based augmentation has no remarkable effect on the mAP (less than 1% difference). On the other hand, when the reflectance is omitted, the image augmentation causes an observable increase in the mAP. The largest contribution of image pixel intensities to the mAP improvement can be observed for hard objects, i.e. when the number of rays reflected from the surface of the objects is small.
The training of the detectors was stopped after a certain number of steps, which in the case of the original and 1P1P detectors was roughly 300000 steps, and in the case of the 1P25P(N) networks roughly 600000 steps. We expected that more steps would be required for training a more complex network, but none of the considered networks showed remarkable improvement after 300000 steps. Each network was trained on the same splits of the KITTI dataset. To train the models, the hyperparameters used by the baseline model were adopted. The values of the most relevant hyperparameters are given in Table 1.
B. TEST RESULTS ON KITTI RAW DATA SCENARIO
In this section, we selected the weights of the detectors that performed best in the evaluation process. Depending on whether the reflectance was included or omitted, the 1P1P and 1P25P networks showed the best performance (see Figs. 12 and 6). The selected weights were used to run the "0104" drive data from the KITTI RAW dataset, recorded on 26.09.2011 [23], through the networks. Although the previously applied metrics could not be computed due to the absence of ground truth data, the detections can still be compared qualitatively. In Fig. 8 a sequence of 5 frames can be seen. The top row shows the detections produced by the original architecture, while the bottom row shows the detections obtained by the 1P25P network; here the lidar reflectivity values have also been taken into account. One may observe that there is no significant difference in the number of detected objects for this sequence, so the contribution of image features to the overall detection performance is negligible in this particular case. Fig. 9 shows the same sequence of 4 frames. The top row shows the detections of the original architecture and the bottom row shows the detections of the 1P1P network. In this series, the lidar reflectivity values were omitted from both the original and the 1P1P network. As one may observe, the 1P1P network was able to detect more distant or occluded cars with a confidence larger than 70%, so in this case the contribution of image features to the performance improvement is remarkable. Figure 10 shows another sequence, also consisting of 5 frames. The top row shows the detections of the original architecture and the bottom row shows the detections of the 1P25P network. In this series, the lidar reflectivity values have been taken into account. There is no remarkable difference between the performance of the two networks: the same set of vehicles is detected by both, and even in terms of orientation and location accuracy they are of nearly the same quality. Thus, when image pixel intensity is used in addition to the lidar reflectivity, the performance improvement of the network is negligible.
In Fig. 11, the top row shows the detections produced by the original architecture, while the bottom row shows the detections yielded by the 1P1P network. Here the lidar reflectivity values were omitted. The number of objects detected by the 1P1P network, compared to the original one (with reflectance omitted), increased significantly. The confidence limit was set to 70%. The original network was able to detect nearby vehicles confidently, but for distant or occluded cars it did not perform as reliably as the 1P1P network did with image features. A significant increase in the number of true positive detections can be observed for the 1P1P network.
This section compared the original network with the best-performing augmented network in each setting: the 1P25P network when the lidar reflectivity values were taken into account, and the 1P1P network when the reflectivity was omitted. We considered only these two augmented networks here because, according to Fig. 12, they performed noticeably better than 1P25PN.
C. DETECTOR PERFORMANCE ON OUR CUSTOM RECORDED DATA
We recorded our custom dataset with a different type of lidar sensor and camera than those used by the KITTI vision benchmark suite. From numerous recordings, two groups were selected to test the contribution of image features to the detection performance. The confidence limits of the detections were set to 70% and 75%. The networks were tested on the same snapshots. Fig. 13 shows the detections on the same short series of recordings when the lidar reflectivity was taken into account; the confidence limit of the detections was set to 75%. The results show no difference in the number of detected objects in this case. There are some cases in which the detectors (original, 1P1P, 1P25P, 1P25PN) recognize different vehicles, but the overall performance is not improved.
1) The First Scenario From Our Custom Dataset
The detectors were also tested on these recordings with the lidar reflectivity omitted (see Fig. 14). The results show that the modified networks detect more vehicles in these frames. The reason might be the low number of points on each vehicle due to occlusions. As can be seen, the 1P25P and the 1P25PN networks detected most of the vehicles, while the original network (with the lidar reflectance omitted) provided fewer detections. None of the detectors was able to detect all vehicles in this scene.
2) The Second Scenario From Our Custom Dataset
Similarly to the first scenario, the detectors were evaluated with and without the lidar reflectance (see Figs. 15 and 16). The same phenomenon can be observed as in the first scenario: when the reflectivity values are included, the performance does not change, whereas when they are omitted, the modified architectures prove to be more effective.
HARDWARE SETUP AND CALIBRATION
For recording the stream of image-pointcloud pairs, a Hikvision DS-2CD2063G0-I camera with 6 MP resolution and an Ouster OS-1 Uniform 64-channel lidar sensor were used. The calibration of the camera was performed with the method proposed by Zhang [30]. The camera-lidar extrinsics were estimated with the method proposed in [27].
The detector works by projecting the lidar points onto the camera plane; thus, in addition to an accurate calibration, it is essential to precisely synchronize the data acquisition in order to determine the correct pixel intensity value corresponding to a given 3D point. The importance of time synchronisation is illustrated by Liu et al. in Matter of Time [31]. Inaccurate synchronization may affect the performance of the detector significantly.
FIGURE 12. KITTI 3D object detection evaluation metric for each network architecture. The individual rows depict the recall-precision curves for the original PointPillars, the 1P1P, the 1P25P and the 1P25PN networks, respectively. The 1st column corresponds to the recall-precision curves for the case when the lidar reflectivity was also considered, while the 2nd column reflects the case when the lidar reflectivity was omitted. The 3rd and 4th columns correspond to the cases when the detection threshold was set to 70%, with lidar reflectivity included and omitted, respectively.
VI. CONCLUSION
Reliable environment sensing is one of the most important tasks for self-driving vehicles. The most common types of available object detectors are lidar-only, camera-only and camera-lidar based detectors. In this paper, a low-level camera-lidar fusion was proposed, based on augmenting pointcloud data with image features to improve the performance of lidar-only detectors. It was shown how pixel intensity patterns (compared to 3D spatial data) contribute to the reliability of the detections, especially in those cases when distant objects (represented by a lower number of points in the pointcloud) have to be detected. The augmentation is performed by attaching reshaped image intensity patterns to each projected 3D point in the pointcloud. The network retains 20 FPS, which corresponds to the highest frame rate of available lidar sensors. The accuracy of the detector was evaluated and tested on the KITTI dataset as well as on custom data.
ClustENMD: efficient sampling of biomolecular conformational space at atomic resolution
Abstract Summary Efficient sampling of conformational space is essential for elucidating functional/allosteric mechanisms of proteins and generating ensembles of conformers for docking applications. However, unbiased sampling is still a challenge, especially for highly flexible and/or large systems. To address this challenge, we describe a new implementation of our computationally efficient algorithm, ClustENMD, integrated with the ProDy and OpenMM software packages. This hybrid method performs iterative cycles of conformer generation using an elastic network model for deformations along global modes, followed by clustering and short molecular dynamics simulations. The ProDy framework enables full automation, analysis of the generated conformers and visualization of their distributions in the essential subspace. Availability and implementation ClustENMD is open-source and freely available under the MIT License from https://github.com/prody/ProDy. Supplementary information Supplementary data are available at Bioinformatics online.
Introduction
Mapping the conformational space of proteins has been a challenge, especially for large assemblies and complexes. Elastic network models (ENMs) and normal mode analysis (NMA) have proven able to predict the global modes of motion of biomolecular systems, and particularly of supramolecular machines, over the last two decades, as shown in numerous comparisons with experimentally observed conformational changes (Bahar et al., 2010; Bakan and Bahar, 2009; Tama and Sanejouand, 2001). Thus, hybrid techniques combining ENM/NMA with molecular dynamics (MD) have come forth as computationally efficient means for elucidating transition pathways (Gur et al., 2013; Orellana et al., 2019) and for conformational sampling of large complexes (Costa et al., 2015; Kurkcuoglu et al., 2016), as we recently reviewed (Krieger et al., 2020).
The ClustENM hybrid algorithm was introduced for unbiased sampling of the essential subspace spanned by the softest ENM modes, through integration with clustering and energy minimization of conformers. Comparison with experimental data has shown the efficiency and utility of ClustENM for investigating highly flexible proteins like calmodulin (Kurkcuoglu and Doruker, 2016; Kurkcuoglu et al., 2016) as well as large assemblies such as the ribosome (Can et al., 2017; Guzel et al., 2020; Kurkcuoglu et al., 2016). More recently, ClustENM conformers have proven to facilitate protein-DNA and protein-protein ensemble docking (Kurkcuoglu and Bonvin, 2020) and the prediction of cryptic allosteric pockets (Kaynak et al., 2020).
Such promising results have motivated us to further develop and implement the ClustENMD version in the widely used ProDy (Bakan et al., 2011;Zhang et al., 2021) application programming interface (API) via integration with OpenMM (Eastman et al., 2017) software. This version allows us to generate more realistic conformers by performing short MD simulations even for large allosteric complexes, together with high efficiency and full automation within a Python environment.
Methods and features
The ClustENMD algorithm is explained schematically in Figure 1A. In Step 1, the input structure is subjected to anisotropic network model (ANM) analysis to produce atomistic conformers using random deformations along linear combinations of a set of global ENM modes. In Step 2, the conformers are clustered based on their structural similarities, and a representative member is selected for each cluster. In Step 3, the representatives from the previous step are structurally relaxed by short MD simulations using OpenMM. The new conformers are then fed back to Step 1, each being used as a starting point for a new generation of conformers. This iterative procedure (Steps 1-3) is repeated for several generations to allow for sufficiently large excursions from the original energy minimum.
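The iterative cycle can be summarized with the schematic pseudocode below. This is only an outline of Steps 1-3 as described above, not the actual ProDy/OpenMM API; all function names are placeholders.

```python
# Schematic outline of the ClustENMD cycle (placeholder helper names, not ProDy's API).
def clustenmd(structure, n_generations=5, n_modes=3, n_confs=50):
    conformers = [structure]
    for _ in range(n_generations):
        candidates = []
        for conf in conformers:
            modes = compute_anm_modes(conf, n_modes)                 # Step 1: global ENM modes
            candidates += deform_along_modes(conf, modes, n_confs)   # random linear combinations
        representatives = cluster_by_rmsd(candidates)                # Step 2: one member per cluster
        conformers = [short_md_relax(rep) for rep in representatives]  # Step 3: brief MD relaxation
    return conformers
```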
ClustENMD has the following features (see Supplementary Material, including the Tutorial, for details):
• Implemented as a class in ProDy
• Integrated with OpenMM (Step 3)
• Applicable to multimeric complexes/assemblies comprising protein, RNA and/or DNA chains
• Input structure either retrieved from the Protein Data Bank (Berman et al., 2000) (PDB) or provided by the user in PDB file format
• Addition of hydrogens and any missing heavy atoms in the residues of the input structure by PDBFixer/OpenMM and/or in PDB/DCD format
• Analysis of output conformers using diverse ProDy modules, e.g. ensemble analysis
• Fully automated pipeline, from input PDB file to the generated ensemble of conformers
• High computational efficiency on GPU architectures
Illustration
ClustENMD results for two case studies are presented in Figure 1, using simulations in an implicit solvent model with heating up (HU) of the system to 300 K. Figure 1B presents the population distribution for adenylate kinase (AK). AK is known to undergo a large conformational change (7 Å RMSD) between its open (apo) and closed (substrate/inhibitor-bound) states. The contour plot corresponding to the population distribution of ClustENMD conformers is displayed on the two-dimensional (2D) space spanned by the two interdomain angles, namely LID-Core and NMP-Core (Beckstein et al., 2009). This plot is based on six independent runs, each comprising five generations (see Supplementary Table S1 for details on all systems/runs). Homologous experimental structures retrieved from the PDB using ProDy are shown on the same plot (black dots). ClustENMD conformers sample the two states as well as the transition region between them. See Supplementary Figure S1 for runs with more generations. Figure 1C displays the 2D space for the hetero-dimeric HIV-1 reverse transcriptase (RT), a large enzyme (N ≈ 1000 residues). Here, the population distribution (contour plot) is projected onto the essential space of experimentally resolved structures. The axes, denoted by the first two principal components (PC1 and PC2), are derived from the principal component analysis of 365 experimental structures (black circles) resolved for RT under different conditions (oligonucleotide/inhibitor-bound or -unbound). ClustENMD conformers (red circles) projected onto this space sample the close neighborhoods of most experimental structures (see Supplementary Fig. S2 for other runs, including those in explicit solvent). We also present in Supplementary Figure S3 the counterpart of Figure 1C for AK, i.e. the generated conformers projected onto the subspace spanned by the experimentally defined PCs. Supplementary Figures S4 and S5 further display the respective ensemble of RT conformers in the space spanned by the fingers-thumb versus fingers-RNase H distances, and the HIV-1 protease conformers projected onto the PC1-PC2 subspace.
The high efficiency of ClustENMD is reflected by the average computing time for a 5-generation run that generates 300 conformers (presented in Fig. 1), which takes from 8 min (AK, 214 residues) to 27 min (RT, 978 residues) on a single GPU platform with an NVIDIA GeForce RTX 2080 Ti graphics card. For a comparison of computational efficiency and required resources, we refer to a recent enhanced sampling study that generated a detailed free energy landscape for AK using Gaussian accelerated replica-exchange umbrella sampling (Oshima et al., 2019); each of the 32 replicas (of 2 × 10^8 MD time steps) would require a couple of days on the same platform, as opposed to a total run time of 8 min for ClustENMD. We note that ClustENMD could be applied to the protein model refinement problem if the excursions/deformations from the initial model are restricted, possibly by adding restraints (Heo et al., 2021), in order to retain the initial model's conformational characteristics. It remains to be seen in future applications whether such an approach might help to efficiently sample conformations closer to the native structure.
Concluding remarks
The ClustENMD algorithm is implemented within ProDy (Bakan et al., 2011; Zhang et al., 2021), an open-source Python API for protein structure, dynamics and sequence analysis containing multiple modules. The ProDy package (downloaded more than 2.1 million times and visited by 140,000+ unique users worldwide) ensures broad dissemination of ClustENMD to the research community, in addition to providing accessory tools for analysis and visualization. The current version of ClustENMD is unique in performing unbiased sampling with high computational efficiency, augmented by fully automated and user-friendly features through integration with ProDy and OpenMM.
Funding
This work has been supported by the National Institutes of Health [P41GM103712 to I.B.].
Figure 1 (caption fragment): HIV-1 RT. Conformational surface plotted along the first two PCs of experimental structures (black dots), onto which the conformers and the initial structure (green circle) are projected. Cyan contours indicate the density levels of 903 ClustENMD conformers. The distributions in panels B and C were produced by kernel density estimation (KDE) using Gaussian kernels.
An All-Time-Domain Moving Object Data Model, Location Updating Strategy, and Position Estimation
To solve the problems of existing moving object data models, such as modeling the continuous motion of spatiotemporal objects, multidimensional representation, and querying sophisticated spatiotemporal positions, we first established an object-oriented all-time-domain data model for moving objects. The model adds dynamic attributes to the object-oriented model, which supports all-time-domain data storage and query. Secondly, we propose a new dynamic-threshold location updating strategy: the location updating threshold is set dynamically according to the velocity, accuracy, and azimuth positioning information from the GPS. Thirdly, we present several position estimation methods to estimate historical and future locations. The cubic Hermite interpolation function is used to estimate the historical location, while a linear extended positioning method, a velocity mean value positioning method, and a cubic exponential smoothing positioning method were designed to estimate the future location. We further implemented the model by abstracting the data types of moving objects, built with PL/SQL and extended Oracle Spatial. Furthermore, the model was tested on different moving objects. The experimental results illustrate that the location updating frequency can be effectively reduced, and thus the position information transmission flow and the data storage were reduced without affecting the precision of the moving objects' trajectories.
Introduction
A moving objects database [1] (MOD) is a database that manages position-related information about moving objects. Recent research on moving objects databases has focused on moving object representation, modeling, indexing, querying, uncertainty handling, and privacy protection [2]. The data model is the basis of the moving objects database. Early studies centered on the spatiotemporal data model. Zhao [3] formally defined the types and operations, offered detailed insight into the considerations that went into the design, exemplified the use of the abstract data types using SQL, and offered a precise and conceptually clean foundation for implementing a spatiotemporal DBMS extension. A spatiotemporal cube model [4] and a simple time-slice snapshot model [5] were also presented and used; however, such models cannot describe the spatial changes that accompany temporal changes. Worboys et al. proposed an object-oriented spatiotemporal data model [6], which was further studied by many other scholars [7][8][9][10]. Tryfona proposed the spatiotemporal entity relation (STER) data model [11,12], which used an extended entity relationship to describe phenomena in the real world. After that, many scholars started to work on spatiotemporal database models adapted to the position management of moving objects. Wolfson presented the moving object spatial-temporal (MOST) model [13,14], which captures the change of space with time through dynamic attributes. However, the MOST model failed to describe the whole spatiotemporal trajectory and only supported current and future state queries of moving objects. Jin et al. [15] built a spatiotemporal object relation model based on the object-relational data model, whose key was to extend the classification system and operations of the object-relational data model with abstract data types (ADTs). Xue et al. [16] proposed a process-oriented spatiotemporal data model in which the expression, organization, and storage of continuously changing geographic entities were studied. Ding carried out research on the Dynamic Transportation Network Based Moving Objects Database (DTNMOD) and its real-time dynamic transport network flow analysis method based on the spatiotemporal trajectories of moving objects [17,18]. Yang et al. developed a prediction-based protocol that significantly boosts the speed of message transmission between vehicles [19,20]. From the above analysis and comparisons, it can be seen that object-oriented models and moving object models work better for spatiotemporal object modeling and have advantages in spatial, temporal, and spatiotemporal queries, but they still have disadvantages in data transmission and data storage when modeling continuous movement.
The traditional location update methods were time-point selection methods, including the equal-time algorithm, the equal-distance algorithm, and the dynamic time point algorithm [21,22]. The equal-time algorithm is the simplest and easiest to implement, and its communication flow is stable, but its adaptability is poor: even when the moving object is static, it still records the position, which causes invalid storage. The equal-distance algorithm dramatically increases the communication flow during high-speed motion. The dynamic time point algorithm is capable of reducing the number of position updates, but it is only suitable when the positioning accuracy is high and the positioning parameters change stably.
To address these deficiencies in continuous movement modeling, multidimensional expression, and complex spatiotemporal queries, an object-oriented moving object database model supporting all-time-domain queries is proposed by combining a static object model and a moving object model. In order to reduce the amount of moving object data transmitted and stored, a dynamic position updating strategy is studied, which is suitable for frequent changes of the positions of moving objects. This model solves the problems of continuous storage, multidimensional expression, and complex spatiotemporal queries. Finally, this paper realizes the model through an instance.
Moving Object Trajectory Segment Model
The motion and change of a moving object are continuous. In the traditional moving object data storage model, an object-relational database is used to store the information continuously obtained from the moving object at a certain time interval. This not only increases the amount of data stored but also increases the difficulty of historical and future spatiotemporal queries on moving objects and of complex spatiotemporal behavior queries. In order to solve these problems, the dynamic attribute method of the MOST model is adopted and implemented in this paper. Discrete modeling ideas are used to support dynamic attributes over the entire trajectory, and all the trajectories of moving objects are separated into many small segments, as shown in Figure 1.
In Figure 1, axis means the time, ( , ) means the point information of moving objects, and is the coordinate information at moment. stands for the position information of moving objects at moment. Each discrete segment is abstracted as dynamic changes within a certain period of time .
In order to support the historical trajectories of moving objects modeling, the quadruple ( , , , ) was put forward to support the all-time-domain query. All trajectories segments expressed by the quadruple constitute continuous trajectories of moving objects: Here, stands for basic information of moving objects, such as moving object ID, position, direction, speed, accuracy, and other information. indicates the basic attribute information of moving objects, such as moving object ID, name, and type, is the position information of the moving object in the moment , = ( , V , , ), is the coordinate information at moment, V is the velocity at , is the azimuth at , and is the accuracy at . In this paper, the dynamic properties of the model are built on the location ( ) change over time . indicates the time the object moves. is not a fixed time interval but a right open interval, which is uncertain. The size of is related to the updating strategy of moving object location.
represents motion function of moving objects; ( ) is the motion function of moving objects and is a timedependent function, by which we can estimate the position of a moving object in the past, at present, and in the future within a short time. Motion function can be divided into many types according to the actual situation, such as linear function and curvilinear function.
is the state of the moving objects in the period . There are mainly three types of the moving object state : past, current, and network failure or interruption, respectively, represented by 1, 2, and 3. When = 1, the position is used to store and query the historical trajectory; when = 2, that is used to store current position and estimate future position; when = 3, the location information will not be stored until the terminal receives the location information again. When the update data is firstly or lastly received, a new moving object trajectory segment will be created, the state of motion is represented by 1, and at the same time the front tracking segment state is modified. The former tracking segment state is judged according to the time interval between the current time when it receives data and the previous time +1 when segment ends. When < Δ , the last trajectory segment state is changed to 2 and when ≥ Δ the last trajectory fragment state is set to 3.
Entity Model of Moving Object Trajectory Segment.
The trajectory segment of moving objects is represented as an object entity. The object entity of the trajectory segment includes the following: moving object identification (MID), the starting point (MovingPT1), the ending point (MovingPT2), the time interval (Period), and the moving state (State) of the trajectory segment, as shown in Figure 2. In the model, the moving trajectory point (MPT) is an abstract entity object, including location information (PT), velocity, azimuth, and accuracy, and the location information is also an abstract spatial object, including latitude, longitude coordinates, and projection information.
The abstraction entity of moving object trajectory segment is MO Entity, which inherits the time abstract entity (Period) and the moving point object (MPT) and also extends the spatial and temporal relation operation (ST Operation); the MID is a unique identification of the moving object, which can be used to indicate the basic information of the moving object. The moving objects have three moving states in period , including history, now, and positioning or communication failure.
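To make the entity structure described above concrete, here is a minimal sketch of the trajectory-segment entity as plain Python dataclasses. The model itself is implemented with PL/SQL abstract data types on Oracle Spatial, so this is only an illustration; the field names follow Figure 2.

```python
from dataclasses import dataclass
from enum import IntEnum

class MovingState(IntEnum):
    HISTORY = 1        # past trajectory segment
    CURRENT = 2        # segment still being written
    SIGNAL_LOST = 3    # positioning or communication failure

@dataclass
class PT:                      # abstract spatial object
    longitude: float
    latitude: float
    projection: str            # projection / spatial reference information

@dataclass
class MPT:                     # moving trajectory point
    pt: PT
    velocity: float
    azimuth: float
    accuracy: float

@dataclass
class MOEntity:                # moving object trajectory segment
    mid: str                   # moving object identification (MID)
    moving_pt1: MPT            # starting point
    moving_pt2: MPT            # ending point
    period: tuple              # (t_start, t_end), right-open time interval
    state: MovingState
```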
Position-Based Dynamic Threshold Location Updating Strategy
There are two kinds of traditional location updating methods: the equal-time algorithm and the equal-distance algorithm. The equal-time algorithm uploads and updates positioning data at a fixed time interval; its communication flow is stable, but its adaptability is poor. The equal-distance algorithm uploads and updates positioning data at a fixed distance interval; when the moving object is in a high-speed state, the location update frequency and the communication traffic are greatly increased. In practical applications, the positioning information received from the GPS, such as velocity, position accuracy, and azimuth, changes constantly and irregularly. Velocity is the main factor influencing the frequency of location updating, which is the principle of the dynamic time point algorithm [19]: when the velocity changes only slightly between two adjacent points, the positions in between can be estimated from those two points, and the position point need not be updated. The positioning accuracy determines how reliably a change in position can be measured; for example, in the distance-based algorithm, if the position change is larger than the threshold Δd the point should be treated as a location updating point, but in fact it may be an error point. We therefore designed a dynamic distance threshold that changes with the positioning accuracy: only when the position change is greater than the distance threshold is the location point updated. The corresponding relationship between accuracy and distance threshold is shown in Table 1. The change of azimuth mostly affects the trajectory precision of moving objects, as shown in Figure 3. A moving object moves from road A to road B, then to C, and finally enters D; if we do not consider the impact of orientation on the location updating, the trajectory stored in the database is 0 → 1 → 2 → 3 → 4, while the real trajectory is 0 → 1 → 2 → 3 → 4 → 5 → 6. When the azimuth is taken into account in the location update, the trajectory of the moving object stored in the model database is closer to reality. According to these influencing factors, a new dynamic threshold algorithm was designed based on the moving object positioning information: the algorithm combines the GPS positioning information (velocity, accuracy, and azimuth) to dynamically determine the location update threshold. The definition of the location update strategy is as follows.
Suppose that P1 is the last updated position point of the moving object and P2 is the most recently acquired position point; let d be the spatial distance between P1 and P2 and Δθ the change of azimuth between them. If (d > Δd and Δθ > Δθ_az and v > 0) or d > d_max, the position is considered to satisfy the update condition.
Here, Δd is the minimum distance update threshold under the current positioning accuracy, and d_max is the distance above which the location is always updated, regardless of whether the other conditions are satisfied. Δθ_az is the azimuth threshold and v stands for the velocity; v = 0 means that the position has not changed.
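The decision rule above can be sketched in code as follows. This is one possible reading of the rule, using the symbol names introduced above; the threshold values are placeholders, and the published flow chart contains additional branches (e.g. a maximum-time check), so this is only an illustration.

```python
def should_update(d, dt, d_theta, v, accuracy,
                  dist_threshold_by_accuracy, theta_threshold, d_max, t_max):
    """Dynamic-threshold location update decision (illustrative sketch).

    d        : distance between the last updated point and the newly acquired point
    dt       : time since the last update
    d_theta  : change of azimuth
    v        : current velocity
    accuracy : GPS accuracy class used to look up the distance threshold (cf. Table 1)
    """
    if d > d_max or dt > t_max:          # large jump or long silence: always update
        return True
    delta_d = dist_threshold_by_accuracy[accuracy]
    return d > delta_d and d_theta > theta_threshold and v > 0

# Placeholder thresholds for illustration only
thresholds = {"high": 5.0, "medium": 15.0, "low": 30.0}   # metres per accuracy class
print(should_update(d=18.0, dt=12.0, d_theta=25.0, v=8.3, accuracy="medium",
                    dist_threshold_by_accuracy=thresholds,
                    theta_threshold=15.0, d_max=200.0, t_max=60.0))
```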
Design of Location Updating Strategy Flow.
According to the definition of the location update strategy, the position updating process can be constructed as follows (shown in Figure 4):
(1) real-time acquisition of the GPS data packet;
(2) parsing of the data packet and instantiation as one position object;
(3) judgment, according to the dynamic precision threshold rule, on whether the distance change is larger than the minimum change threshold under the current precision condition; if not, do not update the position and jump to (1);
(4) judgment on whether the time between the current position and the last updated position is greater than the maximum time threshold; if so, the location update condition is met; then go to (8).
The moving terminal obtains the position point data through the GPS module, and the terminal program then parses the position information and judges whether it meets the requirements of the updating strategy. If it does, the information is sent to the server, which updates the position point as follows.
(1) Inquire about the location data received from the client and parse it; then process it with the moving objects database model.
(2) Query whether the moving object MID is in the historical trajectory database; if not, go to step (7); else go to step (3).
(3) Query the current location segment of the moving object, denoted as MO1.
(4) Calculate the time interval between the upload location time and the last update time; if the time interval is greater than the threshold of time, then go to step (6); else go to (5).
All-Time-Domain Position Estimation
Generally speaking, the trajectories of moving objects are continuous, whereas the location information recorded in the moving object database is discrete. When querying and visualizing the position and trajectory of a moving object at an arbitrary moment, the position must therefore be estimated from the location information stored in the database. To simplify the study, under the proposed dynamic location update strategy the trajectory of a moving object within each time period is treated as a straight line or a curve.
Estimation of Historical Position.
The trajectories of moving objects are divided into different trajectory segments, each of whose curves is represented by a dynamic function. Thus, to query the historical position of a moving object at a certain point in time, the trajectory must be estimated from a motion function. The motion function is selected according to the circumstances; in general there are two kinds: a linear function and a curve function. The linear function applies to objects moving on a fixed road network, such as vehicles, while the curve function applies to objects moving freely, such as pedestrians. In this paper we designed two estimation functions: a linear path function and a cubic Hermite interpolation function.
Linear Path Function Method.
To estimate the position of a moving object at time t, the trajectory segment containing t must first be found; it has starting point P1(loc(x1, y1), v1, a1, t1) and ending point P2(loc(x2, y2), v2, a2, t2) over the period [t1, t2]. The position is then loc(t) = loc(t1) + v̄ · Δt along the segment, where P1 is the starting point, v̄ is the average velocity measured from the starting point to the ending point, Δt = t − t1, x1 is the longitude, y1 is the latitude, and a = |a2 − a1| is the azimuth angle. When linear functions are used to represent the trajectories of moving objects, the trajectory within the period is treated as a straight line passing through the updated location points.
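A minimal sketch of the linear path estimate follows, assuming each segment endpoint is stored with a timestamp t and WGS-84 coordinates; interpolating linearly between the two endpoints is equivalent to advancing from the start point at the constant average velocity along the segment. The field names are illustrative.

```python
def estimate_position_linear(p1, p2, t):
    """Estimate the position at time t on the segment [p1, p2] by linear interpolation.
    p1, p2: dicts with keys lon, lat, t (seconds); t1 <= t <= t2 is assumed."""
    frac = (t - p1["t"]) / (p2["t"] - p1["t"])
    lon = p1["lon"] + frac * (p2["lon"] - p1["lon"])
    lat = p1["lat"] + frac * (p2["lat"] - p1["lat"])
    return lon, lat
```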
Cubic Hermite Interpolation Function Method.
A cubic spline interpolation method was used to estimate curved movement trajectories of the moving object. The velocity change function v(t) and the azimuth change function a(t) over time were first calculated, and the location was then estimated by the position function loc(t) = loc(t0) + v(t)(t − t0). Here v(t) stands for the velocity at time t; in the actual movement, however, the velocity changes continuously from t0 to t, so the velocity at time t cannot represent the velocity over the whole interval, and the same holds for the azimuth. This simple approach therefore carries a large error.
To solve this problem, a cubic Hermite interpolation function was established. The calculation proceeds as follows. First, a cubic spline curve S(t) is computed in the three-dimensional (x, y, t) space and decomposed into its projections onto the x–t and y–t planes, giving two cubic spline curves x(t) and y(t), as shown in Figure 5. Then x(t) and y(t) are used to estimate the historical position at time t.
y(t) is solved in the same way as x(t), and the position loc(t) at any time t can be calculated from x(t) and y(t).
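The sketch below illustrates the idea of fitting separate curves x(t) and y(t) through the stored update points and querying them at an arbitrary time. SciPy's CubicSpline is used here as a stand-in for the cubic Hermite construction described above, and the sample coordinates in the usage comment are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def build_trajectory_splines(times, lons, lats):
    """Fit cubic splines x(t) (longitude) and y(t) (latitude) through the stored update points."""
    t = np.asarray(times, dtype=float)
    return CubicSpline(t, np.asarray(lons, dtype=float)), CubicSpline(t, np.asarray(lats, dtype=float))

def estimate_position_spline(x_of_t, y_of_t, t_query):
    """Evaluate the two curves at an arbitrary query time."""
    return float(x_of_t(t_query)), float(y_of_t(t_query))

# usage sketch (illustrative values only)
# x_of_t, y_of_t = build_trajectory_splines([0, 30, 60, 95],
#                                           [112.10, 112.12, 112.15, 112.19],
#                                           [28.20, 28.21, 28.23, 28.24])
# lon, lat = estimate_position_spline(x_of_t, y_of_t, 45.0)
```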
Future Position Estimation.
The moving object data model stores only the current and historical location update data, so short-term prediction of the future location of a moving object must rely on a dedicated algorithm. Three methods for estimating the future location are given below.
The Linear Extended Positioning Method.
The linear extended positioning method assumes that the moving velocity and direction at time t + Δt are the same as those at the last update time t_u. Taking the current update point as the basis of the prediction, let P_u be the motion state at the updated point of the current trajectory and let |v_u| and a_u be the moving velocity and azimuth at that moment; the predicted position then follows by advancing from loc(t_u) a distance |v_u| · Δt along the azimuth a_u, as given by equation (5).
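A possible implementation of this linear extrapolation is sketched below, assuming velocity in m/s and azimuth measured clockwise from north; the metres-per-degree conversion is an approximation valid for small displacements, and the field names are illustrative.

```python
from math import radians, sin, cos

def predict_linear(last_update, dt_seconds):
    """Extrapolate assuming speed and azimuth stay as they were at the last update.
    last_update: dict with lon, lat, velocity (m/s), azimuth (deg, clockwise from north)."""
    dist = last_update["velocity"] * dt_seconds            # metres travelled in dt
    az = radians(last_update["azimuth"])
    dn, de = dist * cos(az), dist * sin(az)                # north / east displacement in metres
    lat = last_update["lat"] + dn / 111_320.0              # ~ metres per degree of latitude
    lon = last_update["lon"] + de / (111_320.0 * cos(radians(last_update["lat"])))
    return lon, lat
```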
Moving Average Method.
The moving average method assumes that the moving velocity at time t + Δt is the average of the velocities at the last n updates, while the azimuth remains that of the last update time t_u. Here n is a positive integer with n ≥ 2; generally speaking, a large value of n is not meaningful, so n is taken as 2 or 3.
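A small sketch of the velocity averaging; the predicted position can then be obtained by feeding this velocity, together with the azimuth of the last update, into the linear extrapolation shown above. The list of recent velocities is assumed to be ordered oldest to newest.

```python
def predict_velocity_moving_average(recent_velocities, n=3):
    """Average of the velocities at the last n updates (n = 2 or 3 in the text)."""
    window = recent_velocities[-n:]
    return sum(window) / len(window)
```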
Cubic Exponential Smoothing Method.
Generally speaking, the change of velocity with time is nonlinear. The cubic exponential smoothing method can be used to predict the trend of velocity as a time series. Exponential smoothing is an iterative process whose smoothed values are denoted S(1), S(2), and S(3). The n location update points preceding the current time are selected for the calculation, with the time sequence denoted t1, t2, t3, ..., tn. An initial value S(1)0 is also needed; after several smoothing periods its influence becomes relatively small, so the fourth point is regarded as the first point of the time sequence, and its value is set to the average of the preceding three points. Let S(1)t, S(2)t, and S(3)t be the first, second, and third exponential smoothing values of the velocity v at time t, and let α be the smoothing coefficient of the time series, whose value lies in [0, 1]. From the smoothing recurrences the coefficients a, b, and c can be worked out, and the cubic exponential smoothing velocity prediction model follows. The three methods above can all be used to estimate the future position, but in the actual movement the positions of objects are often changeable, which introduces error into the predicted values; to keep the error of prediction acceptable, the prediction time should not exceed a certain range.
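The following sketch assumes the standard Brown (triple) exponential smoothing formulation, which appears to be what the description above intends; the smoothing coefficient α and the initialization from the mean of the first three points follow the text, while the forecasting formulas for a, b, and c are the standard ones.

```python
def cubic_exponential_smoothing_forecast(velocities, alpha=0.5, horizon=1):
    """Brown's triple (cubic) exponential smoothing of a velocity series.
    Returns the velocity forecast `horizon` steps beyond the last observation."""
    s1 = s2 = s3 = sum(velocities[:3]) / 3.0     # initial value: mean of the first three points
    for v in velocities:
        s1 = alpha * v + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * ((6 - 5 * alpha) * s1
                                            - 2 * (5 - 4 * alpha) * s2
                                            + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / ((1 - alpha) ** 2)) * (s1 - 2 * s2 + s3)
    return a + b * horizon + 0.5 * c * horizon ** 2
```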
Moving Object Abstract Data Types.
The moving object database was implemented on the Oracle Spatial extension, so the moving object abstract data types comprise basic data types and spatial data types. The basic data types are the basis of any database and include Int, Number, Varchar, and Date. On top of these, PL/SQL was used to define a temporal data type supporting temporal data and a spatiotemporal data type supporting spatiotemporal data. The abstract data type structure for moving objects is shown in Figure 6. The temporal data types include instant (a time point), period (a time interval), and periods (a set of time intervals). An instant is represented by a real value, and a period is made up of the starting and ending instants (sins, eins).
Spatial data types are established on the Oracle Spatial SDO_GEOMETRY data type, which has five attributes: SDO_GTYPE, SDO_SRID, SDO_POINT, SDO_ELEM_INFO, and SDO_ORDINATES. Because all spatial data types defined directly on SDO_GEOMETRY have relatively complex interface parameters, we wrapped SDO_GEOMETRY into S_Point, S_Line, S_Polygon, S_Rectangle, and S_Circle, respectively, in order to provide simple spatial data type interfaces that are easy to access. Spatial data must carry a projection or coordinate system; the model uses the default SRID 8307, namely World Geodetic System 1984 (WGS 84). Spatiotemporal data types are then established on the basis of the temporal data types and spatial data types.
Spatiotemporal data types inherit from the temporal and spatial data types and include I_Point, I_Line, I_Polygon, I_Rectangle, and I_Circle.
Moving Object Data Model Implementation.
We used PL/SQL to define the abstract data types of moving objects; the various types were constructed following the object-oriented method, with the spatial and temporal data type classes as base classes. The class diagram of all moving object data type definitions is shown in Figure 7.
Implementation of Spatial-Temporal Data Type.
Based on the temporal data types and the spatial data types, and using object-oriented inheritance, the fields and methods of the spatiotemporal data types (I_Point, I_Line, I_Rectangle, I_Polygon, and I_Circle) are inherited from those defined by the temporal and spatial data types. Set types of these basic spatiotemporal data types were also defined. Here, taking I_Point as an example, the implementation is as follows:
Implementation of Moving Object Data Type.
The moving object data type is set up on top of the above data types. In this paper we focus on moving point objects, which carry the basic information of the moving object: the spatial position, the positioning time, and the accuracy, velocity, and azimuth information, defined as the point type mpt. The moving object trajectory segment is then defined as follows:

CREATE OR REPLACE TYPE "MO" AS OBJECT (
  mid    varchar2(100),
  mpt1   mpt,      -- starting point
  mpt2   mpt,      -- end point
  p      period,   -- time interval
  state  integer,
  MEMBER FUNCTION get_mpt(t instant) RETURN i_point  -- dynamic function: calculates the location at time t
);
Results and Analysis
We implemented the object-oriented all-time-domain moving object database model and applied it to a moving object monitoring system. The system consists of a database server, a moving object monitoring center, and data collection terminals. The database server was implemented by extending the Oracle 11g Spatial database, in which each trajectory segment is stored using the user-defined type MO. The monitoring center uses a B/S architecture: the server handles the system business logic, including the business process logic and interface services, while the monitoring client is responsible for monitoring the mobile terminals, with main functions including real-time monitoring, historical trajectory playback, and historical location query. The monitoring system was developed with the FineUI framework (http://fineui.codeplex.com/), an online geographic map, and the AMap API for JavaScript (http://lbs.amap.com/), as shown in Figure 8.
The main function of the data collection terminal is to collect location information in real time, send it to the data server through the 3G network, and update the position information using the method proposed in this paper, the position-based dynamic threshold updating strategy. The terminal software was developed on the Android operating system, as shown in Figure 9.
A case study was carried out on different types of moving objects. First, we installed the data collection software on a smartphone equipped with GPS. We then harvested position data for different forms of movement, including bus, taxi, train, and pedestrian. Finally, we evaluated the model in terms of the update frequency and the matching degree between the stored trajectory and the actual one, comparing the dynamic update frequency with those of the equal-distance and equal-time updating strategies.
The parameter settings were as follows. In the dynamic threshold position updating strategy, Δa = 20°, Δd is determined by Table 1, and d_max = 1000 m. In the equal-distance updating strategy the distance interval is 100 m, and in the equal-time updating strategy the time interval is 15 s. The resulting update frequencies are shown in Table 2.
The experimental results are as follows.
(1) For moving objects such as the bus, taxi, and train, the location update frequency under the accuracy-based dynamic update strategy is about 30–40% of that of the equal-time and equal-distance strategies, and for the train it is only 8.8%.
(2) For moving objects such as pedestrians, with slow moving speed and irregular trajectories, the dynamic threshold update strategy increases the location update frequency in order to record the trajectory more accurately, because the azimuth and accuracy information change rapidly.
(3) The experiments show that the dynamic threshold position updating strategy effectively reduces the number of position updates, saving data transmission and reducing data storage without affecting the accuracy of the stored trajectories.
(4) We also tested the matching degree between the trajectory derived from the stored positions and the actual one; the derived trajectory matched the actual trajectory well.
Conclusion
Moving object databases must manage massive volumes of spatiotemporal information, so a spatiotemporal data model adapted to moving objects is needed in order to store and query position and trajectory information effectively. We designed an all-time-domain moving object database model based on the object-oriented approach and the dynamic attributes of MOST. After analyzing the internal structure and representation of MOST in detail, a position-based dynamic threshold updating strategy was proposed to fit the new data model. The cubic Hermite interpolation function was used to estimate the historical location of moving objects, and the linear extended positioning, moving average, and cubic exponential smoothing methods were designed to estimate the future location. The experimental verification showed that this strategy can effectively reduce the transmission and storage of data. | 6,225.8 | 2015-11-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
A Study of Broad Emission Line and Doppler Factor Estimation for Fermi Blazars
In this work, we obtained a sample of 979 Fermi blazars with broad emission lines, including 701 objects collected from published works and 278 objects developed in this work. For the 278 objects, we made a crossmatch from three catalogs, the Fermi Large Area Telescope Fourth Source Catalog (4FGL), the Sloan Digital Sky Survey, and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, and calculated the broad-line region (BLR) luminosity. Then, we estimated the Doppler factor and studied the correlations between the BLR luminosities and the γ-ray luminosities, the synchrotron peak frequency (ν p ), and Doppler factor (δ) for the whole sample. Our analyses and discussions came to the following main conclusions: For the 278 blazars, their BLR luminosity (log L BLR) ranges from 40.44 to 45.45 erg s−1, with a mean value of 43.39 erg s−1. The Doppler factor ranges from δ = 0.45 to δ = 88.52, with a mean value of 12.99 for the 979 Fermi blazars, which is consistent with the results in the literature. Both the BLR luminosity and the Doppler factor exhibit positive correlations with the γ-ray luminosity. The BLR luminosity is anticorrelated with synchrotron peak frequency, implying a Compton cooling. A line of logLBLR=1.58logνp−19.46 separating BL Lacertae objects and flat-spectrum radio quasars was obtained in the diagram of logLBLR against logνp using a machine-learning method. Based on the analysis of the equivalent width and the Doppler factors, we proposed five changing-look blazar candidates.
Introduction
Active galactic nuclei (AGNs), interesting extragalactic sources, have attracted many astronomers. Blazars are an extreme subclass of AGNs that show many special properties, such as rapid and high-amplitude variability, high and variable polarization, apparent superluminal motion, etc. (Moore & Stockman 1981; Wills et al. 1992; Fan et al. 1997; Romero et al. 2000; Aller et al. 2003; Andruchow et al. 2005; Xie et al. 2005; Abdo et al. 2010; Zheng & Zhang 2011; Zheng et al. 2014; Fan et al. 2016, 2021; Yang et al. 2022b; Xiao et al. 2022d). It is believed that these extreme observational properties are due to a narrow angle between the relativistic jet and the observer's line of sight. In the relativistic beaming model (Padovani & Urry 1990; Urry & Padovani 1995), the beaming factor (or Doppler factor) of the jet is defined as δ = 1/[Γ(1 − β cos θ)], where Γ = (1 − β²)^(−1/2) is the bulk Lorentz factor, β is the velocity in units of the speed of light, and θ is the viewing angle. Blazars are divided into two subclasses: BL Lacertae objects (BL Lacs) and flat-spectrum radio quasars (FSRQs). One classical division between BL Lacs and FSRQs is based mainly on the equivalent width (EW) of the emission lines: blazars with EW < 5 Å are classified as BL Lacs, while those with EW ≥ 5 Å are classified as FSRQs (Urry & Padovani 1995). The spectral energy distribution (SED) is also used to classify blazars (Abdo et al. 2010; Fan et al. 2016; Yang et al. 2022a, 2023). Nieppola et al. (2006) divided BL Lacs into low synchrotron peak BL Lacs (LBLs), intermediate synchrotron peak BL Lacs (IBLs), and high synchrotron peak BL Lacs (HBLs). Recently, Fan et al. (2016) calculated SEDs for a sample of 1492 Fermi Large Area Telescope (LAT) blazars, adopted a Bayesian method for the distribution of the logarithm of the synchrotron peak frequencies (log ν_p), and proposed classifications using the acronyms defined in Abdo et al. (2010): low synchrotron peak sources (LSPs, log(ν_p/Hz) ≤ 14.0), intermediate synchrotron peak sources (ISPs, 14.0 < log(ν_p/Hz) ≤ 15.3), and high synchrotron peak sources (HSPs, log(ν_p/Hz) > 15.3). Yang et al. (2022a) performed similar work for a sample of 2709 Fermi blazars and proposed log(ν_p/Hz) = 13.7 and log(ν_p/Hz) = 14.9 as the dividing values between LSPs, ISPs, and HSPs.
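For reference, the beaming factor defined above can be evaluated numerically as in the short sketch below; the example values of Γ and θ in the comment are purely illustrative.

```python
import numpy as np

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    theta = np.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

# e.g. a jet with Gamma = 10 viewed 3 degrees off-axis gives delta ~ 15.7:
# doppler_factor(10, 3.0)
```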
The Doppler factor is a key jet characteristic, yet we are unable to directly obtain it by observations.Fortunately, many indirect methods were proposed to estimate the Doppler factor: Lähteenmäki & Valtaoja (1999) obtained the Doppler factor from radio flux density variations.For some γ-ray-loud sources, their γ-ray emissions and timescales were also used to estimate the Doppler factor (Mattox et al. 1993;von Montigny et al. 1995;Cheng et al. 1999;Fan et al. 1999;Fan 2005;Fan et al. 2013Fan et al. , 2014;;Pei et al. 2022).In recent years, the progress in the Doppler factor estimations has been greatly developed.Ghisellini et al. (2014) and Chen (2018) obtained the Doppler factor via the broadband SED.Liodakis et al. (2017) and Liodakis et al. (2018) compared the observed and the intrinsic brightness temperatures to derive the variability Doppler factor.Zhang et al. (2020) proposed a new method to estimate the Doppler factor for the Fermi blazars with available broad-line and γ-ray luminosities, which was updated by Zhang et al. (2023).Ye & Fan (2021) estimated the Doppler factor from the relationship between the core and extended radio luminosities.In general, different estimation methods are based on different assumptions, which result in different Doppler factor values.
Exploring the formation of relativistic jets can improve our understanding of the AGN model, but the formation mechanism is still an open question in astronomy. It is accepted that jets are produced near the central black hole, where the black hole spin (Blandford & Znajek 1977) and/or the accretion disk (Blandford & Payne 1982) provide the jet power. In either case, the central black hole will continue to accrete circumnuclear material, and therefore a close correlation between the accretion luminosity and the jet power is expected (Maraschi & Tavecchio 2003). However, it is difficult to detect the jet power and the accretion radiation directly. To solve this problem, one can explore their relationship indirectly through other observable properties (Celotti et al. 1997; Cao & Jiang 1999; Sbarrato et al. 2012; Ghisellini et al. 2014; Xiong & Zhang 2014; Zhang et al. 2020). Because the broad-line region (BLR) clouds are photoionized by radiation from the accretion disk and then recombine, producing BLR lines of different velocities (Kaspi et al. 2000, 2005; Bentz et al. 2009; Sbarrato et al. 2012), the BLR luminosity is used as a proxy for the accretion disk luminosity. For the jet, the total power (P_jet) commonly contains two parts, namely the radiative power (P_rad) and the kinetic power (P_kin), so P_jet = P_rad + P_kin, where P_rad is tied to the jet bolometric luminosity L_bol^jet (Sbarrato et al. 2012). The γ-ray luminosity is generally used to represent the bolometric luminosity, owing to the fact that the γ-ray luminosity dominates the bolometric luminosity for γ-ray-loud blazars (Ghisellini et al. 2014; Xiong & Zhang 2014; Zhang et al. 2020). Ghisellini et al. (2014) found a closely linear correlation between the jet radiative power and the accretion disk luminosity, log P_rad ∼ 0.98 log L_disk + 0.639, with P_rad = 2 f L_γ Γ²/δ⁴, where the factor of 2 accounts for the two jets and f is a constant: f = 4/3 for BL Lacs and f = 16/5 for FSRQs. The relation is consistent with the theoretical expectation. Thus, it is reasonable to represent the correlation between jet radiative power and accretion disk luminosity by that between the γ-ray luminosity and the BLR luminosity. According to Ghisellini et al. (2014), the viewing angle of blazars is small, sin θ ≈ 1/Γ, and thus δ ≈ Γ. Zhang et al. (2020) proposed a new method to estimate the Doppler factor based on the correlation of the γ-ray and emission-line luminosities. Now, a larger number of γ-ray sources are available in the fourth data release of the Fermi Large Area Telescope Fourth Source Catalog (4FGL-DR4; Ballet et al. 2023), and a large number of blazars with spectroscopic data detected in the 16th data release of the Sloan Digital Sky Survey (SDSS-DR16) or the eighth data release of LAMOST (LAMOST-DR8; Ahumada et al. 2020) offer a good opportunity to reanalyze the relationship between the jet and the accretion and to estimate the Doppler factor. That is the motivation for this work, which is arranged as follows: we present the sample in Section 2, our results in Section 3, and discussions in Section 4; we then conclude our findings in the final section. Throughout this work, a ΛCDM cosmology is adopted with H0 = 71 km s⁻¹ Mpc⁻¹, Ω_Λ = 0.73, and Ω_M = 0.27 (Komatsu et al. 2011).
Fermi Blazars with Broad-line Emissions
Our sample consists of two parts. One part is taken from the literature (Paliya et al. 2021; Zhang et al. 2022), in which the optical spectroscopic information of Fermi blazars was systematically compiled by Paliya et al. (2021) and Zhang et al. (2022) and references therein. Notably, 4FGL-DR4 is the latest incremental version, released in late July 2023 and covering the last 14 yr of survey data; therefore, for sources in those works (Paliya et al. 2021; Zhang et al. 2022) we only considered the blazars present in the 4FGL-DR4 catalog. There are 608 and 408 sources with available BLR luminosities in the works of Paliya et al. (2021) and Zhang et al. (2022), respectively, with 315 sources in common between the two samples. As a consequence, we found 701 blazars in total with broad-line emission and γ-ray emission in the literature.
For the second part, which is derived from the matching result, they are obtained as follows: (i) We considered BL Lacs and FSRQs present in the 4FGL-DR4 catalog and prepared a preliminary sample of 1609 objects (excluding the 701 blazars with published spectroscopic information by Paliya et al. 2021 andZhang et al. 2022).(ii) We used the associated source name in 4FGL-DR4 to search cross-identifications (cross-IDs) in the NASA/IPAC Extragalactic Database (NED)9 one by one.(iii) We compiled their preferred position coordinates and the cross-IDs with SDSS/LAMOST prefixes.(iv) The corresponding SDSS/LAMOST name and coordinate information are used to search their optical spectra in the SDSS website or the LAMOST website.This procedure led to a sample comprising 278 spectra with broad emission lines (249 BL Lac objects and 29 FSRQs).
Finally, we obtained a total of 979 blazars (384 BL Lac objects and 595 FSRQs).Following the acronyms by Abdo et al. (2010) and the classification by Yang et al. (2022a), i.e., log(ν p /Hz) < 13.7 for LSPs, 13.7 < log(ν p /Hz) < 14.9 for ISPs, and log(ν p /Hz) > 14.9 for HSPs, we have 518 LSPs, 212 ISPs, and 163 HSPs, or 55 LBLs, 119 IBLs, and 162 HBLs, and 463 low synchrotron peak FSRQs (LFs), 93 intermediate synchrotron peak FSRQs (IFs), and 1 (GB6 J0043+3426 with logν p = 15.3Hz) high synchrotron peak FSRQ (HF) if we considered the subclasses of BL Lacs and FSRQs.The redshift of the object collected from the literature (Paliya et al. 2021;Zhang et al. 2022) was adopted, while for the 278 new Fermi blazars with broad emission lines we adopted the redshift information from the fourth catalog of the Fermi-LAT-detected AGNs (4LAC; Ajello et al. 2022).If the object redshift information was not found in the 4LAC, we directly used the redshift in SDSS-DR16.The redshift distribution of the sample is shown in Figure 1(a).Table 1 summarizes our blazar sample.
The Broad-line Luminosity of 278 Fermi Blazars
There are 278 γ-ray sources in our sample whose optical spectra exhibit at least one of the broad emission lines Hα, Hβ, Mg II, and C IV.To derive the broad-line luminosity, we adopted the publicly available multicomponent spectral fitting code PYQSOFit (Guo et al. 2018) and a wrapper package based on it (QSOFITMORE; Fu 2021).The tool applies the spectral models and templates to data following a χ 2 -based fitting technique.A detailed description of the code and its application can be found in Guo et al. (2018), Shen et al. (2019), andFu (2021).
Based on the extinction curves of Cardelli et al. (1989) and the dust map of Schlegel et al. (1998), we first corrected the target spectrum for Galactic reddening, and then performed the fitting. The spectrum was decomposed into two components, namely the quasar and the host galaxy components, following the principal component analysis method presented in Yip et al. (2004a, 2004b). In order to fit the line-free continuum over the entire spectrum efficiently, four components are considered, namely a power law and a third-order polynomial along with optical and UV Fe II templates (Boroson & Green 1992; Vestergaard & Wilkes 2001; Shen et al. 2019). Afterward, a line-only spectrum is obtained by subtracting the best-fitted continuum from the spectrum, from which the spectral properties of the Hα, Hβ, Mg II, and C IV emission lines were extracted.
We fitted Hα and Hβ emission lines in the wavelength range [6400, 6800] Å and [4640, 5100] Å, respectively.The broad components of Hα and Hβ were modeled by three Gaussian profiles; the narrow components of Hα and Hβ, [N II] λλ6549, 6585, and [S II] λλ6718, 6732 were each modeled by a single Gaussian profile (Shen et al. 2019).
The Mg II and C IV line fittings were carried out in the wavelength range [2700, 2900] Å and [1500, 1700] Å, respectively.We used two Gaussians and a single Gaussian to model the broad and narrow components of the Mg II line, respectively.The broad component of the C IV line was modeled with three Gaussians (Shen et al. 2019).
In this way, we obtained the flux of at least one of the Hα, Hβ, Mg II, and C IV emission lines and calculated the corresponding luminosity of the broad emission line as L_line = 4π d_L² λF(λ) (Zhang et al. 2020), where d_L is the luminosity distance and λF(λ) is the line flux in units of erg cm⁻² s⁻¹. We show, as examples, the fitting results in Figure 2.
In addition, we calculated the BLR luminosity from the available observational data as L_BLR = (Σ_i L_i,obs) × ⟨L*_BLR⟩ / (Σ_i L_i,est) (Zhang et al. 2020; Paliya et al. 2021; Zhang et al. 2022), where ⟨L*_BLR⟩ is the total BLR fraction. We typically take ⟨L*_BLR⟩ = 5.56 L_Lyα and set L_Lyα = 100, and then we sum the line ratios (relative to L_Lyα) as in Francis et al. (1991) and Celotti et al. (1997). L_i,obs are the observed luminosities obtained from a certain number of broad lines, and L_i,est are the luminosities of the same lines estimated from the adopted line ratios: 77, 22, 34, and 63 for Hα, Hβ, Mg II, and C IV, respectively (Francis et al. 1991; Celotti et al. 1997). When there are two or more emission lines for a source, we use their geometric mean as the BLR luminosity. For the 278 Fermi blazars, the logarithm of the BLR luminosity (log L_BLR) is listed in Table 2 and shown in Figure 1(c).
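A sketch of this scaling follows, using the Francis et al. (1991)/Celotti et al. (1997) line ratios quoted above; the total BLR fraction of 555.77 (i.e. 5.56 × L_Lyα with L_Lyα = 100) is the commonly quoted value and is assumed here. The paper additionally takes the geometric mean when two or more lines are available, which is not shown.

```python
# Line ratios relative to Lyman-alpha = 100 (Francis et al. 1991; Celotti et al. 1997)
LINE_RATIOS = {"Halpha": 77.0, "Hbeta": 22.0, "MgII": 34.0, "CIV": 63.0}
TOTAL_BLR_FRACTION = 555.77          # <L*_BLR>, i.e. 5.56 x L_Lyalpha (assumed value)

def blr_luminosity(observed_lines):
    """Scale the observed broad-line luminosities (erg/s) to a total BLR luminosity.
    observed_lines: e.g. {"Hbeta": 3.2e43, "MgII": 5.1e43}"""
    sum_obs = sum(observed_lines.values())
    sum_est = sum(LINE_RATIOS[name] for name in observed_lines)
    return sum_obs * TOTAL_BLR_FRACTION / sum_est
```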
The Averaged BLR Luminosity
For the sample, we calculated the average logarithm of the observed BLR luminosity for BL Lacs, FSRQs, LSPs, ISPs, HSPs, LBLs, IBLs, and HBLs; the corresponding average values are listed in Table 3. When BL Lacs and FSRQs are considered separately, the BLR luminosity ranges from log L_BLR = 40.44 to 46.14 erg s⁻¹ with an average of 43.28 erg s⁻¹ for the 384 BL Lacs, and from log L_BLR = 41.79 to 46.61 erg s⁻¹ with an average of 44.70 erg s⁻¹ for the 595 FSRQs; the BLR luminosity of FSRQs is thus higher than that of BL Lacs.
When LSPs, ISPs, and HSPs are considered separately, their average logarithms of the BLR luminosity are 44.67, 43.80, and 43.09 erg s⁻¹, respectively. For LBLs, IBLs, and HBLs, the average observed BLR luminosities are 43.65, 43.37, and 43.08 erg s⁻¹, respectively. The statistical results and distributions are shown in Table 3 and Figure 3.
The Correlation between the Synchrotron Peak Frequency and BLR Luminosity
When the ordinary and symmetrical least-squares regression (OLS; Feigelson & Babu 1992; https://astrostatistics.psu.edu/statcodes/sc_regression.html) is employed for the BLR luminosity and the synchrotron peak frequency, an anticorrelation with a correlation coefficient of r = −0.60 and a chance probability of p < 10⁻⁴ is obtained for the whole sample; it is listed in Table 4, in which the other results are also given.
The Correlation between the γ-Ray Luminosity and the BLR Luminosity
To investigate the correlation between the γ-ray and BLR luminosities, we first calculated the γ-ray luminosity as L_γ = 4π d_L² (1 + z)^(Γ_ph − 2) F (Lin et al. 2017; Xiao et al. 2022d), where z is the redshift, Γ_ph is the photon spectral index, and F is the integral energy flux in erg cm⁻² s⁻¹. In this work, the energy flux in 0.1–100 GeV is adopted from 4FGL-DR4. The logarithm of the γ-ray luminosity is listed in Table 1. When the OLS bisector regression was performed between the γ-ray luminosity and the BLR luminosity, we obtained log L_γ = (1.03 ± 0.02) log L_BLR + (0.94 ± 0.77) with r = 0.81 and p < 10⁻⁴ for the 979 blazars. The corresponding result is shown in Figure 4 and listed in Table 4.
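A sketch of the luminosity calculation is given below, assuming the standard K-correction form written above and the paper's cosmology; astropy is used for the luminosity distance, and the flux argument is the 0.1–100 GeV energy flux in cgs units.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in the text: H0 = 71 km/s/Mpc, Omega_M = 0.27
cosmo = FlatLambdaCDM(H0=71.0, Om0=0.27)

def gamma_ray_luminosity(energy_flux_cgs, z, photon_index):
    """K-corrected gamma-ray luminosity L = 4*pi*dL^2 * F * (1+z)^(Gamma_ph - 2), in erg/s."""
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * d_l**2 * energy_flux_cgs * (1.0 + z) ** (photon_index - 2.0)
```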
Estimation of the Doppler Factor
Since the viewing angle of blazars is small, sin θ ≈ 1/Γ, so δ ≈ Γ. The jet radiative power can then be expressed as P_rad = 2 f L_γ Γ²/δ⁴ (Ghisellini et al. 2014).
The Correlations
The relationship between the jet power and the accretion luminosity has been discussed in the literature (Maraschi & Tavecchio 2003; Punsly & Tingay 2006; Celotti & Ghisellini 2008; Ghisellini et al. 2010, 2014; Zhang et al. 2020, 2022). In the present work, we used a larger sample to revisit the relation using the γ-ray luminosity and the BLR luminosity. With the OLS method we obtained a strong correlation, log L_γ = (1.03 ± 0.02) log L_BLR + (0.94 ± 0.77), with r = 0.81 and p < 10⁻⁴ for the 979 sources, as shown in Figure 4 and Table 4.
For comparison, we also studied the correlation between the γ-ray luminosity calculated in this work from 4FGL-DR4 and the BLR luminosity for sources in the literature (Ghisellini et al. 2014; Xiong & Zhang 2014; Paliya et al. 2021; Zhang et al. 2022). This shows that the correlation results obtained from the 979 sources are consistent with previous works (Ghisellini et al. 2014; Xiong & Zhang 2014; Paliya et al. 2021; Zhang et al. 2022).
The fitting results are listed in Table 4.
From Equations (1) and (3), it is obvious that the redshift is a key parameter in the luminosity calculations. In this work there are 979 Fermi blazars, 88 of which have redshifts taken from SDSS-DR16 spectra; some of these have poor χ² in the redshift estimation from the SDSS spectra. To explore the effect of these 88 sources on the results, we studied the relationship between the broad-line luminosity (log L_BLR) and the γ-ray luminosity (log L_γ) separately for the 88 sources with SDSS redshifts and for the remaining 891 (979 − 88) sources, and obtained log L_γ = (1.02 ± 0.05) log L_BLR + (1.38 ± 2.3) with r = 0.84 and p < 10⁻⁴ for the 88 sources with redshifts from SDSS-DR16 spectra, and log L_γ = (1.03 ± 0.02) log L_BLR + (0.69 ± 0.80) with r = 0.80 and p < 10⁻⁴ for the sample of 891 sources (excluding the 88 sources with redshifts from SDSS spectra). As shown in Figure 5, the relationships between the broad-line luminosity and the γ-ray luminosity are very consistent in slope and not very different in intercept when the uncertainties are taken into account in both cases, which indicates that the redshift does not have much effect on our results.
The beaming effect of Fermi blazars has also been discussed (Kovalev et al. 2009; Arshakian et al. 2010; Fan et al. 2017; Yang et al. 2022b). Using the OLS method, we found a positive correlation between the γ-ray luminosity and the Doppler factor, with an intercept of 43.644 ± 0.078, r = 0.56, and p < 10⁻⁴; it is shown in Figure 4 and listed in Table 4, in which we also list the correlation results obtained using Doppler factors from the literature (Ghisellini et al. 2014; Chen 2018; Liodakis et al. 2018). All the fitting results in Table 4 suggest that the γ-ray luminosity and the Doppler factor are positively correlated, even though different estimation methods are used to obtain the Doppler factors, which suggests that the γ-rays are beamed.
A New Dividing Line between BL Lacs and FSRQs
Blazars, a unique subclass of AGNs, exhibit distinct SEDs featuring two peaks.The first peak, known as the synchrotron peak, spans the electromagnetic spectrum from the infrared to the X-ray range.It predominantly arises from the synchrotron emission.The second peak, referred to as the inverse Compton peak, extends from the X-ray to the γ-ray wavelengths.This peak is believed to originate from the process of inverse Compton scattering.Fossati et al. (1998) found that 5 GHz radio luminosity, synchrotron peak luminosity, and γ-ray luminosity all exhibited inverse relationships with the synchrotron peak frequency and that the synchrotron peak frequency increased while the luminosity consistently decreased.This finding has led to a blazar sequence, ranging from FSRQs to X-ray-selected BL Lacs, with luminosity decreasing as the peak frequency increases.Mao et al. (2016) obtained SEDs for a substantial selection of Roma-BZCAT blazars.Interestingly, their findings echoed those of Fossati et al. (1998), revealing a blazar sequence.They found that as radio (and bolometric/ integrated synchrotron) luminosity decreased, the peak frequency consistently increased.Later on, Fan et al. (2017) calculated the intrinsic SEDs for a sample of 86 Fermi blazars.They identified an inverse relationship between the luminosity (across radio, optical, X-rays, γ-rays, and the synchrotron peak) and the peak frequency when examining the observed data.When considering the intrinsic data, the correlation exhibited a positive trend.Yang et al. (2022b) revisited the correlations between the γ-ray (or radio, optical, X-ray, peak frequency, integrated synchrotron) luminosity and the synchrotron peak frequency with a larger sample of 260 Fermi blazars and confirmed the results by Fan et al. (2017).It is clear that the relationship between the multiband luminosities and the synchronized peak frequencies had been extensively investigated.However, there is not much discussion about the correlation between the BLR luminosity and the synchrotron peak frequency.
Here we plotted BL Lacs and FSRQs on a plot of the BLR luminosity versus the synchrotron peak frequency and found a significant anticorrelation between them and that FSRQs and BL Lacs clearly occupy different regions (see Table 4 and Figure 1(d)).In order to effectively separate these two classes, we employed a kind of machine-learning (ML) method to establish a dividing line.Recently, ML methods, such as support vector machine (SVM), artificial neural networks, K-nearest neighbors, etc., have been widely used in astronomy; see Kang et al. (2019) We found that the BL Lacs located above the dividing line exhibit higher BLR emissions than the BL Lacs below the line.According to blazar evolution (Böttcher & Dermer 2002), we proposed that those BL Lacs are in the early stages of transitioning from FSRQs to BL Lacs.At this phase, the central black hole is surrounded by abundant gas and dust, enabling the black hole to show a high accretion rate and enhancing radiation from the core region.Thus, the BLR clouds are effectively photoionized by radiation from the accretion disk and then recombined, resulting in different velocity BLR lines.Meanwhile, the high energy density in the external radiation field will enhance the level of Compton cooling, which leads to lower synchrotron peak frequencies (Ghisellini et al. 1998).The objects in this case are located in the upper left corner of Figure 1(d).In contrast, the average density of the circumnuclear material will gradually decrease with further evolution.This will lead to a decreasing accretion rate and a decreasing level of Compton cooling.The objects gradually move toward the lower right corner of Figure 1(d).
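A minimal sketch of how such a dividing line could be obtained with a linear-kernel SVM (with the penalty parameter C = 1 quoted in the text) is given below; the variable names and the label encoding (e.g. 0 for BL Lacs and 1 for FSRQs) are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def fit_dividing_line(log_nu_p, log_L_blr, labels):
    """Fit a linear-kernel SVM separating BL Lacs from FSRQs in the
    (log nu_p, log L_BLR) plane and return slope and intercept of the boundary."""
    X = np.column_stack([log_nu_p, log_L_blr])
    clf = SVC(kernel="linear", C=1.0).fit(X, labels)
    w = clf.coef_[0]
    b = clf.intercept_[0]
    # decision boundary: w0 * log_nu_p + w1 * log_L_BLR + b = 0
    slope = -w[0] / w[1]
    intercept = -b / w[1]
    return slope, intercept
```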
On the one hand, the spectral information is not accurate.In Paliya et al. (2021), a part of the spectrum data is obtained by digitizing a plot from the historical literature.This will reduce the resolution of the emission lines, making the emission-line intensity become smaller.We found only one corresponding SDSS spectrum (B3 0920+416) for the six objects; it is shown in Figure 2(a).
Figure 1.(a) The redshift distribution of sources, where the black histogram represents the whole sample, the orange-red histogram represents BL Lacs, and the blue histogram represents FSRQs.(b) The Doppler factor distribution.(c) The logarithm of BLR luminosity distributions for 278 blazars (black histogram).The orange-red histogram is for BL Lacs, and the blue histogram is for FSRQs.(d) The correlation between the synchrotron peak frequency (log(ν p /Hz)) and the BLR luminosity (logL BLR ), where triangles stand for BL Lacs, circles for FSRQs, and stars for changing-look blazar candidates.The dotted line is a dividing line, and other straight lines correspond to the best-fitting results, the solid line to the whole sample (ALL), the dashed line to BL Lacs, and the dashed-dotted line to FSRQs.
Figure 2. The optical spectra of B3 0920+416 or 4FGL J0923.5+4125(left) and 4C +25.01 or 4FGL J0018.8+2611(right) modeled with QSOFITMORE.The spectral data are shown with the black line.Red and green lines represent broad and narrow components of the emission line, respectively, and the modeled continuum is plotted with the orange line.The blue line is the sum of all the components.Horizontal gray dashes at the top of the plots denote the line-free wavelength regions selected to model the continuum emission.The data are adopted from SDSS-DR16 for B3 0920+416 and LAMOST-DR8 for 4C +25.01.
Figure 3. (a) The broad-line luminosities for blazars with different synchrotron peak; the red line is for the high synchrotron peak (HSP) blazars, the yellow line is for the intermediate synchrotron peak (ISP) blazars, and the blue line is for the low synchrotron peak (LSP) blazars.(b) The broad-line luminosities for BL Lacs with different synchrotron peak; the red line is for the high synchrotron peak BL Lacs (HBLs), the yellow line is for the intermediate synchrotron peak BL Lacs (IBLs), and the blue line is for the low synchrotron peak BL Lacs (LBLs).
Figure 4.The left panel shows the γ-ray luminosity vs. the broad-line luminosity, and the right panel shows the γ-ray luminosity vs. the Doppler factor, where triangles stand for BL Lacs and circles for FSRQs.The straight lines correspond to the best-fitting results: the solid line to the whole sample (ALL), the dashed line to BL Lacs, and the dashed-dotted line to FSRQs.
Figure 5.The γ-ray luminosity vs. the broad-line luminosity, where triangles stand for BL Lacs, circles for FSRQs, and squares for 88 blazars with redshifts from SDSS spectra.The straight lines correspond to the best-fitting results: the dashed line to the 88 blazars, the solid line to the 891 sources excluding the 88 blazars with redshifts from SDSS spectra (979 -88).
Table 4
Linear Regression Fitting Results
ML methods have also been applied by Xiao et al. (2022b, 2022c, 2023) and Zhu et al. (2023). The SVM model can easily handle both linear and nonlinear classification problems by choosing different kernel functions. The model is relatively simple, especially in linearly separable cases, making the decision boundary intuitively interpretable. In this work, based on the sample distribution, we chose a linear kernel function and used cross-validation to determine the optimal penalty parameter (C = 1). We then obtained a dividing line with an accuracy rate of 90.73%, which is expressed as log L_BLR = 1.58 log ν_p − 19.46.
"Physics"
] |
Continuous treatment of paper mill effluent by electrocoagulation for holding time analysis
This paper presents the potential of continuous electrocoagulation for the treatment of paper mill effluent. The electrocoagulation experiments were designed on the basis of a batch study for holding time analysis. Process performance was assessed in terms of chemical oxygen demand, color, total dissolved solids, and total organic carbon. Continuous electrocoagulation experiments were performed at constant pH 7.45, conductivity 7.72 mS/cm, electrode distance 1.5 cm, and current density 10.35 mA/cm2, with the flow rate varied between 0.1 and 0.6 L/min. A flow rate of 0.2 L/min with 120 min of holding time was the optimum condition, at which the removal efficiencies were 82.5%, 90%, 92.5%, and 84% for COD, color, TDS, and TOC, respectively.
Introduction
Nowadays, paper mills use recycled fibers such as waste paper, office waste, and scrap newspapers as raw material for paper production. Every stage of production consumes water and chemicals and generates wastewater containing organic and inorganic pollutants, which must be reduced before the water can be reused or discharged into water bodies. Various treatment methods are available, such as biological treatment [1], adsorption [2], oxidation [3], wet oxidation [4], and electrochemical processes [5][6][7][8], but these technologies are usually uneconomical at high flow rates. In the past few decades, research has shown that electrocoagulation is a versatile treatment technique with the potential to treat wastewaters from olive mills, dairies, petroleum refineries, textile plants, and paper mills [9][10][11][12]. Most of the reported electrocoagulation studies on paper mill effluent used the batch mode of operation. Continuous operation provides better process control and lower process variability, whereas batch operation often suffers from blending and segregation [13][14]. In addition, an appropriately designed continuous system handles small portions of effluent in the minimum time and allows closer process monitoring, which is unfeasible for large-scale batch processes of similar throughput [15][16]. Regardless of these advantages, continuous operation also has its difficulties, and if not executed properly it can fail; maintaining a uniform flow throughout the operation, with consistent outlet quality, is one of the main challenges. In the wastewater treatment process, holding time analysis describes how the effluent flows inside the reactor in continuous mode and provides quality assurance and control. A few researchers have reported continuous electrocoagulation of various effluents [17] and of paper mill effluent [14,19,22]. These continuous electrocoagulation studies treated the electrolysis time and the flow rate as the variable parameters, but none of them used stainless steel electrodes for continuous electrocoagulation of paper mill wastewater. Moreover, continuous electrocoagulation permits an additional degree of flexibility, since the operator controls not only the flow parameters and the electrochemical reaction itself but also the dimensions and nature of the electrodes. Stainless steel was used as the electrode material for paper mill wastewater treatment because it is cheaper than other electrode materials to commercialize and use in large-scale plants. As shown in Table 1, stainless steel electrodes give better results at higher COD values. Stainless steel is also more corrosion resistant than aluminum or iron, making it a better choice than other metals. Stainless steel electrodes in a monopolar arrangement require a low voltage and a higher current, contrary to arrangements that operate under a high voltage and a lower current; the monopolar arrangement is preferred because it offers high pollutant removal with lower energy consumption. Thus, the objective of the present study was to assess the feasibility of continuous electrocoagulation for paper mill effluent. A previous batch-mode study showed satisfactory results for paper mill effluent.
The main concern of the present study is to measure the effect of holding time, governed by the inlet flow rate of the effluent, on parameters such as chemical oxygen demand (COD), color, total dissolved solids (TDS), and total organic carbon (TOC). (In Table 1, C denotes continuous and B denotes batch operation.)
Paper mill effluent collection and characterization
The paper mill effluent used in the present investigation was obtained from a recycled fiber-based paper mill located in Chhattisgarh, India. The characteristics of the effluent are presented in Table 2. Characterization of the effluent was performed as per the standard methods given in APHA [32]. All chemicals used in the present study were of analytical grade and acquired from Merck, India. The standard permissible discharge limits for paper mill wastewater are presented in Table 3 [33]. The electrodes measured 0.085 × 0.090 × 0.002 m, giving a total effective electrode area of 144 cm2, and their terminals were connected to a DC power supply (0–30 V, 0–5 A). A working volume of 1.5 L and stirring at 250 rpm were maintained throughout the EC experiments. Based on the satisfactory results obtained in the batch EC treatment of paper mill effluent, the continuous EC experiments presented here were carried out at the optimum operating conditions obtained from the batch experiments, namely pH 7.45, conductivity 7.72 mS/cm, electrode distance 1.5 cm, and current density 10.35 mA/cm2. The effect of flow rate in the range 0.1 to 0.6 L/min was determined on the basis of holding time analysis. Paper mill effluent was fed into the reactor from the bottom using a peristaltic pump, effluent samples were taken from the reactor at different intervals, and each experiment proceeded until steady-state concentrations were achieved. The performance of EC was evaluated in terms of percentage removal of COD, color, TDS, and TOC.
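The removal efficiencies reported below are computed in the usual way from inlet and outlet concentrations; a trivial sketch follows, with illustrative concentration values chosen only to reproduce the reported 82.5% COD removal (they are not the measured effluent concentrations).

```python
def removal_efficiency(c_initial, c_final):
    """Percentage removal, e.g. for COD, color, TDS or TOC (same units for both inputs)."""
    return 100.0 * (c_initial - c_final) / c_initial

# illustrative: a COD drop from 1480 mg/L to 259 mg/L corresponds to ~82.5 % removal
# removal_efficiency(1480, 259)
```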
Result and Discussion
Flow behavior changes with fluid properties: when the flow rate varies, the flow pattern of the paper mill effluent changes, directly affecting the removal efficiency of the organics present in the wastewater. The effect of flow rate variation is presented in this study with respect to holding time in the reactor and is evaluated in terms of percentage removal of COD, color, TDS, and TOC. The optimum operating conditions obtained from the batch experiments were kept constant throughout all continuous experiments. Holding time readings were taken up to 360 min at flow rates from 0.1 L/min to 0.6 L/min. Figure 2 shows the COD removal efficiency under the batch optimum conditions at effluent flow rates between 0.1 L/min and 0.6 L/min. In general, the removal efficiency increased with holding time. At a flow rate of 0.1 L/min the removal efficiency was lower than at 0.2 L/min, because 0.1 L/min did not provide enough mixing for rapid formation of metal hydroxide, which decreased the COD removal efficiency [26][27]. At 0.2 L/min, the organic pollutants had sufficient time in the reactor to form flocs with the metal oxides generated by anode dissolution. At a flow rate of 0.2 L/min in continuous electrocoagulation operation the holding time is maximized, and vice versa. Since the reactor volume was constant at 1.5 L, increasing the inlet flow rate above 0.2 L/min reduced the holding time of the wastewater in the reactor; as the flow rate increased from 0.2 to 0.6 L/min, the shorter holding time led to a decrease in COD removal efficiency, as shown in Figure 2. The maximum COD removal of 82.5% was obtained at a 0.2 L/min flow rate with 120 min of holding time; beyond this, COD removal reached a steady state and no notable further increase was observed.
Effect on Color
Color is generally the first contaminant to be noticed in paper mill effluent; it affects the aesthetics, water clarity, and oxygen level of receiving water bodies. Paper mill effluent is contaminated with lignins, lignin degradation products, and humic acids [26]; these contaminants make the effluent stream dark-colored and are often referred to as color bodies. The use of different types of dyes for printing and coloring paper makes the effluent color even more problematic [23]. Since a pulp mill produces large quantities of this densely colored effluent, its discharge into adjacent streams and water bodies can cause objectionable discoloration of the water. The color removal in the continuous electrocoagulation study is presented in Figure 3 as a function of flow rate. As for COD removal, a flow rate of 0.2 L/min was optimum. The effluent flow rate affects the removal efficiency because rapid mixing is required in the initial stage of the EC process to grow larger flocs and precipitate them from solution. Color removal was 90% at a flow rate of 0.2 L/min. Increasing the flow rate above 0.2 L/min hindered the reactor, destroying the flocs, and the electrostatic forces between the particles were no longer sufficient to form metal hydroxide. At higher flow rates the holding time of the formed flocs was reduced, leading to lower color removal.
Effect on TDS
Total dissolved solids (TDS) is a measure of the combined organic and inorganic substances dissolved in the wastewater. The solubilization of most of the chlorolignin compounds present in paper mill effluent results in high TDS contamination [23]. TDS removal was investigated at flow rates from 0.1 L/min to 0.6 L/min, with holding time observed up to 360 min. Figure 4 shows that a flow rate of 0.2 L/min gave the maximum TDS removal efficiency of 92.5% at 120 min of holding time. The decrease at flow rates above 0.2 L/min can be attributed to the insoluble metal hydroxide failing to trap solid particles, because rapid mixing breaks up the forming sludge; the predominantly amorphous hydroxides therefore remove less as the flow rate increases, since the aggregation of small particles into flocs persists only at slower flow rates. A flow rate of 0.1 L/min did not give better results than 0.2 L/min because a flow rate below the optimum cannot provide proper mixing in the reactor, which weakens the interaction between the dissolved particles and the metal hydroxides, so the flocs take longer to form and settle as sludge.
Effect on TOC
Total organic carbon (TOC) has become an important parameter for monitoring the overall level of organic compounds in wastewater. Compared with COD analysis, TOC detects carbon at low contamination levels, allowing precise and accurate analysis of water purity. TOC is a potential alternative to the COD test and has the advantage of being both rapid and possibly more accurate; although it has long been considered a potential replacement for the COD test, increasingly precise measurement techniques have made it considerably more feasible and accurate, particularly for the more complex compounds present in wastewater. The treatment at different flow rates and holding times was therefore also assessed in terms of TOC removal efficiency, with the flow rate varied between 0.1 and 0.6 L/min and the holding time observed up to 360 min, as presented in Figure 5. For TOC, a flow rate of 0.3 L/min with 180 min of holding time gave better removal than 0.2 L/min with 120 min. Thus, if COD, color, TDS, and TOC are taken as target responses together, 0.2 L/min with 120 min of holding time is taken as the optimum condition, because all target responses are then close to their optima and TOC removal is 84%. If TOC were the only response considered, 0.3 L/min with 180 min of holding time would give the maximum removal of 88%, but the higher flow rate and longer holding time increase the cost of the process for only a 4% gain, so 0.2 L/min with 120 min of holding time was adopted as the optimum condition. The different behavior of TOC arises because the higher flow rate creates turbulence in the continuous-flow reactor, which aids removal of the carbon present in the wastewater but shortens the residence of the wastewater in the reactor, so that roughly 60 min of additional holding time is needed to reach the maximum removal of the carbon present.
Conclusion
Based on the presented study, it can be concluded that the continuous electrocoagulation technique shows the precision and the potential to be scaled up for industrial paper mill effluent treatment. Electrocoagulation was found to be an effective technology for the removal of multiple contaminants in one pass. In addition, an appropriately designed continuous system handles small portions of effluent in the minimum time and allows closer process monitoring. Process performance was assessed in terms of chemical oxygen demand, color, total dissolved solids, and total organic carbon. A flow rate of 0.2 L/min with 120 min of holding time was the optimum condition, at which the removal efficiencies were 82.5%, 90%, 92.5%, and 84% for COD, color, TDS, and TOC, respectively.
"Engineering"
] |
Flaky Tail Mouse as a Novel Animal Model of Atopic Dermatitis: Possible Roles of Filaggrin in the Development of Atopic Dermatitis
Understanding of human diseases has been enormously expanded by the use of animal models, because they allow in-depth investigation of pathogenesis and provide invaluable tools for diagnostic and pharmaceutical purposes. Atopic dermatitis (AD) is a chronic, relapsing form of skin inflammation, a disturbance of epidermal-barrier function that culminates in dry skin, pruritus, and IgE-mediated sensitization to food and environmental allergens (Bieber, 2008; Mori et al., 2010; Tokura, 2010). AD is a common disease with no satisfactory form of therapy; therefore, understanding the mechanism of AD through animal models is an urgent issue (Jin et al., 2009; Matsuda et al., 1997; Shiohara et al., 2004). The complexity and variability of AD and the multiple genetic and environmental factors underlying AD make creating a reproducible, accessible, and relevant animal model of AD particularly challenging (Scharschmidt & Segre, 2008). Existing models fall into three categories: (1) mice sensitized epicutaneously with allergens or haptens; (2) transgenic mice that either overexpress or lack selective molecules; and (3) mice that spontaneously develop AD-like skin lesions. These models display many features of human AD, and their study has resulted in a better understanding of the pathogenesis of AD. They allow an in-depth dissection of the mediators and cells that are critical for the development of allergic responses (Jin et al., 2009). Located at the interface between the body and the environment, the epidermis is an elaborate structure that shares few properties with other biological barriers. Key functions include providing physical and biochemical protection (O'Regan & Irvine, 2010), and playing important roles in host defense, inflammation, and regulation of immune responses (Schleimer et al., 2007). Patients with AD exhibit impaired skin barrier function and abnormal structure and chemistry of the stratum corneum (SC) (Leung & Bieber, 2003). Alteration of the skin barrier in AD is evidenced by a reduction in the water content of the SC and by increased transepidermal water loss (TEWL) (Aioi et al., 2001). Skin barrier dysfunction has emerged as a critical driving force in the initiation and exacerbation of AD and as a "driver" of disease activity (Cork et al., 2009; Elias et al., 2008), although AD was once regarded as a disease of immunological etiology (Leung & Bieber, 2003). Elias et al. proposed the outside-to-inside pathogenic mechanism in AD for the following reasons: (1) the extent of the permeability barrier abnormality parallels the severity of the
disease phenotype in AD; (2) both the clinically uninvolved skin sites and the skin cleared of inflammation continue to display significant barrier abnormalities; (3) emollient therapy comprises effective ancillary therapy; and (4) specific replacement therapy, which targets the prominent lipid abnormalities that account for the barrier abnormality in AD, not only corrects the permeability-barrier abnormality but also comprises an effective anti-inflammatory therapy for AD (Elias, et al., 2008). The evidence for a primary structural abnormality of the SC in AD is derived from a recently discovered link between the incidence of AD and loss-of-function mutations in the gene encoding filaggrin (FLG). Genetic studies have shown a strong association between AD and this mutation (Jin, et al., 2009). Moreover, there is a dose-response relationship between FLG deficiency and disease severity, such that patients with double-allele or compound heterozygote mutations in FLG display more severe and earlier-onset AD and an increased propensity for AD to persist into adulthood (Brown, et al., 2008, Irvine & McLean, 2006). This rapidly growing body of work has led to a paradigm shift in the conception of AD pathogenesis, with increasing weight being placed on the role of a primary barrier abnormality that then precipitates downstream immunologic abnormalities, as proposed (Elias, et al., 2008). Based on these findings, it is assumed that mice that have a genetic defect in barrier function will provide a model of AD closer to the human disease than models provided by epidermal sensitization with allergens or haptens, by transgenic overexpression of cytokines in the skin, or by disruption of immune genes, and that these mice will have an advantage over NC/Nga mice, in which the genetic defect is not known. Application of the knowledge gained from existing mouse models of AD to mice with genetic defects in skin barrier function should provide us with AD models that closely mimic human disease (Jin, et al., 2009).
Filaggrin and atopic dermatitis
Filaggrin mutation and atopic dermatitis
Filaggrin protein is localized in the granular layers of the epidermis. Profilaggrin, a 400-kDa polyprotein, is the main component of keratohyalin granules (Candi, et al., 2005, Listwan & Rothnagel, 2004). In the differentiation of keratinocytes, profilaggrin is dephosphorylated and cleaved into 10-12 essentially identical 27-kDa filaggrin molecules, which aggregate in the keratin cytoskeleton system to form a dense protein-lipid matrix in humans (Candi, et al., 2005). This structure is thought to prevent epidermal water loss and impede the entry of external stimuli, such as allergens, toxic chemicals, and infectious organisms. Therefore, filaggrin is a key protein in the terminal differentiation of the epidermis and in skin-barrier function (Gan, et al., 1990). The genetic contribution of FLG loss-of-function mutations to AD is now well established. FLG mutation was first identified in ichthyosis vulgaris (IV), a common keratinizing disorder (Irvine & McLean, 2006). In 2006, Palmer et al. first identified two such mutations within the FLG gene, which strongly predispose to AD as well as IV (Palmer, et al., 2006). Since then, several additional studies have confirmed this association and discovered other mutations within this gene that predispose to AD. To date, approximately 40 loss-of-function FLG mutations have been identified in IV and/or AD in European and Asian populations (Brown, et al., 2008, Marenholz, et al., 2006, Nomura, et al., 2007, Rodriguez, et al., 2009, Sandilands, et al., 2006, Sandilands, et al., 2007). Major differences exist in the spectra of FLG mutations observed between different ancestral groups, and each population is likely to have a unique set of FLG mutations (Osawa, et al., 2011). Typically, atopic manifestations follow a certain sequence, called the atopic march, beginning with AD in early infancy, followed by food allergy, asthma and the development of allergic rhinitis (Illi, et al., 2004). The association of FLG mutation with the atopic march has been reported in cases involving pediatric asthma (Muller, et al., 2009), peanut allergy (Brown, et al., 2011), atopic asthma (Poninska, et al., 2011), allergic rhinitis (Poninska, et al., 2011) and nickel allergy (Novak, et al., 2008). In addition, epidemiological studies have identified an extremely significant statistical association between FLG mutation and AD. Intriguingly, these mutations are highly associated with several characteristics in AD patients, such as reduced levels of natural moisturizing factor (NMF) in the SC (Kezic, et al., 2008), increased incidence of dry and sensitive skin (Sergeant, et al., 2009), clinical severity and barrier impairment (Nemoto-Hasebe, et al., 2009), allergen sensitization and subsequent development of asthma associated with eczema (Weidinger, et al., 2008), and serum levels of IgE (Wang, et al., 2011). On the other hand, several studies failed to identify an effect of FLG mutations on AD, for example on skin conditions assessed by clinical scoring of AD and measurement of TEWL in a French population (Hubiche, et al., 2007). A similar lack of association was reported in contact allergy (Carlsen, et al., 2011) and pediatric eczema (O'Regan, et al., 2010). As the conceptual framework underlying AD moves from solely immunological to epidermal barrier defects, the role of filaggrin and its putative mechanisms in priming AD have come under closer scrutiny. FLG mutations are postulated to have wide-ranging downstream biological effects, which include altered pH of the SC, altered cutaneous microflora and aberrant innate and
adaptive immune responses (O'Regan & Irvine, 2010).
Filaggrin and altered skin barrier function
AD is characterized by eczematous skin lesions, dry skin, pruritus, increased TEWL, and enhanced percutaneous penetration of both lipophilic and hydrophilic compounds (Jakasa, et al., 2011, Wollenberg & Bieber, 2000). The skin barrier defect is one of the primary events that initiate disease pathogenesis, allowing the entrance of numerous antigens into the epidermis in patients with AD (Onoue, et al., 2009, Osawa, et al., 2011). FLG mutation carriers have demonstrated elevated TEWL (Jungersted, et al., 2010, Kezic, et al., 2008), basal erythema, changes in skin hydration, increased skin pH (Jungersted, et al., 2010, Nemoto-Hasebe, et al., 2009), changes in SC thickness (Nemoto-Hasebe, et al., 2009), impaired SC integrity upon repeated tape stripping (Angelova-Fischer, et al., 2011), and increased diffusivity of PEG 370 (Jakasa, et al., 2011) compared to healthy donors. Nevertheless, these alterations found in FLG mutation carriers are not consistently correlated with AD, since AD patients without FLG mutation may also share some similar features (Hubiche, et al., 2007, Jakasa, et al., 2011, Jungersted, et al., 2010, Kezic, et al., 2008). It is, therefore, suggested that other factors besides FLG loss-of-function mutations modulate skin barrier integrity, especially in AD. Since the skin barrier is related to the intercellular lipid bilayers of the SC, it might be interesting to examine the composition and the organization of the intercellular lipids of the SC in AD patients in relation to FLG genotype and disease severity (Jakasa, et al., 2011). Carriers of FLG mutations showed significantly reduced levels of NMF in the SC (Kezic, et al., 2008). A similar lipid composition was observed in FLG mutation carriers and individuals with normal filaggrin (Angelova-Fischer, et al., 2011, Jungersted, et al., 2010), but a lower ceramide/cholesterol ratio was detected in the former group (Angelova-Fischer, et al., 2011). Filaggrin is proteolytically degraded into a pool of free amino acids including histidine and glutamine, which are further converted to, respectively, urocanic acid (UCA) and 2-pyrrolidone-5-carboxylic acid (PCA). The concentrations of UCA and PCA in the SC of carriers of FLG mutations were significantly lower than those in healthy donors (Kezic, et al., 2009). Therefore, filaggrin deficiency is sufficient to impair epidermal barrier formation. An in vitro experiment using filaggrin-knockdown human organotypic skin cultures showed enhanced penetration of the hydrophilic dye Lucifer yellow, smaller lamellar bodies, and deficiency of their typical lamellae without altered lipid composition (Mildner, et al., 2010). In addition, UCA, one of the filaggrin-derived free amino acids and an important UV absorbent within the SC, was decreased following filaggrin knockdown, leading to increased sensitivity to UVB-induced keratinocyte (KC) damage (Mildner, et al., 2010).
Filaggrin and altered immunobiology
The SC serves as a biosensor of the external environment and a link between the innate and adaptive immune systems (Vroling, et al., 2008). The critical association between the abnormal barrier in AD and Th2 polarization may in part be explained by the production of the cytokine thymic stromal lymphopoietin (TSLP) (Ebner, et al., 2007). TSLP is expressed by epithelial cells, with the highest levels seen in lung-derived and skin-derived epithelial cells (Soumelis, et al., 2002, Ziegler, 2010), and is highly detected in the lesional skin of AD (Soumelis, et al., 2002). Inducible expression of a TSLP transgene specifically in the skin leads to the development of a spontaneous Th2-type skin inflammatory disease with the hallmark features of AD (Yoo, et al., 2005). TSLP has been shown to activate dendritic cells to drive Th2 polarization, through upregulation of the co-stimulatory molecules CD40, CD80, and OX40L, triggering the differentiation of allergen-specific naïve CD4+ T cells into Th2 cells that produce IL-4, IL-5, and IL-13 (Ebner, et al., 2007, Soumelis, et al., 2002). Patients with Netherton syndrome (NS), a severe ichthyosis in which affected individuals experience a significant predisposition for AD, have elevated levels of TSLP in their skin. Upregulated kallikrein (KLK) 5 in the skin of NS patients directly activates proteinase-activated receptor 2 (PAR-2) and induces nuclear factor kappaB-mediated overexpression of TSLP, intercellular adhesion molecule 1, TNF-α, and IL-8. This phenomenon occurs independently of the environment, the adaptive immune system and the underlying epithelial barrier defect (Briot, et al., 2009, Briot, et al., 2010). An in vitro study using the human keratinocyte cell line HaCaT and reconstituted human epidermal layers transfected with filaggrin siRNA showed increased production of TSLP via toll-like receptor (TLR) 3 stimulation (Lee, et al., 2011). These findings suggest that reduced filaggrin levels may influence the innate immune response via TLR stimuli and elevate TSLP, leading to AD-like skin lesions. AD is one of the emerging diseases in which epidermal dysfunction increases allergen and microbial penetration in the skin, with the consequent development of adaptive Th2 immune responses (Kondo, et al., 1998) within regional lymphoid tissue. The resultant Th2 cells may then home back to the skin or lungs, where they recognize allergen in the skin (McPherson, et al., 2010), which leads to local Th2 inflammation, reduced antimicrobial peptide expression (Nomura, et al., 2003), and filaggrin downregulation (Howell, et al., 2007). Indeed, the induction of circulating allergen-specific CD4+ T cells may be an important prerequisite underlying the pathogenesis of the atopic march (O'Regan, et al., 2009). Among moderate-to-severe AD patients, FLG mutation carriers showed a greater number of house dust mite Der p1-specific IL-4-producing CD4+ T cells, suggesting that filaggrin mutations predispose to the development of allergen-specific CD4+ Th2 cells. The same result could be seen among HLA-DRB1*1501 (an HLA class II complex which is immunodominant in individuals with AD (Ardern-Jones, et al., 2007)) positive adult individuals with moderate-to-severe AD and FLG mutations (McPherson, et al., 2010).
Origin of flaky tail mice
The above findings indicate the involvement of filaggrin in the development of AD. Therefore, the impact of filaggrin deficiency on cutaneous biological functions in vivo should be analyzed in detail. To address this issue, animal models are of great value. Flaky tail mice (Flg ft), first introduced in 1958, are spontaneously mutated mice with smaller ears, tail constrictions, and a flaking tail skin appearance (Lane, 1972). Flg ft mice were outcrossed onto B6 mice at Jackson Laboratory (Bar Harbor, ME, USA) (Lane, 1972, Presland, et al., 2000) (note: although this strain was crossed with B6, it is not a B6 congenic strain but rather a hybrid stock that is probably semi-inbred). Homozygous Flg ft mice have dry, flaky skin which expresses reduced amounts of profilaggrin mRNA and an abnormal profilaggrin protein that is not processed to filaggrin monomers (Fallon, et al., 2009, Presland, et al., 2000). Recently, it has been revealed that the gene responsible for the characteristic phenotype of Flg ft mice carries a single nucleotide deletion at position 5303 in exon 3 (5303delA) of the profilaggrin gene, resulting in a frameshift mutation and premature truncation of the predicted protein product. The copy number of the filaggrin repeat contained within this gene varies depending on the background strain; this mutation occurs in an allele with 16 copies of the filaggrin repeat (Fallon, et al., 2009). The Flg ft mouse carries a double gene mutation, Flg and matted (ma), in which the mutated genes are located in close linkage to one another (Lane, 1972). The ma gene, as reported by Searle & Spearman (1957), causes the body hair of affected mice to be brittle and inflexible, which results in longitudinal splitting and breaking due to friction against the cage and other objects. This mutation is a fully penetrant recessive house-mouse mutation which belongs to the "naked" category (i.e., a house mouse with baldness resulting from the breaking of hairs or from hereditary hairlessness). This mutation can be identified morphologically by (1) erection of hairs, (2) matting of hair in clumps, (3) a tendency towards baldness, and (4) a change from black- to brown-colored melanin in old hairs. The age at which this mutant is first identified based on external appearance varies from two to four weeks (Jarret A, 1957, Searle A.G., 1957). Recognition of the features of this mouse is more evident between 5 and 14 days of age, when constricted, flaking tail skin and thickened, short pinnae of the ears are observed. In addition, Flg ft mice are often smaller than their normal siblings at this age. Routine histological sections stained with hematoxylin and eosin showed that the stratum granulosum in Flg ft mice at 1, 2, 4, and 8 days of age does not contain as many granular layers as that of non-Flg ft mice (Lane, 1972). Mice of the Flg ft genotype express an abnormal profilaggrin polypeptide that does not form normal keratohyalin F-granules and is not proteolytically processed to filaggrin. Therefore, filaggrin is absent from the cornified layers in the epidermis of the Flg ft mouse (Fallon, et al., 2009, Presland, et al., 2000, Scharschmidt, et al., 2009). Consistently, we and others have described that Flg ft mice express a truncated and smaller profilaggrin protein that is not processed to filaggrin (Fallon, et al., 2009, Moniaga, et al., 2010, Presland, et al., 2000) (Fig. 1).
Fig. 1. The Flg ft mouse has a truncated and smaller profilaggrin and lacks filaggrin protein.
Flaky tail mouse and ichthyosis vulgaris
Ichthyosis vulgaris (IV) is a heterogeneous autosomal skin disease characterized by dry and scaly skin, mild hyperkeratosis, and a decreased or absent granular layer that either lacks, or contains morphologically abnormal, keratohyalin granules (Manabe, et al., 1991). Several lines of evidence point to a defect in the gene encoding FLG in IV. Immunoblotting studies showed that filaggrin protein was absent or markedly reduced in the epidermis of individuals with IV (Fleckman, et al., 1987, Sybert, et al., 1985). In line with this, it was proposed that Flg ft mice could provide insight into the molecular basis of the filaggrin-deficient human skin disorder IV. The epithelia of Flg ft mice showed defects in tissue organization, especially in the tail, an attenuated granular layer, reduced profilaggrin and a lack of filaggrin granules in the SC. In addition, keratinocyte cultures from Flg ft mice synthesized reduced amounts of profilaggrin mRNA and protein (Presland, et al., 2000).
Flaky tail mouse in a steady state
An early report demonstrated that Flg ft mice without the ma mutation showed flaky skin as early as postnatal day 2, but became normal in appearance by 3 to 4 weeks of age without spontaneous dermatitis, except for their slightly smaller ears (Lane, 1972). Later, the lack of filaggrin in the epidermis was reported in the commercially available strain of Flg ft mice, which has both the Flg and ma mutations, as a model of IV, and therefore there was no discussion of the cutaneous inflammatory conditions from the perspective of AD (Presland, et al., 2000). There have been four recent papers on Flg ft mice as a model of filaggrin deficiency: the first used Flg ft mice from which the ma mutation had been eliminated with four additional backcrosses to B6 mice (Fallon, et al., 2009), and the others used the commercially available Flg ft mice (Moniaga, et al., 2010, Oyoshi, et al., 2009, Scharschmidt, et al., 2009). The first report showed only histological abnormality without clinical manifestation (Fallon, et al., 2009), the second demonstrated spontaneous eczematous skin lesions after 28 weeks of age (Oyoshi, et al., 2009), and the third contained no notice of any spontaneous dermatitis in Flg ft mice (Scharschmidt, et al., 2009). The fourth paper, by Moniaga et al., demonstrated that Flg ft mice show spontaneous dermatitis with skin lesions mimicking human AD as early as 5 weeks of age, with mild erythema and fine scales, and that the cutaneous manifestations advance with age in the steady state under SPF conditions (Moniaga, et al., 2010) (Fig. 2). The first manifestations to appear when the mice were young were erythema and fine scaling; pruritic activity, erosion, and edema followed later (Fig. 3). In contrast, no cutaneous manifestation was observed in either C57BL/6 mice, studied as a control, or heterozygous mice intercrossed between Flg ft and B6 mice kept under SPF conditions. There was no apparent difference in clinical manifestations based on the gender of Flg ft mice throughout the period (Moniaga, et al., 2010). Histological examination of the skin of Flg ft mice stained with H&E revealed epidermal acanthosis, increased lymphocyte and mast cell infiltration and dense fibrous bundles in the dermis, in both younger (8-week-old) and older (18-week-old) Flg ft mice; none of these conditions were observed in B6 mice (Fig. 4) (Moniaga, et al., 2010). These features were also reported in other studies (Fallon, et al., 2009, Oyoshi, et al., 2009), with more total cells, lymphocytes, eosinophils, and mononuclear cells in Flg ft mice compared to control mice. These data support the diagnosis of AD-like dermatitis in Flg ft mice in the steady state under SPF conditions. Nevertheless, there exist discrepancies among the results of the four recent papers on the cutaneous manifestations in the steady state. These seem to be related to the presence or absence of the ma mutation and/or variation in the genetic backgrounds of the different strains used, and to environmental factors. It has been reported that Japan carries a higher morbidity of AD than other countries (Williams, et al., 1999), possibly due to environmental factors such as pollen. Because barrier dysfunction is a common characteristic of AD (Elias, et al., 2008, Nomura, et al., 2007, Palmer, et al., 2006), TEWL is commonly measured as an indicator of barrier function (Gupta, et al., 2008). TEWL was significantly higher in Flg ft mice than in B6 mice from an early age (4 weeks) to an older age (16 weeks) (Fig. 5)
(Moniaga, et al., 2010). Flow cytometry analysis of cells isolated from ear skin confirmed that Flg ft skin contained significantly increased percentages of CD4+ T cells and Gr-1+ neutrophils, but not CD11c+ dendritic cells, compared with ear skin from controls (Moniaga, et al., 2010, Oyoshi, et al., 2009). The extent of severity of AD is known to be correlated with elevated serum IgE levels (Novak, 2009). Serum IgE and IgG1 levels in Flg ft mice were significantly higher than those in control mice in the steady state under SPF conditions (Moniaga, et al., 2010, Oyoshi, et al., 2009). In addition, the numbers of CD4+ and CD8+ cells in the skin-draining LNs of Flg ft mice were significantly higher than those in control mice, but those of the spleen were similar for both groups. Thus, an enhanced cutaneous immune reaction seems to be induced in Flg ft mice due to the condition of their skin caused by filaggrin and/or matted deficiency. AD is thought to be mediated by helper T cell subsets, such as Th1, Th2, and Th17 (Bieber, 2008, Hattori, et al., 2010, Koga, et al., 2008). In the steady state, the skin of Flg ft mice showed no difference in the Th1 cytokine IFN-γ or the Th2 cytokines IL-4 and IL-13 compared to the control. In contrast, there is a significant increase in mRNA expression of the Th17 cytokine IL-17, the IL-17-promoting cytokines IL-6 and IL-23 (p19), and the IL-17-inducible neutrophil attractant chemokine CXCL2 in Flg ft mice (Moniaga, et al., 2010, Oyoshi, et al., 2009).
Flaky tail mouse showed enhanced percutaneous allergen priming
Since barrier dysfunction is a key element in the establishment of AD, it is necessary to evaluate outside-to-inside barrier function from the perspective of invasion by external stimuli. Scharschmidt et al. reported increased bidirectional paracellular permeability to water-soluble xenobiotics by ultrastructural visualization in Flg ft mice, suggesting a defect in the outside-to-inside barrier. The ultrastructural visualization of tracer perfusion was analyzed with the water-soluble, low-molecular-weight, electron-dense tracer lanthanum nitrate or the fluorophore calcium green, which showed enhanced penetration in Flg ft mice. The data demonstrated that filaggrin deficiency leads to alterations in basal barrier function through a defect in the SC extracellular matrix and greater permeability through the same paracellular pathway that is used by water itself when exiting the skin (Scharschmidt, et al., 2009). A new method for evaluating outside-to-inside barrier function quantitatively, by measuring the penetrance of fluorescein isothiocyanate isomer 1 (FITC) through the skin, has been developed (Moniaga, et al., 2010). The epidermis of Flg ft mice contained a higher amount of FITC than that of B6 mice (Fig. 6, left panel). Consistently, observation of fluorescence intensities in the epidermis of both mice showed stronger fluorescence in Flg ft mice (Fig. 6, right panel). In addition, the Flg ft embryo was entirely permeable to toluidine blue dye compared to its control littermate. Another AD-like dermatitis model, testing allergen priming of the skin in these mice, was performed by application of ovalbumin (OVA) (Oyoshi, et al., 2009). Non-tape-stripped skin of Flg ft mice exposed to OVA exhibited significantly increased epidermal thickening, hyperkeratosis, spongiosis, acanthosis, and cellular infiltrates, as well as increased TEWL, compared to control mice. mRNA levels of IL-17, IL-6, IL-23, IL-4, IFN-γ and CXCL2, but not IL-5 and IL-13, in the skin of Flg ft mice after OVA exposure were significantly higher than those of control mice. The systemic immune response following cutaneous exposure revealed increased OVA-specific IgG and IgE, and splenocytes proliferated and produced OVA-specific Th1, Th2, Th17 and regulatory T cell cytokines (Fallon, et al., 2009, Oyoshi, et al., 2009). These findings demonstrate that Flg ft mice tend to generate allergen-specific IgE and cytokines following cutaneous allergen challenge even without additional barrier disruption.
Altered immunobiological responses in the flaky tail mouse
The skin abnormality associated with AD is well known to be a predisposing factor for sensitive skin (Farage, et al., 2006, Willis, et al., 2001) and allergic contact dermatitis (Clayton, et al., 2006, Mailhol, et al., 2009). However, children with atopic dermatitis had a lower PPD induration size compared to healthy donors, although this was not statistically significant (Gruber, et al., 2001, Yilmaz, et al., 2000). In humans, sensitive skin is defined as reduced tolerance to cutaneous stimulation, with symptoms ranging from visible signs of irritation to subjective neurosensory discomfort (Farage, et al., 2006, Willis, et al., 2001). The question of whether human AD patients are more prone to allergic contact dermatitis than nonatopic individuals is still controversial (Mailhol, et al., 2009). Using phorbol myristate acetate (PMA) as an irritant, Flg ft mice exhibited an enhanced ear swelling response compared to age-matched B6 mice throughout the experimental period (1 hr to 140 hrs). In addition, Flg ft mice showed an increased skin-sensitized contact hypersensitivity (CHS) reaction to hapten, a form of classic Th1- and Tc1-mediated delayed-type hypersensitivity to haptens that is marked by increased IFN-γ production and terminated by regulatory T cells (Honda, et al., 2010, Mori, et al., 2008, Wang, et al., 2001). CHS is induced by epicutaneous sensitization and challenge. The ear thickness change was more prominent in Flg ft mice than in B6 mice, and the relative amount of IFN-γ in the ear of Flg ft mice was higher than that of B6 mice. To further assess the immune responses of Flg ft mice, we elicited a delayed-type hypersensitivity (DTH) response through non-epicutaneous sensitization and challenge. Mice were immunized intraperitoneally with OVA and challenged with a subcutaneous injection of OVA into the footpad. In contrast to the CHS response induced epicutaneously, the resulting footpad swelling in Flg ft mice tended to be lower than that in wild-type mice. This finding is consistent with the observations on tuberculin tests in humans. The levels of IFN-γ in the spleen were comparable between Flg ft mice and wild-type mice. Thus, Th1/Tc1 immune responses were enhanced in Flg ft mice only when the stimuli operated via the skin, suggesting that the enhanced immune responses seen in Flg ft mice depend on skin barrier dysfunction and that skin barrier function regulates cutaneous immune conditions, which hints at a possible mechanism involved in human AD. A reduced threshold for contact dermatitis in Flg ft mice was also reported. These mice showed an enhanced propensity to irritant contact dermatitis induced by low-dose phorbol ester TPA, which provokes only marginal inflammation in wild-type mice, and displayed a reduced threshold for the development of hapten-induced acute allergic contact dermatitis by oxazolone (Ox). Repeated Ox challenges with lower doses of Ox revealed AD-like dermatitis in Flg ft mice, as shown by a severe barrier abnormality (enhanced TEWL) and AD-like histological changes (Scharschmidt, et al., 2009).
Flaky tail mouse mimics human AD
Clinical studies have provided evidence that house dust mite allergens play a causative or exacerbating role in human AD (Kimura, et al., 1998), and that a strong correlation exists between FLG mutations and house dust mite-specific IgE (Henderson, et al., 2008). Dermatophagoides pteronyssinus (Dp) is a common mite aeroallergen which is frequently involved in inducing human AD. Dp exhibits protease activities, and Der p1, Der p3, and Der p9, derived from Dp, are especially capable of activating PAR-2 in human KCs (Jeong, et al., 2008, Vasilopoulos, et al., 2007). A recent report has shown that activation of PAR-2 through Dp application significantly delays the barrier recovery rate in barrier function-perturbed or otherwise compromised skin (Jeong, et al., 2008). Therefore, Dp may play a dual role in the onset of AD, both as an allergen and proteolytic signal and as a perturbing factor of barrier function, leading to the persistence of eczematous skin lesions in AD (Jeong, et al., 2008, Roelandt, et al., 2008). It has also been reported that BALB/c and NC/Nga mice develop an allergic cutaneous immune response to mite antigens when they are applied to the skin after vigorous barrier disruption by means of tape-stripping or sodium dodecyl sulfate treatment (Kang, et al., 2006, Yamamoto, et al., 2007). Intriguingly, the application of Dp ointment to the skin without additional barrier disruption induced dermatitis in Flg ft mice, while this treatment did not induce any skin inflammation in control C57BL/6 mice (Fig. 7). Petrolatum alone, used instead of Dp ointment as a control, induced no skin manifestation (Fig. 7). Histological examination of H&E-stained sections of involved Flg ft skin after 16 applications showed acanthosis, elongation of rete ridges, and dense lymphocyte and neutrophil infiltration in the dermis, accompanied by an increased number of mast cells in the dermis. Consistently, scratching behavior, TEWL, and Dp-specific IgE levels were significantly higher in Flg ft mice than in B6 mice (Fig. 8) (Moniaga, et al., 2010). Thus, the treatment of Flg ft mice with Dp ointment, even without prior barrier disruption, remarkably enhanced both the clinical manifestations and the laboratory findings that correspond to indicators of human AD.
Summary and future direction
We have summarized the findings on Flg ft mice revealed by four different groups (Table 1). While most of these findings were consistent with each other, several issues remain to be solved, for example the influence of the genetic background and of other gene mutations in these mice. Since Flg ft mice do not have a homogeneous C57BL/6 background, the two papers reporting spontaneous eczematous skin lesions in Flg ft mice compared their outcomes with other mouse strains, such as C57BL/6 and BALB/c mice, as controls (Oyoshi, et al., 2009); these two strains lie on opposite ends of the spectrum of T helper responses. Nevertheless, the skin inflammation and susceptibility to EC sensitization of non-tape-stripped skin seen in Flg ft mice were not observed in the other strains. In the second paper, immune responses were observed in mice of other genotypes, such as BALB/c and C3H, as controls, but both of these lines exhibited much less severe CHS responses compared to Flg ft mice (Moniaga, et al., 2010). These data suggest that the enhanced responses seen in Flg ft mice are not solely due to their genetic background. In addition, another study used Flg ft mice which had been backcrossed four generations to the B6 strain (the background coding sequence showed 99.3% identity between B6 and Flg ft), and similar enhanced responses in OVA-induced AD models were observed (Fallon, et al., 2009). Furthermore, unlike human AD patients, most of whom are heterozygous for the FLG mutation, the heterozygous mice intercrossed between Flg ft mice and B6 mice did not develop spontaneous dermatitis (Moniaga, et al., 2010). Similar results were obtained with the OVA-induced AD model, where homozygous, but not heterozygous (crossed with B6 mice), Flg ft mice showed enhanced susceptibility to cutaneous exposure to OVA (Fallon, et al., 2009). Not only human studies but also additional mouse studies will be required to clarify these relationships. Since Flg ft mice express a hair phenotype (matted), one cannot eliminate the possibility that some of the observations could have been influenced by the concurrent ma mutation (Scharschmidt, et al., 2009). Nevertheless, one study indeed removed the matted hair allele (ma) early in the course of backcrossing with B6 mice, and showed enhanced antigen (OVA) ingress in mice with the same Flg mutation but no ma mutation in their background (Fallon, et al., 2009). The effect of the ma mutation, in relation to the Flg mutation, on the development of AD-like skin lesions in commercially available Flg ft mice needs to be clarified in future studies.
Fig. 2. Clinical photographs of 20-week-old Flg ft mice (left panel) and total clinical severity scores (right panel).
Fig. 6. Amount of FITC in the skin of B6 and Flg ft mice (left panel) and fluorescence intensities of FITC in the skin (right panel) after topical application.
Fig. 7. The mite-induced dermatitis model showed severe eczematous skin lesions after topical treatment with Dp ointment in Flg ft mice, together with the change in ear thickness.
Fig. 8. TEWL and mite-specific serum IgE levels of Flg ft mice and control mice after the last application.
Table 1. Summary of the phenotypes of flaky tail mice | 7,641.2 | 2012-02-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
Assessment of Innovative Architectures, Challenges and Solutions of Edge Intelligence
– Data collecting, caching, analysis, and processing in close proximity to where the data is collected, across a group of linked devices and systems, is referred to as "edge intelligence." Edge intelligence aims to improve data processing quality and speed while also safeguarding the data's privacy and security. Although this area of study dates only from 2011, it has shown tremendous development in the last five years. This paper provides a survey of the architectures of edge intelligence (1) Data Placement-Based Architectures to Reduce Latency; 2) Orchestration-Based ECAs-IoT; 3) Big Data Analysis-Based Architectures; and 4) Security-Based Architectures) as well as the challenges and solutions for innovative architectures in edge intelligence.
Data collection, caching, processing, and analysis near the data source are all elements of "edge intelligence," a network of linked devices and systems that increases data quality and speed while simultaneously protecting the safety and confidentiality of that data. Unlike cloud-based intelligence, edge intelligence analyzes data locally, protecting users' privacy, reducing response time and saving bandwidth resources. User data may also be used to build customized machine learning and deep learning models.
Edge intelligence is likely to be a key component of the 6G system. The potential of AI to aid edge computing itself should also not be overlooked; in that paradigm, the term "intelligent edge" is used instead of edge intelligence. Unlike the intelligent edge, edge intelligence concentrates on building intelligent programs in the context of edge devices and safeguarding the privacy of their users, rather than solving edge-computing difficulties with AI solutions. The intelligent edge is therefore set aside for the remainder of this paper. This paper subdivides the human-created architectures into further classes: 1) Data Placement-Based Architectures to Minimize Latency; 2) Orchestration-Based ECAs-IoT; 3) Big-Data-Analysis-Based Architectures; and 4) Security-Based Architectures. The rest of the paper is organized as follows: Section II presents a review of previous work. Section III provides a survey of innovative edge intelligence architectures, while Section IV analyses the challenges and solutions of edge intelligence architectures. Lastly, Section V provides final remarks on the whole research.
II. LITERATURE REVIEW
A number of studies have demonstrated the viability of edge intelligence by applying its concepts to real-world application fields. Foukalas and Tziouvaras [1] employed smartphones and edge servers to build an application for face recognition; the latency was lowered from 900 ms to 169 ms. Cloudlets may cut energy usage by 30 to 40 percent, for example, in cognitive assistive devices such as smartwatches. Some academics are particularly interested in the performance of edge computing and AI. For activity recognition, Tang, Liu, Xiao and Sebe [2] constructed a lightweight deep learning model. The example shows that simple DL models may be deployed to smart devices and still outperform shallow models.
Additionally, similar evaluations have been carried out on wearable and embedded gadgets. G-Board, Google's smartphone-based prediction model, is an instance of edge intelligence. G-Board picks up on the unique typing styles of those who use it during the training process. As a result, the trained G-Board can be used to power experiences specifically suited to each user's usage of the application.
Researchers have studied human-created architectures for edge intelligence. Nonetheless, there were a few limitations in these studies. This study breaks architectures down into many different types; however, since architecture search requires considerable hardware, most researchers are unable to use that search approach, and human-made architecture is the subject of the majority of the literature now available. A deep neural network for mobile and embedded devices was created using depthwise separable convolutions by Georgiev, Bhattacharya, Lane and Mascolo [3]. In MobileNets, a convolutional filter is divided into two parts: a depthwise filter and a pointwise filter. The depthwise convolution only filters the input channels without combining them, which is a drawback; combining the depthwise convolution with a 1×1 (pointwise) convolution overcomes this limitation. MobileNets use 3×3 depthwise separable convolutions that need roughly 8 to 9 times less computation than standard convolutions while still executing effectively. The deployment of KWS algorithms and depth estimation on edge devices may also profit from the use of such pointwise and depthwise convolutions.
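To see where the roughly 8-9x saving comes from, the multiply-accumulate counts of the two layer types can be compared directly. The sketch below uses an arbitrary layer size for illustration; it is not code from [3]:

```python
def standard_conv_cost(k, c_in, c_out, h, w):
    # multiply-accumulate operations of a standard k x k convolution
    return k * k * c_in * c_out * h * w

def depthwise_separable_cost(k, c_in, c_out, h, w):
    depthwise = k * k * c_in * h * w   # one k x k filter per input channel
    pointwise = c_in * c_out * h * w   # 1x1 convolution mixes the channels
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 256 -> 256 channels, 32x32 feature map
std = standard_conv_cost(3, 256, 256, 32, 32)
sep = depthwise_separable_cost(3, 256, 256, 32, 32)
print(std / sep)   # ~8.7, i.e. the 8-9x saving mentioned above
```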
Another approach is to use group convolution to reduce the processing expense associated with model construction. Some basic designs, such as Xception and ResNeXt, become resource-intensive because of their dense 1×1 convolutions. For 1×1 convolutions, Zhang, Lo and Lu [4] suggest using pointwise group convolutions, which reduce the computational burden. This has a side effect: the outputs of one group are derived from only a small fraction of the input channels. Information transmission between groups may therefore be hampered by such sparsely connected convolutions, which are often utilized in depthwise and group convolutions. To deal with this problem, Qin et al. suggest a merge-and-develop approach: a merged feature map is generated by integrating information about the same location gathered from several groups, and the data from these newly added features are collected and fed back into the network. As a consequence, the problem of inter-group information loss is effectively addressed, since information is distributed across all channels.
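For illustration, the following sketch shows a grouped 1×1 (pointwise) convolution followed by a channel shuffle, the ShuffleNet-style remedy for blocked inter-group information flow; it is analogous in intent to, but not the same as, the merge-and-develop approach described above. All shapes and weights are invented:

```python
import numpy as np

def grouped_pointwise_conv(x, weights, groups):
    """x: (C_in, H, W); weights: one (C_out_g, C_in_g) matrix per group."""
    c_in = x.shape[0]
    gs = c_in // groups
    outs = []
    for g in range(groups):
        xg = x[g * gs:(g + 1) * gs]                 # this group's input channels only
        outs.append(np.tensordot(weights[g], xg, axes=([1], [0])))
    return np.concatenate(outs, axis=0)

def channel_shuffle(x, groups):
    """Interleave channels so the next grouped conv sees every group's output."""
    c, h, w = x.shape
    return x.reshape(groups, c // groups, h, w).transpose(1, 0, 2, 3).reshape(c, h, w)

x = np.random.randn(8, 4, 4)                        # 8 channels, 4x4 feature map
w = [np.random.randn(2, 4) for _ in range(2)]       # 2 groups, 4 -> 2 channels each
y = channel_shuffle(grouped_pointwise_conv(x, w, groups=2), groups=2)
print(y.shape)                                      # (4, 4, 4)
```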
This paper subdivides the human-created architectures into further classes: data placement-based architectures for reducing latency and orchestration-based ECAs-IoT, as well as architectures based on big data analysis and on security. Edge computing architectures are also discussed in this study, along with their obstacles and possible solutions.
Data Placement-Based Architectures
As cloud computing grows, more and more enterprises are moving their data to the cloud for reduced maintenance costs and trustworthy SLAs (Service Level Agreements) compared to conventional data storage. Mainstream Cloud Service Providers (CSPs) provide a number of data storage options to meet the needs of various customers, and CSPs offer a wide range of price options for the same functionality. In addition, different locations of the same CSP have varied pricing policies. Data migration between data centers of the same CSP is less expensive than migration between data centers of distinct CSPs.
Additionally, a single cloud is subject to the risk of vendor lock-in; the main concerns are the pricing of cloud computing and the disruption of Service Level Agreements (SLAs). Users may be forced to pay hefty relocation expenses in these cases. An ant colony algorithm-based method was proposed in our past efforts for cost-effective, highly available data hosting across multicloud locations. Because of this, data can be split up and stored across numerous CSPs instead of just one. Many researchers have worked to place data in multicloud settings in a way that is both cost-effective and highly available, based on user requirements.
The choice of a data placement strategy is influenced by the data object's workload. Over a given period, the data access frequency (DAF, or GET access rate) reflects how often the data object must be retrieved, and this workload fluctuates during the course of a data placement's lifecycle. A data item that is read-intensive and in a hot-spot state is more likely to be hosted in CSPs with reduced out-bandwidth charges. In contrast, data with a low DAF is storage-intensive and in a cold-spot state, and is more likely to be kept in CSPs with lower capacity costs. Consequently, as the DAF grows, a user who sticks with the same placement over the storage lifecycle may incur higher out-bandwidth charges; likewise, a user who keeps a strategy better suited to hot-spot status across the whole lifecycle of data storage may incur high storage costs.
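A toy cost model makes this hot-spot/cold-spot trade-off concrete. All prices below are hypothetical and are not taken from any CSP's published pricing:

```python
def monthly_cost(size_gb, get_gb, storage_price, egress_price):
    """Crude placement cost: storage charge plus outbound-bandwidth charge."""
    return size_gb * storage_price + get_gb * egress_price

# Hypothetical price profiles for two CSP regions (per GB per month):
cheap_storage = dict(storage_price=0.010, egress_price=0.12)   # suits cold data
cheap_egress  = dict(storage_price=0.025, egress_price=0.05)   # suits hot data

for daf_gb in (5, 500):   # GB retrieved per month: low vs high access frequency
    a = monthly_cost(100, daf_gb, **cheap_storage)
    b = monthly_cost(100, daf_gb, **cheap_egress)
    print(f"DAF={daf_gb} GB/month: cold-optimised={a:.2f}, hot-optimised={b:.2f}")
# The cheaper region flips as the DAF grows, which is why a static
# placement over the whole data lifecycle can become expensive.
```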
It is therefore vital to build a technique for dynamically adjusting the data placement scheme depending on the data workload, in order to decrease overall costs and boost reliability over the data object's lifespan. Because future data workload is unpredictable, the total cost cannot reach the true optimum; as a result, the data placement technique relies heavily on forecasting future workloads, and it is important to think about how to build a dynamic placement system that takes future access frequency into consideration. The data generated by IoT systems is enormous, and critical IoT applications such as e-health demand low-latency data retrieval. ECAs-IoT have difficulty distributing IoT data to the correct edge nodes. So far, the following designs have been proposed:
IFogStor and IFogStorZ
The ideas in [5] use fog-node distribution and variation to reduce the overall latency of storing and retrieving IoT data at fog nodes. The system infrastructure comprises a collection of IoT-enabled devices, fog computing devices, data facilities and IoT services. Data created by the IFogStor system is stored at and retrieved from well-chosen fog nodes to decrease overall network latency. Run-time execution takes place on a resilient node that has access to details on information flow, network delay, application location and storage capacity. Fig. 1 depicts the architecture of the IFogStor system, which comprises three basic types of actors: data hosts, i.e., special nodes that preserve IoT data, such as fog nodes or data centers, which may appear in any layer except layer 0; data producers, i.e., nodes that create information, which may be found in a variety of tiers; and data consumers, i.e., nodes that analyze or interpret IoT data, which may also exist in multiple tiers. Because of their capabilities, fog nodes may serve as data host, data producer and data consumer all at once. Two remedies were suggested to mitigate the placement issue. IFogStor is an exact, integer-programming-like method for solving the data placement problem; for small-scale applications it identifies the best placement, but for large deployments its efficiency is unacceptably slow. IFogStorZ is a divide-and-conquer technique that uses regional points of presence (RPoPs) as partitioning sites to separate geographical areas; a global solution is found by solving the placement subproblem in each area. This method does not find the optimal placement, but it significantly reduces the time it takes to compute one. IFogStorG: even though IFogStorZ is easy to build, it loses a significant amount of optimality when the producers of data are far away from the data consumers, and the number of fog nodes and Internet of Things (IoT) services may also differ across subregions, producing imbalanced subproblems. Improved runtime speed and reduced complexity of the data placement technique are the goals of IFogStorG, which allows a more advanced technique that adapts to the network topology. The main consideration in the partitioning step is to keep data consumers and data producers as close to one another as feasible in order to maximize the strategy's efficiency. Data consumers and producers were represented by matrices, while fog nodes were represented by an adjacency matrix that mapped the latency values in the architecture. For each subgraph, the IFogStor technique was used to find a solution; the findings of each subsection are then combined to arrive at the global answer. An example of a real-world smart city was used to gauge the effectiveness of this approach. As the number of data subscribers grows, a trade-off between the number of data copies and the latency reduction has to be made.
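The placement objective can be illustrated with a tiny brute-force sketch of the underlying assignment problem (IFogStor itself uses an integer-programming formulation; the instance and all numbers here are invented):

```python
from itertools import product

# Hypothetical instance: latency[i][j] = delay from fog node j to the consumers of data item i
latency  = [[5, 12, 30],
            [25, 4, 18],
            [22, 19, 3],
            [6, 10, 28]]
size     = [2, 3, 1, 2]        # size of each data item
capacity = [4, 4, 4]           # storage capacity of each fog node

best, best_cost = None, float("inf")
for assign in product(range(3), repeat=len(size)):   # brute force, fine for toy instances
    used = [0, 0, 0]
    for i, j in enumerate(assign):
        used[j] += size[i]
    if any(u > c for u, c in zip(used, capacity)):
        continue                                      # violates a node's capacity
    cost = sum(latency[i][j] for i, j in enumerate(assign))
    if cost < best_cost:
        best, best_cost = assign, cost

print(best, best_cost)   # lowest-latency feasible placement of all items
```

IFogStorZ and IFogStorG keep the same objective but first partition the graph into regions so that each (much smaller) subproblem can be solved quickly.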
Multireplica Data-Placement Approach
Baranwal and Vidyarthi [6] dealt with the delay that arises whenever data consumers from various geographic regions subscribe to the same data but only a single copy resides at a particular fog node. A greedy technique called IFogStorM was devised to reduce the latency caused by this problem. According to the results, overall latency was lowered by 10 percent compared with IFogStorG and by 6 percent compared with IFogStorZ.
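A hedged illustration of the multireplica idea, not the published IFogStorM algorithm: replicas are added greedily wherever they cut total consumer latency the most, using made-up latencies:

```python
def greedy_replicas(consumer_latency, max_replicas):
    """consumer_latency[c][n] = latency from fog node n to consumer c.
    Greedily place replicas where they reduce total latency the most."""
    nodes = range(len(consumer_latency[0]))
    placed = []

    def total(placement):
        # each consumer reads from its nearest replica
        return sum(min(row[n] for n in placement) for row in consumer_latency)

    for _ in range(max_replicas):
        best = min((n for n in nodes if n not in placed),
                   key=lambda n: total(placed + [n]))
        placed.append(best)
    return placed, total(placed)

# Hypothetical scenario: 4 consumers in two regions, 3 candidate fog nodes
lat = [[2, 40, 45], [3, 38, 44], [41, 2, 39], [42, 3, 37]]
print(greedy_replicas(lat, max_replicas=2))   # one replica ends up in each region
```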
Orchestration-Based ECAs-IoT
Orchestration of IoT networks is considered a significant problem because it improves system and security stability and makes networks easier to maintain. As a central solution, some ECAs-IoT use software-defined networks, whereas other ECAs-IoT utilize other methodologies.
Services and Tasks Allocation-Based Architectures
By distributing the delivery of critical services and the allocation of tasks to the most efficient edge nodes, edge computing improves Internet of Things (IoT) systems. This section focuses on ECAs-IoT that oversee the distribution of activities and resources in IoT systems.
Mobile Fog Services' Allotment (MFSA)
IoT systems may benefit from edge computing, which manages service processing and job distribution at the edge nodes. In MFSA, Ibaraki [7] used an Integer-Programming (IP) formulation to minimize the overall cost of delivering services while assigning requests to the available resources. The approach assigns a "probability of availability" to each server. Users and fog nodes are connected by a middleware controller, which has access to accurate information about the complete architecture. The program also knows the nondeterministic component of each server's availability; to handle this component, each user's service request is broadcast to several servers.
However, the user cannot submit requests to an unlimited number of servers; it was suggested that each user has a budget for making requests to the servers. Server resources may be shared by various users only if the total number of services supplied by a server does not exceed its capacity. Servers' availability probabilities are independent of one another, and there is a fee for every service. The Quality of Service (QoS) level is also governed by a set of constraints, including the likelihood that the user-specified servers will be available. The problem is then solved with the purpose of reducing the overall cost of assigning services.
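The flavour of this formulation can be sketched as follows; this is an enumeration over a hypothetical instance rather than the IP model from [7], but it shows how broadcasting a request to several cheaper servers can satisfy an availability (QoS) target at lower cost:

```python
from itertools import combinations

cost   = {"s1": 4.0, "s2": 2.5, "s3": 1.0}     # hypothetical fee per hosted request
avail  = {"s1": 0.99, "s2": 0.90, "s3": 0.70}  # hypothetical availability probabilities
budget = 2                                     # max servers one user may request
qos    = 0.97                                  # required availability for this user

def combined_availability(servers):
    p_all_down = 1.0
    for s in servers:
        p_all_down *= (1.0 - avail[s])         # independence assumed, as in the text
    return 1.0 - p_all_down

best = None
for r in range(1, budget + 1):
    for subset in combinations(cost, r):
        if combined_availability(subset) >= qos:
            total = sum(cost[s] for s in subset)
            if best is None or total < best[1]:
                best = (subset, total)

print(best)   # cheapest server set meeting the QoS target, e.g. ('s2', 's3') at 3.5
```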
Multiagent-Based Flexible ECA-IoT (MAFECA)
MAFECA is a multiagent-based, flexible ECA-IoT. Pirbhulal et al. [8] propose a modular architecture that addresses the usual IoT network issues while optimizing task distribution between edge machines and the cloud. Two system abilities are employed in this architecture: user orientation and environment adaptation. The capacity to tailor services to individual users in real time is enabled by the use of data received from IoT devices, such as user behavior. For jobs with high volume and high quality requirements, the processing site is determined by the environment in which the job is performed.
Hierarchical Architecture to Place Mobile Workloads (HAM)
HAM was designed by Lee and Lee [9], who presented an algorithm that places mobile workloads across various levels and determines how much processing capacity each task needs.
Scalable IoTs Architectures-Centered on Transparent Computation (SAT)
For the allocation of services, Das, Santra, Bodra and Chakravarthy [10] presented an infrastructure that utilizes transparent computing to maximize scalability and minimize response time. It consists of an end-user layer made up of IoT devices; an edge layer that delivers solutions to end users; a foundational internet protocol layer that connects the edge computing systems and the virtualized system; a cloud layer that incorporates high-performance computing and storage resources to deal with huge datasets; and a layer for managing the entire infrastructure, which encompasses management and functionality.
Edge-centered Aided Living Platforms for Home Automations (E-ALPHA)
For enhancing e-health applications, Kopmaz and Arslanoğlu [11] proposed an architecture whose components include a device handler, which separates the technology-based operations from the communication-based services; an embedded system, which dynamically loads specific protocols; and a database, which is responsible for storing and retrieving data for the e-health applications. The EdgeCloudSim simulator was used to model this design.
SDN-Based Fog Architectures
Resources and data in IoT networks need to be managed, and ECAs-IoT might benefit from Software-Defined Networks (SDNs). ECAs-IoT that utilize SDN technologies to manage the network are covered in this section.
Multi-level SDN-centered 5G Vehicular Architecture (VISAGE)
Every year, the number of automobiles on our roads increases. The 5G-VANET scheme proposed by Das and Gurusamy [12] has two sub-frameworks: a Local SDN Controller (LSDC) and a Central SDN Controller (CSDNC). Fog nodes, such as moving or even stationary cars, may be used in this way. Fig. 2 depicts the components of the architecture in [13]. The CSDNC is a permanent portion of the system that is hosted in the cloud and represents centralized, global intelligence. A fog cell is governed by the CSDNC, while the LSDC controls the fog cell nodes. Customers that are unable to do their own computations are helped by fog nodes; people, vehicles, or organizations all fall under this broad category. Base stations are used to keep the cloud and fog nodes connected. In VISAGE, fog-SDN capabilities are broadcast by the LSDC; alternatively, each vehicle may use the fog cell's services, e.g., its fog nodes. The LSDC may be used to link the fog cells to the Internet; it interacts with the CSDNC, which in turn coordinates the resources. VANET resources are better managed thanks to the design in [14], whose components include the following elements: the SDN controller, which is accountable for global intelligence, uses Road-Side Unit Controllers (RSUCs), located at the roadside, to control groups of Roadside Units (RSUs), forward data, store data on specific road segments, and provide timely services; the RSU is also responsible for communicating data and is typically controlled by the SDN controllers; and cellular base stations are in charge of local intelligence, data forwarding, and conveying fog warnings. However, there is no resource management or network orchestration in this design, so there is no way to evaluate how well it performs.
Software-Defined Fog Computing Network Architectures for IoT (SDFN)
Sham and Vidyarthi [15] developed an integrated SDN and fog processing system, which differs from earlier systems by being generic. The architecture has three main components: end devices; SDN controllers, which are responsible for picking the best access points for the IoT nodes and have knowledge about the system (e.g., fog-device capacity) so they can delegate work to the fog devices; and the fog architecture itself, the center of the network where the computing takes place, in which fog devices expose their services through APIs. Using a hierarchical deployment, the same application may execute on numerous fog devices at the same time, and a job is assigned to each fog device depending on its individual capabilities. Intelligent transportation systems, video monitoring, and precision agriculture might all benefit from this architecture's flexibility and scalability in the Internet of Things. However, there is no simulation-based evaluation of the architecture and no central control of the networks.
SDN-Based Cloudlet Architectures
In this part, we show how an SDN and a cloudlet may be used to administer IoT networks.
Dynamic Distribution of IoT Analysis (DDA)
Lozano-Rizk et al. [16] have suggested a multilayer architecture built on SDN that keeps track of Internet of Things (IoT) traffic, uses congestion-prevention methods, and distributes the analysis of IoT data between a Data Center (DC) and the network's edge. The structure of this design is as follows: connections to DCs at the network's edge are provided, and the median bandwidth of IoT data flows is tracked at the architecture tier of the network. A specialized DC controller exists for each DC in the infrastructure layer. A cloud orchestrator is also deployed above the DC controllers, providing federated cloud services. An IoT-aware Transport SDN Orchestrator (TSDNO) works as a controller of controllers and sits atop each domain's SDN controllers; the TSDNO is also in charge of keeping IoT traffic flowing smoothly. IoT-aware GSOs sit at the top of the cloud orchestrators, orchestrating global end-to-end operations from the cloud to the edge.
Big Data Analysis Architectures
Sensors generate massive volumes of data every second. Fog computing architectures for big data processing are discussed in this section.
Hierarchical Decentralized Fog Computing Platforms for the Smart City (HDF)
Birkholzer, Cihan and Bandilla [17] presented a four-tiered hierarchical framework; the suggested architecture's tiers are shown in Fig. 3. In Layer 4, several kinds of sensors are deployed across the environment to gather and produce data. The unprocessed sensor data are then passed to Layer 3, where edge devices each manage a Layer 4 sensor network covering a local area, such as a neighborhood. Edge devices in this layer perform real-time data analysis: for example, reports are generated by evaluating the data, and the infrastructure is alerted to dangers detected by the sensors attached to the edge devices. The edge devices are then pooled and linked to one of the intermediate compute nodes in Layer 2; using temporal and geographical data, such a node can identify and respond to potentially risky situations. Finally, a cloud computing data center forms the last layer, handling infrastructure-wide analysis, monitoring and management. As part of their evaluation, the authors created a pipeline-system prototype and ran simulations of 12 distinct events using the sensors; a hidden Markov model was trained to identify the events. The findings showed that fog computing combined with cloud resources reduced latency in big-data processing. A comparable architecture was proposed by Pang, Wang and Fang [18] for collecting data from a variety of sensors by employing cloud computing and a hierarchical edge strategy. Many sensors provide information to the first layer of collectors, called the edge level, before it is sent to a generic cloud service provider, where all information is centralized. Once the fused data have been obtained, they may subsequently be utilized for big-data analysis by tailored service providers.
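A minimal sketch of the four-tier data flow described above, with invented readings and thresholds (the event-detection model in [17] was a hidden Markov model; a simple threshold stands in for it here):

```python
import statistics

def edge_node(readings, alarm_threshold=80.0):
    """Layer 3: real-time local analysis; raise an alert and forward a summary."""
    summary = {"mean": statistics.mean(readings), "max": max(readings)}
    summary["alert"] = summary["max"] > alarm_threshold
    return summary

def intermediate_node(edge_summaries):
    """Layer 2: correlate neighbouring edges over space and time."""
    return {"alerts": [i for i, s in enumerate(edge_summaries) if s["alert"]],
            "area_mean": statistics.mean(s["mean"] for s in edge_summaries)}

def cloud_datacenter(reports):
    """Top layer: infrastructure-wide analysis, monitoring and management."""
    return {"total_alerts": sum(len(r["alerts"]) for r in reports)}

# Invented sensor readings from two neighbourhoods
edges = [edge_node([10, 20, 95]), edge_node([15, 18, 22])]
print(cloud_datacenter([intermediate_node(edges)]))   # {'total_alerts': 1}
```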
Security-Based Architectures
There are several security issues that arise as a result of the structure of IoT networks, including data confidentiality and authentication. This section focuses on ECAs-IoT that tackle security issues in IoT networks and explores the edge-computing technologies they use. The following are examples of IoT network architectures that address security concerns without using Software-Defined Networks (SDNs).
Privacy Preservation While Aggregating the Information/Data (P2A)
Hu, Dong and Wang [19] suggested an infrastructure for protecting sensor-data privacy, which manages multifunctional aggregation, computational overhead and communication overhead, among other things. Devices, fog nodes, fog centers and a cloud provider are all part of this system, as depicted in Fig. 4. Smart gadgets use sensors to gather information. The acquired data are split between two separate fog nodes in order to maintain the privacy of the user. When fog centers issue aggregation queries, the fog nodes act as storage nodes that assist in aggregating the data; as a consequence, the fog centers are able to gather the results of the queries answered by the fog nodes. The primary query results are then sent on to the cloud center, an aggregation application managed by the service provider. The fog centers and the cloud center are modeled as untrusted, since they may attempt to acquire the secret original data; the fog nodes are likewise assumed to be curious about the original data and unable to trust each other, so the scheme is designed to prevent collusion between them.
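One common way to realize this kind of two-node privacy split is additive secret sharing: each reading is divided into two random shares so that neither fog node alone learns anything, yet their partial sums combine into the true aggregate. The sketch below is illustrative only and is not taken from [19]; all names and values are invented.

import random

def split_reading(value, modulus=2**31):
    """Split a sensor reading into two random shares that sum to it (mod modulus)."""
    share_a = random.randrange(modulus)
    share_b = (value - share_a) % modulus
    return share_a, share_b

readings = [23, 17, 42, 8]          # raw sensor values (toy data)
fog_node_a, fog_node_b = [], []
for r in readings:
    a, b = split_reading(r)
    fog_node_a.append(a)
    fog_node_b.append(b)

# Each fog node aggregates its own shares; neither ever sees raw readings.
modulus = 2**31
sum_a = sum(fog_node_a) % modulus
sum_b = sum(fog_node_b) % modulus

# The fog centre combines the two partial sums to recover the true total.
total = (sum_a + sum_b) % modulus
assert total == sum(readings)
print("aggregate =", total)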
Fig 3. Suggested architecture's tiers
Singh [20] suggested aggregating data while preserving confidentiality by means of a machine-learning-based approach. Rather than sending the real data, the model delivers a predicted value that it has learned through training; the dataset contains training data for each region. The procedure works as follows. The cloud center may send queries to fog centers such as "average", "q-percentile", "min", "max" and "summation aggregation", among others; all of these queries reach the fog center via the cloud center. Because the fog center cannot answer the cloud center's queries directly, it derives its own queries from the ones originally sent. Sensors deliver their sensory data to the fog nodes after splitting the sensory input into two parts. The fresh set of queries created by the fog center is used to train the model and forecast the incoming data. Finally, the fog center returns the predicted values to the cloud center.
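The following sketch illustrates the idea of answering the cloud's aggregate queries from a locally trained model rather than from raw readings. It is not Singh's algorithm [20]; the feature choice, the polynomial model and the numbers are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Toy per-region training data: hour of day -> temperature reading
hours = rng.uniform(0, 24, size=200)
temps = 20 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=0.5, size=200)

# Fit a small polynomial model locally (ordinary least squares).
X = np.vander(hours, 4)                 # cubic features
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)

def predict(hour):
    return np.vander(np.atleast_1d(hour), 4) @ coef

# The cloud asks for aggregates; the fog centre answers from predictions,
# so only derived values leave the fog centre, never the raw readings.
query_hours = np.arange(24)
predicted = predict(query_hours)
answers = {
    "average": float(predicted.mean()),
    "min": float(predicted.min()),
    "max": float(predicted.max()),
    "q90": float(np.percentile(predicted, 90)),
}
print(answers)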
Lightweight Security based on Virtualization (LSV) Mechanisms
According to Tiburski et al. [21], embedded virtualization and trust mechanisms may be used to protect edge systems without re-engineering the programs deployed on edge devices. The proposed architecture meets certain security needs: the secrecy of permanently stored components, the authenticity of executed code, and run-time state integrity. The security architecture is composed of four techniques covering secure boot, key storage and cross-domain communication. Edge endpoints are protected by a Root of Trust (RoT), an algorithmically secure foundation, and by a Chain of Trust (CoT) that allows a component to start up only after the cryptographically secured components from a legitimate source have been verified using public-key authentication. Keys are kept in specialized hardware, which is also responsible for verifying and executing the RoT process. To accomplish a second level of secure-boot verification, the embedded virtualization design uses several Virtual Machines (VMs) from various manufacturers. Run-time attacks are still possible even after the CoT has been established and hardware-assisted virtualization maintains a Trusted Execution Environment (TEE); hence, the system architecture must also be secured via run-time mechanisms. The design was tested against three metrics: storage footprint, speed, and latency between VMs. Edge-device protection could thus be provided without re-engineering the edge applications.
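As a purely illustrative sketch of the measured-boot idea behind a Chain of Trust, the toy code below verifies each boot stage's hash against a reference value before handing control to it. The stage names, images and "golden" hashes are invented; a real system would keep the references in secure hardware and use signatures rather than bare hashes.

import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Images as they exist on flash (bytes stand in for real firmware).
images = {
    "bootloader": b"bootloader v1.2",
    "hypervisor": b"embedded hypervisor",
    "guest_vm":   b"edge application VM",
}

# Reference measurements provisioned by the Root of Trust.
golden = {name: digest(blob) for name, blob in images.items()}

def secure_boot(boot_order):
    """Hand off to each stage only if its measurement matches the reference."""
    for stage in boot_order:
        if digest(images[stage]) != golden[stage]:
            raise RuntimeError(f"integrity check failed at {stage}; boot halted")
        print(f"{stage}: verified, handing off")

secure_boot(["bootloader", "hypervisor", "guest_vm"])

# Tampering with any stage breaks the chain:
images["guest_vm"] = b"malicious VM"
try:
    secure_boot(["bootloader", "hypervisor", "guest_vm"])
except RuntimeError as e:
    print(e)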
Service Architectures with Balanced Dynamic centered on the Cloud (SBDC)
Since IoT devices have inherent limitations, traditional security measures are often rendered ineffective. Xu, Hang, Jin and Kim [22] used distributed edge devices to create a secure architecture built on trust methods and service templates that can withstand attacks and comparable service demands. It is built around two templates: the service template and the parsing template.
The IoT system, the edge system and the cloud system make up this architecture's three main components, separated into three tiers: data collection, processing and app-service. The cloud is located at the app-service layer, while the edge platform and edge network are placed at the processing level. A trust condition for IoT devices is established, and trusted IoT devices are chosen to execute services. The design dynamically adjusts the IoT load and satisfies end-users' needs, such as authenticity and accuracy. Virtualization is performed by converting physical components into virtual devices on the edge platform, and the cloud load is dynamically adjusted through the use of edge-layer services. IoT reliability is ensured through the use of edge nodes at the data level together with the service-parsing templates.
Services that need more resources than are available at the edge-computing processing layer are processed in the cloud. Old data are logged and used for future analysis, and data mining is handled by the cloud, which creates service-parameter templates and stores the information matching them. Extensive tests were conducted on the MATLAB platform to assess the architecture. Four interconnects and a cloud make up the evaluated system, with one edge platform per IoT network. The findings suggest that this design has the potential to improve service efficiency and data integrity.
SIOTOME: Edge-ISP Collaborative Architectures for IoTs Security
For IoT devices, Dina Merlinda Izzah [23] describes SIOTOME, in which the home edge collaborates with the Internet Service Provider (ISP) to discover risks and vulnerabilities early on. Compared with typical networks, SIOTOME's intrusion-detection system can learn from several domains to recognize different types of attacks; one domain may represent an individual ISP's network, another an individual building's network. The SIOTOME system architecture has two high-level domains: SIOTOME/edge and SIOTOME/cloud. Within the smart house, the edge data collector gathers IoT data, which is processed by the edge analyzer and reported to the SDN-based edge controller for further analysis. The edge controller then configures the gateway, ensuring that all IoT devices on the home network are managed by the gateway. The SIOTOME/cloud component consists of a cloud collector, a cloud analyzer, a cloud controller and a cross-layer controller.
Edge-Computing Architectures for Mobile Crowd Sensing (MCS)
In order to support mobile crowd sensing, Hamdan, Ayyash and Almajali [24] suggested a four-layer architecture: the user-equipment layer, which includes IoT devices such as wearable sensors; the edge-computing layer, which manages workers in specific geographic areas; the cloud-computing layer, which processes complex data; and the application layer, which analyzes the data. When a crowd-sensing event happens, this architecture delivers an alert to mobile devices, ensuring data privacy and reducing latency while distributing data across servers.
ECAs-IoT Integrating Virtualized IoT Devices (ECV)
Ullah et al. [25] presented an ECA-IoT for developing intelligent cities. This design acts as the intermediary layer for processing IoT data. Its six components include, among others, collection proxies, which connect every IoT system to the other components of the design; data validation, which maintains the integrity of the gathered information; metadata annotation; and security, which performs symmetric encryption of the information for cloud computing before conveying it to the virtual IoT devices.
IV. CHALLENGES AND SOLUTIONS TO EDGE INTELLIGENCE ARCHITECTURE
As sensors become more widely used in the real world, more physical objects are being linked to the Internet of Things (IoT) to share data. Wearable medical devices, smart cities, smart homes and environmental perception are just a few examples of where IoT technology is now being used. Traditional IoT services require data to be uploaded to cloud servers by the sensors and devices linked through the IoT; the IoT devices then receive the processed data when the tasks have been finished. Sensors and gadgets may benefit from cloud computing, but the significant data-transmission overhead cannot be overlooked. In 2018, the total number of IoT-enabled devices throughout the globe surpassed 11.2 billion, and this number is expected to grow to 30 billion by 2025. However, network capacity expansion now lags significantly behind the growth rate of data, and the complexity of the network environment makes it difficult to reduce latency. Traditional IoT services face a bandwidth crunch that must be addressed if they are to remain viable.
A new computing concept known as "Edge Computing" (EC) has recently been suggested to alleviate the aforementioned bottleneck. EC refers to technology that moves computing workloads to the periphery of the network. Compared with cloud computing, EC has several benefits: end-users' confidentiality is better protected, data transmission is more efficient, network bandwidth is less burdened, and data centers' energy consumption is reduced. To reduce latency, Edge Nodes (ENs) may process, store and send the raw data produced by IoT devices rather than relying on centralized cloud platforms [26]; this eliminates duplicate data transfer. EC can therefore better serve IoT and mobile-computing applications with tight response-time requirements.
EC is not guaranteed to solve every problem, however. It is true that IoT systems under EC have substantially increased their potential in many domains, including computation offloading, accurate localization and real-time processing, and that processing data near end-users with low latency is largely responsible for this expansion. On the other hand, EC raises additional security concerns and expands the system's attack surface in three ways. First, because the ENs are scattered over the network, it is impossible to centrally supervise all of the equipment; an attacker may target vulnerable ENs and use the nodes it has taken control of as a launching pad for an assault on the whole system. Second, processing power is limited: unlike cloud computing, the physical construction of ENs constrains the computing power available, making them vulnerable to large-scale centralized attacks such as Distributed Denial of Service (DDoS), which can inflict significant damage on the ENs in question. Third, a broad variety of technologies, including wireless sensor systems, mobile data collection and grid computing, are used in EC, and it is challenging to build a single security mechanism and ensure consistency across multiple security domains in such a heterogeneous environment.
Due to the inherent dangers of edge computing, several security strategies and algorithms have been developed. Algorithms and models for intrusion prevention, privacy preservation and access control all follow a consistent pattern, but traditional defenses are often rendered obsolete by the constant improvement of attack tactics and techniques. Artificial Intelligence (AI) is attractive here because it can help solve some of the most pressing security and privacy challenges. DDoS attacks are a prevalent form of intrusion: several hijacked ENs are used to assault a server, increasing its load and degrading its responsiveness to routine requests. Attacks from the hijacked ENs are detected by the network's Intrusion Detection System (IDS), which blocks their access by looking for unusual network traffic. Machine learning (ML) may help the IDS detect intrusions more quickly and accurately than classic identification approaches, by extracting harmful access patterns from earlier data sets and training on those data.
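The toy example below shows what such an ML-assisted IDS component could look like: a classifier trained on per-flow features flags DDoS-like traffic. The features, the synthetic data and the random-forest choice are assumptions for illustration only, not a description of any specific deployed system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000

# Features per flow: [packets/s, mean packet size, distinct source IPs, SYN ratio]
normal = np.column_stack([
    rng.normal(50, 15, n), rng.normal(800, 200, n),
    rng.normal(20, 5, n), rng.uniform(0.0, 0.2, n)])
ddos = np.column_stack([
    rng.normal(5000, 1000, n), rng.normal(100, 30, n),
    rng.normal(900, 100, n), rng.uniform(0.7, 1.0, n)])

X = np.vstack([normal, ddos])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# At run time the edge node scores each new flow and blocks flagged ones.
new_flow = np.array([[4200, 120, 850, 0.9]])
print("block" if clf.predict(new_flow)[0] == 1 else "allow")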
Privacy also needs protection, since IoT devices are present in every area of our lives and hold a great deal of sensitive information. To assure data security and privacy protection, most currently used technologies encrypt the transferred data; however, such approaches often carry a substantial computational burden, rendering them impractical for resource-constrained ENs. Distributed Machine Learning (DML) reduces the danger of data leakage and the network stress during transmission, because after each round of training the ENs only need to communicate model parameters to other ENs for collaborative learning instead of directly transferring the actual data. Access control is another critical problem when several IoT devices work together in such an environment: each authorized node may access only the networks and data under its authority. Classification techniques from ML match the need to sort ENs into distinct groups based on permissions; the algorithm categorizes low-privilege IoT applications and high-privilege IoT devices, and access to the high-privilege devices is then strictly controlled.
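The sketch below illustrates the parameter-sharing idea behind DML in its simplest form: each node fits a model on its own private data and only the fitted parameters are transmitted and averaged. The linear model, the node count and the data are invented for illustration; real DML systems use more elaborate aggregation and many training rounds.

import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0, 0.5])

def local_data(n=200):
    """Private data held by one edge node."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_fit(X, y):
    """Each node fits a linear model by least squares on its own data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three edge nodes train independently on their private data.
local_weights = [local_fit(*local_data()) for _ in range(3)]

# Only the parameter vectors are transmitted and averaged; raw data stays local.
global_w = np.mean(local_weights, axis=0)
print("aggregated model:", np.round(global_w, 3))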
Artificial Intelligence (AI) is increasingly being used in a wide range of edge-security applications as research into the topic progresses. However, implementing such ideas on ENs faces several obstacles. Large volumes of clean, unambiguous data are critical to the efficacy of ML training, yet assuming adequate data implicitly assumes that the system has already experienced large numbers of attacks and can properly label these hostile actions. The model's performance will suffer if the training set is tampered with, so attacks on the training set must also be guarded against. Finally, since ENs have limited computation and storage capacity, lightweight AI methods are required.
V. CONCLUSION
The term "edge intelligence" refers to a network of interconnected devices and systems \used for artificial intelligence-based data gathering, caching, processing, and evaluation near to the point of data acquisition.Data processing quality and speed may be improved while maintaining data privacy and security via edge intelligence.Because of AI's recent advancements, the number of AI-based applications and services is on the rise.Face recognition, natural language generation, computer vision, traffic predictions, and anomaly-based may now be achieved utilizing AI technology.In this study, researchers looked at designs for human-created edge intelligence.However, there were several flaws in this study.This study breaks down architecture into many different types.However, since it requires specialized hardware, architectural search is out of reach for most researchers.Human-made architecture is the subject of the majority of the literature now available.This paper subdivides the human-created architectures into further classes: 1) Data-Placement-Based Architectures to minimize Latency; 2) Orchestration-Based ECAs-IoT.3) Big-Data-Analysis-Based Architectures; and 4) Security-Based Architectures.In most cases, existing security measures are based on the same algorithms and models for penetration detection, privacy retention or access control.Traditional defenses are often rendered obsolete by the constant improvement of assault tactics and approaches.However, the growth of artificial intelligence (AI) offers new answers to privacy and security challenges, including penetration detection, privacy retention, and access controls.
Fig 1. The architecture of the IFogStor system
Fig 4. Parts of the server system | 8,101.8 | 2022-10-05T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Autoregressive Moving Average with Exogenous Excitation Model for Acoustic Scattering from Underwater Objects
This study aims to identify and predict objects underwater using the autoregressive moving average with exogenous excitation (ARMX) model in such a way that the outcome of the model is similar to actual measurements. It is used for parameter estimation. The model is validated by comparing the actual measurements with the ARMX model, the autoregressive with exogenous variables (ARX) model, and the Box-Jenkins (BJ) model. The results are analyzed in the frequency and time domains using the mean square error criterion. Initial results show that ARMX predicts the acoustic scattering response with an accuracy of 96%, while ARX provides an accuracy of 78%, and the BJ model poorly estimates the signal with an accuracy of 35%. ARMX also provides a higher detection accuracy by 7-8% as compared to existing techniques. Keywords—Autoregressive Moving Average with Exogenous Excitation (ARMX), Autoregressive with Exogenous Variables (ARX), Box-Jenkins (BJ) Model.
Introduction
The field of acoustics studies the propagation and interaction of mechanical waves, such as sound waves, through different media including air and water. Various methods can be used to model and understand the behavior of these waves, including stochastic, deterministic and statistical methods based on the underlying physical mechanics of sound waves. Chen et al. [1] proposed a model for reverberation that uses time-space discretization to reduce the computation time. Reverberation is the linear response of scatterers to the emitted signal and is defined by a linear method to obtain the impulse-response function. Similarly, L. Zao et al. [2] presented an adaptive noise-detection technique for non-stationary acoustic noisy signals; this approach was based on empirical mode decomposition and a vector of Hurst exponent coefficients. The authors worked on the detection of underwater acoustic DSSS signals using LabVIEW, incorporating power-spectrum, autocorrelation and cepstrum methods by means of an acquisition card and an acoustic transducer. Lu et al. [3] suggested that different detection methods can be used independently or jointly by tuning the display and control parameters in real time. A summary of existing work on signal identification of underwater objects is presented in Table 1. However, all of these approaches usually require some form of propagation model that can provide basic information such as propagation delay and loss. On the other hand, physics-based models incorporate the effects of attenuation, noise, multi-path, the Doppler effect due to object motion, surface waves [4] and bubbles [5] to provide a more accurate depiction of acoustic behavior.
This study provides a unique approach to evaluate the modal parameters that result from the impulse response of underwater objects in the presence of random excitation and random measurement noise caused by acoustic scattering. The approach involves the use of an autoregressive moving average with exogenous excitation model to predict the contours of a structure while taking random noise and unmeasured excitation into account. Among the benefits of this technique are that it provides a scaled mode shape of the structure and can be used with periodic signals, including different excitation signals. To mitigate the unmeasured excitations, these periodic signals can be combined (in the time domain) by synchronous averaging.
This technique requires a multistage estimation algorithm that is first used to evaluate the parameters of the ARMX model and subsequently to compute the modular parameters of the structure. ARMX estimation model uses the position of estimated poles on the Z-plane to find the most suitable model and to discriminate between structural modes and false numerical poles.
This paper is organized as follows. Section 2 mathematically describes the types of models and excitation signals used in the identification process. It also explains the model as a regressive estimator and least square error minimize. Section 3 discusses the simulation results. Section 4 concludes the paper.
Methodology
Following sections describe the methodology proposed in this paper.
Error Estimation Methods
Error estimation methods aim to identify the system model by minimizing a cost function that represents the expected difference between the actual and estimated output. The estimated output is computed using a predefined model whose unknown parameters are the optimization variables, with the inputs and outputs given in the time domain. Mathematically, the cost function can be written in the standard prediction-error form [11]

J(θ) = (1/N) Σ_{t=1}^{N} [ y(t+τ) − ŷ(t+τ|t)(θ) ]²,

where ŷ(t+τ|t)(θ) denotes the estimated value of the output y at time t + τ, determined using the information available at time t for a model with parameter vector θ. The fundamental model used for error estimation assumes that the system output is generated from the input signal u_t and the effect of unmeasured noise or disturbances e_t. Mathematically, the Finite Impulse Response (FIR) model is

y(t) = B(q⁻¹) u(t) + e(t),   B(q⁻¹) = b₁q⁻¹ + b₂q⁻² + … + b_nb q^(−nb),

where u_t is the input vector and B(q⁻¹) is the FIR kernel. The characteristics of a model depend on the structure of B(q⁻¹) and on how the additive noise e_t is treated.
ARX Model (Autoregressive with Exogenous Variables)
The global system characteristics and the coefficients of the FIR model are found using the ARX model. The ARX model with exogenous input is stated as [4]

A(q⁻¹) y(t) = B(q⁻¹) u(t) + e(t),

where A(q⁻¹) = 1 + a₁q⁻¹ + … + a_na q^(−na) and B(q⁻¹) is the FIR polynomial defined above. Therefore, the outcome of the model is

y(t) = [B(q⁻¹)/A(q⁻¹)] u(t) + [1/A(q⁻¹)] e(t).
The noise is thus modelled by the factor 1/A acting through the same dynamics as the input. It is important to note that ARX does not model the noise and the dynamics independently [12].
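As an illustrative sketch (not the authors' code), the snippet below shows how ARX parameters can be estimated by ordinary least squares on simulated data; the model orders, coefficients and noise level are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.normal(size=N)                      # exogenous input
a1, a2, b1 = -1.5, 0.7, 1.0                 # "true" ARX(2,1) system
y = np.zeros(N)
e = 0.05 * rng.normal(size=N)
for t in range(2, N):
    y[t] = -a1 * y[t-1] - a2 * y[t-2] + b1 * u[t-1] + e[t]

# Build the regression y[t] = phi[t] @ theta + e[t]
# with phi[t] = [-y[t-1], -y[t-2], u[t-1]] and theta = [a1, a2, b1].
phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1]])
target = y[2:N]
theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
print("estimated [a1, a2, b1]:", np.round(theta, 3))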
ARMX Model Structure and Estimation Algorithm
The ARMX model with input and noise-disturbance signals [13] is given as

A(q⁻¹) y(t) = B(q⁻¹) u(t) + C(q⁻¹) e(t),

where A(q⁻¹) and B(q⁻¹) are as defined above and C(q⁻¹) = 1 + c₁q⁻¹ + … + c_nc q^(−nc). The output of this model can therefore be written as

y(t) = [B(q⁻¹)/A(q⁻¹)] u(t) + [C(q⁻¹)/A(q⁻¹)] e(t).
The equation-error methodology of the ARMX algorithm expresses the output as a summation of three regression terms: past inputs, past outputs and white noise. The inputs of the model are the observed input u_t, which gives the deterministic component y₀(t) through the transfer function B(q⁻¹)/A(q⁻¹) (y₀(t) itself is not directly available), and the white noise e_t, which determines the stochastic component through the transfer function C(q⁻¹)/A(q⁻¹) and whose output is the noise ν(t) that signifies the effect of the white noise [14]. The observed output is then

y(t) = y₀(t) + ν(t).
In ARMX, the parameterization of the noise dynamics is more flexible than in the ARX model. ARMX extends the ARX structure by providing more flexibility for modeling noise through the C parameters (a moving average of white noise). This makes ARMX the preferable option when the input is dominated by disturbances entering with the input, known as load disturbances. Related approaches include material recognition based on the time delay of secondary reflections using wideband sonar pulses, and size identification of underwater objects from backscattering signals at arbitrary looking angles, in which various frequency ranges are used to find different parameters and the authors claim better accuracy than other state-of-the-art methods [20]. One of the major advantages of an autoregressive (AR) based representation is that the resonant frequencies can be accurately localized and detected. In order to obtain the elastic and geometrical properties, the resonant frequencies are clustered into identifiable wave families that are then classified with respect to the scattered waves using resonance scattering theory [16].
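For illustration only, the sketch below estimates ARMX parameters by pseudolinear regression (extended least squares), replacing the unknown noise terms with running residuals and iterating the least-squares fit. The orders, coefficients and iteration count are assumptions; this is not the multistage algorithm used in the paper.

import numpy as np

rng = np.random.default_rng(1)
N = 2000
u = rng.normal(size=N)
a1, b1, c1 = -0.8, 1.0, 0.5                 # "true" ARMX(1,1,1) system
e = 0.1 * rng.normal(size=N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1 * y[t-1] + b1 * u[t-1] + e[t] + c1 * e[t-1]

e_hat = np.zeros(N)                          # start with zero residual estimates
theta = np.zeros(3)
for _ in range(10):                          # iterate until the fit settles
    phi = np.column_stack([-y[0:N-1], u[0:N-1], e_hat[0:N-1]])
    target = y[1:N]
    theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
    # refresh the residual estimates with the current parameters
    e_hat[1:N] = target - phi @ theta
print("estimated [a1, b1, c1]:", np.round(theta, 3))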
Box-Jenkins (BJ) Model
The Box-Jenkins model is a polynomial model written as

y(t) = [B(q⁻¹)/F(q⁻¹)] u(t) + [C(q⁻¹)/D(q⁻¹)] e(t),

in which the dynamics and the noise are characterized by separate rational polynomial functions. As a result, the BJ model is the better option when the input is not dominated by noise but the noise is primarily a measurement disturbance added afterwards; this structure allows more flexibility because the noise is modelled independently of the dynamics. The input-output relationship can also be written in regression form as

y(k) = ψᵀ(k) θ + e(k),

where ψ(k) is the regression vector formed from past values of y and u, and θ ∈ D_M ⊂ R^p is the parameter vector to be estimated [17].
Simulation & Results
The parameters of the ARMX model can now be chosen through a multistage estimation algorithm that selects the best-fit model from the estimated models based on the position of their estimated poles on the Z-plane, as shown in Figure 4. Optimizing the polynomial orders of the ARMX model is difficult, but this is the most suitable and direct method for finding the poles of the system and the scattering resonance frequencies. In the proposed methodology, the ARMX technique includes zeros in the parametric model, allowing for a more significant and accurate data representation. In order to optimize the computation of the model, an assessment of the time and frequency correlation properties of the path coefficients is used. The ARMX, ARX and BJ models are finally validated using real-life data from four independent experiments; specifically, the experimental data are used to assess the statistics and the auto-correlation functions of the large-scale loss and the short-term path gains. ARMX extends the ARX structure by providing more flexibility for modeling the trended noise, as shown in Figure 5, using the C parameters (a moving average of white noise); this makes ARMX the preferable option when the input is dominated by disturbances. Figure 6 shows the outputs of the ARMX, ARX and BJ models. Varying the parameters and comparing the performance of all three models, i.e., ARX, BJ and ARMX, the simulation results in Table 3 show that ARMX is the best-fit model and more accurate than the other parametric models. The estimation algorithm was developed to yield more accurate modal parameters than ARX models estimated using the least-squares criterion in the presence of unmeasured excitation; the least-squares error estimation is shown in Figure 7. As shown in the amplitude Bode plot of Figure 8, the resonant frequency peaks of the ARMX model can easily be identified as sharp distinct features, whereas for the ARX and BJ models the plot is smoother and the resonant frequencies are difficult to distinguish.
Input Parameters
The spectrum is only plotted for frequencies smaller than the Nyquist frequency. Table 4 shows that the results improve when the ARMX model is used, as compared to the existing techniques.
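For reference, the snippet below computes the normalized-RMSE "fit" percentage commonly used to compare an identified model's output against measured data; the paper does not state its exact accuracy formula, so this is only an assumed, illustrative metric evaluated on toy signals.

import numpy as np

def fit_percent(y_measured, y_model):
    """100 * (1 - ||y - y_hat|| / ||y - mean(y)||), i.e. an NRMSE-style fit."""
    num = np.linalg.norm(y_measured - y_model)
    den = np.linalg.norm(y_measured - np.mean(y_measured))
    return 100.0 * (1.0 - num / den)

t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * 5 * t)                     # "measured" response (toy)
y_good = y + 0.02 * np.random.default_rng(0).normal(size=t.size)
y_poor = 0.6 * np.sin(2 * np.pi * 5 * t + 0.5)

print("good model fit: %.1f%%" % fit_percent(y, y_good))
print("poor model fit: %.1f%%" % fit_percent(y, y_poor))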
Conclusion
After comparing the results of the simulations, including frequency diagrams, parameter estimations, pole-zero analyses and auto- and cross-correlations, it can be concluded that the best-fit model was provided by ARMX. This model can be described by 6 parameters with a delay of one sample. As a result, the ARMX model provides an accuracy of 97%, which is much higher than the corresponding accuracy of the ARX model (78%). Therefore, the accuracy of the ARMX model was improved by 19.3% as compared to the ARX model, and by 62% as compared to the BJ model. In addition, the detection accuracy improved by 7-8% using the ARMX model in comparison with the existing techniques. A 97% estimation accuracy can be deemed a successful identification of the real object based on acoustics. The results showed that ARMX is the best linear model because of its low number of parameters and high accuracy. This model will be validated in the future by conducting a probe analysis experiment, with the aim of consolidating the idea that ARMX can realistically provide a higher accuracy compared to the state-of-the-art techniques used for underwater object detection. | 2,598.4 | 2021-01-01T00:00:00.000 | [
"Engineering"
] |
Ackermann's Function in Iterative Form: A Proof Assistant Experiment
Ackermann's function can be expressed using an iterative algorithm, which essentially takes the form of a term rewriting system. Although the termination of this algorithm is far from obvious, its equivalence to the traditional recursive formulation--and therefore its totality--has a simple proof in Isabelle/HOL. This is a small example of formalising mathematics using a proof assistant, with a focus on the treatment of difficult recursions.
Introduction
The past few years have seen significant achievements in the mechanisation of mathematics [3], using proof assistants such as Coq and Lean. Here we examine a simple example involving Ackermann's function: how to prove, using Isabelle, the correctness of a system of rewrite rules for computing this function. The article also includes an introduction to the principles of implementing a proof assistant.
Formal models of computation include Turing machines, register machines and the general recursive functions. In such models, computations are reduced to basic operations such as writing symbols to a tape, testing for zero or adding or subtracting one. Because computations may terminate for some values and not others, partial functions play a major role and the domain of a partial function (i.e. the set of values for which the computation terminates) can be nontrivial [10]. The primitive recursive functions-a subclass of the recursive functions-are always total.
In 1928, Wilhelm Ackermann exhibited a function that was obviously computable and total, yet could be proved not to belong to the class of primitive recursive functions [10, p. 272]. Simplified by Rózsa Péter and Raphael Robinson, it comes down to us in the following well-known form:

A(0, n) = n + 1
A(m + 1, 0) = A(m, 1)
A(m + 1, n + 1) = A(m, A(m + 1, n))

Isabelle/HOL [13,14] is a proof assistant based on higher-order logic. Its underlying logic is much simpler than the type theories used in Coq, for example. In particular, the notion of a recursive function is not primitive to higher-order logic but is derivable. We can introduce Ackermann's function to Isabelle/HOL directly from these equations. The specification invokes internal machinery to generate a low-level definition and derive the claimed identities from it. Here Suc denotes the successor function for the natural numbers (type nat). It is easy to see that the recursion is well defined and terminating. In every recursive call, either the first or the second argument decreases by one, suggesting a termination ordering: the lexicographic combination of < (on the natural numbers) for the two arguments.
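As a purely illustrative transcription (the paper's actual definition is in Isabelle/HOL, not Python), the recursion can be written as follows; Python's recursion depth limits it to tiny arguments.

import sys
sys.setrecursionlimit(100_000)

def ack(m: int, n: int) -> int:
    """Direct recursive transcription of the three defining equations."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print([ack(m, n) for m in range(3) for n in range(4)])
print(ack(2, 3))   # 9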
Nevertheless, it's not straightforward to prove that Ackermann's function belongs to the class of computable functions in a formal sense. Cutland [6, p. 46-7] devotes an entire page to the sketch of a construction to show that Ackermann's function could be computed using a register machine, before remarking that "a sophisticated proof" is available as an application of more advanced results, presumably the recursion theorem. This raises the question of whether Ackermann's function has some alternative definition that is easier to reason about, and in fact, iterative definitions exist. But then we must prove that the recursive and iterative definitions are equivalent.
The proof is done using the function definition facilities of Isabelle/HOL and is a good demonstration of their capabilities to the uninitiated. But first, we need to consider how function definitions are handled in Isabelle/HOL and how the latter relates to symbolic logic.
Recursive function definitions in Isabelle/HOL
Isabelle's higher-order logic is a form of Church's simple type theory [5]. As with Church, it is based on the typed λ-calculus with function types (written α → β: Greek letters range over types) and a type of booleans (written bool ). Again following Church, the axiom of choice is provided through Hilbert's epsilon operator ǫx.φ, denoting some a such that φ(a) if such exists and otherwise any value.
For Church, all types were built up from the booleans and a type of individuals, keeping types to the minimum required for consistency. Isabelle/HOL has a multiplicity of types in the spirit of functional programming, with numeric types nat, int, real, among countless others. Predicates have types of the form α → bool, but for reasons connected with performance, the distinct but equivalent type α set is provided for sets of elements of type α.
Gordon [8] pioneered the use of simple type theory for verifying hardware. His first computer implementation, and the later HOL Light [9], hardly deviate from Church. Constants can be introduced, but they are essentially abbreviations. The principles for defining new types do not stretch things much further: they allow the declaration of a new type corresponding to what Church would have called "a non-empty class given by a propositional function" (a predicate over an existing type). These principles, some criticisms of them and proposed alternatives are explored by Arthan [2].
The idea of derivations schematic over types is already implicit in Church ("typical ambiguity"), and in most implementations is placed on a formal basis by including type variables in the calculus. Then all constructions involving types can be schematic, or polymorphic, allowing for example a family of types of the form α list, conventionally written in postfix notation. Refining the notion of polymorphism to allow classes of type variables associated with axioms-so-called axiomatic type classes-is a major extension to Church's original conception, and has required a thoroughgoing analysis [12]. However, those extensions are not relevant here, where we are only interested in finite sequences of integers.
There are a number of ways to realise a logical calculus on a computer. At one extreme, the implementer might choose a fast, unsafe language such as C and write arbitrarily complex code, implementing algorithms that have been shown to be sound with respect to the chosen calculus. Automatic theorem provers follow this approach. Most proof assistants, including Isabelle, take the opposite extreme and prioritise correctness. The implementer codes the axioms and inference rules of the calculus in something approaching their literal form: providing syntactic operations on types and terms while encapsulating the logical rules within a small, dedicated proof kernel. This LCF architecture [7] requires a safe programming language so that the proof kernel-which has the exclusive right to declare a formula to be a theorem-can be protected from any bugs in the rest of the system.
Formal proofs are frequently colossal, so most proof assistants provide automation. In Isabelle, the auto proof method simplifies arithmetic expressions, expands functions when they are applied to suitable arguments and performs simple logical reasoning. Users can add automation to Isabelle by writing code for say a decision procedure, but such code (like auto itself) must lie outside the proof kernel and must reduce its proofs to basic inferences so that they can pass through the kernel. In this way, the LCF architecture eliminates the need to store the low-level proofs themselves, a vital space saving even in the era of 32 GB laptops.
Sophisticated principles for defining inductive sets, recursive functions with pattern matching and recursive types can be reduced to pure higher-order logic. In accordance with the LCF architecture, such definitions are translated into the necessary low-level form by Isabelle/HOL code that lies outside the proof kernel. This code defines basic constructions, from which it then proves desired facts, such as the function's recursion equations.
In mathematics, a recursive function must always be shown to be well defined. Non-terminating recursion equations cannot be asserted unconditionally, since they could yield a contradiction: consider f (m, n) = f (n, m) + 1, which implies f (0, 0) = f (0, 0) + 1. Isabelle/HOL's function package, due to Alexander Krauss [11], reduces recursive function definitions to inductively defined relations. A recursive function f is typically partial, so the package also defines its domain D f , the set of values for which f obeys its recursion equations. 1 The idea of inductive definitions should be familiar, as when we say the set of theorems is inductively generated by the given axioms and inference rules. Formally, a set I(Φ) is inductively defined with respect to a collection Φ of rules provided it is closed under Φ and is the least such set [1]. In higher-order logic, I(Φ) can be defined as the intersection of all sets closed under a collection of rules: gives rise to a familiar principle for proof by induction. Even Church [5] included a construction of the natural numbers. Isabelle provides a package to automate inductive definitions [15].
Krauss' function package [11] includes many refinements so as to handle straightforward function definitions-like the one shown in the introduction-without fuss. Definitions go through several stages of processing. The specification of a function f is examined, following the recursive calls, to yield inductive definitions of its graph G_f and domain D_f. The package proves that G_f corresponds to a well-defined function on its domain. It is then possible to define f formally in terms of G_f and to derive the desired recursion equations, each conditional on the function being applied within its domain. The refinements alluded to above include dealing with pattern matching and handling easy cases of termination, where the domain can be hidden. But in the example considered below, we are forced to prove termination ourselves through a series of inductions.
For a simple example [11, §3.5.4], consider the everywhere undefined function given by U(x) = U(x) + 1. The graph is defined inductively by the rule (x, y) ∈ G_U =⇒ (x, y + 1) ∈ G_U, and similarly the domain is defined inductively by the rule x ∈ D_U =⇒ x ∈ D_U, each obtained by following the single recursive call. It should be obvious that G_U and D_U are both empty and that the evaluation rule x ∈ D_U =⇒ U(x) = U(x) + 1 holds vacuously. But we can also see how less trivial examples might be handled, as in the extended example that follows.
An Iterative Version of Ackermann's Function
A list is a possibly empty finite sequence, written [x₁, . . . , xₙ] or equivalently x₁ # · · · # xₙ # []. Note that # is the operation that extends a list from the front with a new element. We can write an iterative definition of A in terms of the following recursion on lists, the idea being to replace the recursive calls by a stack:

n # 0 # L −→ (n + 1) # L
0 # (m + 1) # L −→ 1 # m # L
(n + 1) # (m + 1) # L −→ n # (m + 1) # m # L

We intend that a computation starting with a two-element list will yield the corresponding value of Ackermann's function: [n, m] −→* [A(m, n)].
An execution trace for A(2, 3) begins like this: [3, 2] −→ [2, 2, 1] −→ [1, 2, 1, 1] −→ [0, 2, 1, 1, 1] −→ [1, 1, 1, 1, 1] −→ [0, 1, 0, 1, 1, 1] −→ [1, 0, 0, 1, 1, 1] −→ [2, 0, 1, 1, 1] −→ [3, 1, 1, 1] −→ · · · −→ [9]. We can regard these three reductions as constituting a term rewriting system [4], subject to the proviso that they can only rewrite at the front of the list. Equivalently, each rewrite rule can be imagined as beginning with an anchor symbol. A term rewriting system is a model of computation in itself. But termination isn't obvious here. In the first rewrite rule above, the head of the list gets bigger while the list gets shorter, suggesting that the length of the list should be the primary termination criterion. But in the third rewrite rule, the list gets longer. One might imagine a more sophisticated approach to termination based on multisets or ordinals; these however could lead nowhere because the second rewrite allows 0 # 1 # L −→ 1 # 0 # L and often these approaches ignore the order of the list elements.
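The following Python rendering of this stack discipline is included purely for illustration (the paper's development is in Isabelle/HOL); the spot-checks use the known values A(1, 5) = 7, A(2, 3) = 9 and A(3, 3) = 61.

def ackloop(stack):
    """Apply the three rewrite rules at the front of the list until one value remains."""
    while len(stack) > 1:
        n, m, *rest = stack
        if m == 0:
            stack = [n + 1] + rest            # A(0, n) = n + 1
        elif n == 0:
            stack = [1, m - 1] + rest         # A(m+1, 0) = A(m, 1)
        else:
            stack = [n - 1, m, m - 1] + rest  # A(m+1, n+1) = A(m, A(m+1, n))
    return stack[0] if stack else 0

def acklist(m, n):
    """Evaluate Ackermann's function iteratively via the two-element start list."""
    return ackloop([n, m])

assert acklist(1, 5) == 7 and acklist(2, 3) == 9 and acklist(3, 3) == 61
print(acklist(2, 3))   # 9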
Although some natural termination ordering might be imagined to exist, this system is an excellent way to demonstrate another approach to proving termination: explicit reasoning about the domain of definition. This is easy using Isabelle/HOL's function definition package [11].
The Iterative Version in Isabelle/HOL
We would like to formalise the iterative computation described above as a recursive function, ackloop, but we don't know that it terminates. Isabelle allows such a definition with the keyword domintros, indicating that we wish to defer the termination proof and reason explicitly about the function's domain. The domain, which is called ackloop_dom, is generated according to the recursive calls; our goal is to show that this set is universal (for its type). It is defined inductively to satisfy the following properties:

ackloop_dom (Suc n # L) =⇒ ackloop_dom (n # 0 # L)
ackloop_dom (1 # m # L) =⇒ ackloop_dom (0 # Suc m # L)
ackloop_dom (n # Suc m # m # L) =⇒ ackloop_dom (Suc n # Suc m # L)
ackloop_dom [m]
ackloop_dom []

For example, the first line states that if ackloop terminates for Suc n # L then it will also terminate for n # 0 # L, as we can see for ourselves by looking at the first line of ackloop. The second and third lines similarly follow the recursion. The last two lines are unconditional because there is no recursion.
It's obvious that ackloop_dom holds for all lists shorter than two elements. Its properties surely allow us to prove instances for longer lists (thereby establishing termination of ackloop for those lists), but how? On closer examination, remembering that ackloop represents the recursion of Ackermann's function, we might come up with the following lemma, ackloop_dom_longer: ackloop_dom (A(m, n) # L) =⇒ ackloop_dom (n # m # L). And this is the right-hand side of P(m + 1, n + 1). It must be stressed that when typing in the Isabelle proof of lemma ackloop_dom_longer, I did not have this or any derivation in mind. Experienced users know that properties of a recursive function f often have extremely simple proofs by induction on f.induct followed by auto (basic automation), so they type the corresponding Isabelle commands without thinking. We are gradually managing to shift the burden of thinking to the computer.
Completing the Proof
Given the lemma just proved, it's clear that every list L satisfies ackloop_dom, by induction on the length l of L: if l < 2 then the result is immediate, and otherwise L has the form n # m # L′, which the lemma reduces to A(m, n) # L′, and we are finished by the induction hypothesis.
A slicker proof turns out to be possible. Consider what ackloop is actually designed to do: to replace the first two list elements, n and m, by A(m, n). The function acklist, which maps n # m # L to A(m, n) # L and leaves shorter lists unchanged, codifies this point. As mentioned above, recursive function definitions automatically provide us with a customised induction rule; in the case of acklist, it performs exactly the case analysis sketched at the top of this section. So this proof is also a single induction followed by automation. Note the reference to ackloop_dom_longer, the lemma proved above. It is possible to reconstruct the details of this proof by running it interactively, as was done in the previous section. But perhaps it is better to repeat that these Isabelle commands were typed without having any detailed proof in mind, simply with the knowledge that they were likely to be successful. Now that ackloop_dom is known to hold for arbitrary L, we can issue a command to inform Isabelle that ackloop is a total function satisfying unconditional recursion equations, mentioning the termination result just proved. The equivalence between ackloop and acklist is another one-line induction proof: the induction rule for ackloop considers the five cases of that function's definition, which-as we have seen twice before-are all proved automatically. The equivalence between the iterative and recursive definitions of Ackermann's function is now immediate. We had a function that obviously terminated but was not obviously computable (in the sense of Turing machines and similar formal models) and another function that was obviously computable but not obviously terminating. The proof of the termination of the latter has led immediately to a proof of equivalence with the former.
Anybody who has used a proof assistant knows that machine proofs are generally many times longer than typical mathematical exposition. Our example here is a rare exception. | 3,599.4 | 2021-04-22T00:00:00.000 | [
"Computer Science"
] |
Anomalous phenomena in ECRH experiments at toroidal devices and low-threshold parametric decay instabilities
In this paper the possibility of total 3D trapping of electron Bernstein (EB) waves in the tokamak equatorial plane, in the vicinity of the local density maximum produced by the electron pump-out effect, is demonstrated. The threshold of the associated absolute (temporally growing) parametric decay instability (PDI) leading to anomalous absorption is predicted to be below 100 kW, and its growth rate is evaluated. Its possible role in explaining the ion acceleration observed in ECRH experiments, as well as in the redistribution of the deposited power, is discussed.
Introduction
Electron cyclotron resonance heating (ECRH) at power levels of up to 1 MW in a single microwave beam is routinely used in present-day tokamak and stellarator experiments and is planned for application in ITER for plasma heating, current drive and neoclassical tearing mode island control. Parametric decay instabilities (PDI) leading to anomalous reflection and/or absorption of microwave power are believed to be deeply suppressed in tokamak megawatt-level electron cyclotron (EC) resonance heating experiments utilizing gyrotrons at the fundamental-harmonic ordinary mode and the 2nd-harmonic extraordinary mode [1] - [3]. Therefore the wave propagation and absorption in these experiments are thought to be well described by linear theory and thus predictable in detail.
However during the last decade a number of observations have been obtained evidencing presence of anomalous phenomena that accompany ECRH experiments at toroidal devices.
First of all, non-local electron transport was shown to accompany ECRH in some cases, indicating that the RF power is not deposited in the regions predicted by standard theory but is rather redistributed very quickly over the whole plasma [4].
Secondly, the first observations of a backscattering signal in the 200-600 kW second-harmonic ECRH experiments at the Textor tokamak were reported [5], [6]; these can be explained in terms of anomalous backscattering of the EC pump waves.
Finally, fast ion generation was observed during the ECRH pulse under conditions in which the energy exchange between electrons and ions should be very low [7], [8].
It is worth noting that the latter two phenomena were observed with a non-monotonic plasma density profile, caused in each specific case by a different physical mechanism, such as the features of plasma confinement in the magnetic island or the so-called electron pump-out effect originating from the anomalous convective particle fluxes away from the EC layer during intensive ECRH.
A novel low-threshold mechanism of PDI excitation was proposed in [9] based on analysis of the actual Textor density profile. It was shown that the local maximum of the plasma density, which is usually observed at the O-point of the magnetic island at Textor [10], can lead to localization of the low-frequency ion Bernstein (IB) decay wave and thus to suppression of the IB wave convective losses in the radial direction. A more complicated 2D analysis of the IB wave propagation, accounting for the poloidal inhomogeneity of the magnetic field in toroidal plasma, has shown the possibility of IB wave localization in the poloidal direction as well [11], [12]. The threshold of the backscattering PDI was calculated in this case and shown to be more than four orders of magnitude lower than the prediction of standard theory (in the range of 50 kW for the Textor experiment parameters).
Quite recently, the possibility of 3D localization of the IB wave and of low-threshold absolute PDI excitation was reported in [13]. The analysis, performed at fusion-relevant parameters for the planned ECRH experiments in JET, predicts a threshold of the absolute PDI in the range of 100 kW, approximately four orders of magnitude lower than that predicted by the standard theory [1] - [3] and an order of magnitude lower than the threshold of the fast convective PDI. Nevertheless, at low plasma density and temperature the growth rate of this instability appears to be too small to make it important for the energy budget.
It should be stressed, however, that at these plasma parameters another scenario of the low-threshold absolute PDI of the extraordinary-mode EC wave, leading to its anomalous absorption via decay into a low-frequency IB wave and a high-frequency electron Bernstein (EB) wave, may also be realized for a non-monotonic density profile. The EB wave in this case is trapped in the equatorial plane near the density maximum in a 3D toroidal cavity, which substantially decreases the instability threshold, whereas the IB wave propagates to the nearest ion cyclotron harmonic layer and accelerates ions.
In the present paper the experimental conditions leading to 3D EB wave trapping and a substantial reduction of the threshold for anomalous absorption in ECRH experiments are analyzed. The excitation of the absolute PDI is predicted and the corresponding threshold is shown to be much smaller than that provided by the standard theory [1-3].
Theoretical approaches
To elucidate the physics of the absolute PDI we analyze the simplest three-wave interaction model that is nevertheless relevant to the experiment [8], in which the extraordinary-mode pump wave propagates almost perpendicular to the magnetic field H in the density-inhomogeneity direction x, with its polarization vector directed mostly along the poloidal direction y. We represent a wide microwave beam of the extraordinary-mode pump wave, propagating from the launching antenna into the plasma along the major radius in the tokamak mid-plane, by expression (1), where c.c. denotes the complex conjugate, z stands for the toroidal direction (periodic in a tokamak), P is the pump wave power and w is the beam waist. The decay of the extraordinary-mode pump wave (1) into the daughter IB and EB waves is described by the basic set of integro-differential equations (2). The integral operators D̂ in (2) are defined in a weakly inhomogeneous plasma in terms of the electron and ion susceptibilities, which are defined at a fixed coordinate r, consist of real and imaginary parts, and take the familiar form for homogeneous plasmas [14], [15]. Additional notation used in (2) is introduced accordingly.
The reduced Bernstein wave equations
As was shown in [9], [11][12][13], the PDI threshold decreases substantially when one of the daughter waves (the IB wave in the particular case of those references) is trapped at least in the x-direction.
This is also possible for the EB wave if the turning point of its dispersion curve and the local maximum of the non-monotonic density profile are close to one another. Seeking a solution of the system localized near this point guarantees the existence of two nearby turning points ("warm" to "hot" mode) of the EB wave dispersion curve in the plasma, and EB wave trapping between them leads to complete suppression of the corresponding convective losses. The trapping of the EB wave is illustrated in Figure 1, where its dispersion curves are shown. The poloidal dependence of the magnitude of the magnetic field can ensure the localization of the EB wave in the poloidal direction as well. As shown in Figure 2 by the EB wave ray-tracing analysis, performed accounting for the tokamak equilibrium for the same parameters and profiles as used in Figure 1, the ray trajectory is localized in a finite plasma volume. The phase portraits of the motion in the radial and poloidal directions are represented by the elliptic curves shown in Figure 3, which correspond to finite motion. Accordingly, in the vicinity of the EB wave turning point, the local maximum of the dispersion function in the radial direction x_E (below we assume x_E = 0) and the minimum of the magnetic field in the poloidal direction y = 0, the system of integro-differential equations (2) with (3) and (4) reduces to a simpler set of equations in coordinates along and across the magnetic field on the magnetic surface.
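As a purely illustrative aside, the toy code below demonstrates the ray-tracing technique itself (Hamilton's equations dx/dt = ∂ω/∂k_x, dk_x/dt = −∂ω/∂x) for an invented model dispersion ω = c(x)·sqrt(k_x² + k_y²) with a duct minimum at x = 0. This is not the EB-wave dispersion of the paper; it only shows how a local extremum traps a ray and produces a closed (elliptic-like) phase-space orbit.

import numpy as np

c0, L, ky = 1.0, 1.0, 2.0

def c(x):      return c0 * (1.0 + (x / L) ** 2)
def dc_dx(x):  return 2.0 * c0 * x / L ** 2

def rhs(state):
    x, kx = state
    kmag = np.hypot(kx, ky)
    return np.array([c(x) * kx / kmag,        # dx/dt = d(omega)/dkx
                     -dc_dx(x) * kmag])       # dkx/dt = -d(omega)/dx

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.0, 1.5])                  # launch at the duct centre
xs = []
for _ in range(4000):
    state = rk4(state, 0.01)
    xs.append(state[0])

# The excursion in x stays bounded: the ray is trapped around x = 0.
print("max |x| along the ray:", round(max(abs(v) for v in xs), 3))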
Threshold and growth rate of the absolute PDI
We analyze the parametric excitation of the EB wave toroidal cavity using a perturbation-theory approach. Following the approach of [17], at the first step of the perturbation procedure we assume that the trapped EB wave and the IB wave propagating between two nearby IC harmonics are not interacting. Thus, assuming the EB wave PDI pumping (the right-hand side of equation (6)) and its damping (the second term on the left-hand side of equation (6)) to be small, we neglect them in the zero-order approximation and obtain an equation that can be solved by separation of variables. At the next step of the perturbation procedure we take the EB wave damping and the PDI pumping into account. Assuming the EB wave is localized in the x-direction in a very narrow layer close to x_E = 0, where the IB wave damping is weak, we neglect the last term on the left-hand side of equation (5), which describes the nonlinear excitation of the IB wave, its damping, and its losses both along the magnetic field and in the radial direction. We integrate equation (5) to obtain the IB wave amplitude entering (6) and, using the zero-order solution (7) for the EB wave potential on the right-hand side of (6), we obtain the first-order perturbation-theory corrections to the eigenfrequency of the localized EB wave. Namely, multiplying both sides of the equation by the complex-conjugated zero-order eigenfunction (7) and integrating over the coordinates, we obtain for the EB eigenmode (k, n) a condition, equation (9), connecting its growth (decay) rate with the pump wave power, where R_E stands for the major radius at x_E = 0. We illustrate equation (9) in the most dangerous case, the excitation of the fundamental EB mode (k = 0, n = 0), which possesses the minimal PDI threshold. Neglecting, for the sake of simplicity, the mismatch of the decay condition by putting q = 0, we obtain an expression, equation (10), for the growth (damping) rate of the PDI; the growth rate is large enough to complicate quasi-linear saturation of the instability by variation of the plasma parameters due to MHD or transport phenomena. The dependence of the PDI threshold pump power on the microwave beam waist, obtained from equation (10) and shown in Figure 5, deserves a few comments. At small beam waist the IB wave diffraction losses along the magnetic field line dominate over its radial convective losses; moreover, the EB wave poloidal localization region is then larger than the PDI pump region. As shown in Figure 5, in this limit the PDI threshold increases as w decreases. The opposite limit, in which the beam waist is large and the pump region exceeds the localization region, is also shown in Figure 5. The minimal threshold value is more than an order of magnitude smaller than that calculated for the induced-backscattering absolute PDI [13] for the same TCV parameters. The growth rate is an order of magnitude higher than in the latter case, which makes the decay of the 2nd-harmonic extraordinary mode into electron and ion Bernstein waves a main candidate for explaining the fast ion acceleration observed in experiment [8]. This mechanism, leading to microwave power absorption far from the actual ECR layer position, can to some extent be responsible for the non-local electron transport effect often observed in ECRH experiments [4]. The saturation of the analysed absolute instability is provided by nonlinear effects, among which one should mention the amplitude-dependent stochastic damping of the IB wave or the further decay of the weakly damped EB daughter wave into an EB mode of the toroidal cavity and an IB wave.
In both cases one may expect intensive interaction of the produced IB waves with the ion component and the production of a fast-ion tail. Investigation of the nonlinear saturation of the instability, as well as a description of the fast convective instability which takes place in the vicinity of the pump beam, will be given in forthcoming papers.
Conclusion
Summarizing, we would like to stress the possible role of the low-threshold absolute PDI predicted in this paper in the anomalous absorption of microwave power and, in particular, in the fast ion production often observed in second-harmonic ECRH in toroidal plasmas. | 2,826.6 | 2012-01-01T00:00:00.000 | [
"Physics"
] |
Evaluation of Ethyl Acetate, Chloroform and Toluene Fractions of Thevetia peruviana (Pers.) K. Schum Methanolic Leaf Extract for Uterotonic Activity
Thevetia peruviana (Pers.) K. Schum (Apocynaceae) leaves have a reputation of abortifacient activity. We investigated the traditional claim and found that the methanolic leaf extract produces antifertility activity by lowering the progesterone level in a rat model. The aim of the present study was to find out the chemical constituent(s) responsible for the antifertility activity of the methanolic leaf extract of Thevetia peruviana (Pers.). The ethyl acetate, chloroform and toluene fractions of the methanolic extract of T. peruviana leaves freed from cardiac glycosides [TPL-Me-G] were selected for phytochemical investigation and in-vitro uterotonic activity. The methanolic extract of T. peruviana leaves (1.103 g) was fractionated with toluene (100 ml, n=20), chloroform (100 ml, n=20) and ethyl acetate (100 ml, n=20) in successive order. These fractions were examined for phytoconstituents and evaluated for in-vitro uterotonic activity. The toluene fraction (TPL-T) was found to contain triterpenes, flavonoids and phytosterols; quercetin (0.8904%) is present in TPL-T. The chloroform fraction (TPL-Ch) was found to contain flavonoids, triterpenes and phytosterols. The presence of alkaloids and flavonoids (quercetin 0.1606%) was observed in the ethyl acetate fraction (TPL-Et-Ac). In contrast to TPL-Et-Ac, TPL-T and TPL-Ch induced dose-dependent uterine contraction in the isolated estrogenized rat uterus model. The highest uterotonic activity was found with TPL-Me-G, which additionally contains kaempferol as a phytoconstituent. The in-vitro uterotonic activity is not influenced by quercetin; the primary contributor is kaempferol, though some unknown phytoconstituent(s) also contribute to the uterotonic activity and synergize the action of kaempferol. Further research is therefore needed to identify the other contributory unknown phytoconstituent(s) responsible for the antifertility activity of the methanolic leaf extract of Thevetia peruviana (Pers.).
Introduction
To meet the ever-increasing need for safe and economical antifertility agents, plants with an ethnopharmacological or ethnobotanical reputation are now in focus [1]. An exhaustive literature survey revealed that Thevetia peruviana (Pers.) K. Schum (Apocynaceae) is one of the 577 plants used traditionally for regulation of female fertility [2]. T. peruviana is a rich source of secondary metabolites such as flavonoids and terpenoids [3][4][5]. Different parts of the T. peruviana plant (fruits, leaves, seeds) have been found to possess cardiac glycosides such as thevetin A, thevetin B, neriifolin, peruvoside, thevetoxin and ruvoside [5]. Research on T. peruviana has shown that it exhibits several pharmacological effects in life-threatening diseases such as cancer and AIDS, as well as in common pathophysiological conditions such as inflammation, pain, and microbial and fungal infections.
Recently, T. peruviana has been explored for its male antifertility potential. A methanolic extract of T. peruviana stem bark causes a significant decrease in spermatogenic elements and in the weight of reproductive organs in male rats [6].
In our previous study, we have found that cardiac glycoside free methanolic extract of T. peruviana leaves (Quercetin 0.0326% and Kaempferol 0.138%) exhibited significant (p<0.001) antifertility activity by decreasing the serum progesterone level in female rat model [7].
In the present study, we aimed to identify the chemical constituent(s) responsible for the antifertility activity of the methanolic extract of T. peruviana leaves (freed from cardiac glycosides). For this purpose, the methanolic extract of T. peruviana leaves was fractionated successively with toluene, chloroform and ethyl acetate. The fractions were subjected to phytochemical investigation and evaluated for uterotonic activity.
Collection and Extract Preparation
T. peruviana leaves were collected from the vicinity of Panchkula (30.74°N, 76.80°E), Haryana, India, in September 2011. The plant specimen was identified at the Botany Department, Punjab University, Chandigarh, India, by comparison with a preserved specimen (specimen number PAN/5046) in the department. Air-dried leaves were defatted with petroleum ether by continuous hot extraction, and the defatted leaves were freed from cardiac glycosides by boiling with 80% methanol:ethanol (8:2) for 3 h at 45°C by the modified method of Oluwaniyi and Ibiyemi, 2007 [7,8]. The treated leaves were extracted with methanol (solid-to-solvent ratio 1:10) by cold maceration at 27°C. The methanolic extract of T. peruviana leaves (pretreated to remove cardiac glycosides) was designated TPL-Me-G and dried to a solid mass with a rotary evaporator.
These fractions of TPL-Me-G were further subjected to HPTLC analysis. A Desaga Densitometer CD60 and HPTLC plates (100 mm x 100 mm) precoated with silica gel GF254 were employed. TPL-T, TPL-Ch and TPL-Et-Ac were individually dissolved in methanol to prepare stock solutions (50 mg/ml). Standard solutions (0.5 mg/ml) of two flavonoids, kaempferol (KAMP) and quercetin (QUE), were also prepared in methanol. Standard and test solutions (10 µl) were applied to the precoated HPTLC sheet. The selected mobile phase was n-butanol:acetone:water (4:1:5). The twin-trough TLC chamber was lined with Whatman No. 1 filter paper and allowed to saturate with the vapours of n-butanol:acetone:water (4:1:5). The HPTLC plates were developed with the mobile phase up to 80 mm, then removed from the TLC chamber, dried at 45°C and scanned at 300 nm.
Animals
To perform uterotonic activity, estrogen primed rat uterus was required. Female virgin Sprague Dawley (SD) rats (50-60g body weight) were used to get estrogen primed rat uterus.
Uterotonic Activity
The effects of TPL-T, TPL-Ch and TPL-Et-Ac were evaluated on the isolated estrogen-primed rat uterus [11,12]. To obtain estrogen-primed uteri, virgin female rats were injected subcutaneously with 17-β-estradiol benzoate (13.28 nM per animal) 24 h before removal of the uteri. The 17-β-estradiol benzoate oil-base injection (5 mg/ml) was supplied by Macmillon Pharmaceutical Ltd., Amritsar, India. The treated rats were sacrificed by decapitation and the uteri were removed promptly. The estrogen-primed uteri were freed of connective tissue and cut into small strips (1 cm long). Each uterine strip was mounted in a 20 ml organ bath filled with fresh De Jalon solution. The composition of the De Jalon solution (mM) was NaCl 153.85, KCl 5.64, CaCl2 0.648, NaHCO3 5.95 and glucose 2.78. The organ bath was maintained at 37 ± 0.5°C with a continuous supply of air. A thirty-minute equilibration period was allowed for each preparation, during which the bathing solution was changed three times, once every 10 minutes. After the equilibration period, cumulative uterine contractile responses were recorded after addition of oxytocin (0.05-1.50 µM). TPL-T, TPL-Ch and TPL-Et-Ac were added over a concentration range of 0.05-1.60 mg/ml to evoke the same response, and the uterine muscle contraction was noted after each concentration of each extract. Log-concentration versus response curves were constructed using GraphPad Prism 6. An isotonic transducer connected to a single-channel recorder was used to record the contractions; the transducer was calibrated to record changes in tension on a g versus mm displacement basis, and the tension applied to the preparation was 0.5 g.
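For readers without access to GraphPad Prism, a log-concentration versus response fit of the kind described above can be sketched in a few lines. The snippet below is a minimal illustration, not the analysis actually used in the study: the four-parameter Hill function, the starting guesses and the data points are all assumed placeholder values.

```python
# Hypothetical sketch: fitting a log-concentration vs response (Hill) curve to
# estimate EC50. Data values are illustrative placeholders, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_c, bottom, top, log_ec50, slope):
    """Four-parameter logistic response as a function of log10 concentration."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - log_c) * slope))

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])     # mg/ml (illustrative)
resp = np.array([5.0, 12.0, 30.0, 55.0, 78.0, 90.0])  # % of maximal contraction

popt, _ = curve_fit(hill, np.log10(conc), resp, p0=[0, 100, np.log10(0.5), 1.0])
print(f"Estimated EC50 = {10 ** popt[2]:.3f} mg/ml")
```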
Fractionation and Chemical Characterization of Phyto-constituents
TPL-Me-G (1.103 g) was fractionated with toluene (100 ml, n=20), chloroform (100 ml, n=20) and ethyl acetate (100 ml, n=20). The mass dissolved in toluene was 0.353 g, in chloroform 0.155 g and in ethyl acetate 0.075 g. Phytochemical investigation revealed that the toluene and chloroform fractions contain triterpenes, alkaloids, flavonoids and phytosterols, whereas the ethyl acetate fraction contains only flavonoids [Table 1]. TLC of all these fractions was performed using different solvent systems to identify phytosterols and essential oils, flavonoids and alkaloids. TPL-T and TPL-Ch showed the presence of phytosterols, flavonoids and alkaloids, whereas the ethyl acetate fraction showed the presence of only flavonoids. The presence of the flavonoids kaempferol and quercetin was confirmed by TLC using the solvent system n-butanol:acetone:water (4:1:5) and reconfirmed by HPTLC in the same solvent. Quercetin and kaempferol in TPL-Me-G were quantified as 0.0326% and 0.138%, respectively [Table 2]. The presence of quercetin in TPL-T (0.8904%) and TPL-Et-Ac (0.1606%) was confirmed [Table 2]. These phytochemically characterized extracts were then used for further studies.
Discussion
TPL-Ch (EC50 0.525 mg/ml) and TPL-T (EC50 0.930 mg/ml) evoked contractions of the isolated rat uterus that were directly proportional to the concentration of extract employed. TPL-Et-Ac (EC50 6 mg/ml) had only a very mild effect on uterine contraction. Watcho et al. (2011) reported uterotonic activity of ethanolic extracts of Ficus asperifolia fruits in rats, with concentration-dependent contraction of the isolated estrogenized rat uterus, which is consistent with our observations [12]. Urugo et al. (1998) demonstrated uterotonic properties of the methanol extract of Monechma ciliatum [13], and Veale et al. (1999) reported contractile effects of Agapanthus africanus on the isolated rat uterus [14]. These observations and our previous reports [7,15] are also in agreement with the present study.
The presence of flavonoids as phytoconstituents is a governing factor for many antifertility agents [16]. TPL-Et-Ac contained quercetin (0.1606%) as its flavonoid and yet showed negligible uterotonic activity, indicating that this flavonoid, particularly quercetin, is not responsible for the uterotonic effect. TPL-Ch was found to contain unknown substances that are neither kaempferol nor quercetin, yet this fraction induced uterine contraction in the isolated estrogenized rat uterus model to a significant extent. The toluene fraction (TPL-T) was found to contain triterpenes, flavonoids and phytosterols; quercetin (0.8904%) was present in TPL-T, but the fraction was devoid of kaempferol. TPL-T was also able to induce uterine contraction in the isolated estrogenized rat uterus model in a dose-dependent manner. The activity in this case was not linked to the presence of quercetin or kaempferol, meaning that the toluene fraction elicits in-vitro uterotonic activity independently of these flavonoids.
TPL-Me-G (EC 50 , 0.170 mg ml -1 ) [7] produced highest uterotonic activity in comparison with its fractions which may suggest that kaempferol (flavonoid) and these unknown substances produced synergistic action to exhibit antifertility activity.
Conclusion
The uterotonic activity of TPL-Ch is due to phytoconstituents other than quercetin and kaempferol, and the in-vitro uterotonic activity of TPL-T was elicited independently of kaempferol. These results emphasize that some phytoconstituent(s) apart from kaempferol are able to produce uterotonic activity. The negligible uterotonic effect of TPL-Et-Ac emphasizes that quercetin plays no role in this activity. TPL-Me-G itself, however, produces higher uterotonic activity than any of its fractions, which points to a synergistic action of the unknown phytoconstituent(s) together with kaempferol.
The antifertility activity of Thevetia peruviana leaves is attributable to the presence of phytoconstituents yet to be identified and needs further investigation. Semisynthetic modification of kaempferol and of the other chemical constituent(s) may lead to new antifertility agents. | 2,431.8 | 2021-04-26T00:00:00.000 | [
"Chemistry"
] |
Simulation and experimental study of active noise barrier based on multiple-channel and decentralized system design
An active noise barrier (ANB) helps to enhance the noise reduction performance of a noise barrier, especially in the low-frequency range. However, it is difficult to realize an ideal effect in the far field, as most of the existing work on ANB takes the realization of a soft boundary at the edge of the noise barrier as the target. An active control strategy based on decentralized system design is proposed in this paper. Virtual sensor technology is used to obtain the far-field noise signal with the help of near-field sensors, while the multiple-channel design of both centralized and decentralized systems is studied. An experimental ANB system was then built in a semi-anechoic chamber. The performance of the above control strategies was calculated with the actually measured parameters, and the advantages and disadvantages of the different systems were compared. The effectiveness of the new strategy was verified through noise reduction tests of active noise control. The ANB established in this paper, which is based on decentralized system design, can be extended directly to large-scale use in the future.
Introduction
The noise barrier is a common control measure for occupational or traffic noise. Noise barriers can usually isolate medium- and high-frequency sound propagation effectively, but their effect decreases in the low-frequency range, while active noise control technology usually performs better in the low-frequency band [1]. The two can therefore be combined in a hybrid active-passive design to improve the noise reduction performance of noise barriers.
A lot of work has been done on active noise barriers. Researchers at Nanjing University have carried out relevant research on virtual noise barriers and have studied engineering applications of virtual noise barriers for scenarios such as sound-insulating windows. Most of the established work revolves around the influence of the structure and physical placement of the noise barrier on the noise reduction [2][3][4][5][6][7]. Some work has also reached the stage of product-type experiments; for example, Ohnishi [8] designed a feedback active noise barrier system and installed a 20 m-long active noise barrier on a road for experiments. Duhamel et al. [9] adopted the FxLMS algorithm to make the noise attenuation exceed 10 dB. Ohnishi et al. [10] adopted a feedback algorithm with an insertion loss of 4-5 dB. Kwon et al. [11] adopted the multichannel FxLMS algorithm, using 6 microphones and 4 loudspeakers, and achieved 20 dB(A) noise reduction at 190 Hz. Zou et al. [12] adopted a decentralized feedforward ANC system to reduce transformer line-spectrum noise, with a noise reduction of 15 dB in the near area. One common drawback of these works is their poor far-field performance.
From the current research, the application of active noise control (ANC) technology in noise barriers mainly take two forms, named centralized system design and decentralized system design.The centralized system uses a multiple-channel centralized system in the form of a unified processing unit for all channels and combines the near-field or (and) far-field error sensor signals to implement noise reduction in a specified area, which can usually achieve some optimal strategy, but the disadvantage is that the system is more complex and also limited by the number of channels, which is difficult to be applied in practical large-scale situations.The decentralized design [13][14][15] decomposes the centralized calculation of a single controller into a number of controllers, typically by using a series of single-channel controllers to control the sound field at localized points in the near field to realize a soft boundary at the top of the barrier.The decentralized design is often less effective than the centralized system design in the target area because it uses only a local noise reduction strategy.
Based on the non-stationary and non-deterministic characteristics of actual sound sources and the demand of large-scale of actual noise barriers, a decentralized adaptive active noise control system is studied in this paper, which incorporates the centralized pre-parameter design and virtual sensing methods.An ANB is established in semi-anechoic chamber, then control strategies are studied, and several typical methods are compared with the new strategy based on the parameters of the experimental ANB.
Method
Berkhoff proposed a virtual sensor technique for active noise barrier systems, compared the noise reduction obtained when the error signal is the far-field sensor signal, the near-field sensor signal or the virtual sensor signal, and pointed out that the use of virtual sensors can achieve the same effect as using the far-field sensor. The active noise control system in this paper is based on Berkhoff's architecture of a single-channel decentralized system and uses virtual sensing in order to achieve a larger control area in the far field.
In a single-channel system, the relevant variables are the primary acoustic disturbance d, the reference signal x, the secondary source control signal u, the near-field error signal and the far-field error signal. From the physical model, minimizing the near-field error signal does not imply that the far-field error signal is minimized. In order to minimize the far-field error signal, the transfer matrix between the near-field and far-field signals should be constructed, so that the far-field error signal can be predicted in real time. The relationship between these signals, given in equation (1), involves the transfer functions from the primary acoustic disturbance to the reference microphone, to the near-field error microphone and to the far-field error microphone, respectively, and the transfer functions from the secondary source control signal to the reference microphone, to the near-field error microphone and to the far-field error microphone, respectively.
The active noise barrier noise reduction model is shown in Figure 1; it also includes the transfer function between the positions of the near-field and far-field error microphones, and the acoustic contributions of the primary and secondary sources at the far field. The predicted far-field error signal is obtained, in real time, as the sum of the estimated primary-source contribution and the estimated secondary-source contribution at the far field. The primary-source contribution is predicted by propagating the estimated near-field disturbance through the near-to-far transfer function, while the secondary-source contribution is estimated from the real-time value of the control signal u through the corresponding secondary path. The key to implementing the system is therefore the set of transfer functions involved, which can be obtained by off-line identification. The above method does not strictly consider the effect of a multiple-channel system. A common diagram of a multiple-channel system is shown in Figure 2, where the wide arrow denotes an array of signals. Consider first the simplest multiple-channel near-field noise reduction strategy, in which the error signals of the near-field microphones are minimized to achieve noise reduction; only the near-field error is inside the control loop, while the far-field error is outside it. The block diagram is illustrated in Figure 3.
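The virtual-sensing step described above can be summarized in a short sketch. The filter names (s_n, s_f, h_nf) and the FIR representation are assumptions made for illustration; the actual transfer functions in the paper are obtained by off-line identification.

```python
# Minimal sketch: predict the far-field error from the near-field microphone and
# the control signal. All impulse responses are assumed, identified off-line.
import numpy as np

def fir(h, x):
    """Causal FIR filtering of signal x with impulse response h."""
    return np.convolve(x, h)[: len(x)]

def predict_far_field(e_near, u, s_n, s_f, h_nf):
    d_near_hat = e_near - fir(s_n, u)   # estimated primary disturbance at the near-field mic
    d_far_hat = fir(h_nf, d_near_hat)   # primary contribution propagated to the far field
    y_far_hat = fir(s_f, u)             # secondary-source contribution at the far field
    return d_far_hat + y_far_hat        # virtual (predicted) far-field error
```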
The primary-path matrices represent the transfer functions from the primary source to the error sensors, and the secondary-path matrices represent the transfer functions from the secondary sources to the error sensors. For simplicity, the feedback paths from the secondary sources to the reference sensors are ignored here. The matrix W represents the adaptive filters; as there are K×J channels in the control system, each channel has a separate filter.
When a simplified design is used in which the reference signals, secondary sources and error signals correspond to each other one by one, that is, when J = K = M, W is an M×M matrix. Since each error sensor is most sensitive to its corresponding secondary source, the system can degenerate into a decentralized system when the effect of the other secondary sources is ignored. The W of a decentralized system degenerates into an array of length M, in which each element is a single-channel filter.
When virtual sensors are used, the above system needs two additional transfer-function matrices (the near-to-far prediction matrices for the primary and secondary contributions), both of size N×M, and the calculation becomes more complex, as shown in Figure 4. In each iteration of the algorithm, K×J filters must be evaluated with the real-time value of u to obtain the estimated near-field contributions, after which 2×N×M filters must be evaluated to obtain the estimated far-field contributions. Generally, N ≥ M is required to ensure the stability of the system. A centralized system therefore needs a large amount of real-time computation; because the processing is so complicated, the algorithm usually cannot execute in real time with numerous channels and needs to be simplified. As before, the implementation is expected to adopt a decentralized structure, where J = K = M and W is an M×M matrix, as shown in Figure 5. Since each error sensor is sensitive to its corresponding secondary source and perhaps to several nearby ones, W behaves as a banded sparse matrix. If the influence of the other secondary sources is ignored, the system further degenerates into a decentralized system, in which W becomes an array of length M and the overall system behaves as a combination of single-channel systems. Whether the noise reduction performance of the simplified system remains adequate has to be checked; this paper establishes an actual system and carries out the simulation and experimental verification.
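A single decentralized control unit of the kind described above can be sketched as a filtered-x LMS loop driven by the virtual far-field error. The class below is a simplified illustration under assumed filter lengths and step size; it is not the controller code actually used in the experiments.

```python
import numpy as np

class FxLMSUnit:
    """One channel of a decentralized ANC controller (illustrative sketch only)."""
    def __init__(self, s_f_hat, taps=64, mu=1e-3):
        self.w = np.zeros(taps)              # adaptive control filter of this channel
        self.x_buf = np.zeros(taps)          # reference-signal history
        self.fx_buf = np.zeros(taps)         # filtered-reference history
        self.s_f_hat = np.asarray(s_f_hat)   # estimated secondary path to the virtual far-field point
        self.mu = mu                         # assumed step size

    def step(self, x_n, e_far_hat):
        """x_n: new reference sample; e_far_hat: current virtual far-field error."""
        self.x_buf = np.roll(self.x_buf, 1)
        self.x_buf[0] = x_n
        u_n = self.w @ self.x_buf                                # secondary-source drive
        fx_n = self.s_f_hat @ self.x_buf[: len(self.s_f_hat)]    # reference filtered by secondary path
        self.fx_buf = np.roll(self.fx_buf, 1)
        self.fx_buf[0] = fx_n
        self.w -= self.mu * e_far_hat * self.fx_buf              # FxLMS coefficient update
        return u_n
```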
Noise barrier setup
In the semi-anechoic chamber (cut-off frequency below 80 Hz), a noise barrier 5.0 m long and 1.6 m high was installed. The primary sound source is a loudspeaker at a height of 1.2 m and a straight-line distance of 2.0 m from the noise barrier. As secondary sources, 8 loudspeakers are arranged with an interval of 0.6 m along the top of the noise barrier. A photo of the noise barrier arrangement is shown in Figure 6 and the measurement diagram is shown in Figure 7. Considering the limitation of the cut-off frequency of the semi-anechoic chamber and the requirement that the spacing of the secondary sources be less than half a wavelength, the design frequency band of the active noise control system was set to 100-300 Hz. The noise reduction performance of the active noise control models can be simulated directly from theoretical models of sound radiation and sound-barrier diffraction; however, with the actual application in mind, the relevant primary-path, secondary-path and near-to-far transfer functions were measured, and the simulation was conducted using the actually measured parameters.
Simulation results
The theoretical noise reduction of each system is discussed in the form of simulation based on the experimental system, and the key parameters are obtained through actual measurement.The simulation mainly considers the following system designs: centralized system, decentralized system near-field optimization algorithm and decentralized system combined with virtual sensor algorithm.Due to the actual measurement of the parameters, when the primary sound source is set up, the noise transmission of different active noise control methods to the far-field evaluation point can be calculated.
The feedback term in the active noise control model can be suppressed by using techniques such as directional microphones, so the feedback pathway will not be considered here for simplicity.
The theoretical noise reduction of the centralized system at the far-field evaluation points is shown in Figure 8, with 8 curves representing the frequency-based noise reduction effect of 8 far-field evaluation points.Due to the more ideal assumption, the insertion loss of active noise control system is quite high.The theoretical noise reduction of the decentralized system with near-field optimization is shown in Figure 9.The theoretical noise reduction of the decentralized system combined with virtual sensor algorithm is shown in Figure 10.From comparison and analysis of the above algorithms, the centralized system has the best theoretical noise reduction performance, the decentralized system with near-field optimization has very little noise reduction effect at far-field evaluation points, and the decentralized system with virtual sensor still has significant noise reduction performance at far-field evaluation points.Due to its simple structure, it has more practical value compared to the centralized system.
Experiment and discussion
The assessment of the active noise control was carried out by means of a comparison test with the active control off and on. Measurement points were located at different heights and distances. The implementation of the single-channel algorithm is based on the BMILP ANC2.0 controller developed by the Institute of Urban Safety and Environmental Science, Beijing Academy of Science and Technology. The circuit of the BMILP ANC2.0 controller is based on an ADSP21489 chip running an adaptive active noise control code, with a sampling rate adjustable up to 48 kHz; the controller also has a high-quality power amplifier and can support the implementation of the algorithm in this paper. The distribution of the insertion loss of the active control system was further tested, as shown in Figure 12. Due to the size limitation of the semi-anechoic chamber, the experiment range extends up to 4.0 m in the horizontal direction, with a maximum height of 3.1 m. The noise reduction performance is good in the shadow zone behind the noise barrier and decreases with height. Since each controller takes the point at a distance of 2.0 m as its target control point, the experimental results also show that the noise reduction performance is best within this range, which is consistent with the design.
The comparable simulated distribution of the sound field is shown in Figure 13.The experimental results and simulated results show a quite similar distribution pattern.From the experimental results and simulated results, the noise reduction strategy based on decentralized system in this paper can achieve effective noise reduction in a large area.Overall, the additional noise reduction is more balanced in the whole test area than other methods.The active noise barrier design of this paper can be easily extended to a larger size of noise barrier in noise reduction practice.
Conclusions
Based on the decentralized system architecture, an active control strategy for noise barriers was proposed.In this method, a virtual error signal was derived from the near-field microphone at the top of the noise barrier and thus far-field sound pressure reduction was achieved using active noise control.The effectiveness of this method is verified through simulation and experimental tests.The results show that the active noise barrier system achieves effective noise reduction in a wide range of space behind the noise barrier in the design target frequency band.
Compared with the common centralized system active noise barrier, the method avoids the problem of sophisticated system design and pre-testing as well as the large demands of real time computation, and it can be directly extended to large scale noise barrier applications.Compared with the common decentralized active noise barrier, the method can realize noise reduction in a longer distance range after the barrier, avoiding the problem that the current active noise barrier is only efficient to near-field noise reduction.
The verification has been carried out only in the semi-anechoic chamber at present.For the application over longer distances and wider frequency bands, the further verification and optimization work of the algorithm should be continued in the future.
Figure 1. Model of active noise control.
Figure 2. Structure of a multiple-channel ANC system.
Figure 4. Block diagram of adaptive multiple-channel feedforward ANC system with virtual sensor technology (centralized system).
Figure 5. Block diagram of adaptive multiple-channel feedforward ANC system with virtual sensor technology (decentralized system).
Figure 6. Photo of noise barrier experiment.
Figure 8. Insertion loss of centralized ANC system.
Figure 9. Insertion loss of decentralized ANC system with near-field optimization.
Figure 10. Insertion loss of decentralized ANC system with virtual error sensor.
The noise reduction result is shown in Figure 11, which gives the noise reduction in the frequency domain at evaluation points located behind the noise barrier at distances of 1.0 m, 2.0 m and 4.0 m, respectively. Because of the small size of the secondary sources in the experiments, the noise reduction decreases at lower frequencies.
Figure 11. Frequency-related noise reduction of active noise control by experiment.
Figure 13. Simulated noise reduction distribution of the sound field.
Figure 12. Noise reduction distribution of active noise control by experiment. | 3,718.4 | 2023-09-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Analysis of Rocket Sled Vibration Signal Transmission Based on Zigbee Application
The effects of multipath propagation and the Doppler effect on missile-borne tester signal transmission based on Zigbee were researched. The maximum transmission distance was derived from a communication channel loss model and a multipath model, and the effect of multipath on transmission distance was discussed. The Doppler shift was studied as a function of the rocket sled speed and the receiver position, and the transmission rate and error rate of the missile-borne tester under Doppler effects were analysed. Measures to reduce multipath fading and Doppler effects were obtained. These can improve signal transmission quality and guide the upgrading of equipment.
Introduction
In the rocket sled test, the tested product bears large vibrations due to track irregularity, the vibration of the working rocket, aerodynamic instability and so on. Even after vibration-damping measures are applied to the tested product, the vibration and speed parameters of the tested product still need to be monitored along the whole track; locating faults of the tested product requires the most original test data in order to ensure its safety. To ensure the safety of the test data, the data must be transmitted to ground storage [1,2]. Therefore, the signal transmission quality of the sled-borne tester is particularly important in the rocket sled test.
This paper analyzes the influence of multipath effect, Doppler frequency deviation and Doppler expansion factors on the transmission quality of sled vehicles at high speed, and puts forward measures to improve the quality of signal transmission, which is of great significance to guide the test of rocket sled.
Influence of Zigbee communication channel loss on signal transmission distance
The Friis free space equation defines the received power, and hence the achievable distance, as [3,4]
P_R(d) = P_T G_T G_R λ² / ((4π)² d²),
where P_T is the transmitter power, P_R(d) the receiver power, d the distance between receiver and transmitter (in m), G_T the transmitting antenna gain, G_R the receiving antenna gain, and λ the wavelength (in m). According to the characteristics of the 802.15.4a channel, the IEEE organization has carried out measurements in real environments and constructed a channel transmission loss model based on the 802.15.4a channel, suitable for UWB (2-10 GHz) and for 100 MHz-1000 MHz [5]. In this loss formula (2), P_T is the transmitter power, d the distance between transmitter and receiver, A_ant the antenna attenuation factor, s the standard deviation, n the path-loss correction coefficient and k the frequency-effect coefficient; d_0 is the reference distance, f_c the reference centre frequency, equal to 5 GHz (UWB 2-10 GHz band), and P_Lo the loss at the reference distance. Taking the logarithm of formula (2) expresses the path loss in decibels, with d in m and f in GHz.
The maximum distance equation (4) is obtained by solving the loss model for d. Measured parameter values [6] were given for the IEEE 802.15.4a channel model, as shown in Table 1
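As a rough illustration of how the maximum distance follows from the link budget, the snippet below inverts the free-space loss for d. The transmit power, receiver sensitivity and antenna gains are assumed example values, not the measured Zigbee module parameters of Table 1.

```python
# Illustrative free-space (Friis) range estimate; all parameter values are assumptions.
import math

def friis_max_distance(p_t_dbm, p_r_min_dbm, g_t_dbi, g_r_dbi, freq_hz):
    lam = 3e8 / freq_hz
    link_budget_db = p_t_dbm + g_t_dbi + g_r_dbi - p_r_min_dbm
    # Free-space path loss 20*log10(4*pi*d/lam) equals the link budget at maximum range
    return lam / (4 * math.pi) * 10 ** (link_budget_db / 20)

# Example: +4 dBm transmitter, -95 dBm sensitivity, 2 dBi antennas, 2.4 GHz
print(friis_max_distance(4, -95, 2, 2, 2.4e9))   # maximum distance in metres
```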
The influence of multipath effect on signal transmission distance
In some cases, the amplitude of the vector plus signal is lower than the amplitude of the direct signal due to the mutual cancellation of the direct and reflected signals, which is called the multipath fading [7] , which is caused by the multipath effect, as shown in Figure 1 [8] .
When the direct signal and the reflected signal are opposite in phase, the signal received by the receiving antenna fades by a depth L, and L reaches its extreme value at this point (equation (5)). The Zigbee wireless transmission module works at 2.4 GHz; with a sled-vehicle antenna height of 1 m and a ground reflection coefficient of 0.9, the relationship between distance and fading computed from formula (4) is shown in Figure 2.
Fig. 3. Fading depth versus the reflection coefficient.
At certain horizontal distances between the sled car and the gateway, the fading depth reaches an extreme value. The fading-depth extremes appear near 60 m; their exact locations depend on the transmitting and receiving antenna heights, while the fading depth is determined by the magnitude of the ground reflection coefficient (Fig. 3): the greater the reflection coefficient, the greater the fading depth.
When the distance between the sled car and the gateway is 60 m, the signal attenuation due to multipath fading is L = 14.3 dB, from which the total signal attenuation at 60 m can be obtained. The calculation shows that even at the strongest fading the link still meets the requirement of a 1 km communication distance. In order to further reduce the influence of multipath fading, the height of the gateway can be reduced from 4 m to 1.5 m; the fading depth is then reduced from -14.3 dB to -2.2 dB, the fading-depth extreme moves forward to 20 m, and the total signal attenuation is reduced to -73.9 dB. At the same time, the reflection coefficient can be reduced by keeping the ground dry or laying absorbing materials near the 60 m region, and the fading depth can be reduced to 2-10 dB.
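The location and depth of the fading extremes can be explored with a simple two-ray (direct plus ground-reflected) sketch. The geometry below is a simplification, so the exact extremum positions and depths will not necessarily reproduce the quoted numbers; the antenna heights, frequency and reflection coefficient follow the values quoted in the text.

```python
# Two-ray fading sketch: direct path plus ground reflection (simplified geometry).
import numpy as np

def two_ray_fading_db(d, h_t=1.0, h_r=4.0, freq=2.4e9, refl=0.9):
    lam = 3e8 / freq
    r_direct = np.hypot(d, h_r - h_t)            # direct-path length
    r_reflect = np.hypot(d, h_r + h_t)           # reflected-path length (image source)
    phase = 2 * np.pi * (r_reflect - r_direct) / lam
    # Field amplitude of the vector sum, relative to the direct path alone
    ratio = np.abs(1 + refl * (r_direct / r_reflect) * np.exp(-1j * phase))
    return 20 * np.log10(ratio)

d = np.linspace(5, 200, 2000)
fade = two_ray_fading_db(d)
print("deepest fade: %.1f dB at %.0f m" % (fade.min(), d[fade.argmin()]))
```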
The influence of Doppler frequency offset on data transmission
The relative motion between the sled and the gateway makes the radio wave undergo an obvious frequency shift. The frequency offset caused by the Doppler effect is related to the moving speed of the sled, its direction of movement and the arrival angle of the incoming wave at the receiver [9,10]: with the direction of sled motion taken as the x axis and the vertical direction as the y axis, the coordinate system is set up and the Doppler shift follows from the sled speed and the geometry. For carrier frequencies of 2.4 GHz to 2.4835 GHz, the Doppler frequency deviation curves for different sled speeds are shown in Figure 5. The Doppler frequency offset increases with the speed of the sled; the maximum frequency deviation is 5696 Hz at a speed of 700 m/s, and the Doppler frequency deviation varies with the sled position. As the sled passes the gateway the frequency offset jumps from positive to negative, with a maximum swing of 11394 Hz. Figure 6 shows the Doppler frequency deviation as a function of the vertical distance between the gateway and the sled: the deviation decreases as the distance between the gateway and the track increases.
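The Doppler offset as a function of sled position can be sketched directly from the standard relation f_d = (v/c) f_c cos(theta). The gateway offset and the sampled positions below are assumed illustration values.

```python
# Doppler shift seen at the gateway as the sled passes; geometry values are assumed.
import numpy as np

def doppler_shift(v, x_sled, y_gateway, f_c=2.4e9, c=3e8):
    """f_d = (v/c) * f_c * cos(theta), theta = arrival angle of the incoming wave."""
    cos_theta = -x_sled / np.hypot(x_sled, y_gateway)  # positive while approaching
    return (v / c) * f_c * cos_theta

x = np.linspace(-500, 500, 5)            # sled positions along the track (m)
print(doppler_shift(v=700.0, x_sled=x, y_gateway=50.0))  # offsets in Hz
```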
Doppler expansion influence on the signal transmission rate
In the time domain there is a corresponding quantity, the correlation (coherence) time [11], which is the statistical mean of the time interval over which the channel impulse response stays essentially constant, i.e. over which two time-domain signal samples remain strongly correlated in amplitude. Its relationship with the Doppler spread is an inverse proportionality to the maximum Doppler frequency offset f_d.
Therefore, to avoid the signal distortion caused by the Doppler shift, the data rate must sufficiently exceed the channel fading rate, so that the channel exhibits slow fading [12]. With a carrier frequency of 2.4 GHz and a sled speed of 700 m/s, the fading rate follows from equation (8). According to reference [13], when the required bit error rate is 10^-3 to 10^-4, the Doppler frequency offset should be 0.01 to 0.02 of the signal transmission bandwidth, that is, the data rate should exceed the fading rate by a factor of 100 to 200. When the data transmission rate reaches 980 kb/s to 1960 kb/s, a bit error rate of 10^-3 to 10^-4 can be guaranteed.
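A back-of-the-envelope check of the rate condition is sketched below. Taking the fading rate as roughly twice the maximum Doppler shift is an assumption; the exact minimum rate depends on how the fading rate is defined, so the numbers need not coincide exactly with the 980-1960 kb/s quoted above.

```python
# Rough minimum data-rate estimate from the Doppler shift; factors are assumptions.
def min_data_rate(v=700.0, f_c=2.4e9, c=3e8, margin=(100, 200)):
    f_d = v / c * f_c                 # maximum Doppler shift (Hz)
    fading_rate = 2 * f_d             # assumed fading rate (~positive-to-negative swing)
    return tuple(m * fading_rate for m in margin)

lo, hi = min_data_rate()
print(f"required data rate roughly {lo/1e3:.0f}-{hi/1e3:.0f} kb/s")
```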
Conclusions and Countermeasures
(1) Mature Zigbee modules on the market can meet the requirement for the data transmission distance. Reducing the height of the gateway and lowering the reflection coefficient in the multipath reflection region by laying absorbing material can reduce the multipath effect.
(2) The Doppler frequency deviation is proportional to the running speed of the sled and inversely related to the vertical distance from the gateway to the track.
(3) The data transfer rate of the Zigbee module is not sufficient for reliable data transmission at a speed of 700 m/s. (4) When the data storage space is 128 B and the running speed of the sled exceeds 156 m/s, data transmission using Zigbee wireless technology cannot meet the requirement of a bit error rate below 10^-4.
"Engineering",
"Computer Science"
] |
Hidden Tensor Structures
Any single system whose space of states is given by a separable Hilbert space is automatically equipped with infinitely many hidden tensor-like structures. This includes all quantum mechanical systems as well as classical field theories and classical signal analysis. Accordingly, systems as simple as a single one-dimensional harmonic oscillator, an infinite potential well, or a classical finite-amplitude signal of finite duration can be decomposed into an arbitrary number of subsystems. The resulting structure is rich enough to enable quantum computation, violation of Bell’s inequalities, and formulation of universal quantum gates. Less standard quantum applications involve a distinction between position and hidden position. The hidden position can be accompanied by a hidden spin, even if the particle is spinless. Hidden degrees of freedom are, in many respects, analogous to modular variables. Moreover, it is shown that these hidden structures are at the roots of some well-known theoretical constructions, such as the Brandt–Greenberg multi-boson representation of creation–annihilation operators, intensively investigated in the context of higher-order or fractional-order squeezing. In the context of classical signal analysis, the discussed structures explain why it is possible to emulate a quantum computer by classical analog circuit devices.
I. INTRODUCTION
Quantum computation [1][2][3][4] begins with a single quantum digit, represented by a vector |n⟩, n ∈ {0, . . ., N − 1}, an element of an orthonormal basis in some N-dimensional Hilbert space. Typical quantum digits, the qubits, correspond to N = 2 and are modeled by polarization degrees of photons, spins of electrons, or states of a two-level atom. Larger values of N are often realized in practice by means of multi-port interferometers. In order to process more than one digit, one typically considers quantum registers consisting of multi-particle quantum systems. The mathematical structure that allows K quantum N-dimensional digits to be combined into a single register is the tensor product of K copies of the N-dimensional space (equation (1)). This is how one introduces the so-called computational-basis representation of a number [5], |n⟩ = |l_{K−1}⟩ ⊗ · · · ⊗ |l_0⟩ with n = Σ_j l_j N^j. Tensor-product bases (1) are characterized by scalar products of the form ⟨n|n′⟩ = δ_{l_{K−1} l′_{K−1}} · · · δ_{l_0 l′_0}, which means that different numbers correspond to orthogonal vectors, and two numbers are different if they differ in at least one digit.
A general K-digit quantum state is represented by a general superposition with amplitudes ψ_{l_{K−1}...l_0}. The vector |ψ⟩ is said to be a product state if the amplitude factorizes, ψ_{l_{K−1}...l_0} = f_{l_{K−1}...l_L} g_{l_{L−1}...l_0}, into a product of at least two amplitudes; we then write ψ = f ⊗ g. Non-product |ψ⟩ are called entangled. The distinction between product and entangled states occurs at the level of the probability amplitudes ψ_{l_{K−1}...l_0}, and not at the level of the computational-basis vectors, which are non-entangled by construction. Products of amplitudes occur also in probability amplitudes for paths in interferometers, sometimes termed histories, so they can easily be confused with hidden tensor products, the subject of the present paper. The distinction is most easily explained by the Kronecker product of matrices, a matrix form of the tensor product. The four-index tensor X_{ab} Y_{cd} = Z_{abcd} is a matrix element of the Kronecker product X ⊗ Y = Z. A path occurs if b = c; this is a three-index object, X_{aj} Y_{jd} = Z_{ajd}, representing a process a → j → d (or d → j → a), joining a with d and passing through j, hence the name path or history. If we additionally sum over the intermediate states, we obtain a transvection Σ_j X_{aj} Y_{jd} = Z_{ad}. The transvection, when interpreted in terms of paths, is equivalent to interference. Each transvection removes a pair of indices. An example of a transvection is given by the matrix product of two matrices, XY = Z. A path or history is thus an intermediate concept, halfway between tensor and matrix products.
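The distinction between a tensor product, a path and a transvection can be made concrete with a tiny numpy illustration (the matrices are arbitrary):

```python
# Tensor product keeps four indices, a 'path' keeps the intermediate index,
# and a transvection (matrix product) sums it out.
import numpy as np

X = np.arange(4).reshape(2, 2)
Y = np.arange(4, 8).reshape(2, 2)

Z_tensor = np.einsum('ab,cd->abcd', X, Y)   # Z_{abcd} = X_{ab} Y_{cd}
Z_path = np.einsum('aj,jd->ajd', X, Y)      # a -> j -> d, intermediate index kept
Z_trans = np.einsum('aj,jd->ad', X, Y)      # sum over j: ordinary matrix product
assert np.allclose(Z_trans, X @ Y)
```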
The above brief introduction illustrates several important conceptual ingredients of the formalism of quantum computation.First of all, one thinks in terms of a hardware whose building block is a K-digit quantum register.Secondly, one works from the outset with finitedimensional Hilbert spaces which, however, cannot occur in practice.And indeed, there are no true N -level atoms -all atoms involve infinitely many energy levels.Even objects as elementary as photons or electrons are described by infinitely dimensional Hilbert spaces.All elementary particles involve Hilbert spaces of squareintegrable functions, and all such spaces are separable [6].The latter means that one can always introduce a countable basis |n , indexed by integer or natural numbers.
The basis is countable even in case continuous degrees of freedom are present.
The goal of this paper is to show that any separable Hilbert space is naturally equipped with infinitely many hidden tensor-product structures.As opposed to the 'bottom-up' tensor structures (1) that demand large numbers K of elementary systems, we are interested in the 'top-down' tensor structure inherent in any single quantum system, even as simple as a one-dimensional harmonic oscillator.One should not confuse the resulting structure with the one of interfering paths, an intrinsic feature of any quantum dynamics.
At the present stage, we are more interested in the issue of fundamental principles than in their implementation in practice.However, as a by-product of our discussion we will show that hidden tensor structures are in fact sometimes literally hiding behind some well known quantum-mechanical or quantum-like constructions.The problem of multi-photon squeezed states provides an example.
On the implementation side, our approach explains why it is possible to emulate a quantum computer by means of classical analog circuit devices [7][8][9].
II. HIDDEN TENSOR PRODUCTS
To begin with, let N ∈ ℕ be a natural number. Any integer n can be uniquely written as n = Nk + l, where k = ⌊n/N⌋ and l is the remainder of the division n/N. The map n ↔ (k, l) is one-to-one. What makes these trivial observations important from our point of view is the following orthonormality property of the basis vectors in our separable Hilbert space: ⟨Nk + l|Nk′ + l′⟩ = δ_{kk′} δ_{ll′}, a formula typical of a tensor-product structure (3). Actually, the basis |n⟩, if parametrized as |n⟩ = |Nk + l⟩ = |k, l⟩ = |k⟩ ⊗ |l⟩, possesses the required properties of a basis in a tensor-product space. Formula (9) is central to the paper. The procedure can be iterated: k = Nk_1 + l_1, etc. (which is essentially the Euclidean algorithm for the greatest common divisor), leading to n = N^M k + Σ_{j=0}^{M−1} l_j N^j, with integer k and l_j ∈ {0, . . ., N − 1}. Notice that the parametrization we use implies that the value of M is the same for all integers n. So this is not exactly the usual representation of a number n by its digits, because k can be arbitrarily large or negative.
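The reindexing is pure bookkeeping and is easy to make explicit. The following sketch implements the bijection n ↔ (k, l) and its iterated, fixed-M 'modular' form; the numerical values are arbitrary.

```python
# Hidden-tensor reindexing: n = N*k + l, iterated into a fixed-M parametrization.
def split(n, N):
    return n // N, n % N            # k = floor(n/N), l = remainder

def modular_digits(n, N, M):
    """Return (k_M, [l_{M-1}, ..., l_0]) with n = N**M * k_M + sum_j l_j * N**j."""
    ls = []
    k = n
    for _ in range(M):
        k, l = split(k, N)
        ls.append(l)
    return k, ls[::-1]

n = 2023
k, ls = modular_digits(n, N=2, M=3)
assert n == 2**3 * k + sum(l * 2**j for j, l in enumerate(ls[::-1]))
print(k, ls)
```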
For arbitrary fixed M we find a tensor-product orthonormality condition, where the ks are integers and 0 ≤ l j < M .However, one should bear in mind that, in consequence of ( 10)- (11), we can find for L = M , and thus (13) dos not apply to In other words, one cannot treat the index M in |k M , l M−1 , . . ., l 1 , l 0 as a tensor power in the sense we know from Fock spaces.Rather, we should treat k as an external degree of freedom (such as position or momentum), and l j as internal degrees of freedom (such as spin).There is some similarity between k and l j and the modular variables [10][11][12][13][14][15].
However, for any n there exists a maximal M such that The ls in (15) are the digits of n.Then, the following variant of (13) holds good, Orthonormality ( 16) is typical of Fock spaces.We conclude that hidden tensor products can be regarded as Fock-type ones if one parametrizes (nonnegative) integers by their digits.If instead we prefer the 'modular' version with a fixed M , then the first index, k M , is an integer.Thus, in order to distinguish between the two hidden tensor structures we will speak of modular hidden products if we work with (10)- (11), and Fockian hidden products, if we employ (15).
Let us finally illustrate on a concrete example the dependence of entanglement on N .
As an example, a state can be a product state with respect to the decomposition defined by N = 3 and yet entangled (non-product) with respect to another decomposition; conversely, a state that is entangled for N = 3 may be a product state if we choose N = 10. Expressions (20) and (22) represent the same state ψ decomposed into different subsystems, because two different forms of the tensor product are employed. The resulting multitude of hidden tensor structures is reminiscent of the better known bottom-up alternative tensor-product structures [16,17].
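Numerically, the dependence of entanglement on N amounts to reshaping the amplitude vector into a (k, l) matrix and computing its Schmidt rank. The example amplitudes below are arbitrary placeholders, built as a product with respect to N = 10 and therefore generically entangled with respect to N = 3.

```python
# Product vs entangled depends on the chosen N: reshape amplitudes and check rank.
import numpy as np

def schmidt_rank(psi, N):
    """Rank of psi_{k,l} with n = N*k + l; rank 1 means a product state."""
    K = len(psi) // N
    mat = np.asarray(psi[: K * N], dtype=complex).reshape(K, N)
    return np.linalg.matrix_rank(mat)

# amplitudes on |0>, ..., |29>, built as a product with respect to N = 10
left = np.random.randn(3) + 1j * np.random.randn(3)
right = np.random.randn(10) + 1j * np.random.randn(10)
psi = np.kron(left, right)

print(schmidt_rank(psi, N=10))   # 1: product for N = 10
print(schmidt_rank(psi, N=3))    # generically > 1: entangled for N = 3
```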
III. ASSOCIATIVITY VS. NESTING
Modular hidden products are non-associative.Namely, if we write Indeed, whereas Actually, the very form ( 26) is inconsistent, because N l 1 + l 0 is not limited from above by N − 1, and thus cannot be treated as an l variable.However, the product possesses a nesting which allows for an analogue of tensor multiplication, namely Here the indices l j do not change their values as we switch between the alternative forms.In quantum mechanical notation we could write them in a nested form, still maintaining the fundamental property of tensor products, The braces can be skipped but we have to remember the rule (30) that determines the order of multiplications, The form (31) is indistinguishable from the standard quantum notation.
IV. HIDDEN SUBSYSTEMS
A subsystem is defined by specifying the form of subsystem observables: The 'left' subsystem observable, and the 'right' subsystem observable, The so-called reduced density matrices are a consequence of ( 32) and (34).
In the next section we show that hidden tensor products indeed hide behind certain well known theoretical constructions.
V. BG REPRESENTATION
Hidden tensor structures imply that one can define hidden subsystems of any quantum system.In particular, the hidden tensor structure of a one-dimensional harmonic oscillator, admits subsystem observables of the form, Q ⊗ I N and I ∞ ⊗ R.
In order to make this statement more formal, let us consider two vectors in some Hilbert spaces of appropriate dimensions.The hidden tensor product is defined by In this way the Hilbert space H of a one-dimensional harmonic oscillator becomes a tensor-product Hilbert space H ∞ ⊗ H N with the basis |k, l .Let us stress that we have not introduced a new Hilbert space, because H = H ∞ ⊗ H N .This is still the same textbook Hilbert space of the one-dimensional harmonic oscillator, which should be clear from ( 40)-( 42).The creation operator in H ∞ can be defined in the usual way, (we distinguish between b † that acts in H ∞ , and the or- . This is a typical subsystem operator, related to subsystem canonical position X ∞ and momentum Now, let us consider its hidden tensor product with the identity operator Obviously, A N and A † N operate in H, and Contrary to appearances, one should not automatically identify A N with a.Let us for the moment forget about the subtleties with associativity and nesting, whose analysis we shift to the Appendix, and perform the following calculation: Its result is a formula typical of Brandt-Greenberg (BG) N -boson creation-annihilation operators [18].However, known from the literature relationship between the BG operators and the ordinary a and a † is much more complicated, and does not resemble (45) in the least.On the other hand, one indeed finds [19] but its proof based on (49) is far less evident than the trivial calculation from (48).Furthermore, note that the forms (44)-( 45) are trivially normally ordered, whereas the simplest normally ordered form of the BG operator A N one finds in the literature, is Expressions ( 45) and ( 49) are so different that one may really have doubts if we have not abused notation by denoting both operators by the same symbol.Yet, we will now directly show that we have not abused notation -both forms of A N represent the same operator.A N looks so simple if one realizes that this is a tensor-product operator, but with respect to the hidden tensor structure, implicit in the separable Hilbert space of a one-dimensional harmonic oscillator.
By definition of ⊗ and |k, l , formula (44) implies that It is clear from (55) that the only non-zero matrix elements of A † N are n + N |A † N |n for some n.Because of the one-to-one property of the map n → (k, l) we can replace the double sum in (55) by a single sum over n.It is simplest to evaluate directly the matrix element The only non-vanishing term in the double sum corresponds to n = N k + l.The inverse map implies k = ⌊n/N ⌋, l = n − N ⌊n/N ⌋, so that and which is indeed the BG operator.One concludes that the BG creation operator is the usual creation operator b † but operating in the 'left' subsystem defined by |k, l = |N k + l .Alternatively, one can directly write |k, l = 1 From a purely conceptual point of view it is perhaps even more important that it is allowed to go against the whole tradition of research on BG operators, and treat A N just as a form of a, that is begin with the hidden tensor structure defined by a = b ⊗ I N and |n BG coherent states were intensively investigated in the 1980s as a possible alternative to N -boson squeezedstates e za †N −za N |0 .The latter are mathematically problematic [20,21], whereas e zA † N −zAN |0 are as wellbehaved as the ordinary coherent states in H.The N = 2 case of e zA † N −zAN |0 indeed leads to squeezing of variances of position X ∼ a + a † and momentum P ∼ i(a † − a), although qualitatively different from the squeezing implied by e za † 2 −za 2 |0 [22].
A general state representing the hidden subsystem associated with H ∞ can be described by means of its reduced density matrix ρ ∞ .To this end, consider a general Reduced density matrix ρ N is defined analogously,
VI. HIDDEN STATISTICS OF A COHERENT STATE A. Two hidden subsystems
Consider the coherent state satisfying a|z⟩ = z|z⟩. Its hidden statistics of excitations is determined by the diagonal matrix elements of the reduced density matrices. For N = 2, the 'left' statistics describes the probability of finding the number of excitations within the interval [2k, 2k + 1]; for N = 3 it is the analogous probability for [3k, 3k + 2], and the generalization to arbitrary N is obvious. The statistics of the 'right', N-dimensional subsystem gives the probability of finding an nth excitation with n ≡ l mod N. Formulas (69)-(71) provide an operational definition of the hidden subsystems.
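The hidden excitation statistics of a coherent state are obtained by regrouping the Poisson weights by k = ⌊n/N⌋ and by l = n mod N. The value of |z|² and the truncation in the sketch below are illustrative choices.

```python
# Regrouping the Poisson photon statistics of |z> into 'left' and 'right' subsystems.
import numpy as np
from scipy.stats import poisson

z2 = 2.5                      # |z|^2, the mean number of excitations (illustrative)
n = np.arange(200)
p_n = poisson.pmf(n, z2)      # Poisson statistics of the coherent state

N = 2
p_left = np.array([p_n[n // N == k].sum() for k in range(10)])  # P(k): 'left' subsystem
p_right = np.array([p_n[n % N == l].sum() for l in range(N)])   # P(l): 'right' subsystem
print(np.round(p_left[:4], 4), np.round(p_right, 4))
```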
B. Three hidden subsystems
The three-subsystem decomposition of the harmonicoscillator Hilbert space is obtained by means of and The map n → (k 1 , l 1 , l 0 ) is one-to-one as a composition of two one-to-one maps: n → (k 0 , l 0 ) and k 0 → (k 1 , l 1 ).The three reduced density matrices read For (68) and N = 2 we find Comparison with (69) and (70) shows that probabilities (78) could have been obtained directly also from the twosubsystem case of N = 4, but the remaining two hiddensubsystem single-qubit reduced density matrices would have no counterpart for N = 4. Here, for N = 2 and the three-subsystem decomposition, we obtain The above two reduced 2×2 single-qubit density matrices represent states of two single-qubit hidden subsystems of a one-dimensional harmonic oscillator in a coherent state |z .Such a formal structure has no counterpart in the usual textbook presentation of a quantum harmonic oscillator.
VII. POSITION VS. HIDDEN POSITION
For N = 2 we naturally obtain an analogue of a spinor structure, even if the particle in question is spinless and one-dimensional, we find ψ 0 |ψ 1 = 0 and In position representation, a formula that allows us to discuss probability of hidden spin and probability density of hidden position, namely where Resolution of unity implies We can finally pinpoint the difference between position and hidden position.Probability density of position is given by The normalization is preserved,
VIII. PARITY AS A HIDDEN SPIN
In order to explicitly construct U (2) spinor transformations of the hidden spin, we define where U ll ′ is a unitary 2 × 2 matrix.In terms of ψ n the transformation reads and where Transformation I ⊗ U is indeed unitary, and acts only on the binary (spinor) indices.The Bloch vector is defined in the usual way as As an example consider a free one-dimensional scalar particle on the interval [−L/2, L/2], with the standard scalar product Let us take the basis of real standing waves, For antisymmetric ψ(x) we find ψ 2j = ψ j,0 = 0; for symmetric ψ(x) we find ψ 2j+1 = ψ j,1 = 0, so that where ψ(−x) = s 3 ψ(x).In this concrete basis, hidden spin one-half can be identified with parity.Symmetric or antisymmetric ψ(x) satisfy A change of basis, for example by selecting different boundary conditions, will rotate s on the Bloch sphere, and will influence the relation between symmetry of ψ(x) and the form of ρ hid (x).
IX. HIDDEN QUANTUM COMPUTATION
The very idea that a subspace (N K -dimensional, say) of a Hilbert space of a single quantum system can be used to define a K-digit computational basis -is not in itself new [23].However, our parametrization is not just a vector representation of n in terms of digits.This is especially visible in the formula linking hidden tensor product with modular arithmetic of integers.Representation in terms of hidden tensor products links number-theoretic structures with internal symmetries in Hilbert spaces.For example, the map is unitary if a and N are coprime.Analogously, many theorems of classical number theory can be automatically translated into theorems about unitary transformations on tensor-product Hilbert spaces -and, perhaps, the other way around.
On the other hand, thinking in terms of tensor products is typical of many quantum algorithms, so once we identify a hidden tensor structure we can follow certain well established strategies of quantum information processing.
In such a new paradigm, one begins with as many subsystem decompositions as one needs for a given task, in exact analogy to the two-subsystem and three-subsystem decompositions we have just discussed on the coherentstate example.Since the subsystems can be uniquely identified, it remains to define appropriate universal quantum gates, formulated according to the standard procedures discussed in the literature [23,24].
As an illustration of universal quantum gates acting on a hidden coherent-state subsystem of a one-dimensional oscillator, consider the Hadamard gates H_0 and H_1. At the level of the eigenvectors of a†a, these operators act pairwise on the amplitudes with n = 2k and n = 2k + 1, i.e. on the hidden qubit. Of particular interest is the action of these two universal gates on the coherent state, a|z⟩ = z|z⟩, whose amplitudes at n = 2k and n = 2k + 1 get mixed accordingly.
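A minimal numerical sketch of such a gate acts pairwise on the Fock amplitudes ψ_{2k} and ψ_{2k+1} of a single oscillator. Whether the sign convention below corresponds to H_0 or H_1 of the text is left open; the coherent-state amplitudes are truncated for illustration.

```python
# Hadamard gate on the hidden qubit l0 of a single oscillator (sign convention assumed).
import numpy as np
from math import factorial

def hadamard_on_l0(psi):
    psi = np.asarray(psi, dtype=complex)
    out = psi.copy()
    for k in range(len(psi) // 2):
        a, b = psi[2 * k], psi[2 * k + 1]
        out[2 * k] = (a + b) / np.sqrt(2)
        out[2 * k + 1] = (a - b) / np.sqrt(2)
    return out

# coherent-state amplitudes z^n / sqrt(n!) (unnormalized, truncated)
z = 0.7
psi = np.array([z ** n / np.sqrt(factorial(n)) for n in range(20)])
print(np.round(hadamard_on_l0(psi)[:4], 4))
```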
X. HIDDEN ENTANGLEMENT
The hidden spinor structure discussed in Sec.VII was constructed by means of the zeroth bit l 0 .One can analogously define spins higher than one-half, or construct systems analogous to multi-particle entangled states.As our final example, let us analyze the problem of the Bell inequality [25] and its violation by hidden-spin singletstate correlations.
Consider N = 2 and the three-subsystem decomposition of the harmonic-oscillator basis for some ψ k .The two sets of Pauli matrices, representing bits in position 1 or 0, are defined by means of the tensorproduct structure in the usual way: and The standard calculation leads to (128), (130), and (132) refer only to the excited states |n = |4k 1 + 2l 1 + l 0 of a single one-dimensional harmonic oscillator, with no 'bottom-up' multi-particle tensor products (1).Obviously, average (133) violates Belltype inequalities [25], a fact once again proving that all the necessary entanglement properties are encoded in states of a single one-dimensional harmonic oscillator.Mutatis mutandis, they are present in any quantum system described by a separable Hilbert space.
XI. LINKS WITH CLASSICAL EMULATION OF QUANTUM COMPUTATION
The discussed structures are based on a single assumption, namely that the space of states is given by a separable Hilbert space. This is one of the von Neumann axioms of quantum mechanics. The canonical example of a separable Hilbert space is the space of square-integrable functions, a mathematical structure shared by quantum mechanics with classical electrodynamics, acoustics, and, last but not least, classical signal analysis. Hence the question: can one somehow implement a quantum computer by means of a completely classical signal analysis? The answer is, in principle, yes.
Actually, in a series of recent papers La Cour and his collaborators have demonstrated that an emulation of a quantum computation by means analog circuits is practically feasible [7][8][9].The key element of their approach is the octave spacing scheme, discussed in detail in [7], which is a form of binary coding of a Fockian type we have discussed above.Each qubit is here represented by its carrier frequency ω c .If ω b is a baseband offset frequency, then any ω c = ω b + 2 j ∆ω, for some j and ∆ω > 0. And this is enough from the point of view of our analysis.
Any function of several variables, say ψ, is formally equivalent to a tensor-product state vector |ψ⟩. The link between the state and its wave-function (i.e. its amplitude) ψ(ω_1, ..., ω_n) is given by the Dirac recipe ψ(ω_1, ..., ω_n) = ⟨ω_1, ..., ω_n|ψ⟩, and product states correspond to functions with separable coordinates, such as ψ(ω_1, ω_2) = f(ω_1)g(ω_2). Sometimes a confusion arises (cf. [7,8]) as to whether the commutativity of function values should be identified with commutativity of such a tensor structure, namely whether it is equivalent to |f⟩ ⊗ |g⟩ = |g⟩ ⊗ |f⟩; the answer is, of course, no. Non-commutativity of the tensor product follows here from the unique identification of ω_1 with f, and ω_2 with g. A very similar identification of a 'position' in a tensor product with an appropriate index is employed by Penrose in his abstract-index formalism [26,27]. As correctly noticed in [7,8], what is essential for the unique coding of a basis vector is the octave spacing that allows one to identify an n-tuple of frequencies with a unique n-bit number.
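A toy version of the octave-spacing bookkeeping may help (our own sketch, with hypothetical function names, and not necessarily the exact encoding of [7-9]): because the frequency offsets are powers of two times Δω, a set of carrier frequencies can be decoded into a unique bit string.

```python
def carriers(bits, omega_b=0.0, d_omega=1.0):
    """Carrier frequencies omega_c = omega_b + 2**j * d_omega for the bits set to 1."""
    return {omega_b + (2 ** j) * d_omega for j, bit in enumerate(bits) if bit}

def read_bits(freqs, n_bits, omega_b=0.0, d_omega=1.0):
    """Recover the n-bit string from a set of octave-spaced carrier frequencies."""
    return tuple(int(omega_b + (2 ** j) * d_omega in freqs) for j in range(n_bits))

bits = (1, 0, 1, 1)                 # a 4-bit label, least significant bit first
freqs = carriers(bits)              # {1.0, 4.0, 8.0}
assert read_bits(freqs, 4) == bits  # octave spacing makes the decoding unambiguous
```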
XII. FINAL REMARKS
According to Feynman's famous statement, quantum mechanics is 'a theory that no one understands'.There are two levels to this lack of understanding.One is purely physical: we just don't have everyday experiences with the microworld.The second is related to the mathematical structure of the theory -there is not even a general agreement on what should be meant by quantization, or why there is first quantization and second quantization.
Reformulation of separable Hilbert spaces in terms of hidden tensor products allows us to rephrase anew various standard structures of quantum mechanics. We have given several examples of such a rephrasing: parity as a form of hidden spin, hidden spin degrees of freedom of a spinless particle, GB multi-boson operators as simple hidden tensor products, quantum gates operating on hidden subsystems, and internal entanglement between hidden subsystems. There are intriguing and unexplored connections between hidden tensor structures and methods of classical number theory. Is it already a third quantization? Furthermore, we have purposefully left open the issue of uniqueness of ⊗. Depending on N, the same state can be simultaneously entangled and non-entangled. This suggests nontriviality of the problem. The most important subtlety can be illustrated as follows. Assume |cyan⟩, |magenta⟩, |yellow⟩, |green⟩ are elements of some orthogonal basis. If we label the colors by numbers, we can write |0⟩, |1⟩, |2⟩, |3⟩, which suggests that |0⟩ = |cyan⟩. But why cyan and not magenta or yellow? Can we make sense of 'inequalities' such as 'cyan < magenta'? In fact, the order seems undefined by first principles, so such formulas are to some extent ambiguous. It turns out that this ambiguity is very interesting in itself, has the status of a gauge freedom, and is linked to the old problem of completeness of quantum mechanics [28], or relations between correlations and correlata [29]. Further details can be found in [30].

The map (5)-(7) is one-to-one for any N. A composition of two such maps remains one-to-one even if each of them involves a different N, where 0 ≤ l_0 < N_0 and 0 ≤ l_1 < N_1; the composed map (138) defines a bijection n → (k_1, l_1, l_0), and so on, where we keep in mind the problem with associativity and nesting.
Orthonormality (3) is valid independently of the choice of N_i.
ACKNOWLEDGMENT
I'm indebted to Marcin Nowakowski and Tomasz Stefański for discussions. Calculations were carried out at the Academic Computer Center in Gdańsk. The work was supported by the CI TASK grant 'Non-Newtonian calculus with interdisciplinary applications'. | 5,745.6 | 2023-08-08T00:00:00.000 | [
"Physics"
] |
Temporal characterization of femtosecond laser pulses using tunneling ionization in the UV, visible, and mid-IR ranges
To generalize the applicability of the temporal characterization technique called “tunneling ionization with a perturbation for the time-domain observation of an electric field” (TIPTOE), the technique is examined in the multicycle regime over a broad wavelength range, from the UV to the IR. The technique is first analyzed rigorously by solving the time-dependent Schrödinger equation. Experimental verification is then demonstrated over an almost 5-octave wavelength range, at 266, 1800, 4000 and 8000 nm, utilizing the same nonlinear medium, air. The experimentally obtained dispersion values of the materials used for dispersion control show very good agreement with the ones calculated from the material dispersion data, and the pulse durations obtained at 1800 and 4000 nm agree well with frequency-resolved optical gating measurements. The universality of TIPTOE arises from its phase-matching-free nature and its unprecedented broadband operation range.
of the applicability for a broad wavelength range has not been demonstrated. TIPTOE has been tested only for single-cycle and few-cycle laser pulses in the wavelength range of 400-1000 nm 16,17 .
In this work, we demonstrate the universality of TIPTOE by applying it over a broad wavelength range from the UV range to the IR range in the chirped, multicycle regime and discuss the underlying basic theory. Several fundamental aspects are addressed for the first time both theoretically and experimentally. The validity of TIPTOE is tested by solving the time-dependent Schrödinger equation (TDSE) for various cases. We also perform experiments at wavelengths of 266, 1800, 4000, and 8000 nm. The experimental results obtained at 1800 and 4000 nm are compared with the results obtained using the FROG technique.
Results
TIPTOE theory in the multicycle regime. An arbitrary time-dependent laser field can be measured in the time domain when a subcycle temporal gate is available. In TIPTOE, subcycle ionization in an intense laser pulse provides such a temporal gate. Two laser pulses, referred to as the fundamental (E_F) and the signal (E_S), are used. The fundamental pulse is strong enough to induce ionization, while the signal pulse is too weak by itself to cause ionization. It does, however, interfere with the fundamental pulse, resulting in a modulation of the ionization yield that contains information on the temporal profile of the signal pulse.
An accurate description of the ionization in a multicycle laser field requires the consideration of both the amplitude and the phase of an electron wavepacket created by either multiphoton absorption or tunneling. The interference of electron wavepackets created at different half cycles of the laser pulse should be taken into account when modeling the ionization 18 . The ionization yields of single atoms, calculated by solving the TDSE, are shown in Fig. 1. The ionization yields increase with intensity, but they also exhibit wiggles due to interference effects. These are more pronounced for a longer wavelength laser with high intensity. Therefore, an accurate theoretical model is required for modeling the ionization yield of a single atom.
In TIPTOE, the ionization yield is simply modeled as the integration of the ionization rate, neglecting interference effects. As we are interested in the total ionization yield obtained over a volume where individual atoms experience different intensities, such interference effects are averaged out. The ionization yield obtained for the focal volume follows a smooth increase with increasing laser intensity, as shown in Fig. 1. Thus, the total ionization yield obtained from a focal volume can be approximated by the integration of the instantaneous ionization rate w(E) as

N(τ) ∝ ∫ w(E_F(t) + E_S(t − τ)) dt.   (1)

In Eq. (1), we also neglect ground-state depletion, excitation and ionization from the excited states. Although Eq. (1) is derived under certain approximations, it is accurate enough for the description of the ionization yield modulation shown in the following.
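The content of Eq. (1) can be illustrated with a rough numerical sketch (our own, in arbitrary units, anticipating the power-law rate w(E) = E^(2n) introduced below; the pulse parameters are made up): the delay-dependent yield is integrated directly, and its relative modulation δN(τ) oscillates with the delayed signal field.

```python
import numpy as np

n, r = 7, 0.05                         # nonlinearity and signal/fundamental ratio (r << 1)
t = np.linspace(-60.0, 60.0, 4001)     # time axis, fs
w0 = 2 * np.pi / 2.67                  # carrier of an 800 nm field, rad/fs
E = np.exp(-t**2 / (2 * 15.0**2)) * np.cos(w0 * t)   # common temporal profile E(t)

def total_yield(tau):
    """Eq. (1): integrate w(E_F + E_S) with w(E) = E**(2n) over the pulse."""
    E_S = r * np.interp(t - tau, t, E)                # delayed, weak copy of the field
    return np.trapz((E + E_S) ** (2 * n), t)

delays = np.linspace(-40.0, 40.0, 161)
N_tau = np.array([total_yield(d) for d in delays])
dN = N_tau / N_tau.mean() - 1.0        # ionization-yield modulation deltaN(tau)
print(dN.max())                        # deltaN oscillates at the carrier, tracking E_S(tau)
```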
The ionization rate w(E) can be obtained using various ionization models such as the ADK 19 , PPT 20 and Yudin-Ivanov models 21 . It should be noted that TIPTOE does not rely on a particular ionization model as long as the extreme nonlinearity of ionization is properly modeled. Furthermore, the TIPTOE method is not restricted to the tunneling regime [as opposed to what its name 'TIPTOE (tunneling ionization ...)' may suggest]. Therefore, for the sake of simplicity, the ionization rate defined as w(E) = I^n = E^(2n) can be used. Here, the integer n is the nonlinearity coefficient, which can be estimated from the slope of the ionization curve in Fig. 1.
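For example, the nonlinearity coefficient n could be estimated from the slope of log N versus log I, as in this small sketch with synthetic data (not the data of Fig. 1):

```python
import numpy as np

# If the yield scales as N ~ I**n, then log N is linear in log I with slope n.
rng = np.random.default_rng(0)
I = np.logspace(12.0, 13.0, 8)                                 # peak intensity, W/cm^2 (synthetic)
N = 1e-80 * I**7 * (1 + 0.02 * rng.standard_normal(I.size))    # synthetic yields with n = 7
slope, _ = np.polyfit(np.log10(I), np.log10(N), 1)
print(round(slope, 2))                                         # ~7: the nonlinearity coefficient
```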
For the description of the ionization yield modulation, we also assume that the temporal profiles of the fundamental and signal pulses are identical (i.e., E(t) = E_F(t) = E_S(t)/r, with r being the amplitude ratio between the two pulses). For r ≪ 1, the modulation of the ionization yield can be expanded in powers of r. The zeroth order δN^(0) is a constant: it is the ionization yield obtained without the signal field. The first order δN^(1), which contains the amplitude and phase information of the signal pulse, can be written as

δN^(1)(τ) ∝ ∫ E(t − τ)^(2n−1) E(t) dt.   (2)

As we increase the amplitude ratio r, higher orders contribute to the ionization yield modulation, by which the modulation becomes asymmetric. However, this asymmetry can be removed by frequency filtering.

Frequency response of the TIPTOE measurements in the multicycle regime. The first derivative of the ionization rate [dw/dE = E(t − τ)^(2n−1)] plays the role of the temporal gate. The effect of the temporal gate differs depending on the temporal shape of the pulse. If the duration is extremely short and the carrier envelope phase (CEP) of the pulse is zero (i.e., a cosine-like pulse), ionization occurs in a single half optical cycle. The derivative dw/dE then behaves like a delta function, and the electric field can be directly obtained from the modulation of the ionization yield as δN(τ) ∝ E(τ). If the pulse duration is long (i.e., multicycle), ionization occurs over multiple half optical cycles over which the signal field is sampled. In this case, the ionization yield modulation may not directly represent the temporal profile of the signal pulse.
Since the expression for the ionization yield modulation δN^(1) in Eq. 2 is the cross-correlation of the two functions E(t − τ)^(2n−1) and E(t), it can be rewritten with the cross-correlation theorem (or convolution theorem) in the frequency domain as

F{δN^(1)}(ω) ∝ F{E(t)^(2n−1)}*(ω) · F{E(t)}(ω),

where F{E(t)^(2n−1)}* is the frequency response of the TIPTOE measurement that determines the relation between the ionization yield modulation and the original field. The modulation of the ionization yield δN(t) may differ from the signal field E(t) depending on the frequency response F{E(t)^(2n−1)}*. Therefore, to extract the signal pulse from the modulation of the ionization yield, the effect of the frequency response should be properly considered.
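The cross-correlation theorem used here can be checked numerically in a few lines (our own sketch with arbitrary pulse parameters; discrete FFT conventions differ from the analytic Fourier transform only by constant factors and sampling):

```python
import numpy as np

n = 7
t = np.linspace(-200.0, 200.0, 4096)
dt = t[1] - t[0]
w0 = 2 * np.pi / 2.67                                   # 800 nm carrier, rad/fs
E = np.exp(-t**2 / (2 * 15.0**2)) * np.cos(w0 * t)
gate = E ** (2 * n - 1)                                  # the temporal gate ~ dw/dE

# Frequency domain: F{dN1} ~ conj(F{E**(2n-1)}) * F{E}; back-transform to get dN1(tau)
dN1 = np.fft.ifft(np.conj(np.fft.fft(gate)) * np.fft.fft(E)).real * dt

# Time-domain check at a few sample lags:
# integral of gate(t) E(t + tau) dt equals integral of E(t - tau)**(2n-1) E(t) dt
for k in (0, 5, 50):
    direct = np.trapz(gate * np.roll(E, -k), t)
    assert np.isclose(direct, dN1[k], rtol=1e-6, atol=1e-9)
print("cross-correlation theorem verified")
```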
General property of the frequency response function. The multiplication of the frequency response affects both the amplitude and the phase. It is clear from Eq. 2 that the CEP information of the signal pulse is canceled out due to the multiplication of the frequency response function if the fundamental and signal pulses have an identical temporal profile. In order to measure the temporal profile of the laser pulse including the CEP, the CEP of the fundamental pulse should be set so that a single dominant ionization event occurs in one half optical cycle. This condition can be found by using an additional second harmonic pulse 16 . In this work, however, we measure multicycle laser pulses for which the CEP is not an important parameter. Therefore, we use an inline interferometer in which the fundamental and signal pulses co-propagate and have identical temporal profiles. The CEP of the laser pulse is not stabilized.
The bandwidth of the ionization yield modulation can be narrower than the bandwidth of the signal pulse due to the multiplication of the frequency response function. To understand this effect, we compare two cases calculated using transform-limited and chirped Gaussian pulses. The transform-limited Gaussian pulse is shown in red in Fig. 2a, together with the first-order derivative of the ionization modulation [E(t)^(2n−1)] in blue. The duration of the first-order derivative is only 28% of the pulse duration due to the high nonlinearity (n = 7) of the ionization. In this case, the amplitude of the frequency response near the central frequency is broader than that of the signal pulse spectrum, and the phase is flat, as shown in Fig. 2a.
The spectral bandwidth of the modulation F{δN(t)} can be analytically calculated for the transform-limited Gaussian pulse as Δω√((2n − 1)/(2n)), where Δω is the bandwidth of the signal pulse. When n = 7, the spectral bandwidth is only slightly reduced: it is 96% of the bandwidth of the signal pulse, as shown by the black and red lines in Fig. 2c. Thus, the effect of the frequency response is negligible when the pulse duration is close to the transform limit. In this case, the ionization yield modulation δN(t) is a very good approximation of the signal pulse E(t).
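The factor √((2n − 1)/(2n)) is easy to verify numerically for Gaussian envelopes (a quick illustration of the envelope-level result, not the paper's calculation):

```python
import numpy as np

n = 7
t = np.linspace(-2000.0, 2000.0, 2**16)
env = np.exp(-t**2 / (2 * 20.0**2))            # transform-limited Gaussian envelope (fs)
w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])

def rms_width(spectrum):
    """RMS spectral width of |spectrum|**2."""
    p = np.abs(spectrum) ** 2
    mean = np.sum(w * p) / np.sum(p)
    return np.sqrt(np.sum((w - mean) ** 2 * p) / np.sum(p))

bw_signal = rms_width(np.fft.fft(env))
bw_modulation = rms_width(np.conj(np.fft.fft(env ** (2 * n - 1))) * np.fft.fft(env))
print(bw_modulation / bw_signal, np.sqrt((2 * n - 1) / (2 * n)))   # both ~0.96 for n = 7
```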
If the pulse is chirped, the effect of the frequency response becomes significant. As a second example, the ionization rate is calculated using a chirped Gaussian pulse whose duration (75 fs) is three times longer than the transform-limited duration (25 fs), as shown in Fig. 2b. The duration of the pulse is estimated by the full width at half maximum (FWHM). The duration of the first-order derivative [E(t)^(2n−1)] is now comparable to the transform-limited duration, as shown in Fig. 2b. The bandwidth of the frequency response is still broader than the signal pulse spectrum. However, the spectral bandwidth of F{δN(t)} is only 82% of the signal bandwidth, as shown in Fig. 2d. The spectral phase of the frequency response is also slightly curved, with an opposite sign, so the spectral phase of the modulation F{δN(t)} differs slightly from the spectral phase of the signal pulse. Therefore, the ionization yield modulation does not directly represent the original pulse, and an appropriate reconstruction process to find the signal pulse from the ionization yield modulation is required.
Reconstruction of the signal pulse. The frequency response of the TIPTOE measurement can be corrected through a few different approaches. In this work, we use a very simple approach in which we assume that the spectral amplitude of the original pulse is known (e.g., from a measurement of the spectral intensity). First, an approximated signal pulse E′(t) is obtained by taking an inverse Fourier transform,

E′(t) = F⁻¹{A(ω) exp[iφ(ω)]},

using the amplitude A(ω) obtained from the spectrum and the phase φ(ω) obtained from the ionization yield modulation. The phase of the signal pulse is then corrected using the frequency response calculated from the approximated signal pulse.
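A bare-bones sketch of this correction (our own illustration; the variable names are hypothetical and the actual implementation in this work may differ in detail) alternates between building the approximate pulse from the measured spectral amplitude and the δN phase, and removing the phase of the frequency response computed from that approximate pulse:

```python
import numpy as np

def reconstruct(dN, A, n, n_iter=3):
    """Sketch of the correction: amplitude A(w) from a separately measured spectrum
    (on the same FFT grid as dN), phase from F{dN}, iteratively corrected for the
    phase of the frequency response conj(F{E'**(2n-1)}) of the approximate pulse."""
    phase_dN = np.angle(np.fft.fft(dN))
    phase = phase_dN.copy()
    for _ in range(n_iter):
        E_approx = np.fft.ifft(A * np.exp(1j * phase)).real    # approximate signal pulse
        response_phase = np.angle(np.conj(np.fft.fft(E_approx ** (2 * n - 1))))
        phase = phase_dN - response_phase                       # remove the response phase
    return np.fft.ifft(A * np.exp(1j * phase)).real
```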
It should be noted that the reconstruction can only be applied when the input pulse is not much longer than the transform-limited duration. If the pulse is too long, ionization occurs over many half optical cycles over which the signal field is sampled. This multicycle effect results in a narrow spectrum of F{δN(t)}, as shown in Fig. 2d, which sets the fundamental limitation of the TIPTOE measurement. The spectral amplitude itself can be recovered using the separately measured spectrum; however, the spectral phase information on both wings of the spectrum becomes inaccurate where the spectral amplitude of F{δN(t)} is small. Therefore, the TIPTOE measurement becomes inaccurate if the pulse is much longer than the transform-limited duration.
To test the accuracy of the TIPTOE measurement quantitatively, we perform one-dimensional TDSE calculations in which a soft-core potential with the ionization potential of O₂ (12.07 eV) is used. We assume that the temporal shapes of the two pulses (fundamental and signal) are identical. The peak intensity of the fundamental pulse is 1 × 10¹³ W/cm², and the intensity of the signal pulse is 1 × 10¹⁰ W/cm², at a wavelength of 800 nm. The ionization yield is calculated from atoms distributed near the focus, and the total ionization yield is obtained by integrating over the focal volume. A chirped Gaussian pulse is examined whose transform-limited duration (τ_TL) is 25 fs. We obtained the ionization yield modulation for signal pulses with a GDD from −1000 fs² to 1000 fs². The reconstruction results are summarized in Fig. 3. The duration of the ionization yield modulation only shows good agreement with the duration of the original pulse (τ) for low GDD values; the error becomes large for high GDD values, as shown in Fig. 3a. However, the duration of the reconstructed pulse is accurate even for high GDD values, with an error below 5% for τ < 4τ_TL. The phase of the reconstructed signal pulse is also very accurate, as shown in Fig. 3b, with an error below 3% for τ < 4τ_TL. These results indicate that, as expected, the reconstruction error increases as the pulse duration increases, and the reconstruction errors for the duration and GDD remain below 5% when τ < 4τ_TL.
Thus far, we have discussed the characteristics of the frequency response of TIPTOE measurements for a single Gaussian pulse. We now consider more complex temporal structures. In general, the existence of weak pre- and post-pulses will not affect the accuracy of the reconstruction of a TIPTOE measurement, owing to the high nonlinearity, unless their peak intensity is comparable to that of the main pulse. Thus, the frequency response in a TIPTOE measurement is determined only by the duration of the main pulse that contributes to ionization.
To test the accuracy of the reconstruction for such a complex temporal structure, a test pulse is created by adding a pre- and a post-pulse to the main pulse, as shown in Fig. 4a. The intensity profile of the ionization yield modulation already shows good agreement with the original pulse. The spectral amplitude of the ionization yield modulation is slightly narrower than that of the original pulse, as expected (Fig. 4b), because the duration of the main pulse (42.7 fs) is slightly longer than the transform-limited duration (25.1 fs). When the amplitude and phase are corrected, the reconstructed pulse shows a temporal profile identical to the original pulse, with a duration of 42.9 fs, as shown in Fig. 4a. Therefore, the TIPTOE method is generally applicable in the multicycle regime. The reconstruction method described here would not work if multiple pulses contribute to ionization. To handle a more general condition, a reconstruction algorithm that directly finds a solution of Eq. 1 should be developed. However, the development of such an algorithm and its stability and accuracy tests are beyond the scope of the current work; we will discuss these improvements in future works. Therefore, we applied the simple reconstruction method described here over a limited range of pulse durations (e.g., τ < 4τ_TL) to show the applicability of the TIPTOE method in the multicycle regime for a broad range of wavelengths.
Experimental results
The experimental demonstration of the TIPTOE method in the multicycle regime was performed using the inline experimental setup depicted in Fig. 5. A segmented mirror, which consists of two concentric mirrors, separates the input laser beam into two beams. The beam reflected by the outer annular mirror is more tightly focused than the beam reflected by the inner mirror. Thus, the outer beam is the fundamental beam, which ionizes air molecules, and the inner beam is the signal beam to be characterized. We found that the shape of the fundamental beam at the focus is important for an accurate measurement: it should be a well-defined single beam. If it produces multiple foci, the ionization yield modulation may not represent the signal beam correctly. The intensity ratio between the two beams can be adjusted by the power and the size of the input beam. Their relative time delay is controlled by a piezo transducer attached to the inner mirror. The ionization yield (N_S) is measured by two metallic plates connected to a current measurement device. The ionization yield modulation (δN) is estimated by taking the ratio of the ionization yield to its mean value (N̄_S) and subtracting 1 (i.e., δN = N_S/N̄_S − 1). The basic operation of TIPTOE requires these three parts (the segmented mirror, focusing mirror, and current measurement), which we call a single-channel measurement.
For an unstable laser source, the ionization yield measurement is often very noisy due to intrinsic power fluctuations. In such a case, an additional device to measure a reference current can be added, using a mirror with a through hole. This mirror dumps the signal (inner) beam and refocuses the fundamental (outer) beam to measure a reference ionization yield without the signal beam (N_R). The differential ionization yield modulation (δN) is then obtained as δN = N_S/N_R − 1. This differential measurement cancels out the noise originating from the power fluctuation, providing an accurate characterization even for an unstable laser source.
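In code, the differential normalization is a point-by-point ratio of the two currents; the sketch below (illustrative array names and numbers only; the calibration constants c_R and b_R are described in the Methods) shows how a power fluctuation common to both channels cancels:

```python
import numpy as np

def differential_modulation(N_S, N_R, c_R=1.0, b_R=0.0):
    """delta_N = N_S / N_R' - 1 with the reference calibrated as N_R' = c_R*N_R + b_R."""
    return N_S / (c_R * N_R + b_R) - 1.0

# A common 9% RMS power fluctuation cancels in the ratio of the two channels.
rng = np.random.default_rng(1)
power = 1.0 + 0.09 * rng.standard_normal(200)               # shot-to-shot pulse energy
true_signal = 0.02 * np.sin(np.linspace(0.0, 20.0, 200))    # modulation vs delay step
N_S = power * (1.0 + true_signal)                           # signal-channel ionization yield
N_R = power                                                 # reference-channel yield
print(np.allclose(differential_modulation(N_S, N_R), true_signal))   # True
```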
In the following experiments, the advantage of the differential measurement is demonstrated using two laser sources, one stable (266 nm) and the other unstable (1800 nm) (see the Methods section for more details on the light sources). The root-mean-square (RMS) power fluctuation of the 266 nm source was 2.7%, exhibiting decent stability. The 1800 nm source was extremely unstable, with energy fluctuations of 9% RMS. Considering that the energy fluctuation of an ordinary commercial Ti:sapphire laser is approximately 1%, the 1800 nm source was very unstable. The ionization yields obtained from the two current measurements (N_S and N_R) are shown in Fig. 6 for both cases. While the ionization yield (N_S) using the 266-nm pulse clearly shows the modulation near zero time delay (Fig. 6a), the ionization yield (N_S) obtained with the unstable 1800-nm pulse does not clearly show the modulations, due to the power fluctuations (Fig. 6d). The corresponding differential ionization yield modulations δN, obtained using the reference ionization yields N_R shown in Fig. 6b,e, are shown in Fig. 6c,f. While there is no significant change in the case of the 266-nm pulse, the quality of the signal for the 1800-nm pulse is significantly improved. These measurements indicate that the TIPTOE method can be applied even for extremely unstable sources when the differential measurement is implemented.
For the experimental demonstration of the TIPTOE method at various central wavelengths, pulses are measured under different dispersion conditions using various light sources (266 nm, 1800 nm, 4 μm and 8 μm), as shown in Fig. 7. For all experiments, the intensity of the pulse was kept at the level of 10¹²-10¹³ W/cm² to maintain the high nonlinearity of ionization, as shown in Fig. 1. The 266-nm beam obtained from sum frequency generation using a BBO crystal was reflected 12 times (6 pairs) on chirped mirrors (Ultrafast innovations) to impose a negative GDD of −1800 fs². Then, a positive GDD was added using multiple quartz windows up to a thickness of 24 mm. The pulse durations and the GDDs are summarized in Fig. 7a,b. The pulse duration decreases at the beginning and increases as the glass thickness increases. The pulse durations are longer than the transform-limited duration (50 fs) for the entire GDD range due to the higher-order dispersion; the measured third- and fourth-order dispersions were −5.5 × 10³ fs³ and 8 × 10⁶ fs⁴, respectively. The measured GDD as a function of the glass thickness presented in Fig. 7b is in good agreement with the expected GDD. The temporal profiles and the spectra of the 266 nm pulses are shown in Supplementary Fig. S1. These measurements show the applicability of the TIPTOE method in the UV range.

Figure 5. Inline experimental setup for the TIPTOE measurement. A segmented mirror separates the input laser beam into two pulses. Both beams are focused in the middle of two metallic plates connected to the current measurement device. After that, the inner beam is dumped by a holey mirror, and the outer beam is refocused for the reference current measurement.
For the unstable 1800 nm pulse source, we applied the differential measurements. The pulse duration and the GDD values are summarized in Fig. 7c,d. The dispersion was controlled by placing 0 to 8 mm thick windows in the beam. The GDD decreases as we increase the window thickness at this wavelength. The pulse duration is compared with the values measured by the second harmonic FROG technique. The two results obtained with TIPTOE and FROG show a similar trend. The minimum pulse durations retrieved were 19 fs for TIPTOE and 16 fs for FROG, as presented in Fig. 7c. The retrieved GDD values are well matched between the two measurements. The temporal and spectral profiles are shown in Fig. 8 for the 280 fs² (positively chirped), 10 fs² (chirp-free) and −170 fs² (negatively chirped) cases. The temporal and spectral phases follow quadratic curves due to the imposed second-order dispersion, confirming the accuracy of the TIPTOE measurement.
Similar measurements were carried out for 4000 nm pulses. As the wavelength increases, the beam size at the focus increases and thus the intensity decreases. A short-focal-length mirror (f = 5 cm) was therefore used to maintain a sufficiently high intensity (~10¹³ W/cm²) to ionize air molecules. The pulse durations and GDDs measured using the single-channel measurement setup are summarized in Fig. 7e,f and compared with the results obtained using second harmonic FROG. The dispersion is controlled by calcium fluoride and sapphire plates, which impose a negative GDD at 4000 nm. The shortest pulse duration (chirp-free) is 65 fs in both cases, and it increases as the dispersion increases. The GDD values measured by the TIPTOE and FROG methods are well matched with the expected values calculated from the refractive indices of the materials. The temporal profiles and the spectra of the pulses are shown in Supplementary Fig. S2.
The TIPTOE technique was also applied to an 8000 nm pulse. The pulse contains spectral components from 6 to 10 μm. The dispersion is changed using calcium fluoride and zinc selenide crystals, which impose a negative GDD. We used a 2.5-cm focal-length mirror with an f-number of 2.8 to maintain the required peak intensity for ionization with the given pulse energy (40 μJ). The ionization yield modulation is obtained from the differential measurement. The pulse durations and GDDs for the 8000 nm pulses are given in Fig. 7g,h. The pulse duration was 40 fs at the chirp-free condition. The GDD values are well matched with the expected values calculated from the refractive index.
For the 8000 nm pulses, the intrinsic fourth-order dispersion (FOD) was very large (3 × 10⁶ fs⁴). Thus, the minimum pulse duration was obtained for negative GDD values. The temporal profile and the spectrum of the pulse are shown in Supplementary Fig. S3. The temporal and spectral phases are flat when the pulse duration is minimal. For a negative chirp, the temporal and spectral phases become convex up; for a positive chirp, the phase becomes slightly convex down. These measurements confirm the applicability of the TIPTOE method in the IR wavelength range.
Discussion
Because the TIPTOE technique utilizes the extreme nonlinearity of ionization, it can be applied to the temporal characterization of femtosecond laser pulses over a broad spectral range. One requirement of the TIPTOE measurement is that the pulse energy in the fundamental arm should be high enough for ionization. A pulse energy of 1 μJ was sufficient to implement the TIPTOE method for a 79 fs pulse at 266 nm with a long-focal-length mirror (25.4 cm, f-number of 48). However, a pulse energy of 40 μJ was required to see an ionization signal for a 48 fs pulse at 8000 nm with tight focusing (2.5 cm, f-number of 2.8). The required energy can be reduced by further reducing the f-number of the focusing optics. Alternatively, one can use gases with lower ionization potentials. Thus, the TIPTOE method can be generally applied for the temporal characterization of amplified laser pulses. In the TIPTOE method, the temporal shape of a laser pulse is extracted from an ionization modulation. Because ionization occurs multiple times within a multicycle laser pulse, the ionization yield modulation may not directly represent the temporal shape of the test pulse if the pulse duration is much longer than the transform-limited duration. In this work, we used a simple reconstruction method to correct this multicycle effect, in which the spectral amplitude is obtained from a separately measured spectrum. This approach becomes inaccurate when the pulse duration is much longer than the transform-limited duration; the reconstruction error for the duration and GDD is estimated to be below 5% when τ < 4τ_TL. An efficient reconstruction algorithm that mitigates this limitation will be developed in future work.
We demonstrated the applicability of the TIPTOE technique over a broad spectral range using laser pulses at different wavelengths. These results support the applicability of the TIPTOE technique to a multi-octave laser pulse, which has been shown theoretically 16 . The applicability of the TIPTOE method to a multi-octave laser pulse should be experimentally verified using a single multi-octave pulse in the future.
In summary, we demonstrated the universality of the TIPTOE technique for a broad wavelength range in the multicycle regime. We showed that the temporal profile of an original pulse can be found from the ionization yield modulation through a simple reconstruction process. The pulse durations and GDDs were in very good agreement with the expected values calculated from the refractive indexes of the material used for the dispersion control. The results obtained at 1800 nm and 4000 nm were in good agreement with those obtained with a second harmonic FROG. These measurements confirm the applicability of the TIPTOE method in the multicycle regime over a broad wavelength range.
Methods
Light sources. 266 nm source. The 266 nm pulses were obtained by sum frequency generation mixing an 800 nm laser pulse (from a Ti:sapphire laser) with its second harmonic. The beam energy for the measurement was 1.6 μJ and was focused with a 25.4-cm mirror (f-number of 48). The shot-to-shot power fluctuation (2.7% RMS) was estimated with a photodiode.
1800 nm source. The 1800 nm pulse is an idler beam obtained from an optical parametric amplifier (TOPAS, Light Conversion). It is compressed using a stretched hollow core fiber system (Few-cycle.com). The FROG method is implemented using a 25-μm BBO crystal.

4000 nm source. The 4000 nm pulses are produced via difference frequency generation (DFG) in a 180-μm LGSe crystal using the signal and idler beams generated from the TOPAS. The FROG method is implemented using a 200-μm AgGaS₂ crystal.

8000 nm source. The 8000 nm pulses are generated via DFG in a 500-μm-thick GaSe crystal using the signal and idler beams generated from the TOPAS.

Other details. To estimate the ionization yield, a current was measured by two metallic plates connected to a current amplifier. A bias voltage of 500 V was applied between the two plates. For the pulse durations and the GDDs shown in Fig. 7, the error bars show the standard deviation of 5 measurements (266, 1800, and 8000 nm) or 10 measurements (4 μm). The ionization yield modulation for the differential measurement was calculated using the ionization yield at the first target (N_S) and the second target (N_R). Since the ionization yield N_R is not exactly the same as N_S, N_R is calibrated using a linear function (i.e., N_R′ = c_R N_R + b_R). The constants c_R and b_R are determined so that the ratio N_S/N_R′ obtained without the signal pulse becomes unity. Then, the ionization yield modulation is obtained as δN = N_S/N_R′ − 1. | 6,662.6 | 2019-11-05T00:00:00.000 | [
"Physics"
] |
Knockout of MIMP protein in Lactobacillus plantarum lost its regulation of intestinal permeability on NCM460 epithelial cells through the zonulin pathway
Background Previous studies indicated that the micro integral membrane protein (MIMP), located within the middle region of the integral membrane protein of Lactobacillus plantarum CGMCC 1258, had protective effects against intestinal epithelial injury. In this study, we aimed to establish MIMP-knockout Lactobacillus plantarum (LPKM) to investigate the change in its protective effects and to verify the role of MIMP in the protection of normal intestinal barrier function. Methods Binding assays and intestinal permeability measurements were performed to verify the protective effects of MIMP on intestinal permeability in vitro and in vivo. The molecular mechanism was also examined with respect to the zonulin pathway. Clinical data were collected for further verification of the relationship between zonulin level and postoperative septicemia. Results LPKM showed decreased inhibition of EPEC adhesion to NCM460 cells. LPKM had a lower ability to alleviate the enteropathogenic E. coli-induced increase of intestinal permeability and to prevent the enteropathogenic E. coli-induced increase of zonulin expression. Overexpression of zonulin abolished the regulation of intestinal permeability by Lactobacillus plantarum. There was a positive correlation between zonulin level and postoperative septicemia. Therefore, MIMP could be necessary for the protective effects of Lactobacillus plantarum on the intestinal barrier. Conclusion MIMP might be a positive factor for Lactobacillus plantarum to protect the intestinal epithelial cells from injury, which could be related to the zonulin pathway.
Background
It has been proved that gut flora homeostasis in human intestine is mediated largely by probiotics, including Lactobacillus plantarum (LP) and other microorganisms [1][2][3]. LP can improve intestinal pathological disorders through the modulation of intestinal functions [2]. Therefore, as one of the best-characterized probiotic bacteria, LP can be selected in clinical trials assessing the prevention and treatment of intestinal disorders, such as the complications after surgical operation [2,4,5]. However, which components mostly contribute to the protective effects of LP still remains an interesting question to be further investigated [6].
Recently, the lactobacillus surface layer protein (SLP) has been raised as a key component mediating the protection conferred by LP to intestinal epithelial cells (IECs) [7,8]. It is found that a 50-kDa protein extracted from the surface layer of lactobacillus could adhere to IECs and its mimic protein mucin [9]. SLP isolated from Lactobacillus crispatus showed the ability to inhibit adherence of enterotoxigenic E. coli to a synthetic basement membrane [10], and reduce both dextran flux and trans-epithelial electrical resistance (TER) [11]. SLP also reduced the number and rearrangement of α-actin foci, and attenuated bacterial colonization on IECs and pathogen-induced changes in cellular permeability [12]. Furthermore, soluble factors, including p75 and p40, extracted from Lactobacillus rhamnosus GG culture broth supernatant, showed protective effects on IECs, which was mediated by Akt pathway [13,14]. Only a few studies have, however, investigated the structure and functions of SLP [8,10,[15][16][17][18], due to the specific hydrophilic and hydrophobic properties and technical difficulties associated with SLP purification, thus limiting the investigation of SLP binding domains.
Zonulin has recently been discovered as a protein involved in tight junctions (TJ) between IECs [19]. Zonulin was originally discovered as the target of zonula occludens toxin, which has been reported with the increased gut permeability in the pathogenesis of coeliac disease [20]. Recent studies have indicated that zonulin got the regulative function of intestinal permeability and barrier [19,21], suggesting that high expression of zonulin may cause the increase of intestinal permeability [22].
Our previous studies have demonstrated that LP was able to prevent IECs from injury induced by EPEC [23], regulate dendritic cells maturation and T-lymphocytes differentiation [24]. In addition, a protective role of TJ microstructure both in vivo and in vitro was also evidenced [25,26]. In our study, the SLP (the integrated membrane protein, IMP) isolated from LP was extracted and purified [27]. To increase the specificity and valence of SLP, we further identified the small functional protein domain, the micro IMP (MIMP) [8], and confirmed the protective function of MIMP on IECs [16]. An additional study indicated that the molecular mechanism is related to the activation of protein kinase C-η and occludin phosphorylation [15]. Moreover, we identified the receptor of MIMP on NCM460 cells, and the mechanism of p38 MAPK signaling pathway [17], and showed that probiotics may exert its protective effects on the intestinal barrier through the zonulin pathway [28]. To verify the protective effects of MIMP, here we mean to establish MIMP-knockout LP (LPKM) and investigate the change of protective effects of LPKM on IECs against EPEC infection. Furthermore, we will also investigate the molecular signal transduction pathway during the interaction between LP and intestinal permeability.
IL-10−/− mice breeding and grouping
IL-10−/− mice were generated on a wild-type 129 Sv/Ev genetic background, bred and raised in the animal facility at Shanghai Jiao Tong University, School of Medicine. Mice were housed under specific pathogen-free conditions until weaning (3 weeks), when they were moved to a conventional Animal Care Unit. Thereafter, mice were housed in cages with a high-efficiency particulate air filter and fed a standard mouse chow diet. IL-10−/− mice at the age of 3.5 weeks were randomized into three groups (n = 10 for each group) and were treated with oral gavage of milk alone or milk supplemented with 1 × 10⁹ cfu/mL LP or LPKM. Wild-type (WT) mice were treated with oral gavage of milk (n = 10 for each group). The volume of gavage was 0.5 mL. Mice were treated up to the age of 17 weeks and then sacrificed by cervical dislocation. The Animal Care and Use Committee and the Ethics Committee of Shanghai Jiao Tong University approved the experimental protocol in compliance with the Helsinki Declaration (G2012007).
Bacterial strains and culture conditions
The EPEC strain ATCC 43887 (O111:NM) (Shanghai Municipal Center for Disease Control and Prevention, Shanghai, China) was grown in DMEM at 37°C for 24 h. The LP strain (CGMCC 1258) was provided by the Institute of Bio-medicine, Shanghai Jiaoda Onlly Company Ltd, and cultured in MRS broth (Difco, Sparks, MD, USA) at 37°C. Quantification of bacteria was carried out by measuring the optical density at 600 nm using a Beckman DU-50 spectrophotometer to determine the colony forming units (CFU).
MIMP targeting
The mutant strain LPKM was constructed with a deletion of MIMP using standard integration and excision methods, tools, and strains, as previously described [29,30]. A pET32 deletion vector was constructed containing two targeting fragments, using the restriction enzymes BglII and XhoI, which flank the MIMP gene as previously described [8]. After a double crossover integration and excision event, LPKM was recovered harboring a 183-bp deletion of MIMP in the genome. PCR products over the IMP region in LPKM confirmed the loss of ∼200 bp in the genes surrounding the deletion [31].
Western blot analysis
Western blot was performed as previously described [17]. Visualization was performed using enhanced chemiluminescence according to the manufacturer's instructions (ECL kit; Pierce, IL, USA).
Binding assay of LP and the competitive inhibitory effect of MIMP
Cells were cultured as monolayers (~1 × 10⁷ for each well) and divided into experimental groups in triplicate as previously described [17]. In LP groups, LP (100 μL of 1.0 × 10⁸/mL) was added onto the monolayer of NCM460 cells simultaneously with EPEC infection. In LPKM groups, LPKM (100 μL of 1.0 × 10⁸/mL) was added onto the monolayer of NCM460 cells simultaneously with EPEC infection. In antibody groups, NCM460 cells were preincubated with serum containing polyclonal anti-MIMP antibodies (100 μL of a 1:5000 dilution), prepared as previously described [8], prior to infection with EPEC, which was simultaneously incubated with LP.
Measurement of transepithelial electrical resistance (TER) and dextran permeability
The methods were described previously [17]. The intestinal epithelial monolayers were divided into five different experimental groups in triplicate.
Measurement of the intestinal permeability and colonic damage in mice
The intestinal permeability was determined in treated or untreated IL-10 −/− and wild-type mice as previously described [17]. Final data were reported as either the fractional excretion (for sucralose) to determine the colonic permeability or a ratio of fractional excretion (for lactulose/mannitol) to quantify the small intestinal permeability. Fractional excretion was defined as the fraction of the gavaged dose recovered in the urine, and the ratio of fractional excretion was defined as the ratio of the fraction of the gavaged dose of lactulose recovered in the urine over the fraction of the gavaged dose of mannitol recovered in the urine.
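As a simple worked illustration of these definitions (hypothetical numbers, not study data), the quantities reported here would be computed as:

```python
def fractional_excretion(urine_amount, gavaged_dose):
    """Fraction of the gavaged dose recovered in the urine."""
    return urine_amount / gavaged_dose

# Colonic permeability: fractional excretion of sucralose (hypothetical mg values).
fe_sucralose = fractional_excretion(urine_amount=0.6, gavaged_dose=10.0)

# Small-intestinal permeability: lactulose/mannitol ratio of fractional excretions.
lm_ratio = fractional_excretion(0.15, 12.0) / fractional_excretion(2.4, 6.0)
print(fe_sucralose, lm_ratio)
```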
Ussing chamber assay for intestinal permeability measurement in isolated mouse colons
Treated or untreated IL-10−/− and wild-type mice were sacrificed at 8 weeks, and a Ussing chamber assay was performed as described previously [16,32]. Tissue ion resistance (1/G, where G represents the conductance) was calculated from the potential difference and the short-circuit current according to Ohm's law.
Determination of zonulin protein expression levels by western blot
LP- or LPKM-treated NCM460/MIMP samples were subjected to SDS-PAGE and transferred onto PVDF membranes. Membranes were incubated with antibodies raised against zonulin (Lsbio) at a dilution of 1:100 for 2 h at room temperature, washed in TBS and then incubated for 1 h with corresponding HRP-conjugated secondary antibodies, and visualized using enhanced chemiluminescence.
Detection of zonulin mRNA expression by quantitative real time PCR
Quantitative real-time PCR (qRT-PCR) was used to determine the expression of zonulin at the mRNA level [17,19]. The primers used in our study were: forward primer, 5′-TCATCACGGCGCGCCAGG-3′; reverse primer, 5′-GGAGGTCTAGAATCTGCCCGAT-3′.
Total RNA was isolated from NCM460/MIMP cells using Trizol reagent (Invitrogen) followed by DNase I treatment. The quantity and quality of RNA was verified by determining the absorbance ratio at 260 and 280 nm, and by visualization of respective bands on agarose gels. For each sample, 600 ng mRNA was used in the reverse transcription reaction according to the manufacturer's specifications (iScript kit, BioRad). mRNA was also detected by RT-PCR using a light-cycling system (LightCycler, Roche Diagnostics GmbH, Mannheim, Germany). The level of mRNA expression was expressed as the ratio of the mean reading of the experimental group over that of the control group for NCM460/MIMP cells.
Verification of the zonulin pathway by examination of intestinal permeability in vitro
A zonulin overexpressing adenovirus was constructed as previously described [33]. Briefly, human zonulin cDNA was cloned into KpnI and XhoI sites of the pENTR 2B vector (Invitrogen), and then transferred to the pAd/CMV/ V5-DEST vector (Invitrogen). The plasmids were linearized with PacI (Promega, Madison, WI) and transfected into 293A cells using Lipofectamine 2000. As a control, the pAd/CMV/V5-GW/lacZ vector (Invitrogen) was used to produce a lacZ-bearing adenovirus. NCM460 cells were transfected with Ad-zonulin or Ad-lacZ for 12 h. After transfection, the cells were washed with PBS and placed in fresh medium for western blot analysis and examination of intestinal permeability as described above.
Clinical verification of zonulin pathway
It has been reported that postoperative septicemia is associated with bacterial translocation, which may be caused by an increase of intestinal permeability and barrier injury [2]. We used postoperative septicemia to evaluate the relationship between the human serum zonulin level and intestinal permeability [34]. 121 patients with colorectal cancer staged as T2-T3, N1, M0 according to the TNM staging system were enrolled in this study. All patients underwent a radical colectomy at the Shanghai Sixth People's Hospital, affiliated to Shanghai Jiao Tong University in Shanghai, or the Sixth Affiliated Hospital of Sun Yat-sen University in Guangzhou, between April 2009 and September 2012. The study design and protocols were reviewed and approved by the Human Research Review Committee of both the Shanghai Sixth People's Hospital and the Sixth Affiliated Hospital of Sun Yat-sen University, and written informed consent for participation was obtained from each patient before enrollment into the study. Serum samples were collected 1 day preoperatively, and 3 and 10 days after the surgical procedure. The concentrations of zonulin were determined using an ELISA kit, as previously described [35]. Briefly, plastic microtiter plates (Costar, Cambridge, MA) were coated with rabbit zonulin cross-reacting Zot derivative ΔG IgG antibodies (10 μg/ml in 0.1 mol/l sodium carbonate buffer, pH 9.0), which were generated as previously described [35]. After an overnight incubation at 4°C, plates were washed four times in TBS and blocked by incubation for 1 h at 37°C with TBS. After four washes, ΔG serial standards (50, 25, 12.5, 6.2, 3.1, and 0 ng/ml) and patient serum samples (1:10 dilution in TBS) were added and incubated overnight at 4°C. After four washes with TBS + 0.2% Tween 20, plates were incubated with biotinylated anti-Zot IgG antibodies for 4 h at 4°C. A color reaction was developed using a commercial kit (ELISA amplification kit; Invitrogen). The absorbance at 495 nm was measured with a microplate auto-reader (Molecular Devices Thermomax Microplate Reader).
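As an illustration of how concentrations might be read off the ΔG standard curve (a sketch assuming a simple linear fit and hypothetical absorbance readings; the actual analysis may use a different curve model):

```python
import numpy as np

standards_ng_ml = np.array([0.0, 3.1, 6.2, 12.5, 25.0, 50.0])      # ΔG standards
absorbance_495 = np.array([0.05, 0.11, 0.18, 0.33, 0.61, 1.15])    # hypothetical readings

slope, intercept = np.polyfit(standards_ng_ml, absorbance_495, 1)  # linear standard curve

def zonulin_conc(sample_abs, dilution=10):
    """Serum zonulin (ng/ml) from sample absorbance; sera were diluted 1:10."""
    return (sample_abs - intercept) / slope * dilution

print(round(zonulin_conc(0.42), 1))   # concentration for a hypothetical sample
```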
Statistical analysis
The data were expressed as the mean ± standard deviation (SD) when normally distributed or as a median (range) when abnormally distributed. Statistical analyses were performed using the SPSS 13.0 software (SPSS Inc., Chicago, IL). SD between multiple groups was assumed to satisfy a normal distribution. Data were analyzed by one-way ANOVA when conditions of homogeneity of variance were present. P values <0.05 were considered to be statistically significant. Spearman's correlation was used to assess the relationship between zonulin level and postoperative septicemia using SPSS 13.0.
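For example, the Spearman analysis reported below could be run as follows (illustrative arrays only, not patient data):

```python
from scipy.stats import spearmanr

# Illustrative paired observations only: serum zonulin (ng/ml) and a binary
# indicator of postoperative septicemia for each patient.
zonulin = [1.2, 3.4, 2.1, 5.6, 4.3, 0.9, 6.1]
septicemia = [0, 1, 0, 1, 1, 0, 1]
rho, p_value = spearmanr(zonulin, septicemia)
print(rho, p_value)
```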
Loss of the MIMP sequence and expression in LPKM
PCR and western blot were performed to confirm the deletion of the MIMP gene in LPKM. PCR amplicons over the IMP region indicated that, compared with LP (approximately 1000 bp), LPKM (approximately 800 bp) lost about 200 bp in the genes surrounding the deletion (Figure 1A). Western blot showed a band of about 120 kDa in the whole-bacteria protein of LP that was undetectable in LPKM (Figure 1B).
Decreased competitive inhibition by LPKM of EPEC adherence to NCM460 cells
The adhesion rate of LPKM to NCM460 cells was significantly lower than that of LP (P < 0.001, Figure 1C). Detection of EPEC adherence indicated that the adhesion rate of EPEC to NCM460 cells was reduced significantly when LP was added, while adding LPKM had no effect on the inhibition of EPEC adhesion. Moreover, the anti-MIMP antibody abolished the effect of LP on the reduction of EPEC adhesion (P < 0.05, Figure 1D).
LPKM had no effect on the EPEC-induced impairment of the intestinal barrier
TER in the NCM460 cell monolayers was significantly decreased in response to EPEC infection compared with uninfected control cells at 3-24 h (P < 0.05, Figure 2A). The EPEC-induced TER decrease was prevented by the simultaneous treatment with LP (P < 0.05, Figure 2A). However, treatment with LPKM had little effect on the EPEC-induced TER decrease (P > 0.05, Figure 2A). The anti-MIMP antibody also abolished the effect of LP on the EPEC-induced decrease in TER (P > 0.05, Figure 2A). Similar results were obtained for dextran permeability (Figure 2B).
The intestinal permeability and colonic damage in vivo were investigated using sugar probes. Increased small intestinal permeability was observed in the IL-10−/− mice from the age of 4 weeks onwards, compared with the wild-type mice (P < 0.05, Figure 2C). Oral daily administration of pure milk containing LP for 4-17 weeks decreased small intestinal permeability in IL-10−/− mice, and the effect was more evident at the age of 8-14 weeks, when the small intestinal permeability returned to the normal level (P < 0.05, Figure 2C). However, administration of pure milk containing LPKM for 4-17 weeks did not decrease small intestinal permeability in IL-10−/− mice (P > 0.05, Figure 2C). Meanwhile, IL-10−/− mice had a significantly increased colonic permeability compared with the wild-type mice (P < 0.05, Figure 2D). Treatment with LP had no significant effect on colonic permeability until week 10. At weeks 10-15, administration of pure milk containing LP was effective in decreasing colonic permeability in IL-10−/− mice (P < 0.05, Figure 2D). However, administration of pure milk containing LPKM for 4-17 weeks had no effect on colonic permeability in IL-10−/− mice (P > 0.05, Figure 2D).

Figure 2. (A) TER was significantly decreased in the EPEC group compared with control (P < 0.05), which was prevented by the simultaneous treatment with LP (P < 0.05). However, treatment with LPKM showed no effect on the EPEC-induced TER decrease (P > 0.05). The anti-MIMP antibody also abolished the effect of LP on the TER decrease (P > 0.05). (B) Similar findings were obtained for dextran permeability. (C) Increased small intestinal permeability was observed in the IL-10−/− mice from the age of 4 weeks onwards, compared with the wild-type mice (P < 0.05). LP decreased small intestinal permeability in IL-10−/− mice, and the effect was more evident at the age of 8-14 weeks, when the small intestinal permeability returned to the normal level (P < 0.05). However, LPKM showed no decrease of small intestinal permeability in IL-10−/− mice (P > 0.05). (D) IL-10−/− mice had a significantly increased colonic permeability compared with control (P < 0.05). Treatment with LP had no significant effect on the intestinal permeability until week 10. At weeks 10-15, oral daily administration of pure milk containing LP effectively decreased colonic permeability of IL-10−/− mice (P < 0.05). However, LPKM showed no help in decreasing colonic permeability of IL-10−/− mice (P > 0.05). The intestinal epithelial monolayers were divided into five different experimental groups in triplicate. Each group used 10 animals for determination. *vs. EPEC group, P < 0.05; # vs. *, P < 0.05. The data are expressed as the mean ± standard deviation. Statistical analyses were performed using SPSS 13.0 (SPSS Inc., Chicago, IL). Data were analyzed by one-way ANOVA when conditions of homogeneity of variance were present.
A Ussing chamber assay was performed to evaluate the intestinal and colonic permeability in tissues from 8-week-old mice. The small intestinal permeability to mannitol in the IL-10−/− mice was increased, with a corresponding decrease in TER, compared with the wild-type group. In the LP group, the small intestinal permeability to mannitol significantly decreased whereas TER significantly increased, compared with the mice in the control group (P < 0.05, Figure 3A and B). However, in the LPKM group, the small intestinal permeability to mannitol and TER did not change significantly compared with the mice in the control group (P > 0.05, Figure 3A and B). Similar results were observed in colonic tissues. Colonic permeability to mannitol increased in the IL-10−/− mice with a corresponding decrease in TER, both of which were prevented by LP but not by LPKM (P < 0.05, Figure 3C and D).
LPKM lost the ability to prevent the EPEC-induced increase in zonulin expression
Semi-quantitative analysis of the western blots showed that the zonulin protein expression level was higher in the EPEC group than in the uninfected control group (P < 0.05, Figure 4A and B). With the simultaneous treatment of LP, this increase in zonulin protein expression was abolished (P < 0.05, Figure 4A and B). However, simultaneous treatment with LPKM did not change the increased zonulin protein expression level (P > 0.05, Figure 4A and B). qRT-PCR also showed that EPEC enhanced the zonulin mRNA expression level (P < 0.05, Figure 4C), which was prevented by LP (P < 0.05, Figure 4C) but not by LPKM (P > 0.05, Figure 4C).
Overexpression of zonulin abolished the LP-mediated reduction of intestinal permeability
The zonulin protein expression level was higher in the EPEC group than in the uninfected control group (P < 0.05, Figure 5A and B). With the simultaneous treatment of LP, this increase in zonulin protein expression was abolished (P < 0.05, Figure 5A and B). After Ad-zonulin was transfected into NCM460 cells, the elevated zonulin level was no longer changed by LP (P > 0.05, Figure 5A and B), whereas the suppression by LP persisted when Ad-lacZ was transfected (P < 0.05, Figure 5A and B).

Figure 3. (A) The small intestinal permeability to mannitol in the IL-10−/− mice was increased compared with the wild-type group. In the LP group, the small intestinal permeability to mannitol significantly decreased compared with control (P < 0.05). However, in the LPKM group, the small intestinal permeability to mannitol did not change significantly (P > 0.05). (B) Similar results were obtained for intestinal resistance. (C & D) Similar results were observed in colonic tissues. Colonic permeability to mannitol increased in the IL-10−/− mice with a corresponding decrease in TER, both of which were prevented by LP (P < 0.05). However, in the LPKM group, colonic permeability to mannitol and TER did not change significantly compared with control (P > 0.05). Each group used 10 animals for determination. *vs. EPEC group, P < 0.05; # vs. *, P < 0.05. The data are expressed as the mean ± standard deviation. Statistical analyses were performed using SPSS 13.0 (SPSS Inc., Chicago, IL). Data were analyzed by one-way ANOVA when conditions of homogeneity of variance were present.
TER in the NCM460 cell monolayers was significantly decreased, and dextran permeability significantly increased, in response to EPEC infection compared with uninfected control cells (P < 0.05, Figure 5C and D). The EPEC-induced changes in TER and dextran permeability were prevented by the simultaneous treatment with LP (P < 0.05, Figure 5C and D). However, the effect of LP on the EPEC-induced change of intestinal permeability was abolished by the transfection of Ad-zonulin into NCM460 cells (P > 0.05, Figure 5C and D), while this was not the case when Ad-lacZ was transfected instead (P < 0.05, Figure 5C and D).
Relationship between zonulin level and postoperative septicemia
A direct correlation was found between the serum zonulin level and the postoperative septicemia (r = 1.000, P < 0.001).
Discussion
In previous studies, we identified and characterized MIMP as a domain of the LP surface layer protein [8,16], and it played a key role in conferring protection against pathogenic bacteria. A recent study found that knockout of a specific gene in lactobacillus could be a good model for investigating the mechanism of lactobacillus components [31]. Therefore, in the present study we established MIMP-knockout LPKM bacteria to investigate the mechanism by which MIMP regulates intestinal permeability. PCR amplicons over the integral membrane protein region indicated the loss of ∼200 bp in the genes surrounding the deletion in LPKM, and western blot confirmed the absence of the MIMP protein in the whole-bacteria protein of LPKM. Our findings suggest that LPKM lost the MIMP sequence and did not express the MIMP protein. It has been shown that adhesion may be the first step of the interaction between lactobacillus and IECs, through which lactobacillus could then exert its protective function against intestinal barrier injury [27,36,37]. Our results indicated that, after knockout of MIMP [8,16], LPKM lost its adhesive effect on NCM460 cells. The EPEC adhesion assay confirmed that the competitive inhibitory effect of LPKM decreased significantly compared with LP, with similar effects in the LP group supplemented with anti-MIMP, which might be due to the loss of the adhesive MIMP protein. To investigate the effects of LPKM on intestinal permeability, we performed assays of TER and dextran permeability in vitro, sugar probe permeability in vivo, and Ussing chamber measurements ex vivo. The intestinal permeability assay of NCM460 cells indicated that LPKM lost the preventive effect on the EPEC-induced TER decrease and dextran permeability increase, similar to the effect of the anti-MIMP antibody [16]. Therefore, we deduce that MIMP has an inhibitory effect on the increase of permeability of the NCM460 cell monolayer.

Figure 4. (A) Western blotting showed that the zonulin protein expression level was higher in the EPEC group than in the uninfected group. After the simultaneous treatment of LP, the increase in zonulin protein expression was abolished. However, simultaneous treatment with LPKM did not change the increased zonulin protein expression level. (B) Semi-quantitative analysis of the western blot results. (C) Detection of zonulin mRNA expression levels by qRT-PCR showed that EPEC enhanced zonulin expression, which was prevented by LP but not by LPKM. *vs. EPEC group, P < 0.05; # vs. *, P < 0.05.
We also determined intestinal permeability and colonic damage in vivo and ex vivo using the IL-10−/− mouse model and found that LPKM lost the ability to lower both small intestinal and colonic permeability: the permeability to mannitol and the TER did not change significantly in either intestinal or colonic tissue. These results suggest that LPKM lost its effect on intestinal and colonic permeability and that MIMP inhibits the increase of intestinal and colonic permeability. Because reducing small intestinal permeability has been reported to attenuate colitis and protect intestinal barrier function in IL-10−/− mice [16,32], MIMP may alleviate intestinal inflammation by reducing small intestinal and colonic permeability.
Zonulin is a recently discovered protein that participates in the TJ between IECs in the digestive tract [19]. It was originally identified as the target of zonula occludens toxin, which is secreted by the cholera pathogen Vibrio cholerae [38], and has been reported as a marker of increased gut permeability in coeliac disease [20] and type 1 diabetes mellitus [35]. High zonulin expression can therefore reflect increased intestinal permeability [22]. However, the relationship between probiotics and zonulin has not been investigated, and the molecular mechanism by which LP exerts its effects on intestinal permeability has not been clarified. We therefore hypothesized that LP regulates intestinal permeability via the zonulin pathway. Our results showed that LP, but not LPKM, inhibited the EPEC-induced increase in zonulin expression, suggesting that MIMP may lower intestinal permeability by inhibiting zonulin expression in IECs, thereby alleviating intestinal inflammation and protecting normal intestinal barrier function. To verify the involvement of the zonulin pathway in the interaction between LP and intestinal permeability, a zonulin-overexpressing adenovirus was constructed and transfected into NCM460 cells.
[Figure legend fragment: (A) Zonulin protein expression was higher in the EPEC group than in the uninfected control group; simultaneous LP treatment abolished this increase. After Ad-zonulin transfection of NCM460 cells the zonulin level in LP-treated cells was not changed, whereas the increase persisted after Ad-lacZ transfection. (B) Semi-quantitative analysis of the western blots. (C) TER in NCM460 cell monolayers decreased significantly after EPEC infection compared with uninfected controls; this was prevented by LP, but the protective effect was abolished by Ad-zonulin transfection and preserved with Ad-lacZ. (D) Dextran permeability increased significantly after EPEC infection; LP prevented this change, and again the effect was abolished by Ad-zonulin but not by Ad-lacZ. *, P < 0.05, vs. LP + EPEC group; #, P < 0.05, vs. *.]
The results indicated that overexpression of zonulin abolished the permeability-lowering effect of LP, namely the restoration of TER and the reduction of macromolecular dextran permeability. Furthermore, we found a positive correlation between serum zonulin levels and postoperative septicemia. In this study, we thus verified for the first time the critical role of zonulin in the regulation of intestinal permeability.
Taken together, MIMP may be an important protein with protective effects on the intestinal barrier and could be developed as a new agent to prevent and treat intestinal barrier dysfunction [2,7,17]. Because lactobacillus carries a risk of translocation and cannot be combined with antibiotics, MIMP may offer advantages of its own [17,39,40]. In addition, zonulin could serve as a biomarker of intestinal dysfunction [41].
One limitation of our study is that we did not verify zonulin in human serum samples after administration of LP; this test is now in progress in our hospital. Furthermore, barrier functions mediated by newly discovered molecules of the IEC itself are attracting increasing attention [42], such as intestinal villus brush-border alkaline phosphatase (IAP) [43][44][45] and intracytoplasmic protein phosphatase 2A (PP2A) [46,47]; their interaction with MIMP should be investigated in future studies [40]. | 6,650 | 2014-10-03T00:00:00.000 | [
"Biology"
] |
A Synopsis of Serum Biomarkers in Cutaneous Melanoma Patients
Many serum biomarkers have been evaluated in melanoma but their clinical significance remains a matter of debate. In this paper, the serum biomarkers for melanoma are reviewed and discussed from the point of view of their practical usefulness. The expression of biomarkers can be detected intracellularly or on the cell membrane of melanoma cells or noncancer cells in association with the melanoma. Some of these molecules can then be released extracellularly and be found in body fluids such as the serum. Actually, with the emergence of new targeted therapies for cancer and the increasing range of therapeutic options, the challenge for the clinician is to assess the unique risk/response ratio and the prognosis for each patient. New serum biomarkers of melanoma progression and metastatic disease are still awaited in order to provide an efficient rationale for followup and treatment choices. LDH as well as S100B levels have been correlated with poor prognosis in AJCC stage III/IV melanoma patients. However, the poor sensitivity and specificity of these markers and many other molecules are serious limitations for their routine use in both early (AJCC stage I and II) and advanced stages of melanoma (AJCC stage III and IV). Microarray technology and proteomic research will surely provide new candidates in the near future, allowing more accurate definition of the individual prognosis, prediction of the therapeutic outcome, and selection of patients for early adjuvant strategies.
Introduction
The incidence of cutaneous malignant melanoma (CMM) is still increasing in the western world despite early detection and prevention campaigns. Patients are mostly young, and late diagnosis, which means thicker tumors (Breslow index ≥ 1 mm; the Breslow index is the vertical thickness of the primary tumor measured in mm) and/or involvement of regional lymph nodes, carries a greater risk of developing disseminated disease. CMMs usually progress from an in situ proliferation to a radial growth pattern, and then to a vertical growth phase. This vertical growth phase represents a key event for cell spread, since it allows the cells to migrate deep into the dermis, the lymphatics, and the bloodstream.
In the 7th revision of the American Joint Committee on Cancer (AJCC) melanoma staging and classification (2009), patients can be divided into four stages, from stage I and II (local disease) to stage III (locoregional disease) and stage IV (metastatic disease). In this classification, the only marker that has been incorporated for clinical use is lactate dehydrogenase (LDH), since elevated serum LDH has been shown in multivariate analysis to be an independent and highly significant predictor of survival even after accounting for site and number of metastases.
Surgery remains the mainstay of melanoma treatment. Actually, the major concern after diagnosis by primary surgery or excision is to know whether the cancer has already metastasized. Indeed, many arguments suggest that early detection of melanoma metastasis could improve the prognosis of patients, at least for some of them.
High-risk melanoma patients can be defined by a 50% risk of relapse despite initial optimal surgical treatment. This group of patients should be carefully followed and if possible treated by efficient adjuvant therapeutic strategies.
Interferon-α and more recently ipilimumab have been proposed as adjuvant treatments, but their effect on survival is still a matter of debate. To date, no predictive factor of response has been described. The process of metastasis involves the spread of neoplastic cells to locoregional or distant body sites via lymphatic vessels and/or the bloodstream. In the case of melanoma, circulating cells may find a suitable microenvironment in the first draining lymph node, known as the sentinel lymph node, in other lymph nodes, or in distant organs, leading to secondary tumor growth (Figure 1). Melanoma may spread to almost all organs, with a predilection for lymph nodes, liver, lungs, brain, and bones. Understanding the biology and mechanism of metastasis provides new molecular targets and may help us to discover new biomarkers.
When metastatic disease is confirmed late and surgery can no longer be chosen, therapeutic options are limited and give disappointingly low responses. These options include specific or nonspecific immunotherapy, chemotherapy, radiotherapy, radiosurgery, radiofrequency ablation.
Towards the Definition of a Biomarker in Cutaneous Malignant Melanoma?
Biomarkers can be divided into diagnostic markers for screening, prognostic markers, which can be used once the cancer has been diagnosed, and predictive markers, which should predict the likely response to a treatment. Cancer biomarkers include molecular tools such as proteins, peptides, DNA, mRNA, or processes which can be measured in a given cancer with specific quantitative and qualitative tools. These markers can be found in tissues, cells, and/or body fluids. In addition, viable melanoma cells can also be found in the peripheral blood of melanoma patients. The discussion will be limited here to serum biomarkers in melanoma patients.
The ideal serum biomarker should be a molecule whose detection in the blood allows diagnosis of a growing tumor in a patient. The biomarker must exhibit sufficient sensitivity and specificity in order to minimize false negative and false positive results. The sensitivity refers to the proportion of patients with a confirmed disease who will have a positive test for a biomarker, while the specificity can be defined by the proportion of healthy individuals with a negative test.
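As a minimal sketch of these two definitions (the function name and counts are hypothetical, not data from any cited study), sensitivity and specificity can be computed from a 2x2 confusion table:

```python
# Minimal sketch of the sensitivity/specificity definitions above.
# The function name and counts are hypothetical, not from any cited study.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # confirmed-disease patients with a positive biomarker test
    specificity = tn / (tn + fp)  # healthy individuals with a negative test
    return sensitivity, specificity

# Hypothetical example: 80/100 patients test positive, 90/100 healthy controls test negative.
print(sensitivity_specificity(tp=80, fn=20, tn=90, fp=10))  # -> (0.8, 0.9)
```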
Previous studies have shown that many molecules which may be involved in oncogenesis and cancer spread can be found in the serum of cancer patients, in particular melanoma patients, but their sensitivity and/or specificity is still questionable. These molecules can be produced and secreted or shed into the bloodstream directly by melanoma cells, or indirectly through destruction of melanoma cells by chemotherapy, immunotherapy, or combined therapy. Biomarker discovery is a complex research process which involves scientific collaboration and data sharing. Early approaches rested on clinical and pathological findings, as illustrated by the CEA and the PSA stories, but now emerging technologies such as genomics and proteomics have influenced and changed the paradigm.
At this moment, no ideal biomarker exists in the melanoma field, and additional markers (combined markers) probably provide more useful information, as shown in some reports. Routine use of tumor markers is an important issue because it would allow early detection and definition of the therapeutic strategy.
In addition, melanoma biomarker research is an open field for the understanding of molecular events in melanoma progression and should provide new molecular targets for therapeutic intervention.
Hereafter we detail the most important serum molecules which have been described as biomarkers for CMM.
Common CMM Biomarkers
2.1.1. Lactate Dehydrogenase (LDH). As already discussed above, this enzyme has been considered the main serum parameter in metastatic melanoma patients and identified as a "good" biomarker in metastatic patients. Many studies have presented LDH as the most predictive independent factor. This led to a stratification in the American Joint Committee on Cancer staging system. Metastatic melanoma patients with high LDH levels are designated as M1c whatever the site of metastases.
However, Hamberg et al. [1] recently indicated that in a series of 53 AJCC stage IV melanoma patients only 38% had high levels of LDH, suggesting that elevated LDH is not the ideal marker for this condition. Moreover, in a multivariate analysis of 64 AJCC stage IV melanoma patients, Hauschild et al. [2] failed to demonstrate the independent prognostic value of LDH. It should also be remembered that LDH measurement can be falsely elevated due to haemolysis and other factors, among them hepatitis. Other markers are thus needed.
C-Reactive Protein (CRP).
CRP is a nonspecific inflammatory parameter which might have a role in the detection of melanoma progression. This protein is produced by hepatocytes as part of the nonspecific acute-phase response to inflammation. High serum CRP levels have been linked to poor prognosis in various neoplasias. In a recent report, Deichmann et al. [3][4][5] analyzed the prognostic significance of C-reactive protein (CRP) compared to LDH in AJCC stage IV melanoma patients. With a cut-off point of 3 mg/dL, serum analysis discriminated between stage IV and non-stage IV melanoma patients, with a sensitivity of 76.9% and a specificity of 90.4%. In another prospective study of 67 patients, Deichmann et al. considered that CRP alone was the most relevant prognostic parameter [3]. These results are debated.
2.1.3. S100-β Protein (S100B). S100B is a 21-kd dimeric protein consisting of two β subunits. This protein is a member of a family of 19 proteins and was first isolated from bovine brain in the mid-sixties. S100B protein is expressed by glial cells and melanocytes and has been shown to be produced by brain tumors and melanoma.
The roles of S100B are probably multiple and underestimated. It can interact with the p53 tumor suppressor protein in a calcium-dependent manner.
The serum S100B level is linked to the tumor burden and reflects clinical stage and tumor progression as reported by some.
There is increasing evidence that time-dependent evaluation of serial blood measurements of S100B is useful in order to follow melanoma patients ( Figure 2); many reports have shown that S100B levels are correlated with clinical stage (the higher the level, the more advanced the stage) and could be used to monitor the effectiveness of antitumoral treatment whatever the type of the treatment (surgical, chemotherapy, immunotherapy). Retsas et al. [6] have even suggested the use of S100B instead of LDH in the AJCC staging system while other authors consider that S100B does not have any added value when comparing its sensitivity and specificity to CRP and LDH. S100B has probably become the most useful tumor marker in clinical practice but seems limited to advanced stage III and stage IV melanoma patients: in stages I and II S100B does not provide independent prognostic information. Currently S100B is not used routinely despite a high predictive value for the recurrences in this group of patients.
Moreover one should remember that S100B is not melanoma specific and that its serum level can be elevated in healthy subjects, nonmelanoma skin cancer patients, in neurological disorders, in AIDS, in central nervous system tumours, and even in various gastrointestinal cancers.
Novel Molecules Which Can Be Considered as Possible Biomarkers in CMM
2.2.1. Melanoma Inhibitory Activity (MIA). MIA is a 12 kDa soluble protein that has been characterized as an autocrine cell growth inhibitor. It can be expressed by melanoma cells as well as chondrocytes. The roles of this protein are multiple, as the molecule may modulate cell growth and cell adhesion. MIA has been shown to be elevated in the serum of relapsing cases (ELISA test) and has been described as a useful marker to monitor melanoma patients after surgery. Some authors consider the sensitivities of MIA and S100B to be equivalent; for others, neither MIA nor S100B is superior to LDH and CRP on multiple logistic regression analysis. In children and pregnant women (after week 38), MIA is increased, and MIA serum measurements should thus be avoided in these two groups [7].
Galectin-3.
Galectin-3 is a member of the family of lectins that bind β-galactosides. Many members of the galectin family are differentially expressed in cancer. Gal-3 is a molecule which contains an NH2-terminal domain, a COOH-terminal domain, and a long collagen-like sequence. In melanoma, Gal-3 has been shown to be overexpressed in malignant melanocytic lesions and is also released into the serum of melanoma patients by both melanoma cells and inflammatory cells. Gal-3 has been shown to play important roles in cell proliferation, cell differentiation, cell adhesion, cell migration, angiogenesis, and metastasis. Thus, Gal-3 deserves close attention, and clarifying the role of extracellular Gal-3 should help us to understand the significance of high serum levels of this molecule in advanced melanoma patients.
Melanoma-Associated Antigens and Circulating Melanoma Cells
Melanoma cells express molecules, so-called melanoma-associated antigens (MAAs), which are more or less specifically associated with the malignant phenotype (Table 1). Sometimes these MAAs can also be expressed by normal melanocytes, but then in sequestered sites. These MAAs play important roles in triggering the antimelanoma immune response. These antigens have mostly been identified by immunological approaches, including in vitro and in vivo reactions, and by serological tests. They can be defined by their ability to interact with T or B cells, and peptides derived from these antigens have been used to induce or sustain a specific antimelanoma immunological response. Mage-1 was the first MAA identified by a genetic approach, at the Ludwig Institute for Cancer Research, Brussels, Belgium, and it belongs to a broad family of at least 12 antigens differentially expressed by benign and malignant melanoma cells. Immune responses to these genes can be used as markers of disease and/or of the existence of immune competence. The polymerase chain reaction (PCR) is a technique which allows the detection of 1 tumor cell among 10^7-10^8 cells, which is much more precise than light microscopy, whose limit of detection is 1/100-1/1000. PCR-based techniques rest on an exponential amplification of specific DNA or RNA molecules. With this technique, the identification of tumor-specific or tumor-associated genes leads to specific detection of tumor cells.
In RT-PCR analysis, RNA is first extracted from the sample and reverse transcribed into cDNA. A gene of interest is then amplified with specific primers and isolated on an agarose gel or hybridized after Southern blotting. Sequencing of the PCR product can be carried out in order to compare it with the gene of interest.
Serum tyrosinase activity or positive tyrosinase RT-PCR in melanoma patients has been shown to be correlated with a higher risk of relapse, but only 55% of these patients will experience a clinical relapse [8]. Moreover, the specificity of this technique has yet to be optimized [9]. Domingo-Domenech et al. showed that, when combined with an S-100 assay, tyrosinase RT-PCR adds valuable prognostic information in patients with S-100 < 0.15 μg/l, even though this team showed that S-100 had a higher predictive value.
Curry et al. [10] have suggested that RT-PCR detection of tyrosinase and MART-1 (Melanoma Antigen Recognized by T cells-1) positive circulating melanoma cells can be useful to determine a subgroup of patients with increased risk of metastasis.
Melanin-Related Metabolites.
5-S-cysteinyldopa (5SCD) is a precursor of phaeomelanin and is produced by both melanocytes and melanoma cells as the product of the binding of a highly reactive molecule, dopaquinone, to cysteine. 5SCD is detectable in the serum and urine of melanoma patients and correlates with disease progression. In progressive patients, it has previously been reported that 5SCD levels increase significantly before clinical signs appear. A comparative report stated that, together with LDH and S100B, 5SCD was an interesting biomarker, even if the authors concluded that S100B could be regarded as the most sensitive of the three markers. Because of the effect of UV exposure on melanin pigment pathways, the use of 5SCD as a biomarker may be limited in Caucasians, while its use in Japan is more extensive. Moreover, patients with amelanotic metastases usually do not have increased serum levels of 5SCD.
3,4-dihydroxyphenylalanine (L-dopa) is the first metabolite involved in melanogenesis, and its plasma levels have been correlated with melanoma progression and tumor burden, as has the plasma L-dopa/L-tyrosine ratio, which represents an index of tyrosinase and tyrosine hydroxylase activity. Stoitchkov et al. have shown that this latter ratio has a predictive value, especially in stage III patients, and advocated the simultaneous use of several biomarkers.
Matrix Metalloproteinases (MMPs).
MMPs are a family of 24 structurally related endopeptidases. These zinc-dependent enzymes are defined by their substrates and can lyse components of the ECM (for instance, type IV collagen, a major component of the basement membrane, is degraded by gelatinases such as MMP-2 and MMP-9), and they play a role in angiogenesis and turnover of the ECM. MMPs may also cleave other molecules, such as other proteinases, proteinase inhibitors, growth factors, and adhesion molecules, and consequently modulate the inflammatory reaction, growth processes, tumor invasion, and metastasis.
The balance between MMPs and tissue inhibitors of metalloproteinases (TIMPs) can be broken by upregulation of MMPs or downregulation of TIMPs, as seen during the acquisition of a malignant phenotype.
Another important role, angiogenesis, has been attributed to MMPs, and this could afford a possible therapeutic target. Batimastat (BB-94, a synthetic broad spectrum metalloproteinase inhibitor) for instance has been shown to inhibit angiogenesis of liver metastases in mouse models.
MMP expression has been reported during melanoma progression, and high serum levels of MMPs, namely, MMP-1 and MMP-3, have been correlated with poor survival.
Cytokines, Chemokines, and Their Receptors.
Chemokines are small signalling polypeptides that can bind to and activate G protein-coupled receptors, a family of seven-transmembrane molecules. Multiple roles have been attributed to chemokines, and these molecules are implicated in the transformation and metastasis processes. The expression pattern of chemokines and their receptors could explain organ-specific metastasis.
Melanoma cells have been shown to express the chemokine CXCL8, also known as interleukin-8 (IL-8), and a report has established that high serum levels of IL-8 are associated with tumor burden and poor prognosis. This might be an interesting approach since in vivo studies have already demonstrated that anti-IL8 humanized antibodies are able to decrease tumor growth and angiogenesis.
Very recently an American team investigated 29 cytokines simultaneously (chemokines, angiogenic factors, growth factors, soluble receptors) with a high-throughput multiplex immunobead assay technology (Luminex Corp., Austin, Tex, USA) in the sera of 179 patients with high-risk melanoma and 378 healthy individuals. They were able to define a specific serum cytokine profile in the patients compared to healthy individuals-higher serum concentrations of interleukin (IL)-1alpha, IL-1beta, IL-6, IL-8, IL-12p40, IL-13, granulocyte colony-stimulating factor, monocyte chemoattractant protein 1 (MCP-1), macrophage inflammatory protein (MIP)-1alpha, MIP-1beta, IFN-alpha, tumor necrosis factor (TNF)-alpha, epidermal growth factor, vascular endothelial growth factor (VEGF), and TNF receptor II. Moreover they showed that IFN-alpha2b therapy resulted in a significant decrease of serum levels of immunosuppressive and tumor angiogenic/growth stimulatory factors and increased levels of antiangiogenic IFN-gamma inducible protein 10 (IP-10) and IFN-alpha. Finally they established a predictive value to the pretreatment levels of proinflammatory cytokines IL-1beta, IL-1alpha, IL-6, TNF-alpha, and chemokines MIP-1alpha and MIP-1beta which were found to be significantly higher in the serum of patients with longer RFS values [12]. Both IL-10 and soluble IL-2 receptors have been correlated to poor outcome [13,14].
Growth Factors and Angiogenesis Factor
Vascular Endothelial Growth Factor (VEGF). Angiogenesis is an important step in tumor growth since it allows the delivery of oxygen and substrates. This process is the result of complex interactions between proangiogenic and antiangiogenic factors released by tumor cells, native endothelial, epithelial and mesothelial cells, and leucocytes. VEGF has been described as a potent mitogen of endothelial cells and a chemotactic factor for monocytes and tumor-associated macrophages (TAMs), and it plays a key regulatory role during neoangiogenesis. Moreover, this growth factor is a vasopermeability stimulant and was formerly known as the Vascular Permeability Factor (VPF). Its expression has been correlated with tumor progression and prognosis and can be increased by hypoxic conditions. Various VEGF family members have been discovered, referred to as VEGF-2, VEGF-3, and so on. Very recently, a team reported lower serum VEGF-C levels in metastatic patients with skin/subcutaneous metastases compared with metastatic patients with other distant sites [15]. In another study, the VEGF serum level was not considered an independent prognostic factor in multivariate analysis [16].
Others: Cell Surface and Adhesion Molecules
Integrins are heterodimeric cell adhesion receptors composed of two subunits, α and β. On the basis of their common subunit, integrin heterodimers can be subdivided into αv, β1, and β2 integrins. The main integrins involved in melanoma progression include αvβ3 (receptor for vitronectin and fibronectin), α2β1 (collagen), α4β1 (fibronectin), and α6β1 (laminin).
Some reports have shown that increased serum levels of β integrins are associated with shorter survival. The clinical impact of this has yet to be defined.
CD44.
CD44 is a cell surface transmembrane glycoprotein, originally described as a homing receptor for lymphocytes. In the literature, this protein is described as playing a role in tumor invasion and the metastatic process. Multiple isoforms of CD44 are generated by alternative splicing of transcripts of its gene. CD44 is an important cell surface receptor for hyaluronan; downregulation of CD44H and loss of CD44v3 expression have been correlated with poor outcome by some authors, but this has not been confirmed. Moreover, serum level studies have also been conducted, which did not show any significance in defining the prognosis of melanoma patients.
ICAM-1.
ICAM-1 is an intercellular adhesion molecule which can be found in the cell membranes of leukocytes and endothelial cells. ICAM-1 is a ligand for LFA-1 (lymphocyte function-associated antigen-1, of the integrin family) on T cells, B cells, macrophages, and neutrophils. Migration of leukocytes is facilitated by ICAM-1/LFA-1 binding. Comparative measurements of serum levels of ICAM-1 and 5-S-CD have concluded that the level of 5-S-CD is a better marker for disease progression in melanotic melanoma, and another study showed that sICAM-1 was increased in metastatic patients but without independent prognostic value in multivariate analysis [17].
Discussion
Cancer is a major cause of morbidity and mortality in our society. It has exacted a tremendous price and has had many devastating effects. In particular, the rising incidence of cutaneous melanoma in western population is a major health problem.
Melanoma growth and progression are well defined in their clinical and histopathological patterns. The prognosis of CMM is strongly related to the stage at which it is diagnosed; with early diagnosis, a high proportion of lesions present a good prognosis.
If diagnosed late, melanoma can be a very aggressive malignancy. Moreover, melanoma may sometimes exhibit unpredictable clinical behaviour: the thickness of the primary lesion (Breslow index) is the most important prognostic factor but patients with thick melanoma (Breslow index > 4 mm) can be free of disease and some patients with thin melanoma (Breslow index < 1 mm) can die of their disease.
Current therapies have limited effectiveness, and surgery remains the mainstay of treatment. Better treatments are certainly needed: treatment of advanced melanoma patients yields a poor or even absent survival benefit. Only a few of these patients will benefit from systemic treatment: the survival of AJCC stage IV melanoma patients with visceral/brain metastases can be estimated at 6-9 months. These poor therapeutic responses may be due, at least in part, to inadequate treatment or to the inclusion of patients in therapeutic protocols on the basis of inappropriate staging.
In the past, the prognostic factors used in melanoma patients were limited to histology (tumor thickness) and the localization of the primary tumor. These parameters remain important but have been further complemented by many clinical, pathological, and biological prognostic factors, particularly in advanced melanoma patients. Recently, the use of serum markers, isolated or combined, has been suggested in order to refine the prognosis of a patient, to ensure adequate followup, and to predict the possible benefit from a therapy. Several melanoma-specific or nonspecific biomarkers can be found in the serum of advanced patients, and in most cases, these markers are directly correlated with tumor burden. Among all these biomarkers, S100B emerges as a protein with an independent prognostic value in advanced melanoma, more specific and sensitive than LDH, as illustrated by some studies, but not yet ideal.
Because biomarkers are a useful way to understand the biological diversity of melanoma, new biomarkers should be defined and further investigations should be carried out.
This biomarker research is important since it could improve patient monitoring, early detection, and treatment of secondary lesions and open new perspectives for targeted therapies. The multiple molecular modifications underlying melanoma progression are currently being intensely investigated. Nowadays there is a wealth of data generated by proteomic and genomic approaches, which is growing daily. These techniques allow the molecular profiling of individual tumors and the study of expression of thousands of genes in order to isolate genes or families of genes of interest. It is important to recall that, because of posttranslational modifications, an activated gene does not mean a bioactive protein. These techniques give the possibility of classifying melanomas based on their complex biological diversity, and this will surely have a direct impact on the definition of new biomarkers and on their large-scale study.
Conclusions
In melanoma patients, even more than in other diseases, there is a need for careful followup, and the question arises whether biomarkers can be useful in daily practice. So far, serum tumor markers specific for melanoma have not been used routinely. Despite the fact that LDH is the only serum marker that has been included in the AJCC classification, we dare to think that S100B is a promising protein which offers reliable prognostic value in AJCC stage III and IV patients.
To a lesser extent because of ill-defined or poorer sensitivity/specificity, CRP, MIA, and Gal-3 can also be considered as interesting biomarkers. LDH and CRP maintain their important place in this field because of their easy availability. Other molecules such as melanin metabolites, cytokines, metalloproteinase, and adhesion proteins might be useful, but in any case, their clinical significance should be compared in prospective trials to other melanoma markers described.
Clinical research should probably now focus on combination of these molecules and distinguish their prognostic and predictive value.
Proteomic pattern study and genomic research will surely yield evidence in the next decade leading to more well-defined serum indicators of melanoma progression that can be used for early diagnosis and/or improved and tailored cancer therapy. | 5,976.6 | 2012-01-12T00:00:00.000 | [
"Biology",
"Medicine"
] |
CIP2A Promotes T-Cell Activation and Immune Response to Listeria monocytogenes Infection
The oncoprotein Cancerous Inhibitor of Protein Phosphatase 2A (CIP2A) is overexpressed in most malignancies and is an obvious candidate target protein for future cancer therapies. However, the physiological importance of CIP2A-mediated PP2A inhibition is largely unknown. As PP2A regulates immune responses, we investigated the role of CIP2A in normal immune system development and during immune response in vivo. We show that CIP2A-deficient mice (CIP2AHOZ) present a normal immune system development and function in unchallenged conditions. However when challenged with Listeria monocytogenes, CIP2AHOZ mice display an impaired adaptive immune response that is combined with decreased frequency of both CD4+ T-cells and CD8+ effector T-cells. Importantly, the cell autonomous effect of CIP2A deficiency for T-cell activation was confirmed. Induction of CIP2A expression during T-cell activation was dependent on Zap70 activity. Thus, we reveal CIP2A as a hitherto unrecognized mediator of T-cell activation during adaptive immune response. These results also reveal CIP2AHOZ as a possible novel mouse model for studying the role of PP2A activity in immune regulation. On the other hand, the results also indicate that CIP2A targeting cancer therapies would not cause serious immunological side-effects.
Introduction
Despite a considerable improvement in cancer treatment efficiency in the last decade, infectious complications lead to morbidity and mortality in many patients [1,2]. These infections, occurring in solid organ tumors and more frequently in hematological malignancies, are often related to treatment-induced immunosuppression [3]. Therefore, the impact of any cancer therapy on the patient's immune system should be evaluated.
Our laboratory previously identified an endogenous PP2A inhibitor protein, CIP2A (Cancerous Inhibitor of PP2A) [4]. To date, numerous studies have demonstrated that CIP2A is a critical inhibitor of PP2A tumor suppressor activity in various human cancer types [5]. CIP2A binds to the PP2A complex, and inhibition of CIP2A in cultured cells increases the catalytic activity of PP2A [4,6,7]. Moreover, CIP2A inhibits PP2A-mediated dephosphorylation of several signaling proteins including Akt, DapK and Rictor [6,8,9] as well as transcription factors MYC and E2F1 [4,5,10]. Importantly, a number of recent studies have demonstrated that the effects of CIP2A depletion on various cellular phenotypes could be rescued via inhibition of PP2A [10,11]. However, despite its emerging role as a human oncoprotein and future target for cancer therapies, very little is known about the physiological role of CIP2A. The only physiological function identified thus far by using the CIP2A-deficient genetrap mouse model (CIP2A HOZ) is CIP2A's involvement in spermatogenesis [12]. Utilizing a variety of murine models, several independent laboratories have highlighted a potentially important role for PP2A in the regulation of immune responses involved in both autoimmunity and tumor immunity [13][14][15][16]. Consequently, we decided to investigate the possible function of CIP2A in the regulation of immune responses. To identify novel physiological functions for CIP2A, the CIP2A HOZ mouse model was subjected to a systemic phenotyping screen at the German Mouse Clinic [17,18], including thorough immunological characterization. Based on that, we report here that CIP2A expression is a novel mechanism important for an optimal in vivo adaptive immune response after Listeria monocytogenes (L.m.) re-infection. These data may also have importance in the evaluation of potential immunological effects of future CIP2A-targeted cancer therapies.
Results
CIP2A is expressed in lymphoid organs but is dispensable for immune system homeostasis
Comparison of CIP2A mRNA expression between different mouse tissues shows the highest level of expression in testis (Fig 1A) [4,12]. However, the lymphoid organs also prominently express CIP2A, at both the mRNA and protein level (Fig 1A and 1B and S1A Fig). As expected, CIP2A expression was not detected in lymphoid tissues from CIP2A HOZ mice (Fig 1A and 1B and S1A Fig), indicating that CIP2A is efficiently deleted in this genetrap mouse model. Immunohistochemical (IHC) staining of WT animals indicates broad expression of CIP2A in bone marrow and thymus, whereas in spleen and mesenteric lymph nodes, CIP2A was expressed selectively in germinal centers (Fig 1C).
Despite the high expression of CIP2A in primary lymphoid tissues, no pathological alterations were detected in the bone marrow, spleen or thymus from CIP2A deficient mice ( Fig 1D). In fact, our histological analysis of 42 different organs revealed no pathological alterations in any of the CIP2A deficient tissues examined (Table 1).
Moreover, CIP2A deficiency in the lymphoid organs did not induce changes in the distributions of the main leukocyte populations. In peripheral blood, the frequencies of T-cells, B-cells, natural killer cells, granulocytes and monocytes were similar between WT and CIP2A HOZ mice in both sexes (Fig 1E). In a more detailed analysis, only a slight increase in the frequency of CD3+ γδTCR+ and CD44+ T-cells was detected in CIP2A HOZ mice (Table 2), whereas no significant differences between CIP2A HOZ mice and controls were observed in the pattern of B-cell subtypes (S1B Fig). Consistent with unaltered B-cell populations, unchallenged CIP2A HOZ and WT mice had similar serum immunoglobulin (Ig) levels (Fig 1F). We conclude that, in spite of high expression of CIP2A in primary lymphoid tissues, unchallenged CIP2A HOZ mice present normal immune system function.
CIP2A is required for optimal systemic immune response
However, the analysis of the gene expression profile of the spleen from CIP2A HOZ mice revealed differential expression of genes related to the regulation of immune response and/or autoimmune diseases (Table 3). Motivated by these findings, we performed functional in vivo assays to elucidate the role of CIP2A within an immune response. To start with, mice were immunized with ovalbumin in order to measure the T-cell-dependent antibody response. Compared to WT mice, the induction of peanut agglutinin (PNA) positivity and proliferation in germinal centers was attenuated in CIP2A HOZ mice (Fig 2A and 2B) indicating a role of CIP2A within the T-cell dependent B-cell response.
Both innate and adaptive immune responses are required for the clearance of the intracellular bacterium L.m. To assess the role of CIP2A in the course of an adaptive immune response, mice were injected with a sub-lethal dose of L.m.-OVA, followed 4 weeks later by a challenge with a lethal dose of the same bacteria (S2A Fig). Primary infection induced a protective memory response leading to bacterial clearance: no abscesses were detectable histologically in the spleen, and a decrease in the number of necrotic lesions and an increase in the number of small granulomatous lesions were observed in the livers of infected mice of both genotypes. Notably, after the second infection with a lethal dose, CIP2A HOZ mice presented a significant increase in large bacterial abscesses in the liver compared with WT mice (Fig 2E and 2F) (p = 0.03), as well as a trend towards a higher splenic bacterial load (S2C Fig), indicating a reduced adaptive immune response in CIP2A-deficient mice.
We conclude that, despite CIP2A being dispensable for normal development and function of the immune system in unchallenged conditions, CIP2A does contribute to a certain extent to the T-cell dependent immune response in vivo.
Impaired in vivo T-cell activation in CIP2A-deficient mice
During L.m. infection, both CD4+ and CD8+ T-cells are activated, and these cells comprise most of the adaptive immune response [19]. Notably, on day 5 after L.m. reinfection, a significant reduction in the number of both CD4+ and CD8+ T cells was detected among CIP2A HOZ splenocytes (Fig 3A and 3B and S3A Fig). Moreover, immunized challenged CIP2A HOZ mice presented a reduced number of L.m.-specific CD8+ T cells when compared to WT counterparts (S3B Fig). However, we did not observe a statistically significant decrease in the percentage of L.m.-specific CD8+ T cells in immunized challenged CIP2A deficient mice (Fig 3C and 3D). Nevertheless, the characterization of T cell subsets revealed a decreased frequency of CD8+ T cells with a secondary effector phenotype (CD127-CD62L-) in immunized challenged CIP2A deficient mice (Fig 3E and 3F). Concomitantly, we observed a reduced frequency of IFN-γ producing CD8+ T cells in these immunized challenged CIP2A HOZ mice (Fig 3G and 3H), whereas TNF-α secretion by CD8+ T-cells and IFN-γ and TNF-α co-production by CD4+ T-cells were not affected by CIP2A deficiency (S3C and S3D Fig). Notably, CD8+ T cell populations with a secondary effector memory phenotype (CD127+CD62L-) and a central memory phenotype (CD127+CD62L+) were not reduced in immunized challenged CIP2A HOZ mice (S3E Fig). We conclude that CIP2A influences T cell responses in vivo, possibly by promoting the generation of CD8+ effector T-cells and of IFN-γ producing CD8+ T-cells.
[Table 2 legend fragment: leukocyte subset frequencies (% of CD45+ cells or of the respective subset) are given as mean and standard deviation, with p-values calculated by a linear model (parameters: genotype x sex); * numbers based on 10 WT or CIP2A HOZ males and 9 WT or CIP2A HOZ females.]
[Table 3. Molecular functions of significantly regulated genes identified via GO term enrichment analysis of CIP2A HOZ versus WT spleens (columns: Category; Regulated genes). CIP2A was the only gene downregulated in CIP2A HOZ samples; all other genes were up-regulated in mutant samples compared with WT.]
Cell autonomous function for CIP2A in T-cell activation
Due to the complex nature of the interplay between immune cells during the immune response in vivo, it is essential to study whether CIP2A regulates T-cell activation in a cell autonomous fashion. Therefore, we first assessed whether CIP2A expression is induced in activated T-cells in vitro. Indeed, CIP2A protein expression was induced in WT CD8+ T-cells, which was not observed in T-cells from CIP2A deficient mice (Fig 4A). To further investigate the importance of CIP2A in T-cell antigen receptor (TCR) mediated T-cell activation, we isolated murine CD4+ T-cells from mice in which one allele of the TCR signal transducer Zap70 is deleted (Zap70+/-), but which display a normal immune response, and compared these to cells from Zap70(AS) mice in which the ATP-binding pocket of Zap70 is engineered to be selectively inhibited by the ATP analogue HXJ [21,22]. This allows selective inhibition of Zap70 activity during T-cell activation upon HXJ treatment and thus allows assessment of whether TCR-mediated T-cell activation is linked with CIP2A gene regulation. Treatment with a combination of anti-CD3 and anti-CD28 induced CIP2A mRNA expression at 48 hours in both Zap70+/- and control Zap70(AS) CD4 T-cells (Fig 4B). Importantly, CIP2A mRNA induction was completely prevented by HXJ treatment of Zap70(AS) cells, whereas the compound did not affect CIP2A expression in Zap70+/- cells (Fig 4B). These results demonstrate that CIP2A expression is intimately linked with T-cell activation in a cell autonomous fashion. The role of Zap70 in CIP2A regulation is further confirmed by the significantly decreased CIP2A mRNA expression in Zap70-negative Jurkat T-cells (S4A Fig).
To assess the functional relevance of CIP2A induction for cell autonomous T-cell activation, T-cells isolated from WT or CIP2A HOZ mice were treated with anti-CD3 and anti-CD28, and activation was assessed by the ratio of CD69-positive to CD69-negative cells. Notably, loss of CIP2A resulted in significant inhibition of T-cell activation (Fig 4C), possibly by reducing the proliferation of activated T cells, as suggested by CFSE staining (S4B and S4C Fig). To assess the long-term effects of CIP2A loss on T-cell activation, we compared the number of viable cells after in vitro activation of WT and CIP2A HOZ splenocytes by IL-2 and anti-CD3. As shown in Fig 4D, the number of CIP2A HOZ splenocytes was significantly reduced 7 days post-stimulation.
These results demonstrate that CIP2A promotes T-cell activation in a cell autonomous fashion. Importantly, these results can be also extended to human T-cells as siRNA-mediated inhibition of CIP2A expression significantly inhibited their activation (Fig 4E & S4D and S4E Fig).
Discussion
Since its original characterization in 2007 as an oncogenic PP2A inhibitor protein [4], CIP2A has been documented to be a clinically relevant oncoprotein in the vast majority of solid and hematological human cancers tested [5]. However, our understanding of CIP2A's function thus far has been limited to its reported role in promoting mouse spermatogenesis [12]. In the present work, we have thoroughly characterized both CIP2A expression and function in lymphoid tissues and immune cells. CIP2A is expressed in all lymphoid tissues analyzed, with highest expression in the bone marrow and thymus. Based on the normal repertoire of all immune cells analyzed, CIP2A expression appears to be dispensable for immune system homeostasis and lymphocyte differentiation. Thus, developmental defects can be excluded as the main cause of impaired lymphocyte activation in CIP2A HOZ mice. Using L.m. infection as an in vivo model to study the impact of CIP2A on immune responses, our results suggest that CIP2A is required for an optimal adaptive immune response to L.m. [23]. Therefore, the reduced secretion of IFN-γ by L.m.-specific CD8+ T-cells (Fig 3G and 3H) can be phenotypically correlated to the significant increase of large abscesses in the liver of re-challenged CIP2A HOZ mice when compared to WT mice (Fig 2E and 2F).
Together, these results firmly establish CIP2A as a novel signaling protein important for T-cell activation in vivo. Also, our results demonstrate that CIP2A is downstream of Zap70 activity during T-cell activation (Fig 4B and S4A Fig), which is interesting considering the role of Zap70 in human immune diseases [24]. Moreover, this decreased activation could be due to a reduced proliferation of CIP2A deficient T cells, as suggested by CFSE dilution staining (S4D and S4E Fig). To date, numerous studies have demonstrated that CIP2A is a critical inhibitor of PP2A tumor suppressor activity in various human cancer types [5]. Importantly, CIP2A inhibition increases the catalytic activity of PP2A [4,6,7], and several phenotypes induced by CIP2A inhibition can be reversed by co-inhibition of PP2A [10,11]. Therefore, even though it is impossible to completely rule out potential PP2A-independent mechanisms in mediating the immune response defects in CIP2A HOZ mice, it is very likely that they are linked to PP2A regulation. This conclusion is further supported by previous data showing that PP2A limits CD28-elicited signal transduction during IL-2-induced T-cell activation [25], which is in perfect concert with the observed decrease in IL-2-induced activation in CIP2A deficient T-cells (Fig 4D). On the other hand, our results are in good agreement with recent data that PP2A is inhibited in human autoimmune diseases [15,26]. Mechanistically, CIP2A-mediated PP2A inhibition could support T-cell activation via its effects on MYC, which is a well-characterized CIP2A-regulated PP2A target [4,5,11] and important for T-cell activation [27,28]. Other CIP2A-regulated effectors previously implicated in T-cell activation include CREB [29], Akt/PKB [6] and mTOR [8,30].
In summary, our analysis of the potential physiological role of the recently discovered human oncoprotein CIP2A identifies CIP2A as a novel factor involved in the activation of both CD4+ and CD8+ T-cells and in adaptive immune regulation. In the light of these results, it is well reasoned to further explore the functional relevance of mechanisms regulating PP2A activity for the host response to microbial infections. The CIP2A mouse model presented here will provide a feasible tool to advance this understanding. On the other hand, these results give important guidelines for estimating potential immunological side-effects of future CIP2A-targeted cancer therapies, which would probably not be as limiting in comparison with the reported effects of many existing cancer treatments [1,2].
[Figure legend fragment: characterization of secondary T-cell subsets: central memory (TCM, CD127+CD62L+) and effector memory (TEM, CD127+CD62L-) phenotypes are observed in immunized, unchallenged mice, whereas immunized challenged mice present two main populations of effector T cells differentiated by their CD127 expression. (F) Bar charts indicate percentages of effector-phenotype CD8+ T-cells relative to total splenocytes or OVA-specific CD8+ T lymphocytes (right). * p < 0.05, two-tailed t-test. (G) Cytokine production by antigen-specific T-cells from CIP2A HOZ mice.]
Mouse strain
The CIP2A hypomorphic mouse model was previously described [10,12]. All animal studies were conducted in accordance with the guidelines of the Provincial Government of Southern Finland and handled in accordance with the institutional animal care policies of the University of Turku (license ESLH-2007-08517).
At GMC, mice were maintained according to the GMC housing conditions and German laws. All tests performed at the GMC were approved by the responsible authority of the Regierung von Oberbayern.
Zap70+/- and Zap70(AS) mice were housed in the specific pathogen-free facility at the University of California, San Francisco, and were treated according to protocols approved by the Institutional Animal Care and Use Committee in accordance with NIH guidelines. Zap70+/- and Zap70(AS) mice were described previously [21,22].
Tissue sample homogenization and RNA extraction
Liquid nitrogen frozen murine samples were homogenized using the MagNA Lyser and MagNA Lyser Green Beads (Roche Diagnostics). Briefly, RNA and protein samples were homogenized in their respective lysis buffers: RA1 from Macherey Nagel for RNA, and RIPA buffer (PBS; 1% Nonidet P-40; 0.5% sodium deoxycholate; 0.1% SDS) for protein samples. One to four cycles (6500 rpm, 50 sec) were used to homogenize the tissues, with a 2 min ice-cooling step between cycles. Total RNA was extracted and cleaned up from the lysate using the Nucleospin kit (Macherey Nagel), including a DNAse treatment step.
RT-PCR analysis
For cDNA synthesis, 1 μg total RNA was incubated with 250 ng random hexamer for 5 min at 70°C, then cooled down on ice for another 5 min. Total RNA was reverse transcribed in a final volume of 25 μL containing enzyme buffer, 10 units of RNAse inhibitor, DTT, 0.5 mM deoxynucleotide triphosphate, and 200 units MMLV reverse transcriptase (M5301, Promega). The samples were incubated at room temperature for 10 min, then at 42°C for 50 min. The reverse transcriptase was finally inactivated by heating at 70°C for 15 min. PCR amplification: the quantification was based on the standard curve method, and the data were normalized using ß-actin. Oligonucleotides were obtained from Proligo. For quantitative real-time PCR, 2 μL of diluted reverse transcription reaction samples (1/10) were added to 8 μL of a PCR mixture made up of 5 μL of PCR Master Mix (Applied Biosystems), 1 μL of each primer at a concentration of 3 μM, and 1 μL of specific probe at a concentration of 2 μM. The thermal cycling conditions comprised an initial step at 50°C for 2 min and a denaturation step at 95°C for 10 min, followed by 40 cycles at 95°C for 15 seconds and 60°C for 1 min. All PCRs were carried out using an ABI Prism 7000 Sequence Detection System (Applied Biosystems). The specificity of each primer pair was shown by dissociation curve analysis. Results are derived from the average of at least two independent experiments. Gene expression was reported relative to the housekeeping gene ß-actin. The following primers were used: CIP2A Fwd: 5'-GAACAGATAAGGAAAGAGTTGAGCA-3', CIP2A Rev: 5'-ACCTTCTAATTGAGCCTTGTGC-3'; ß-actin Fwd: 5'-TGGCTCCTAGCACCATGAAGA-3', ß-actin Rev: 5'-GTGGACAGTGAGGCCAGGAT-3'.
For real-time PCR analysis from Zap70 mice, RNA was harvested from unstimulated or stimulated CD4+ cells using the RNeasy kit (Qiagen) and reverse transcribed using the SuperScript III first strand reverse transcriptase kit (Invitrogen). Real-time PCR was performed using the QuantStudio 12K analyzer (Applied Biosystems). Relative quantity (ΔΔCT) values, normalized to actin and to Zap70+/- unstimulated cells, are shown.
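The ΔΔCT relative quantity referred to here follows the standard 2^(-ΔΔCT) (Livak) calculation. The sketch below illustrates it; the function name and CT values are hypothetical, not taken from the study.

```python
# Hedged sketch of 2^(-ddCT) relative quantification (Livak method).
# CT values and the function name are hypothetical, not taken from the study.
def relative_quantity(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    d_ct_sample = ct_target - ct_actin        # normalize the target gene to beta-actin in the sample
    d_ct_ref = ct_target_ref - ct_actin_ref   # same normalization in the reference (unstimulated) sample
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)                      # fold change relative to the reference sample

# Hypothetical CTs: the target amplifies ~4 cycles earlier after stimulation -> ~16-fold induction.
print(relative_quantity(ct_target=24.0, ct_actin=18.0, ct_target_ref=28.0, ct_actin_ref=18.0))  # 16.0
```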
Immunization
A stock solution of chicken egg ovalbumin (Sigma) at 1 mg/ml in PBS was prepared. Then, 100 μl of this solution was mixed with 100 μl of Complete Freund's Adjuvant (CFA) per mouse. These 200 μl were injected intraperitoneally using a 30 gauge needle, after sterilizing the injection site with alcohol pads.
Immunoglobulins
The plasma levels of IgM, IgG1, IgG2b, IgG3, and IgA were determined simultaneously in the same sample using a bead-based assay [32] with monoclonal anti-mouse antibodies conjugated to beads of different regions (Biorad, USA), and acquired on a Bioplex reader (Biorad). IgE was measured by ELISA. In brief, microtiter plates (96-well) were coated with 10 μg/ml anti-mouse IgE rat monoclonal IgG (clone-PC284, The Binding Site) to detect total IgE.
Listeria monocytogenes infection
Infection experiments were performed with recombinant Listeria monocytogenes expressing ovalbumin (L.m.-Ova, [33]) or the parental wild type Listeria monocytogenes strain 10403s (L. m.-wt). Brain Heart Infusion (BHI) medium was inoculated with listeria stock solution and incubated at 37°C until an OD600 of 0.05-0.1. After dilution with PBS to an appropriate concentration the infection of mice was performed with the indicated dose by intravenous (i.v.) injection into the lateral tail vein.
Pathology and Histological analysis of Listeria monocytogenes infected spleen or liver
A total of 40 mice were analysed at the age of 17-20 weeks. The mice were sacrificed with CO2, and visceral organs were weighed. Forty-two organs were fixed in 4% neutral buffered formalin and embedded in paraffin. Two-μm-thick sections were cut and stained with haematoxylin and eosin (H&E) for histological examination. In a select group of mice challenged with L.m. infection (12 CIP2A HOZ and 12 WT female mice), the analysis was complemented by quantification of the area of inflammatory lesions (abscesses) in spleen and liver. Images of tissue sections were acquired with an automated slide scanner (NanoZoomer-HT, Hamamatsu, Japan). All slides were independently reviewed and interpreted by two pathologists experienced in mouse pathology. Statistical analysis was performed on an area of 5 mm2 using Fisher's exact test. Statistical significance was considered at a p-value < 0.05.
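A minimal sketch of how such lesion counts could be compared between genotypes with Fisher's exact test follows; the counts are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: comparing lesion counts between genotypes with Fisher's exact test.
# The counts are hypothetical placeholders, not the study's data.
from scipy.stats import fisher_exact

#                  lesions present, lesions absent
table = [[9, 3],   # CIP2A HOZ mice (hypothetical)
         [3, 9]]   # WT mice        (hypothetical)
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)   # significance taken at p < 0.05, as in the text
```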
Isolation and activation of murine CD4 + T-cells
Mouse naive T-cells were isolated from spleens and lymph nodes of gender and age matched wild type or CIP2A HOZ mice by using CD4 + CD62L + T-cell isolation kit (Miltenyi Biotech). Cells were activated with plate-bound anti-CD3 (1μg/ml) and anti-CD28 (2 μg/ml, both from BD Pharmingen) and cultured in IMDM medium (Gibco, Life Technologies) supplemented with penicillin (100 U/ml)/streptomycin (100 μg/ml) and 10% FCS (Biowest, Nuaillé, France). At 24 hr, cells were collected and stained with anti-mouse CD69-FITC or isotype control antibodies (both are from eBioscience) followed by flow cytometry analysis by FACS Calibur (BD Biosciences). CFSE dilution experiment was performed as previously described [35].
Isolation and activation of human CD4+ T cells
Human CD4+ T cells were isolated from umbilical cord blood collected from healthy neonates at Turku University Hospital, Turku, Finland, with approval from the Finnish Ethics Committee. Participants provided verbal informed consent to participate in this study; this was considered more feasible and practical than written consent. Midwives kept records of this consent, and the consent procedure was approved by the Finnish Ethics Committee.
Cell viability of in vitro stimulated murine splenocytes
Spleen tissue from six male Cip2a-mutant and six control mice was used in this study. Splenocytes were isolated and cultivated in 96-well plates with 100,000 cells per well. Total splenocytes were incubated with 20 U/ml IL-2 and 5, 2.5, or 1.25 μg/ml anti-CD3. After 7 days of cultivation, the number of viable cells was determined by the CellTiter-Glo Assay (Promega).
Transcriptome analysis
Total RNA from spleens of 17-week-old male animals (CIP2A HOZ, n = 4; wild type, n = 4) was extracted according to a standardized protocol (RNeasy mini kit, Qiagen). For gene expression profiling, Illumina Mouse Ref8 v2.0 Expression BeadChips were used as previously described [38]. Illumina GenomeStudio 2011.1 was applied for data normalization (cubic spline) and background correction. Statistical analysis identified differential gene expression between mutant and wild-type tissue by comparing single mutant values with the mean of the four wild types (FDR < 10% in combination with mean fold change > 1.4) (16). Overrepresented functional annotations within the data set were provided as GO (Gene Ontology) terms of the category 'Disease and Function Annotations' (Ingenuity Pathway Analysis, IPA). Expression data are available at the public repository database GEO, accession number GSE56401 [39].
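The differential-expression filter described above (FDR < 10% combined with mean fold change > 1.4) is conceptually simple; the sketch below is a simplified stand-in using a per-gene t-test and Benjamini-Hochberg correction on random placeholder data, not the actual BeadChip analysis pipeline, which compared single mutant values with the mean of the wild types.

```python
# Simplified stand-in for the differential-expression filter described above.
# The expression matrix is random placeholder data; the original analysis used
# normalized BeadChip intensities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes = 1000
mutant = rng.normal(8.0, 1.0, size=(n_genes, 4))      # log2 intensities, 4 HOZ mice
wildtype = rng.normal(8.0, 1.0, size=(n_genes, 4))     # 4 wild-type mice

# Per-gene Welch t-test and mean log2 fold change
t_stat, p_val = stats.ttest_ind(mutant, wildtype, axis=1, equal_var=False)
log2_fc = mutant.mean(axis=1) - wildtype.mean(axis=1)

# Benjamini-Hochberg false discovery rate
order = np.argsort(p_val)
ranks = np.empty_like(order)
ranks[order] = np.arange(1, n_genes + 1)
adj = (p_val * n_genes / ranks)[order]
fdr_sorted = np.minimum.accumulate(adj[::-1])[::-1]     # monotone adjusted p-values
fdr = np.empty(n_genes)
fdr[order] = fdr_sorted

selected = (fdr < 0.10) & (np.abs(log2_fc) > np.log2(1.4))
print("genes passing FDR < 10% and |FC| > 1.4:", int(selected.sum()))
```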
"Biology",
"Medicine"
] |
Deformed Jarzynski Equality
The well-known Jarzynski equality, often written in the form $e^{-\beta\Delta F}=\langle e^{-\beta W}\rangle$, provides a non-equilibrium means to measure the free energy difference $\Delta F$ of a system at the same inverse temperature $\beta$ based on an ensemble average of non-equilibrium work $W$. The accuracy of Jarzynski's measurement scheme is known to be determined by the variance of exponential work, denoted ${\rm var}\left(e^{-\beta W}\right)$. However, it was recently found that ${\rm var}\left(e^{-\beta W}\right)$ can systematically diverge in both classical and quantum cases. Such divergence necessarily poses a challenge for applications of the Jarzynski equality because it may dramatically reduce the efficiency of determining $\Delta F$. In this work, we present a deformed Jarzynski equality for both classical and quantum non-equilibrium statistics, in an effort to reuse experimental data that already suffer from a diverging ${\rm var}\left(e^{-\beta W}\right)$. The main feature of our deformed Jarzynski equality is that it connects free energies at different temperatures and may still work efficiently even when ${\rm var}\left(e^{-\beta W}\right)$ diverges. The conditions for applying our deformed Jarzynski equality may be met in experimental and computational situations; if so, there is no need to redesign experimental or simulation methods. Furthermore, using the deformed Jarzynski equality, we exemplify the distinct behaviors of classical and quantum work fluctuations for the case of time-dependent driven harmonic oscillator dynamics and provide insights into the essential performance differences between classical and quantum Jarzynski equalities.
I. INTRODUCTION
Work fluctuation theorems constitute one key topic in modern non-equilibrium statistical mechanics [1-5]. Of particular interest here is the Jarzynski equality (JE) $e^{-\beta\Delta F}=\langle e^{-\beta W}\rangle$ [3], which links the free energy difference $\Delta F$ at the same inverse temperature $\beta$ with the ensemble average of exponential work $\langle e^{-\beta W}\rangle$, where work $W$ here refers to the inclusive form [1]. The JE holds regardless of the details of a specific work protocol, so long as the initial and final system configurations are fixed. The JE has stimulated vast interest in theory and experiments because it provides a means of directly measuring the free energy difference $\Delta F$ from a finite sampling of work values $W$.
Note that the JE involves the ensemble average of an exponential function of $W$. As such, rare events with large and negative $W$ could dominate the sample average [17-23]. This motivated studies of the error of $\Delta F$ through $\mathrm{var}(e^{-\beta W})$, and some important insights have been obtained. For example, assuming a given precision to be reached for $\Delta F$, the corresponding required number of work realizations, $N$, can be estimated by the central limit theorem (CLT) from the variance of exponential work, i.e. $\mathrm{var}(e^{-\beta W})$. Intuitively, a larger $\mathrm{var}(e^{-\beta W})$ requires more realizations to reach the same precision in predicting $\Delta F$. Suppression of work fluctuations by some control mechanism is hence desirable before applying the JE. For example, in Refs. [24,25] we studied some classical and quantum control scenarios in efforts to minimize $\mathrm{var}(e^{-\beta W})$.
Somewhat surprisingly, we recently found the possibility of obtaining a systematic divergence of $\mathrm{var}(e^{-\beta W})$ in classical systems, as verified computationally in simple models isolated from a bath [26]. This divergence has immediate implications for the applicability of the JE in measuring $\Delta F$, but was seldom mentioned previously, except in some general discussions made in Ref. [27] and a specific result on a one-dimensional gas undergoing an adiabatic work protocol [20]. That the divergence of $\mathrm{var}(e^{-\beta W})$ is not accidental can be understood as follows. For systems where the principle of minimal work fluctuations [28] applies, if an adiabatic protocol yields a diverging $\mathrm{var}(e^{-\beta W})$, the same quantity is expected to diverge as well with increasing non-adiabaticity of the work protocol. We stress that a divergent $\mathrm{var}(e^{-\beta W})$ makes the CLT no longer applicable, and the convergence rate of $\sum_{i=1}^{N} e^{-\beta W_i}/N$ towards $e^{-\beta\Delta F}$ with increasing $N$ is not obvious. In one particular class of models [26], a generalized version of the CLT indicates that the convergence rate is much slower than the conventional $N^{-1/2}$ scaling law. Instead, the error is found to scale as $N^{-\gamma}$, with the scaling exponent $\gamma$ arbitrarily close to zero in extremely non-adiabatic cases.
The lesson learned is that the average $\sum_{i=1}^{N} e^{-\beta W_i}/N$ from experiments or simulations may barely converge to the expected value $e^{-\beta\Delta F}$ as $N$ increases.
Our further study reveals even more severe divergence problems in quantum systems isolated from a bath [29]. This indicates that quantum effects can play a crucial role in work fluctuations.
Quantum JE and classical JE can thus have much different domains for meaningful applications.
For example, a work protocol could still lead to divergence of the quantum $\mathrm{var}(e^{-\beta W})$ even if its classical counterpart has a finite $\mathrm{var}(e^{-\beta W})$. In particular, as the temperature characterizing a quantum system initially at thermal equilibrium decreases, quantum effects become more appreciable and then $\mathrm{var}(e^{-\beta W})$ tends to diverge purely due to nonclassical effects [29]. This finding should have an important impact on future experimental studies of the quantum JE.
Our early results hence motivate us to consider the following situation. Suppose an experiment or a computer simulation has been carried out, and the ensemble average $\sum_{i=1}^{N} e^{-\beta W_i}/N$ based on finite sampling does not seem to converge. A further check on the quantity $\mathrm{var}(e^{-\beta W})$ hints that it probably diverges as $N$ increases. Given such a situation, is there a scheme to reprocess the data to extract useful predictions about equilibrium properties of the system (throughout this study, we assume no bath is involved during the work protocol)? The aim of this work is to give a partially positive answer to this question. We do so by deforming the definition of the physical work into a tunable quantity, which in turn yields a deformed JE. So long as the variance of the exponential function of the newly defined quantity can be suppressed to a finite value, the deformed JE will work effectively. We argue that the deformed JE proposed here is relevant to existing experiments [6-16] and computational methods [23,30-32] motivated by the JE. As learned from our model studies, the deformed JE can effectively eliminate the above-mentioned divergence issue in classical cases, but may not work well in the deep quantum regime.
This observation itself also exposes once more the intrinsic difference between classical and quantum JEs in terms of their potential applications. This paper is organized as follows. In Sec. II we propose a deformed classical JE; the model for illustration is a simple classical parametric harmonic oscillator (because such a model already suffices to show the divergence of $\mathrm{var}(e^{-\beta W})$). In Sec. III we present a parallel deformed JE in the quantum domain and discuss why its usefulness differs from that of the classical version. Sec. IV concludes this paper. Throughout, our notation follows closely our earlier studies [26,29].
II. DEFORMED CLASSICAL JE

A. General Discussion
We consider a general closed system whose Hamiltonian is given by $H(p, q; \lambda)$, with phase space coordinates $(p, q) \in \Gamma$ and a time-dependent control parameter $\lambda = \lambda(t)$. The protocol starts at $t = 0$ and ends at $t = \tau$, i.e. $t \in [0, \tau]$. Given an initial phase space coordinate $(p_0, q_0)$ as well as an arbitrary work protocol $\lambda(0) = \lambda_0 \to \lambda(\tau) = \lambda_\tau$, the inclusive work [1] is obtained as $W = H(p_\tau, q_\tau; \lambda_\tau) - H(p_0, q_0; \lambda_0)$, with $(p_\tau, q_\tau) = \big(p(p_0, q_0, \tau), q(p_0, q_0, \tau)\big)$ being the final phase space coordinate. Note also that a proper gauge for the Hamiltonian $H(p, q; \lambda)$ is assumed here, such that its value does equal the energy of the system, with no additional time-dependent gauge term relying on $\lambda(t)$ [1]. Moreover (see below), we assume that this Hamiltonian is bounded from below while not being bounded from above. Let $Z(\lambda; \beta) = \int_\Gamma \exp[-\beta H(p, q; \lambda)]\, dp\, dq$ be the partition function with parameter $\lambda$ and inverse temperature $\beta$, which we assume to exist for $\beta > 0$. Then the JE, which is valid for any protocol $\lambda_0 \to \lambda_\tau$, assumes the form $\langle e^{-\beta W}\rangle = Z(\lambda_\tau; \beta)/Z(\lambda_0; \beta) = e^{-\beta\Delta F}$. Here $\langle \cdot \rangle$ represents an average with respect to the initial Gibbs distribution $P_0(p, q; \lambda_0; \beta) = \exp[-\beta H(p, q; \lambda_0)]/Z(\lambda_0; \beta)$. We also assume that this ensemble average itself does exist (which may not always be the case [33]). From the JE above, it is seen that averaging $e^{-\beta W}$ over the initial Gibbs state (at inverse temperature $\beta$) will yield the same $e^{-\beta\Delta F}$ once $H(p, q; \lambda_0)$ and $H(p, q; \lambda_\tau)$ are fixed. It is also clear that the variance of exponential work, namely $\mathrm{var}(e^{-\beta W}) = \langle e^{-2\beta W}\rangle - e^{-2\beta\Delta F}$, can be readily obtained from the second moment $\langle e^{-2\beta W}\rangle$. To better illustrate the relation between classical and quantum cases, we choose to use the adiabatic invariant $\Omega(E, \lambda)$ [34-40] as an analogue of a quantum number indexing quantum energy levels.
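As a quick illustration of the JE statement above, the following minimal Python sketch (not part of the original paper) checks $\langle e^{-\beta W}\rangle = Z(\lambda_\tau;\beta)/Z(\lambda_0;\beta)$ for a sudden frequency quench of a classical harmonic oscillator with unit mass; the quench protocol and all parameter values are chosen purely for illustration.

```python
# Minimal numerical check of the Jarzynski equality for a sudden frequency
# quench of a classical harmonic oscillator (unit mass assumed). For an
# instantaneous quench w0 -> wt the phase-space point does not move, so
# W = H(p0, q0; wt) - H(p0, q0; w0), and the exact prediction is
# <exp(-beta W)> = Z(wt)/Z(w0) = w0/wt.
import numpy as np

rng = np.random.default_rng(1)
beta, w0, wt = 1.0, 2.0, 1.5
N = 200_000

# Sample the initial Gibbs state of H = p^2/2 + w0^2 q^2 / 2
p0 = rng.normal(0.0, np.sqrt(1.0 / beta), size=N)
q0 = rng.normal(0.0, np.sqrt(1.0 / (beta * w0**2)), size=N)

W = 0.5 * (wt**2 - w0**2) * q0**2           # sudden-quench inclusive work
je_estimate = np.mean(np.exp(-beta * W))

print("Monte Carlo <exp(-beta W)> :", je_estimate)
print("Exact Z(w_tau)/Z(w_0)      :", w0 / wt)
```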
The adiabatic invariant is defined as the phase space volume up to an energy $E$, $\Omega(E, \lambda) = \int_\Gamma \Theta\big(E - H(p, q; \lambda)\big)\, dp\, dq$, where $\Theta$ denotes the step function. Given that $\Omega(E, \lambda)$ equals the positive-valued integrated density of states, $\Omega(E, \lambda)$ grows monotonically with increasing energy $E$ [41]; therefore the inverse function $E(\Omega; \lambda)$ can in principle be found. The JE can then be rewritten as $\langle e^{-\beta W}\rangle = \int_0^\infty d\Omega_0 \int_0^\infty d\Omega_\tau\, P(\Omega_\tau|\Omega_0)\, \frac{e^{-\beta E(\Omega_0;\lambda_0)}}{Z(\lambda_0;\beta)}\, e^{-\beta[E(\Omega_\tau;\lambda_\tau) - E(\Omega_0;\lambda_0)]}$, where $\Omega_0$ and $\Omega_\tau$ act like the initial and final "energy index" while $P(\Omega_\tau|\Omega_0)$ is the transition probability between the states under a certain protocol, which is defined in [26] and is proven to be bi-stochastic. Note here that the lower bounds $\Omega_0, \Omega_\tau = 0$ correspond to the minimum of the lower-bounded energy under fixed $\lambda_0, \lambda_\tau$. Likewise, the second moment can now be written as $\langle e^{-2\beta W}\rangle = \int_0^\infty d\Omega_0 \int_0^\infty d\Omega_\tau\, P(\Omega_\tau|\Omega_0)\, \frac{e^{-\beta E(\Omega_0;\lambda_0)}}{Z(\lambda_0;\beta)}\, e^{-2\beta[E(\Omega_\tau;\lambda_\tau) - E(\Omega_0;\lambda_0)]}$. One can note from this expression that $\langle e^{-2\beta W}\rangle$ (and hence $\mathrm{var}(e^{-\beta W})$) diverges if, for example, we have $E(\Omega_\tau; \lambda_\tau) < E(\Omega_0; \lambda_0)/2$ for all $\Omega_\tau$. To see this possibility clearly, consider an adiabatic work protocol applied to a system with a constant density of states, with $\Omega(E, \lambda) \sim \lambda E$.
In order to overcome the divergence issue illustrated above, we now propose a deformed version of the JE for $H(p, q; \lambda)$ not bounded from above. The main idea is to study the statistics of an exponential function of a deformed work $W_g$ (with $W_g = W$ if $g = 1$). Specifically, for an arbitrary value $g \in (0, 1]$, we define $W_g$ as follows: $W_g = E(\Omega_\tau; \lambda_\tau)/g - E(\Omega_0; \lambda_0)$. The motivation to introduce the $g$-factor is to make the quantity $W_g$ less negative than $W$ itself when it applies to transitions from high-energy initial states to low-energy final states. Then, for positive $\beta$ values (which is assumed throughout this study) [42], the exponential function $e^{-\beta W_g}$ would yield less dominating rare events and, as a result, we hope that the variance of $e^{-\beta W_g}$ can be finite even when $\mathrm{var}(e^{-\beta W})$ diverges [43].
Consider then the ensemble average of $e^{-\beta W_g}$ over the same initial Gibbs state as used in the standard JE. We have $\langle e^{-\beta W_g}\rangle = \int_0^\infty d\Omega_0 \int_0^\infty d\Omega_\tau\, P(\Omega_\tau|\Omega_0)\, \frac{e^{-\beta E(\Omega_0;\lambda_0)}}{Z(\lambda_0;\beta)}\, e^{-\beta[E(\Omega_\tau;\lambda_\tau)/g - E(\Omega_0;\lambda_0)]} = \frac{1}{Z(\lambda_0;\beta)}\int_0^\infty d\Omega_\tau\, e^{-(\beta/g) E(\Omega_\tau;\lambda_\tau)} = \frac{Z(\lambda_\tau;\beta/g)}{Z(\lambda_0;\beta)}$. In obtaining the second equality above we have used the bi-stochastic nature of $P(\Omega_\tau|\Omega_0)$. This indicates the following useful deformed JE: $\langle e^{-\beta W_g}\rangle = \frac{Z(\lambda_\tau;\beta/g)}{Z(\lambda_0;\beta)} = e^{\beta F(\lambda_0;\beta) - (\beta/g) F(\lambda_\tau;\beta/g)}$. As seen from above, by calculating $\langle e^{-\beta W_g}\rangle$ we do not directly arrive at a free energy difference at the same inverse temperature $\beta$. Rather, we obtain a class of relations between the free energy $F(\lambda_0; \beta)$ at inverse temperature $\beta$ and a free energy $F(\lambda_\tau; \beta/g)$ at inverse temperature $\beta/g$. This result for the special case $g = 1$ recovers the original JE. The potential benefit is that the second moment of $e^{-\beta W_g}$, i.e. $\langle e^{-2\beta W_g}\rangle$, can be finite for a range of $g$ values even if it diverges for $g = 1$.
In a typical classical experimental setup, the inclusive work $W$ is usually measured along the work protocol in various ways [13-16]. Is it then feasible that $W_g$ defined above can be indirectly measured along a classical trajectory? The answer is yes under certain conditions. Suppose that in an experiment each individual value of $W$ is already measured. Then we may calculate $W_g$ provided that, additionally, the initial energy $E(\Omega_0; \lambda_0)$ of each trajectory can be measured [44]. That is, knowledge of the initial energy values is the additional cost we have to pay in order to process our experimental data through $W_g$ and $e^{-\beta W_g}$. In addition, since all the initial energy values sampled from the Gibbs state are known, it is natural to assume that $F(\lambda_0; \beta)$ is already known. Then from the deformed JE we can obtain $F(\lambda_\tau; \beta/g)$. Under these assumptions, we barely need to change an experimental setup to make use of our deformed JE. Note also that in non-equilibrium numerical simulations [19,27,30] the situation is even more straightforward, because all the initial energy values of each sampled trajectory are by default registered in the simulations.
Let us now outline how to actually use our deformed JE for free energy measurements. We assume that the aim is to measure the free energy $F(\lambda_\tau; \tilde\beta)$, with $\tilde\beta$ being the target inverse temperature. As discussed above, this task may not be easily solved by use of the JE because of the divergence in the second moment of exponential work. We hence first choose a trial $g$. Then, before applying a non-equilibrium work protocol, we prepare the system at thermal equilibrium at the inverse temperature $\beta = \tilde\beta g$. Finally, we use the relation $\tilde\beta F(\lambda_\tau; \tilde\beta) = \beta F(\lambda_0; \beta) - \ln \langle e^{-\beta W_g}\rangle$ to obtain $F(\lambda_\tau; \tilde\beta)$. We stress that the above procedure is mainly a new way of reprocessing the experimental or simulation data, with experimental or simulation details untouched. Regarding the $g$ value to be determined, it can lie in a range of values so long as the second moment of $e^{-\beta W_g}$ is not too large. This is most valuable when the JE suffers from an efficiency issue due to a diverging $\langle e^{-2\beta W}\rangle$. Next we illustrate the method using the classical harmonic oscillator as an example, to show how a proper choice of $g$ indeed eliminates the divergence in $\langle e^{-2\beta W_g}\rangle$.
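The reprocessing recipe just outlined can be summarized in a short script. The sketch below is illustrative only: it assumes the deformed work takes the form $W_g = E(\Omega_\tau;\lambda_\tau)/g - E(\Omega_0;\lambda_0)$ as reconstructed above, so that each $W_g$ can be formed from a measured work value and the measured initial energy of the same trajectory, and it uses the relation $\tilde\beta F(\lambda_\tau;\tilde\beta) = \beta F(\lambda_0;\beta) - \ln\langle e^{-\beta W_g}\rangle$ with $\beta = g\tilde\beta$; all input data below are placeholders.

```python
# Sketch of the data-reprocessing recipe described above, under the assumption
# (reconstructed from the text) that the deformed work is
#   W_g = E_final/g - E_initial,
# so that each W_g can be formed from a measured work value W_i and the
# measured initial energy E0_i of that trajectory. With beta = g*beta_target,
#   beta_target * F(lambda_tau; beta_target)
#       = beta * F(lambda_0; beta) - ln <exp(-beta W_g)>.
import numpy as np

def free_energy_from_deformed_je(W, E0, g, beta_target, F0_beta):
    """Estimate F(lambda_tau; beta_target) from sampled (W_i, E0_i) pairs.

    W, E0       : arrays of measured work values and initial energies
    g           : deformation parameter in (0, 1]
    beta_target : target inverse temperature (beta-tilde in the text)
    F0_beta     : known initial free energy F(lambda_0; beta), beta = g*beta_target
    """
    beta = g * beta_target                 # inverse temperature of the initial Gibbs state
    W_g = (W + E0) / g - E0                # deformed work per trajectory (assumed form)
    avg = np.mean(np.exp(-beta * W_g))     # finite-sample estimate of <exp(-beta W_g)>
    return (beta * F0_beta - np.log(avg)) / beta_target

# Hypothetical usage with placeholder data:
rng = np.random.default_rng(2)
W_samples = rng.normal(0.5, 1.0, size=10_000)    # placeholder work values
E0_samples = rng.exponential(1.0, size=10_000)   # placeholder initial energies
F_tau = free_energy_from_deformed_je(W_samples, E0_samples,
                                      g=0.5, beta_target=1.0, F0_beta=0.0)
print("Estimated F(lambda_tau; beta_target):", F_tau)
```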
B. Classical Harmonic Oscillator
We investigate a one-dimensional (1D) harmonic oscillator with angular frequency $\omega > 0$, whose Hamiltonian is $H(p, q; \omega) = \frac{p^2}{2m} + \frac{1}{2} m\omega^2 q^2$. The equilibrium partition function for this system is given by $Z(\omega; \beta) = 2\pi/(\omega\beta)$. The phase space volume is given by $\Omega(E; \omega) = 2\pi E/\omega$, so that $E(\Omega; \omega) = \omega\Omega/2\pi$. Clearly, this system belongs to the class of systems with a constant density of states used earlier for discussion. Work is done on the system as $\omega$ is forced to change with time, from $\omega_0$ to $\omega_\tau$. Under an arbitrary time-dependent $\omega(t)$, $t \in [0, \tau]$, the transition probability $P(\Omega_\tau|\Omega_0)$ can be calculated analytically. In particular, the transition probability under adiabatic driving is known to be $P(\Omega_\tau|\Omega_0) = \delta(\Omega_\tau - \Omega_0)$ [26]; thus, by choosing $2\omega_\tau < \omega_0$, $\langle e^{-2\beta W}\rangle = \frac{\beta\omega_0}{2\pi}\int_0^\infty d\Omega_0\, e^{\beta(\omega_0 - 2\omega_\tau)\Omega_0/2\pi} = \infty$. Since adiabatic protocols minimize $\langle e^{-2\beta W}\rangle$, we conclude that all work protocols produce a divergent $\langle e^{-2\beta W}\rangle$ as long as $2\omega_\tau < \omega_0$.
For an arbitrary non-adiabatic protocol, the transition probability can be calculated explicitly for the harmonic oscillator (see Appendix A), yielding an explicit expression for $P(\Omega_\tau|\Omega_0)$ in which $\mu_\pm$ are dimensionless constants satisfying $\mu_+\mu_- = 1$ and $0 < \mu_- < \mu_+$, determined merely by the protocol $\lambda(t)$. This expression indicates that, given an initial $\Omega_0$, the final $\Omega_\tau$ always falls in the interval $[\mu_-\Omega_0, \mu_+\Omega_0]$. One can also verify the bi-stochastic property, i.e. $\int_0^\infty P(\Omega_\tau|\Omega_0)\, d\Omega_0 = \int_0^\infty P(\Omega_\tau|\Omega_0)\, d\Omega_\tau = 1$. With this transition probability, and using $E(\Omega_\tau;\omega_\tau) \ge \omega_\tau\mu_-\Omega_0/2\pi$, the second moment of $e^{-\beta W_g}$ can be bounded as $\langle e^{-2\beta W_g}\rangle \le \frac{\beta\omega_0}{2\pi}\int_0^\infty d\Omega_0\, e^{\beta(\omega_0 - 2\omega_\tau\mu_-/g)\Omega_0/2\pi}$. The above inequality shows that if we choose $g < 2\omega_\tau\mu_-/\omega_0$ (a sufficient but not necessary condition), then $\langle e^{-2\beta W_g}\rangle$ becomes finite. For such $g$ values, we can safely look into $\langle e^{-\beta W_g}\rangle$, whose second moment is guaranteed to be finite. We have thus offered an explicit example where the practical issue in applying the JE due to a diverging second moment can be overcome by considering a deformed JE.
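A numerical check of this sufficient condition is straightforward. The sketch below (an illustration, not the authors' code) uses the adiabatic protocol, for which $\mu_\pm = 1$ and $\Omega_\tau = \Omega_0$, and contrasts the running estimate of $\langle e^{-2\beta W}\rangle$, which never settles when $2\omega_\tau < \omega_0$, with that of $\langle e^{-2\beta W_g}\rangle$ for a $g$ below the bound $2\omega_\tau\mu_-/\omega_0$; the deformed work is again taken in the reconstructed form $W_g = E(\Omega_\tau;\omega_\tau)/g - E(\Omega_0;\omega_0)$.

```python
# Numerical illustration (a sketch using the reconstructed forms above) of how
# the sample estimate of <exp(-2 beta W)> misbehaves for an adiabatic protocol
# with 2*wt < w0, while <exp(-2 beta W_g)> is tame once g < 2*wt*mu_minus/w0
# (here mu_minus = 1 because the driving is adiabatic).
import numpy as np

rng = np.random.default_rng(3)
beta, w0, wt = 1.0, 2.0, 0.6          # 2*wt < w0, so var(exp(-beta W)) diverges
g = 0.5                                # g < 2*wt/w0 = 0.6, sufficient for finiteness
N = 500_000

# Adiabatic protocol: Omega_tau = Omega_0; the initial Gibbs state gives
# P(Omega_0) ~ exp(-beta*w0*Omega_0/(2*pi)), an exponential distribution.
Omega0 = rng.exponential(2 * np.pi / (beta * w0), size=N)
E0 = w0 * Omega0 / (2 * np.pi)
Etau = wt * Omega0 / (2 * np.pi)

W = Etau - E0                 # standard inclusive work
Wg = Etau / g - E0            # deformed work (assumed form)

for n in np.logspace(3, np.log10(N), 8).astype(int):
    m2 = np.mean(np.exp(-2 * beta * W[:n]))     # dominated by rare events; does not settle
    m2g = np.mean(np.exp(-2 * beta * Wg[:n]))   # settles to a finite value
    print(f"n={n:>7d}  <e^-2bW>={m2:10.3e}  <e^-2bWg>={m2g:10.3e}")
```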
III. DEFORMED QUANTUM JE
A. General discussion

The inclusive work in quantum cases is obtained by two-time energy measurements [1], with $W = E^{\lambda_\tau}_j - E^{\lambda_0}_i$, where $E^{\lambda_0}_i$ and $E^{\lambda_\tau}_j$ are the energies of the projected eigenstates measured before and after the work protocol. With the initial canonical distribution $P^0_i(\lambda_0; \beta)$ and transition probability $P_{i\to j}$, the quantum JE can be obtained as follows: $\langle e^{-\beta W}\rangle = \sum_{i,j} P^0_i(\lambda_0;\beta)\, P_{i\to j}\, e^{-\beta(E^{\lambda_\tau}_j - E^{\lambda_0}_i)} = \frac{Z(\lambda_\tau;\beta)}{Z(\lambda_0;\beta)} = e^{-\beta\Delta F}$, where the bi-stochastic nature of $P_{i\to j}$, i.e. $\sum_i P_{i\to j} = \sum_j P_{i\to j} = 1$, has been used. The second moment of $e^{-\beta W}$ is given by $\langle e^{-2\beta W}\rangle = \sum_{i,j} P^0_i(\lambda_0;\beta)\, P_{i\to j}\, e^{-2\beta(E^{\lambda_\tau}_j - E^{\lambda_0}_i)}$. As shown recently [29], this quantum second moment can also diverge in systems with an infinite-dimensional Hilbert space. As a matter of fact, the divergence in the quantum case occurs more frequently than in the classical case.
Driven by the same motivation as outlined in Sec. II, we now define the corresponding quantum $W_g$ as $W_g = E^{\lambda_\tau}_j/g - E^{\lambda_0}_i$. According to the two-time measurement scheme of quantum work, the energy values of both the initial and final states need to be measured in any case. Hence, there is no problem in obtaining $W_g$ from this definition, based on the known values of $E^{\lambda_\tau}_j$ and $E^{\lambda_0}_i$. One may then take the ensemble average of $e^{-\beta W_g}$ and arrive at $\langle e^{-\beta W_g}\rangle = \sum_{i,j} P^0_i(\lambda_0;\beta)\, P_{i\to j}\, e^{-\beta(E^{\lambda_\tau}_j/g - E^{\lambda_0}_i)} = \frac{Z(\lambda_\tau;\beta/g)}{Z(\lambda_0;\beta)}$. This deformed quantum JE assumes precisely the same form as our classical result above, and the relation connecting $F(\lambda_0; \beta)$ with $F(\lambda_\tau; \beta/g)$ hence also applies to the quantum case. The corresponding second moment of $e^{-\beta W_g}$ is given by $\langle e^{-2\beta W_g}\rangle = \sum_{i,j} P^0_i(\lambda_0;\beta)\, P_{i\to j}\, e^{-2\beta(E^{\lambda_\tau}_j/g - E^{\lambda_0}_i)}$. It is hoped that, by also choosing proper values of $g$, $\langle e^{-2\beta W_g}\rangle$ may emerge as finite even when $\langle e^{-2\beta W}\rangle$ diverges. However, as we will show next, the situation in quantum cases can be much more challenging than in classical cases, due to some intrinsic differences between classical and quantum state-to-state transition probabilities.
As a side note, in Appendix B we have also presented a deformed quantum Crooks relation [4] based on $W_g$. This will help us better understand the deformed quantum JE while motivating further interest in possible extensions of known fluctuation theorems.
B. Quantum Harmonic Oscillator
A closed quantum harmonic oscillator is described by the Hamiltonian $\hat H(t) = \frac{\hat p^2}{2m} + \frac{1}{2} m\omega^2(t)\hat q^2$, where $\omega$ is still changing with time under a general work protocol. For the quantum work statistics, it is convenient to examine the so-called characteristic function [1,29,45] $G_g(\mu)$, which is the Fourier transform of the probability distribution function $P(W_g)$ for $W_g$. That is, we have $G_g(\mu) = \int dW_g\, e^{i\mu W_g} P(W_g)$.
By choosing $\mu = 2i\beta$, we find a closed-form expression for $\langle e^{-2\beta W_g}\rangle$ (as a straightforward extension of the result for the inclusive work $W$ in [29]), in which $Q^*$ is the Husimi coefficient determined solely by the protocol $\omega(t)$ [46]. The deviation of $Q^*$ from unity describes the non-adiabaticity [46] of a work protocol. Note that the case $g = 1$ reproduces our previous result [29] for the standard quantum work characteristic function.
Because our previous work provides sufficient details regarding the precise quantum-classical correspondence in terms of $P(W_{g=1})$ in the high-temperature limit [29], here we will not dive into the technical details of the quantum-classical correspondence for $W_g$ with $g \ne 1$. Instead, we just briefly mention that the quantum $P(W_g)$ should also reduce to the corresponding classical distribution in the high-temperature limit. Remarkably though, the situation in the low-temperature regime is much different. Figure 1 depicts, for a driven quantum harmonic oscillator prepared initially at thermal equilibrium at inverse temperature $\beta$, the domain of finite $\langle e^{-2\beta W_g}\rangle$ in the plane of the scaled initial and final angular frequencies $\beta\hbar\omega_0$ and $\beta\hbar\omega_\tau$, for $g = 1$ (grey area) and $g = 0.1$ (grey area plus patterned area). Reducing $g$ thus enlarges the domain of finiteness but is only partially effective in suppressing the divergence domain. We will explain this finding in the next subsection.
C. Persistent Divergence of $\langle e^{-2\beta W_g}\rangle$
To gain analytical insights we stick with the quantum harmonic oscillator case. Without loss of generality, we will focus on the even-parity states. The transition probabilities between even-parity states under an arbitrary frequency-driving protocol can be obtained in closed form. We next look into the asymptotic behaviour of the transition probability $P_{2i\to 0}$ from the initial $2i$-th state to the final ground state ($j = 0$) for very large $i$, which follows from Stirling's formula $n! \sim \sqrt{2\pi n}\, n^n/e^n$ for very large $n$.
We are now ready to examine the contribution made by an individual transition $P_{2i\to 0}$ to the second moment $\langle e^{-2\beta W}\rangle$ relevant to applications of the standard JE. Even this individual contribution is found to diverge with increasing $i$ once a certain condition on the protocol and the initial temperature is met. Dramatically, such a diverging contribution to $\langle e^{-2\beta W}\rangle$ has no classical analogue. Indeed, according to the classical transition probability discussed above, the classical transition probability from an initial highly excited state to a final low-energy state is strictly zero. That is, it is the non-vanishing quantum transition probabilities that open up the possibility for highly rare transitions to contribute to exponential work fluctuations. Put differently, the nonzero quantum transition probability $P_{2i\to 0}$, though decreasing very fast with increasing $i$, can nevertheless make a diverging contribution to $\langle e^{-2\beta W}\rangle$ due to the exponentially increasing factor arising from negative work values. As a consequence, the rarer the initial state sampled from the Gibbs distribution, the more it contributes to $\langle e^{-2\beta W}\rangle$. This mechanism for quantum divergence in the second moment of exponential work can take effect irrespective of whether or not the corresponding classical second moment is finite. This uncovers the potential difficulty in applying the standard quantum JE without first suppressing quantum work fluctuations [29].
Inspecting the parallel situation for $\langle e^{-2\beta W_g}\rangle$ reveals a similar problem. The contribution made by $P_{2i\to 0}$ to $\langle e^{-2\beta W_g}\rangle$ takes essentially the same form as in the $g = 1$ case, so in the deep quantum regime the divergence cannot be removed simply by lowering $g$. This theoretical insight is confirmed by our computational results in Fig. 1. In particular, a divergence of the second moment can also occur outside the domain where the above condition holds (for example, due to many other classical-like transitions); for those latter cases, $W_g$ is effective in removing divergences (see Fig. 1(b)).
IV. CONCLUSIONS
Exponential work fluctuations characterized by $\mathrm{var}(e^{-\beta W})$ or $\langle e^{-2\beta W}\rangle$ may systematically diverge [26,29]. This presents an obstacle for a direct application of the JE without effectively suppressing work fluctuations. To meet this challenge in connecting non-equilibrium statistics with equilibrium properties, we propose in this work a deformed work expression, denoted $W_g$, and obtain a deformed JE for both classical and quantum cases. This deformed JE is based on an ensemble average of the exponential quantity $e^{-\beta W_g}$ and connects this average with free energy values at different temperatures. Using the parametric harmonic oscillator as a test example, we show that the classical deformed JE exhibits improved convergence compared with the standard JE ($g = 1$), because a possibly divergent second moment $\langle e^{-2\beta W_g}\rangle$ can be rendered convergent (i.e. finite) by a proper choice of $g \in (0, 1]$. This tailored modification does not require a redesign of simulation or experimental methods; rather, it offers a beneficial possibility to reprocess experimental data based on a finite number of work realizations, yielding better performance. As to the quantum deformed JE, its performance gain over the standard quantum JE turns out to be less pronounced than in the classical case, because the divergence in $\langle e^{-2\beta W_g}\rangle$ may not be lifted by introducing a positive $g$ value smaller than unity. This feature reflects a fundamental difference between classical and quantum work statistics in terms of exponential work functions.
While in classical cases state-to-state transition probabilities can have very sharp cutoffs that effectively suppress the contributions from rare events, the quantum state-to-state transition probabilities, though already exponentially suppressed, are not cut off sharply enough and can still create a scenario in which rarer events make even larger contributions to $\mathrm{var}(e^{-\beta W})$.
These findings indicate that the efficiency of employing the quantum (standard or deformed) JE in predicting equilibrium properties is more limited than in the classical regime. The insights gained here may inspire the search for other variants of deformed JEs and their application to different physical quantities that intrinsically make use of the conventional JE, classical or quantum [47-54].

Appendix A

As discussed in the main text, we consider the classical harmonic oscillator with a general time-dependent angular frequency $\omega(t) > 0$, whose Hamiltonian is given by $H(p, q; \omega(t)) = \frac{p^2}{2m} + \frac{1}{2} m\omega^2(t) q^2$. The phase space volume $\Omega(E; \omega)$, as defined in the main text, is given by $\Omega(E; \omega) = 2\pi E/\omega$, with $\omega(t) = \omega$ at any fixed time $t$. We define the transition probabilities as in Ref. [26]: $P(\Omega_\tau|\Omega_0) = \int \delta\Big(\Omega_\tau - \Omega\big(H(p_\tau, q_\tau; \omega_\tau); \omega_\tau\big)\Big)\, \frac{\delta\big(E(\Omega_0; \omega_0) - H(p_0, q_0; \omega_0)\big)}{\tilde\omega\big(E(\Omega_0; \omega_0); \omega_0\big)}\, dp_0\, dq_0 \quad (A3)$, where $(p_\tau, q_\tau) = (p(p_0, q_0, \tau), q(p_0, q_0, \tau))$ denotes the time evolution starting with $(p_0, q_0)$ and ending at $t = \tau$, while $\tilde\omega\big(E(\Omega_0; \omega_0); \omega_0\big)$ represents the density of states. $\tilde\omega\big(E(\Omega_0; \omega_0); \omega_0\big)$ is also the normalization constant for a micro-canonical ensemble at $E(\Omega_0; \omega_0)$, with $\tilde\omega(E; \omega) = \partial\Omega(E; \omega)/\partial E = 2\pi/\omega$. Note that here there is no essential difference between using $\Omega$ and using $E$ for a harmonic oscillator. We nevertheless stick to using $\Omega$ since this notation is general.
"Physics"
] |
Young Marx's argument on communism in Paris manuscript
In the Paris Manuscript, Marx gradually broke away from his previous criticism of and vigilance towards communism through further research on political economy and a deeper exploration of Feuerbach's philosophy. At that time, Marx proposed that communism is a scientific theory standing in opposition to political economics, and regarded actual communist action as the positive transcendence of human self-estrangement. Furthermore, communist theory was endowed with the deep attributes of a new materialism.
excerpted relevant content from the four links of production, distribution, exchange, and consumption in Mill's economic theory, and focused on discussing issues such as currency as an intermediary, alienated labor, and social interaction in the exchange and consumption links. The second manuscript is relatively short; in it Marx focused on discussing the opposition between labor and capital from the perspective of private property. As for the third manuscript as a whole, apart from the critique of Hegelian philosophy in the latter half (Parts IV and VI), the "Preface" (Part VIII), and the analysis of monetary attributes (Part IX), which are all relatively independent, the remaining parts (Parts II and III) are basically supplementary to page "XXXIX" of the second manuscript, and it is in this part that Marx wrote the seven parts of the "Communism" thematic discourse, from (1) to (7). A simple list is as follows: By embracing this relation as a whole, communism is: (1) In its first form only a generalisation and consummation of this relation. (2) Communism (α) still political in nature - democratic or despotic; (β) with the abolition of the state, yet still incomplete, and being still affected by private property, i.e., by the estrangement of man. (3) Communism as the positive transcendence of private property as human self-estrangement, and therefore as the real appropriation of the human essence by and for man. (4) Man appropriates his comprehensive essence in a comprehensive manner, that is to say, as a whole man. Each of his human relations to the world, like those organs which are directly social in their form, are in their objective orientation, or in their orientation to the object, the appropriation of the object, the appropriation of human reality. Their orientation to the object is the manifestation of the human reality. (5) Communism is the position as the negation of the negation, and is hence the actual phase necessary for the next stage of historical development in the process of human emancipation and rehabilitation. Communism is the necessary form and the dynamic principle of the immediate future, but communism as such is not the goal of human development, the form of human society. (6) Feuerbach is the only one who has a serious, critical attitude to the Hegelian dialectic and who has made genuine discoveries in this field. He is in fact the true conqueror of the old philosophy. The extent of his achievement, and the unpretentious simplicity with which he, Feuerbach, gives it to the world, stand in striking contrast to the opposite attitude. (7) We have seen what significance, given socialism, the wealth of human needs acquires, and what significance, therefore, both a new mode of production and a new object of production obtain: a new manifestation of the forces of human nature and a new enrichment of human nature. [3:294-306] After discussing the alienation of human needs within the scope of private ownership and the so-called "morality" of political economics, Marx's seven arguments on communism came to a sudden halt. Overall, just like the overall style of the Paris Manuscript, the full connotations and internal logical connections of the seven arguments are not obvious. For example, Articles (1), (2), and (3) clearly discuss three forms and stages of communist development, but why did Marx choose to use the relationship with "private property" as the dividing criterion and narrative dimension? Articles (4) and (7) discuss the liberation of human perception, and Article (6) discusses the critique of Hegelian dialectics; what, then, is the relationship between communism and these topics? What are the similarities and differences between communism as a necessary intermediary stage and socialism in Article (5)? To solve these problems, it is necessary to go back to the original text and delve deeper, step by step, according to Marx's narrative order.
Communism as a theory diametrically opposed to political economics
As is well known, the Economic and Philosophical Manuscripts of 1844 begin with the critique of political economics. Marx proudly stated in the Preface that "it is hardly necessary to assure the reader conversant with political economy that my results have been attained by means of a wholly empirical analysis based on a conscientious critical study of political economy. It goes without saying that besides the French and English socialists I have also used German socialist works." [3:231] But few people have asked why it is necessary to cite the works of socialists and communists when studying political economy. As Marx said in his manuscript, "This new formulation of the question already contains its solution." [3:281] This is also true for the above-mentioned problem, and the internal relationship between communism and political economy can be clarified accordingly. Although Marx's involvement in the study of political economy had not been long, he quickly discovered the contradictions and problems of political economy with his keen sense for problems and excellent analytical ability.
On the one hand, the national economic system is full of contradictions. From the perspective of national economists, the richness of land often magically becomes a characteristic of land owners. "To the economist, production, consumption and, as the mediator of both, exchange or distribution, are separate." [3:221] According to the political economists, it is solely through labour that man enhances the value of the products of nature, whilst labour is man's active possession. [3:240] However, in real life, the working class directly engaged in labor has become increasingly destitute, and according to the labor theory of value only poverty emerges from the essence of modern labor. The ultimate conclusion of modern political economy is actually "social misfortune", even though it claims to be committed to national prosperity and strength. The system of national economy is filled with unresolved contradictions, and the biggest is that, while accepting the relationships of private property as human and rational, political economy operates in permanent contradiction to its basic premise, private property. [4:32] On the other hand, political economics cannot explain its own theoretical premise. "All treatises on political economy take private property for granted. But this basic premise is for them an incontestable fact to which they devote no further investigation. It expresses in general, abstract formulas the material process through which private property actually passes, and these formulas it then takes for laws. It does not comprehend these laws." [4:31-32] Moreover, Marx reminded us not to go back to a fictitious primordial condition as the political economist does when he tries to explain. Such a primordial condition explains nothing; it merely pushes the question away into a grey nebulous distance. The economist assumes in the form of a fact, of an event, what he is supposed to deduce - namely, the necessary relationship between two things - between, for example, division of labor and exchange. [3:271] The laziness and incompetence of such theory make it unable to understand its own premise; it can only treat it as self-evident.
Faced with the problems of political economics, Marx pointed out in the Paris Manuscript that "we proceed from an actual economic fact" [3:271], and took this fact as the starting point for exploring its relationship with communism. In The Holy Family, Marx clearly pointed out that all communist and socialist writers proceeded from the observation that, on the one hand, even the most favourably brilliant deeds seemed to remain without brilliant results, to end in trivialities, and, on the other, all progress of the Spirit had so far been progress against the mass of mankind, driving it into an ever more dehumanised situation. [4:84] In fact, Engels' account in another part of the same book can serve as a footnote to Marx's statement. Engels said, "the French Socialists maintain that the worker makes everything, produces everything and yet has no rights, no possessions, in short, nothing at all." [4:19] In other words, communism and socialism explore the historical formation and laws of modern economic life starting from the reality ignored by political economics, along an empirical path completely opposite to its abstract and one-sided methods. Communism or socialism is thus the direct opposite of political economics.
Although at that time Marx had not yet explicitly stated, as he later would in the Draft of an Article on Friedrich List's Book Das nationale System der politischen Oekonomie, the view that "the development of a science such as political economy is connected with the real movement of society, or is only its theoretical" [4:267], he still clearly realized in the Paris Manuscripts that national economists were no more than economic spokespersons for "civil society": "In general it is always empirical businessmen we are talking about when we refer to political economists, (who represent) their scientific creed and form of existence." [3:308] National economists are the theorists of the capitalists and the theoretical manifestation of the private property movement. It goes without saying that communism, on the other hand, is the theoretical awareness of the real movement to abolish private property and a spokesperson for the proletariat.
On the basis of the above, the analysis of communism immediately presents two themes that need to be elaborated: the essence of private property and the formation of sensuous reality. In fact, it is precisely in the treatment of these two aspects that Marx fully expressed his understanding of communism and his reasons for embracing it.
Actual communist action: positive transcendence of private property
Modern communism is a movement to abolish private property rather than a simple cancellation of private property. Marx took communism as the positive transcendence of private property as human self-estrangement, and therefore as the real appropriation of the human essence by and for man. [3:297] This statement implies at least three of Marx's reflections on the issue: (1) Marx, like communism itself, examines the real movement from the true origin of private property and from national economic reality, which political economists treat as their own premise without actually understanding it. (2) Marx transformed the question of the origin of private property into the question of the origin of alienated labor, which is directly related to human labor, and thereby proposed a "new formulation" different from that of the political economists. (3) The alienation of human beings is directly the alienation of labor, and the essence of human beings must be directly embedded in labor.
In analyzing the contradictory system of political economics, Marx started from its various premises and used the language and laws of political economics itself to draw the inevitable conclusion of social polarization. Political economists always intentionally ignored this conclusion and this fact, but Marx continued to trace the meaning behind this reality and led the argument to the concept of "alienated labor": "This fact expresses merely that the object which labor produces - labor's product - confronts it as something alien, as a power independent of the producer. The product of labor is labor which has been embodied in an object, which has become material: it is the objectification of labor. Labor's realization is its objectification. Under these economic conditions this realization of labor appears as loss of realization for the worker; objectification as loss of the object and bondage to it; appropriation as estrangement, as alienation." [3:272] After analyzing the four forms or consequences of labor alienation, Marx reached an important conclusion in the manuscript: "Private property is thus the product, the result, the necessary consequence, of alienated labor, of the external relation of the worker to nature and to himself." [3:279] In this way, we have already gone a long way to the solution of this problem by transforming the question of the origin of private property into the question of the relation of alienated labor to the course of humanity's development. [3:281] Alienated (externalized) labor thus became the most critical dimension and perspective for Marx's analysis of private property.
From the perspective of alienated labor, division of labor and exchange are only different forms of private property, while categories such as buying, selling, competition, and currency are only specific and unfolding manifestations of alienated labor and private property. In modern society, the separation of capital, land, and labor is likewise an inevitable result of the inherent nature of alienated labor, and the national economic facts that political economics cannot explain, together with the unresolved contradictions within them, immediately become intelligible. In the movement of alienated labor, "private property, as the material, summary expression of alienated labor, embraces both relations - the relation of the worker to work and to the product of his labor and to the non-worker, and the relation of the non-worker to the worker and to the product of his labor." [3:281] The clash of mutual contradictions inevitably appeared between worker and capital. From the perspective of the subject of private property, it can be seen that "the subjective essence of private property - private property as activity for itself, as subject, as person - is labour." [3:290] However, labor products, as the object of private property, are the manifestation of the alienated life of workers. At this point in the analysis, the next natural question is the origin of alienated labor, that is, "how, we now ask, does man come to alienate, to estrange, his labor? How is this estrangement rooted in the nature of human development?" [3:281] Owing to the "manuscript" nature of the text, although Marx raised this question near the end of the first manuscript, he seemed not to answer it directly, leading some scholars to believe that this was an "unsolved question" left by Marx, and that it was not until Marx replaced the concept of "alienated labor" with the concept of "surplus value" in Capital that this problem was truly solved. But the author believes that Marx actually provided a direct answer in the Paris Manuscript, and the key lies in the concept of "labor" (or "practice"), which contains the concept of "objectivity" in the manuscript. It is precisely in the description of the concept of labor that Marx found the way to transcend alienated labor. The concept of "objectiveness" is an important category used by Feuerbach to support "sensuous contemplation"; it is an important tool for him to give reality and priority status to the sensuous world (humans and nature) and to criticize Hegel's speculative philosophy. If young Marx during the "copy critique" period did not fully realize the utility of this concept, the situation was greatly different in the Paris Manuscript. Feuerbach mentioned in The Essence of Christianity that without an object a person becomes nothing, and that the object with which the subject inevitably has an essential relationship is nothing more than the inherent and objective essence of this subject. In other words, the object is the most important means by which the subject expresses and confirms its essence; "objectivity" has also become an important attribute of human beings as subjects. Marx strongly agreed with this: "Any relationship between humans and themselves can only be realized and manifested through their relationship with others; furthermore, it is only when the objective world becomes everywhere for man in society the world of man's essential powers - human reality, and for that reason the reality of his own essential powers - that all objects become for him the objectification of himself, become objects which confirm and realize his individuality, become his objects. Thus man is affirmed in the objective world not only in the act of thinking, but with all his senses." [3:301] And the object-oriented activities used to affirm oneself are nothing but labor: on the one hand, as far as the individual worker is concerned, "It is true that labour was his immediate source of subsistence, but it was at the same time also the manifestation of his individual existence." [3:220] On the other hand, from the perspective of human history, "all human activity hitherto has been labour - that is, industry - activity estranged from itself." [3:303] Labor, as a self-generated action of human beings, is an important way to confirm human essence through the externalization of labor. Man's self-estrangement, the alienation of man's essence, man's loss of objectivity and his loss of realness is exactly man's self-discovery, manifestation of his nature, objectification and realisation. [3:342] Although private property was shaped by the combination of various historical factors, it has at the same time continuously accelerated and strengthened alienated labor in turn, and finally exacerbated the alienation, separation, and distortion of human nature. Therefore, in order to abolish alienated labor, it is necessary to start by abolishing private property. From the perspective of labor, "just as private property is only the perceptible expression of the fact that man becomes objective for himself and at the same time becomes to himself a strange and inhuman object; just as it expresses the fact that the manifestation of his life is the alienation of his life, that his realisation is his loss of reality, is an alien reality." [3:299] It is the estranged insight into the real objectification of man, into the real appropriation of his objective essence through the annihilation of the estranged character of the objective world, through the supersession of the objective world in its estranged mode of being. [3:341] Just as the realization of human essence requires objects, the abolition of alienation also requires objects. Human essence is precisely reflected in objective labor practice and in the sensuous contemplation of externalization (alienation). Therefore, logically speaking, only through the positive transcendence of human self-estrangement, that is, communism, can one achieve a return to one's essence. This is precisely what Marx said: the transcendence of self-estrangement follows the same course as self-estrangement. This kind of actual communism is not a return to a simple state of poverty, nor is it a generalization of private property through equal distribution of wealth. It serves as a negation of the negation and is consciously realized within the scope of all wealth previously developed, directly focusing on labor itself, on the free production and enjoyment that people engage in as human beings. Only then do the elimination of the alienated nature of private property and the restoration and affirmation of human essence begin to become real. In this way, communism is the position as the negation of the negation, and is hence the actual phase necessary for the next stage of historical development in the process of human emancipation and rehabilitation. Communism is the necessary form and the dynamic principle of the immediate future. [3:306]
Communists as new mass-type materialists
When Marx regarded labor as an objective activity of human beings and held that "the entire so-called history of the world is nothing but the creation of man through human labour, nothing but the emergence of nature for man" [3:305], he actually established two important starting points based on the arguments of national economists and speculative idealism. First, it is not the absolute spirit of nihility but real, sensuous human beings that are the real entities; both spirit and philosophy are based on experience. Secondly, both humans and nature are a generative process mediated by human objective labor practice; they are not in a constant and static condition, nor the product of so-called spiritual activities. The significance of establishing these two arguments is enormous: it indicates that Marx had to some extent opened up a historical materialist narrative and standpoint in the Paris Manuscript, and it also showed the initial expression of the relationship between communism and materialism at another level.
Marx believed that "sense-perception (see Feuerbach) must be the basis of all science. Only when it proceeds from sense-perception in the two-fold form of sensuous consciousness and sensuous need - is it true science." [3:303] Marx pointed out that the existence of nature and of human beings through their own existence is incomprehensible to the consciousness of the people. However, if the history of the formation of nature is combined with the history of industry, if industry is conceived as the exoteric revelation of man's essential powers, we can also gain an understanding of the human essence of nature or the natural essence of man. And not only in the natural world: the formation of the five senses comes to be by virtue of its object, by virtue of humanised nature. In this way, the resolution of the so-called antitheses between subjectivism and objectivism, spiritualism and materialism, activity and passivity is only possible in a practical way, by virtue of the practical energy of man. Their resolution is therefore by no means merely a problem of understanding, but a real problem of life, which philosophy could not solve precisely because it conceived this problem as merely a theoretical one. [3:302] On this basis, Marx enthusiastically held that complete naturalism or humanitarianism is different from both idealism and materialism and is at the same time the truth uniting the two, and that communism is this completed naturalism or humanitarianism. It is from this standpoint that Marx and Engels later explicitly pointed out in the preface of The Holy Family that "Real humanism has no more dangerous enemy in Germany than spiritualism or speculative idealism." [4:7] And in the specific process of criticism, they traced the materialist origins and practical propositions of communist thought. Specifically, in terms of practical origins, the revolutionary movement which began in 1789 in the Cercle Social gave rise to the communist idea which Babeuf's friend Buonarroti re-introduced in France after the Revolution of 1830. This idea, consistently developed, is the idea of the new world order. In terms of theoretical development, the school of French materialism originating from Locke has a clear "socialist tendency"; it leads directly to socialism and communism. There is no need for any great penetration to see from the teaching of materialism on the original goodness and equal intellectual endowment of men, the omnipotence of experience, habit and education, and the influence of environment on man, the great significance of industry, the justification of enjoyment, etc., how necessarily materialism is connected with communism and socialism. In short, materialism is the teaching of real humanism and the logical basis of communism. [4:131] Communism in France and Britain embodies this materialism in practice with concrete measures, while Feuerbach in Germany embodies in theory a materialism that is in line with humanism. Correspondingly, the real polarization and the suffering of workers cannot be overcome by critical criticism alone, which offers only a "pure theoretical liberation"; fully tangible material conditions are also required.
Conclusion
Compared to the period before, Marx's perception of communism during the Paris Manuscripts changed significantly. Marx no longer opposed communism and socialist ideology as in the early press debates of 1842, nor did he promote socialism while belittling communism as in the Deutsch-Französische Jahrbücher. Compared to ideal socialism, communism is only a medium leading to the return of human essence, but it has become an essential link, an inevitable form and effective principle of the near future, the last form before human liberation. Marx began to take communism and socialism as important themes into his theoretical vision and framework, and he also started trying to understand the necessity of modern communism through a dialectical perspective of practice and the generativity of labor. The hostile opposition between worker and capital is not only a natural result of Marx's exploration of civil society, but also a starting point for Marx to criticize the real world by using "alienation" as a tool. Although Marx did not treat "human liberation" and "actual communism" as identical in the Paris Manuscripts and The Holy Family, the inherent connection between the two concepts is already rather clear. Marx refused to discuss human liberation from a purely theoretical perspective; he turned to emphasize the transformation of material forces and practical action more than ever before. This change is not only a further development and extension of the "criticism of weapons" advocated in the Critique of Hegel's Philosophy of Right, it is also a reaffirmation of communism as completed naturalism and humanitarianism. Marx clearly demonstrated his exploration of living, feeling individuals (who have pain, emotions, thoughts, and actions) and his conscious rejection of abstract and otherworldly personalities, and showed his emphasis on the self-generation of humans and nature through labor. All of the above indicates that Marx was moving increasingly away from the philosophical base of dialectical philosophy and Hegel, and that the horizon of the new worldview was already vaguely revealing its initial outline.
Overall, although Marx's communist views at this time had the characteristics of a "new" materialism, it was, dialectically, precisely due to their obvious Feuerbachian character that communism in the Paris Manuscripts and The Holy Family also presented a temperament different from later scientific communism: a strongly philosophical temperament. This is reflected not only in Marx's belief that German communism mainly started from philosophy, but also in his wish for "completed communism" to become the "answer to the mystery of history", as well as in his non-historical division of the development stages of communism based on the concepts of "private property" and "alienation". Even when Marx proposed that the entire movement of history, just as its actual act of genesis, is also for its thinking consciousness [3:297], his major logic was actually the philosophical principle of the "negation of the negation" rather than the material power of history. Just as Feuerbach held that his new philosophy makes man, together with nature as the basis of man, the exclusive, universal, and highest object of philosophy, and makes anthropology, together with physiology, the universal science [5:184], Marx assigned a similar position to his communism, believing that this communism, as fully developed naturalism, equals humanism, and as fully developed humanism equals naturalism. Communist theory was thus fully regarded as a future philosophical principle. The clearing up and critique of this philosophical attribute would have to await an appropriate opportunity in the later philosophical revolution.
"Political Science",
"Philosophy"
] |
An evaluation of the economic and green market utility in a circular economy
The article focuses on the problem of evaluating new markets in a circular economy, substantiating the need to assess not only economic performance but also green utility when making managerial decisions. The article proposes an algorithm for calculating a composite Index of Economic and Green Market Utility (IEGMU) for a region in the field of car sharing. It covers such indicators as market capacity and dynamics, the level of impact on the traditional automotive sector, and the green effect, which includes the reduction of carbon dioxide emissions, of product (vehicle) consumption, and of pressure on transport infrastructure. An advantage of the proposed algorithm is the possibility of using matrix analysis to determine the stage of development of each market. As a test of the approach, the article presents the calculation of the index for North America, Europe and Asia.
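The excerpt below does not reproduce the article's actual IEGMU algorithm; it is only a schematic sketch of how a composite index of this kind could combine min-max normalized economic and green sub-indicators with weights. All indicator names, weights, and regional values in the snippet are hypothetical.

```python
# Schematic illustration only: not the article's IEGMU algorithm. The sketch
# combines min-max normalized economic and green sub-indicators with equal
# (hypothetical) weights; all names, weights and regional values are invented.
import numpy as np

indicators = ["market_capacity", "market_dynamics", "impact_on_traditional_sector",
              "co2_reduction", "vehicle_consumption_reduction", "infrastructure_relief"]

regions = ["North America", "Europe", "Asia"]
raw = np.array([
    [6.1, 0.12, 0.30, 0.40, 0.25, 0.20],   # placeholder values per indicator
    [4.8, 0.18, 0.35, 0.55, 0.30, 0.28],
    [7.5, 0.22, 0.25, 0.35, 0.20, 0.15],
])

# Min-max normalization per indicator, then an equally weighted average
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
weights = np.full(len(indicators), 1.0 / len(indicators))
iegmu = norm @ weights

for region, score in zip(regions, iegmu):
    print(f"{region:<14s} IEGMU = {score:.2f}")
```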
Introduction
Recently, the world community has sharpened its focus on global environmental problems, overproduction, and the depletion of natural resources. Strategic management and marketing decisions are made with regard to their impact on achieving society's sustainable development goals. Consumer behaviour is becoming more conscious, and enterprise management must respond to these new trends. In this regard, the demand for new business models that provide circularity and decoupling has increased. A circular business model answers the question of how a company creates and delivers value through a logic that increases resource efficiency by extending the useful life of products and parts (for example, through durable design, repair, and refurbishment) and completing the product lifecycle through recycling and capitalizing on their residual value [1-3]. This leads to the gradual separation of economic activity from the consumption of limited resources and the elimination of waste from the system [4]. Product-service systems, which focus on selling service and functionality instead of products, are one of the primary vehicles for implementing a circular economy (CE), in which economic growth is decoupled from resource consumption [5].
A bibliometric analysis of the literature has shown that the development of a circular economy is assessed at three levels: micro, meso and macro [6,7]. Macro-level indicators are needed for evaluation and monitoring to improve various programs at the state level [8,9]. Micro-level indicators, as a rule, cover the 9R imperatives [12][13][14] and analyze environmental friendliness as well as waste-free production and consumption [10][11][12]. At the same time, most of the studied indicators are focused on only one and/or several specific environmental problems, which introduces evaluative subjectivity.
The purpose of this article is to form and propose an algorithm for assessing the economic and green utility of the market, based on the example of such a segment of the circular economy as car sharing.
The analysis of the literature on the problems of circularity in industries showed that they are mainly of an applied nature and focus on solving specific technological and organizational issues. At the same time, the issue of creating new value for the consumer in the products of the circular economy remains unaddressed. New trends in consumer behaviour in the post-COVID-19 economy indicate a more pragmatic nature of consumption as well as a desire to preserve the nature and resources of the planet. [13]. These motives in behaviour will dictate new values to producers. Thus, in assessing the prospects of markets, in addition to economic criteria, it is necessary to take into account the parameters of green utility.
Many indicators determine market prospects, but in general they can be divided into three main groups: 1) the physical and monetary volume of the market; 2) growth dynamics; 3) the main segments, their dynamics, and the level of competition [14][15]. However, in connection with global informatization and the development of new business models in the IT sector, the importance of assessing the life cycle of goods and services, as well as the level of applied technologies and potential servitization, has increased in market analysis.
Moreover, large investors paid attention to the problems of the ecology of the planet, which led to a revision of many principles of production, increased requirements, the emergence of new business models and markets associated with a circular economy. In this regard, in order to create long-term business models, it is necessary to understand by what parameters society, the state and investors will assess their value.
Obviously, one can apply individual approach to different markets. In our research, we focused on analyzing the automotive industry market.
According to the most recent estimates, the world's population is a (quickly growing) 7.6 billion, and with an estimated 1.4 billion cars on the road, that puts vehicle saturation at around 18% [16,17]. At the same time, the global automotive market has been growing annually and, according to some forecasts, will continue to grow in the post-crisis period, with annual growth of about +5% for the period from 2021 to 2030 [18].
It is no secret that a car as a product causes significant damage to the environment: it consumes significant resources in its production, pollutes the air with exhaust gases, and stimulates road construction, which in turn has a number of negative consequences for nature. At the same time, in the modern world a car is one of the main attributes of a comfortable life and a key element of the transport and logistics infrastructure of the global economy. Thus, one cannot refuse this good, but it is obvious that the requirements for the automotive market should be raised: people cannot completely abandon cars, but they can "consume" fewer of them and use ones that cause minimal damage to the environment. One of the main business areas designed to solve these problems is the development of the car-sharing market. Car sharing, or short-term rental, is an option to rent a car from specialized companies (B2C) or individuals (P2P) for any period and distance of travel by agreement (most often for intracity and/or short trips). This rental model is convenient for occasional use of a motor vehicle or when people need a car that differs in brand, body type or carrying capacity from the one they commonly use.
Car sharing is one of the global trends in the development of the sharing economy, in which the population declines to acquire goods as property, so as not to bear the associated responsibility and costs, but continues to have access to the achievements of scientific progress through their joint consumption [19]. Car-sharing services are available in more than 1000 cities in dozens of countries around the world [20]. Analysis of the market and analytical sources [21] revealed that at the moment the following main business models can be distinguished in this market (Fig. 1) [20][21]. This is a far from complete list of existing models. At the same time, in some countries and regions more and more new business models and companies appear, while in others the existing market operators are being transformed. Therefore, there is a need to assess their effectiveness not only by traditional socio-economic parameters, but also by their contribution to the creation of a circular economy.
Model and methods
It is proposed to consider an approach to market evaluation based on the integrated Index of Economic and Green Market Utility (IEGMU) for the region. This index is subdivided into two sub-indices: 1) a sub-index of economic attractiveness of the market; 2) a sub-index of circularity and green utility of the market.
In turn, each of these sub-indices includes a number of coefficients that differ depending on the industry and the specifics of the activity of its market players:

I_EG = S_E * a_E + S_Green * a_Green,   (1)

where S_E is the sub-index of economic attractiveness of the market and a_E is the specific weight of this sub-index; S_Green is the sub-index of green utility of the market and a_Green is the specific weight of this sub-index.
The sub-index of the economic attractiveness of the market includes a set of indicators characterizing its capacity, development dynamics, as well as the level of penetration / replacement of the traditional (non-circular market) (in our case, this is the share of car sharing cars in the total vehicle fleet of the region).
The economic efficiency of the market is a very important factor in the development of the industry, because many new business models, which are very promising in terms of circularity and innovation, do not stand up to the market test due to the imbalance of economic indicators and, in the end, disappear or transform (example Autolib [22], City Carshare [23]). Therefore, one of the main economic indicators of the market for new circular technologies is its rapid growth.
With regard to the sub-index of green utility of the market, it is important to ensure a focus on reducing environmental risks and on sustainable development without environmental degradation. It is closely related to environmental economics, but has a more policy-oriented, applied focus [24].
Thus, the following most important parameters have been identified for this sub-index: 1) reducing the impact on the environment; 2) a decrease in the consumption of traditional products (for example, cars in our case); 3) reducing the impact on other industries (road and urban planning, fuel production, etc.).
The system for filling the sub-indices with indicators is shown schematically in Figure 2 below. Each sub-index, in our opinion, is equally important, so they are given equal shares. As for the indicators they include, these can have different weights; for example, among the green indicators of the car-sharing market, the reduction of carbon dioxide emissions is the most influential, and therefore its share should be the maximum.
It should be noted that CO2 does not pollute the air people breathe, but it is a main contributor to global warming and therefore has to be reduced [17]. Other emissions from vehicle traffic include: 1) carbon monoxide (CO), which is produced by incomplete combustion of carbonaceous fuels, is toxic in high concentrations and therefore needs to be controlled; 2) hydrocarbons (HC), which are unburned fuel, can be a problem for people with breathing difficulties and also contribute to "photochemical smog" in certain climatic conditions; 3) nitrogen oxides (NOx), which are formed when air is heated, contribute to both photochemical smog and acid rain and can irritate the lungs.
Indicators for sub-indexes also have their share in the Index, depending on their importance.
Thus, formula (1) can be rewritten by decomposing each sub-index into the indicators it includes:

I_EG = Σ_i (E_i * a_Ei) + Σ_j (G_j * a_Gj),   (2)

where E_i is an indicator of economic attractiveness of the market and a_Ei is the specific weight of this indicator; G_j is an indicator of green utility of the market and a_Gj is the specific weight of this indicator.
Below is a table with a scale and shares for calculating the Index of Market Economic and Green Utility (IEGMU) and its sub-indices.
CAGR (Compound Annual Growth Rate) is calculated by formula (3):

CAGR = (V_end / V_begin)^(1/n) − 1,   (3)

where V_begin and V_end are the market values at the beginning and the end of the period and n is the number of years in the period.
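As a rough illustration of how formulas (1)-(3) can be combined in practice, the following Python sketch computes CAGR and aggregates hypothetical indicator scores into the two sub-indices and the overall IEGMU; the indicator names, weights and input values are placeholders rather than the authors' actual scale.

```python
# Illustrative sketch (not the authors' code) of formulas (1)-(3).
def cagr(begin_value: float, end_value: float, years: int) -> float:
    """Compound Annual Growth Rate over `years` periods, formula (3)."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

def sub_index(indicators: dict, weights: dict) -> float:
    """Weighted sum of indicator scores for one sub-index, as in formula (2)."""
    return sum(indicators[name] * weights[name] for name in indicators)

def iegmu(s_e: float, s_green: float, a_e: float = 0.5, a_green: float = 0.5) -> float:
    """Formula (1): equal default shares for the two sub-indices."""
    return s_e * a_e + s_green * a_green

# Hypothetical regional scores on a 1-3 rating scale with assumed weights.
econ = sub_index({"capacity": 3, "cagr": 2, "penetration": 1},
                 {"capacity": 0.4, "cagr": 0.4, "penetration": 0.2})
green = sub_index({"co2_reduction": 2, "fewer_vehicles": 3, "infrastructure": 1},
                  {"co2_reduction": 0.5, "fewer_vehicles": 0.3, "infrastructure": 0.2})
print(round(cagr(1838, 14931, 9), 3), round(iegmu(econ, green), 2))
```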
The Index value range (IEGMU) determines the degree of investment attractiveness of the market in terms of economic and green indicators and provides a good tool for comparing this parameter across different regions. Thus, the higher the Index, the more attractive for investment the market of a given region is in comparison with other territories. At the same time, an important component is the ratio of the contributions of the sub-indices: a market can develop actively in terms of economic parameters but bring a relatively low green effect, and vice versa, with moderate market growth but qualitative transformations its green utility can increase. In this regard, it is purposeful to use matrix analysis, with the following combinations of the Index and its sub-indices:

IEGMU low, S_E low, S_Green low: The market is in its infancy. Experimenting with business models. High risk of bankruptcy. The market is fragmented and does not have a significant green effect.

IEGMU medium, S_E high, S_Green low: The market is in the stage of active growth. Extensive market development is observed, which, however, creates a minimal green effect.

IEGMU medium, S_E low, S_Green high: The market has entered the stage of saturation or is simply growing slowly compared to other regions (perhaps there are certain obstacles; for example, the car-sharing business model is not supported at the local level). At the same time, its green indicators, such as the reduction of CO2 emissions per capita, are significantly ahead of other regional markets.

IEGMU high, S_E high, S_Green high: The regional market is leading in terms of growth and economic development and creates the maximum green effect compared to other countries (regions).
Evaluating local markets according to these criteria makes it possible to classify them according to the degree of economic and green utility and predict changes.
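A minimal sketch of such a classification is shown below: it thresholds the two sub-indices to place a region into one of the matrix quadrants described above. The threshold and the sample scores are assumptions chosen only so that the output mirrors the qualitative result reported later for Europe, Asia and North America.

```python
# Illustrative quadrant classification for the matrix analysis above.
def quadrant(s_e: float, s_green: float, cut: float = 0.5):
    """Return ('high'/'low', 'high'/'low') for the economic and green sub-indices."""
    econ = "high" if s_e >= cut else "low"
    green = "high" if s_green >= cut else "low"
    return econ, green

# Hypothetical normalized sub-index scores per region (placeholders, not the paper's data).
regions = {"Europe": (0.9, 0.8), "Asia": (0.8, 0.4), "North America": (0.3, 0.3)}
for name, (s_e, s_green) in regions.items():
    print(name, quadrant(s_e, s_green))
```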
Results and discussions
To test the proposed methodological approach, world statistics on the car sharing market have been collected. It should be noted that due to the fact that the industry is young, statistical data, in a more or less complete and homogeneous structure, are available on the Shared Mobility website only for the period from 2007 to 2016 [19]. However, given that the purpose of the article is not to identify modern trends, but to propose an algorithm for assessing new markets in the circular economy, we decided to take the statistical data of this resource as a basis. As of the end of 2016, the car-sharing market was actively developing in Asia, Europe and North America. In general, the same trend continued in the next four years [25]. The calculation results are presented in Table 3 below.
It is necessary to dwell separately on the issue of calculating the green indicators, in particular those responsible for the reduction of carbon dioxide emissions. According to the data of the United States Environmental Protection Agency [26], a "typical passenger vehicle emits about 4.6 metric tons of carbon dioxide per year. This number can vary based on a vehicle's fuel, fuel economy, and the number of miles driven per year". Considering that the majority of cars in the car-sharing market are passenger cars and, accordingly, that they also replace passenger vehicles in personal ownership, this value was taken as a starting point. [Table 3 (fragment): sub-index of market economic attractiveness (S_E); car-sharing members (thousands) [19]: 1,838; 4,371; 8,722; 14,931; population (mln) [16]: ...] Based on the calculations obtained, a rating of the regions for each of the indicators has been compiled. The region with the best score out of three received 3 points, the second place 2 points, and the third place 1 point. Then, applying formula (2), it was possible to calculate the Index of Market Economic and Green Utility and its sub-indices for each of the regions (Table 4). The calculation showed that the most developed car-sharing market is in Europe; moreover, it provides the highest level of green indicators. It is important to understand that this is achieved due to the relevance of reducing the number of car owners, because the level of car ownership per 1,000 inhabitants here is among the highest. In second place is the car-sharing market in Asia, which demonstrates a high level of growth but does not have such a high effect in terms of green indicators; the main reason is that personal vehicles are not as affordable here as in Europe. North America has the lowest index: despite the fact that the largest P2P car-sharing platforms are located here, the overall level of penetration and growth of the car-sharing market remains one position lower than in the previous regions. Based on the calculated data, a matrix for evaluating the Index of Market Economic and Green Utility by region was built.
Fig. 3. Matrix for Evaluation of Index of Market Economic and Green Utility by region Source: developed by authors
As one can see, the European car-sharing market is in the quadrant of high overall economic and green utility, while the Asian market is in the quadrant of high economic attractiveness, but low green return, and North America is in the lowest zone for both parameters.
Conclusions
The ideas of a circular economy and changes in consumer behavior have impacted virtually all product markets. Currently, the car sharing market, as one of the examples of business models of the circular economy, is actively developing in different regions. To analyze market development and assess its investment attractiveness, we need tools that will be based not only on economic efficiency, but also on green utility.
The Index of Economic and Green Market Utility proposed by the authors makes it possible to carry out such a comparative analysis of local car-sharing markets. It can be calculated annually in order to track the progress that each country is making in building its own market. The advantage of the proposed algorithm is the possibility of using matrix analysis to understand at what stage of development each market is and, most importantly, in which direction its circular business models can be improved. In addition, the algorithm is flexible enough to include other green indicators. For example, in the future, as the statistical base grows, the sub-index of green utility can be supplemented with indicators such as: 1) participants who sold a private vehicle after joining car sharing (%); 2) participants who postponed or avoided a vehicle purchase because of car sharing (%); 3) vehicle kilometers reduced because of car sharing (%). | 4,047.4 | 2021-01-01T00:00:00.000 | [
"Economics",
"Business"
] |
CodeBERT: A Pre-Trained Model for Programming and Natural Languages
We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both "bimodal" data of NL-PL pairs and "unimodal" data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate it in a zero-shot setting where the parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
Introduction
Large pre-trained models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018), XLNet (Yang et al., 2019) and RoBERTa have dramatically improved the state-of-the-art on a variety of natural language processing (NLP) tasks. These pre-trained models learn effective contextual representations from massive unlabeled text optimized by self-supervised objectives, such as masked language modeling, which predicts the original masked word from an artificially masked input sequence. The success of pre-trained models in NLP also drives a surge of multi-modal pre-trained models, such as ViLBERT (Lu et al., 2019) for language-image and VideoBERT (Sun et al., 2019) for language-video, which are learned from bimodal data such as language-image pairs with bimodal self-supervised objectives.
(* Work done while this author was an intern at Microsoft Research Asia. 1 All the code and data are available at https://github.com/microsoft/CodeBERT)
In this work, we present CodeBERT, a bimodal pre-trained model for natural language (NL) and programming language (PL) like Python, Java, JavaScript, etc. CodeBERT captures the semantic connection between natural language and programming language, and produces general-purpose representations that can broadly support NL-PL understanding tasks (e.g. natural language code search) and generation tasks (e.g. code documentation generation). It is developed with the multilayer Transformer (Vaswani et al., 2017), which is adopted in a majority of large pre-trained models. In order to make use of both bimodal instances of NL-PL pairs and large amount of available unimodal codes, we train CodeBERT with a hybrid objective function, including standard masked language modeling (Devlin et al., 2018) and replaced token detection (Clark et al., 2020), where unimodal codes help to learn better generators for producing better alternative tokens for the latter objective.
We train CodeBERT from GitHub code repositories in 6 programming languages, where bimodal datapoints are codes that pair with function-level natural language documentation (Husain et al., 2019). Training is conducted in a setting similar to that of multilingual BERT (Pires et al., 2019), in which case one pre-trained model is learned for 6 programming languages with no explicit markers used to denote the input programming language. We evaluate CodeBERT on two downstream NL-PL tasks, including natural language code search and code documentation generation. Results show that fine-tuning the parameters of CodeBERT achieves state-of-the-art performance on both tasks. To further investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and test CodeBERT in a zero-shot scenario, i.e. without fine-tuning the parameters of CodeBERT. We find that CodeBERT consistently outperforms RoBERTa, a purely natural language-based pre-trained model. The contributions of this work are as follows: • CodeBERT is the first large NL-PL pre-trained model for multiple programming languages.
• Empirical results show that CodeBERT is effective in both code search and code-to-text generation tasks.
• We further created a dataset which is the first one to investigate the probing ability of the code-based pre-trained models.
Pre-Trained Models in NLP
Large pre-trained models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Yang et al., 2019; Raffel et al., 2019) have brought dramatic empirical improvements on almost every NLP task in the past few years. Successful approaches train deep neural networks on large-scale plain texts with self-supervised learning objectives. One of the most representative neural architectures is the Transformer (Vaswani et al., 2017), which is also the one used in this work. It contains multiple self-attention layers, and can be conventionally learned with gradient descent in an end-to-end manner as every component is differentiable. The terminology "self-supervised" means that supervisions used for pre-training are automatically collected from raw data without manual annotation. Dominant learning objectives are language modeling and its variations. For example, in GPT (Radford et al., 2018), the learning objective is language modeling, namely predicting the next word w_k given the preceding context words {w_1, w_2, ..., w_{k-1}}. As the ultimate goal of pre-training is not to train a good language model, it is desirable to consider both preceding and following contexts to learn better general-purpose contextual representations. This leads us to the masked language modeling objective used in BERT (Devlin et al., 2018), which learns to predict the masked words of a randomly masked word sequence given surrounding contexts. Masked language modeling is also used as one of the two learning objectives for training CodeBERT.
Multi-Modal Pre-Trained Models
The remarkable success of pre-trained models in NLP has driven the development of multi-modal pre-trained models that learn implicit alignment between inputs of different modalities. These models are typically learned from bimodal data, such as pairs of language-image or pairs of language-video. For example, ViLBERT (Lu et al., 2019) learns from image caption data, where the model learns by reconstructing categories of masked image regions or masked words given the observed inputs, and meanwhile predicting whether the caption describes the image content or not. Similarly, VideoBERT (Sun et al., 2019) learns from language-video data and is trained by video and text masked token prediction. Our work belongs to this line of research as we regard NL and PL as different modalities. Our method differs from previous works in that the fuel for model training includes not only bimodal data of NL-PL pairs, but also larger amounts of unimodal data such as codes without paired documentation. A concurrent work (Kanade et al., 2019) uses masked language modeling and next sentence prediction as the objective to train a BERT model on Python source code, where a sentence is a logical code line as defined by the Python standard. In terms of the pre-training process, CodeBERT differs from their work in that (1) CodeBERT is trained in a cross-modal style and leverages both bimodal NL-PL data and unimodal PL/NL data, (2) CodeBERT is pre-trained over six programming languages, and (3) CodeBERT is trained with a new learning objective based on replaced token detection.
CodeBERT
We describe the details about CodeBERT in this section, including the model architecture, the input and output representations, the objectives and data used for training CodeBERT, and how to fine-tune CodeBERT when it is applied to downstream tasks.
Model Architecture
We follow BERT (Devlin et al., 2018) and RoBERTa, and use a multi-layer bidirectional Transformer (Vaswani et al., 2017) as the model architecture of CodeBERT. We will not review the ubiquitous Transformer architecture in detail. We develop CodeBERT by using exactly the same model architecture as RoBERTa-base. The total number of model parameters is 125M.
Input/Output Representations
In the pre-training phase, we set the input as the concatenation of two segments with a special separator token, namely [CLS], w_1, w_2, ..., w_n, [SEP], c_1, c_2, ..., c_m, [EOS]. One segment is natural language text, and the other is code from a certain programming language.
[CLS] is a special token in front of the two segments, whose final hidden representation is considered as the aggregated sequence representation for classification or ranking. Following the standard way of processing text in Transformer, we regard a natural language text as a sequence of words, and split it as WordPiece (Wu et al., 2016). We regard a piece of code as a sequence of tokens.
The output of CodeBERT includes (1) contextual vector representation of each token, for both natural language and code, and (2) the representation of [CLS], which works as the aggregated sequence representation.
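For readers who want to reproduce this input/output layout, the sketch below encodes an NL-code pair and extracts the per-token vectors and the [CLS] vector. It assumes the publicly released "microsoft/codebert-base" checkpoint and the Hugging Face transformers API, which are not part of the paper itself; the RoBERTa-style tokenizer inserts its own separator tokens, which correspond only approximately to the [SEP]/[EOS] notation above.

```python
# Sketch of building the [CLS] w1..wn [SEP] c1..cm [EOS] input described above.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")  # assumed checkpoint
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum value in a list"
code = "def f(xs):\n    return max(xs)"

# Passing the two segments as a pair inserts the separator tokens automatically.
inputs = tokenizer(nl, code, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

token_vectors = outputs.last_hidden_state        # per-token contextual vectors
cls_vector = outputs.last_hidden_state[:, 0]     # aggregated sequence representation
print(token_vectors.shape, cls_vector.shape)
```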
Pre-Training Data
We train CodeBERT with both bimodal data, which refers to parallel data of natural language-code pairs, and unimodal data, which stands for codes without paired natural language texts and natural language without paired codes.
We use datapoints from GitHub repositories, where each bimodal datapoint is an individual function with paired documentation, and each unimodal code is a function without paired documentation. Specifically, we use a recent large dataset provided by Husain et al. (2019), which includes 2.1M bimodal datapoints and 6.4M unimodal codes across six programming languages (Python, Java, JavaScript, PHP, Ruby, and Go). Data statistics are shown in Table 1. The data come from publicly available open-source non-fork GitHub repositories and are filtered with a set of constraints and rules. For example, (1) each project should be used by at least one other project, (2) each documentation is truncated to the first paragraph, (3) documentations shorter than three tokens are removed, (4) functions shorter than three lines are removed, and (5) function names with substring "test" are removed. An example of the data is given in Figure 1.
Pre-Training CodeBERT
We describe the two objectives used for training CodeBERT here. The first objective is masked language modeling (MLM), which has proven effective in the literature (Devlin et al., 2018; Sun et al., 2019). We apply masked language modeling on bimodal data of NL-PL pairs. The second objective is replaced token detection (RTD), which further uses a large amount of unimodal data, such as codes without paired natural language texts. Detailed hyper-parameters for model pre-training are given in Appendix B.1.
Figure 2: An illustration of the replaced token detection objective. Both NL and code generators are language models, which generate plausible tokens for masked positions based on surrounding contexts. The NL-Code discriminator is the targeted pre-trained model, which is trained via detecting plausible alternative tokens sampled from the NL and PL generators. The NL-Code discriminator is used for producing general-purpose representations in the fine-tuning step. Both NL and code generators are thrown out in the fine-tuning step.
Objective #1: Masked Language Modeling (MLM). Given a datapoint of an NL-PL pair (x = {w, c}) as input, where w is a sequence of NL words and c is a sequence of PL tokens, we first select a random set of positions for both NL and PL to mask out (i.e. m_w and m_c, respectively), and then replace the selected positions with a special [MASK] token. Following Devlin et al. (2018), 15% of the tokens from x are masked out.
The MLM objective is to predict the original tokens that are masked out, formulated as follows, where p^D1 is the discriminator, which predicts a token from a large vocabulary:

L_MLM(θ) = Σ_{i ∈ m_w ∪ m_c} −log p^D1(x_i | w^masked, c^masked)
Objective #2: Replaced Token Detection (RTD) In the MLM objective, only bimodal data (i.e. datapoints of NL-PL pairs) is used for training. Here we present the objective of replaced token detection. The RTD objective (Clark et al., 2020) is originally developed for efficiently learning pre-trained model for natural language. We adapt it in our scenario, with the advantage of using both bimodal and unimodal data for training. Specifically, there are two data generators here, an NL generator p Gw and a PL generator p Gc , both for generating plausible alternatives for the set of randomly masked positions.
The discriminator is trained to determine whether a word is the original one or not, which is a binary classification problem. It is worth noting that the RTD objective is applied to every position in the input, and it differs from GAN (generative adversarial network) training in that if a generator happens to produce the correct token, the label of that token is "real" instead of "fake" (Clark et al., 2020). The loss function of RTD with regard to the discriminator parameterized by θ is given below, where δ(i) is an indicator function that equals 1 if the i-th token is original and 0 otherwise, and p^D2 is the discriminator that predicts the probability of the i-th word being original:

L_RTD(θ) = Σ_i [ δ(i) · log p^D2(x^corrupt, i) + (1 − δ(i)) · log(1 − p^D2(x^corrupt, i)) ]
There are many different ways to implement the generators. In this work, we implement two efficient n-gram language models (Jurafsky, 2000) with bidirectional contexts, one for NL and one for PL, and learn them from the corresponding unimodal datapoints, respectively. The approach is easily generalized to learning bimodal generators or to using more complicated generators such as Transformer-based neural architectures learned in a joint manner. We leave these to future work. The PL training data is the unimodal code shown in Table 1, and the NL training data comes from the documentation in the bimodal data. One could easily extend these two training datasets to a larger amount. The final loss function, which combines the two objectives, is given below:

min_θ [ L_MLM(θ) + L_RTD(θ) ]
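The toy sketch below illustrates how MLM masks and RTD labels could be derived for a single token sequence. It is not the authors' pipeline: the "generator" is replaced by uniform sampling from a tiny vocabulary purely to show how positions are masked, corrupted and labeled as original or replaced.

```python
# Toy illustration of MLM masking and RTD labeling for one token sequence.
import random

vocab = ["def", "return", "max", "min", "x", "(", ")", ":"]
tokens = ["def", "f", "(", "x", ")", ":", "return", "max", "(", "x", ")"]

# MLM: choose ~15% of positions and replace them with a [MASK] token.
masked_pos = random.sample(range(len(tokens)), k=max(1, len(tokens) * 15 // 100))
mlm_input = [t if i not in masked_pos else "[MASK]" for i, t in enumerate(tokens)]

# RTD: a generator fills the masked positions with plausible tokens; the
# discriminator must label every position as original (1) or replaced (0).
# If the generator happens to produce the correct token, the label stays 1.
corrupted = list(tokens)
for i in masked_pos:
    corrupted[i] = random.choice(vocab)
rtd_labels = [1 if corrupted[i] == tokens[i] else 0 for i in range(len(tokens))]

print(mlm_input, corrupted, rtd_labels, sep="\n")
```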
Fine-Tuning CodeBERT
We have different settings to use CodeBERT in downstream NL-PL tasks. For example, in natural language code search, we feed the input as the same way as the pre-training phase and use the representation of [CLS] to measure the semantic relevance between code and natural language query, while in code-to-text generation, we use an encoderdecoder framework and initialize the encoder of a generative model with CodeBERT. Details are given in the experiment section.
Experiment
We present empirical results in this section to verify the effectiveness of CodeBERT. We first describe the use of CodeBERT in natural language code search (§4.1), in a way that the model parameters of CodeBERT are fine-tuned. After that, we present the NL-PL probing task (§4.2), and evaluate CodeBERT in a zero-shot setting where the parameters of CodeBERT are fixed. Finally, we evaluate CodeBERT on a generation problem, i.e. code documentation generation (§4.3), and further evaluate on a programming language that is never seen in the training phase (§4.4).
Natural Language Code Search
Given a natural language query as the input, the objective of code search is to find the most semantically related code from a collection of codes. We conduct experiments on the CodeSearchNet corpus (Husain et al., 2019). We follow the official evaluation metric and calculate the Mean Reciprocal Rank (MRR) for each pair of test data (c, w) over a fixed set of 999 distractor codes. We further calculate the macro-average MRR across all languages as an overall evaluation metric. It is helpful to note that this metric differs from the AVG metric in the original paper, where the answer is retrieved from candidates from all six languages. We fine-tune a language-specific model for each programming language. We train each model with a binary classification loss function, where a softmax layer is connected to the representation of [CLS]. Both training and validation datasets are created in a way that positive and negative samples are balanced. Negative samples consist of a balanced number of instances with randomly replaced NL (i.e. (c, ŵ)) and PL (i.e. (ĉ, w)). Detailed hyper-parameters for model fine-tuning are given in Appendix B.2. Table 2 shows the results of different approaches on the CodeSearchNet corpus. The first four rows are reported by Husain et al. (2019), which are joint embeddings of NL and PL (Gu et al., 2018; Mitra et al., 2018). NBOW represents neural bag-of-words. CNN, BIRNN and SELFATT stand for a 1D convolutional neural network (Kim, 2014), a bidirectional GRU-based recurrent neural network (Cho et al., 2014), and multi-head attention (Vaswani et al., 2017), respectively.
Model Comparisons
We report the remaining numbers in Table 2. We train all these pre-trained models by regarding codes as a sequence of tokens. We also continuously train RoBERTa only on codes from CodeSearchNet with masked language modeling. Results show that CodeBERT consistently performs better than RoBERTa and the model pre-trained with code only. CodeBERT (MLM) learned from scratch performs better than RoBERTa. Unsurprisingly, initializing CodeBERT with RoBERTa improves the performance.
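For clarity, the following sketch shows how the MRR metric used above can be computed once relevance scores (for example, the fine-tuned [CLS] classifier outputs) are available for the correct code and its 999 distractors; the scores in the example are placeholders.

```python
# Mean Reciprocal Rank over a fixed candidate set per query.
def mean_reciprocal_rank(all_scores, gold_indices):
    """all_scores[i] is the list of relevance scores for query i;
    gold_indices[i] is the position of the correct code in that list."""
    total = 0.0
    for scores, gold in zip(all_scores, gold_indices):
        rank = 1 + sum(1 for s in scores if s > scores[gold])  # 1-based rank of the gold code
        total += 1.0 / rank
    return total / len(all_scores)

# Two toy queries, each with three candidates; the gold candidate ranks first in both.
print(mean_reciprocal_rank([[0.1, 0.9, 0.3], [0.8, 0.2, 0.7]], [1, 0]))  # 1.0
```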
NL-PL Probing
In the previous subsection, we showed the empirical effectiveness of CodeBERT in a setting where the parameters of CodeBERT are fine-tuned on downstream tasks. In this subsection, we further investigate what type of knowledge is learned in CodeBERT without modifying the parameters.
Task Formulation and Data Construction
Following the probing experiments in NLP (Petroni et al., 2019; Talmor et al., 2019), we study NL-PL probing here. Since there is no existing work towards this goal, we formulate the problem of NL-PL probing and create the dataset ourselves. Given an NL-PL pair (c, w), the goal of NL-PL probing is to test the model's ability to correctly predict/recover the masked token of interest (either a code token c_i or word token w_j) among distractors. There are two major types of distractors: one is the whole target vocabulary used for the masked language modeling objective (Petroni et al., 2019), and the other has fewer candidates which are filtered or curated based on experts' understanding of the ability to be tested (Talmor et al., 2019). We follow the second direction and formulate NL-PL probing as a multi-choice question answering task, where the question is cloze-style, in which a certain token is replaced by [MASK] and distractor candidate answers are curated based on our expertise. (We further give a learning curve of different pre-trained models in the fine-tuning process in Appendix C.)
Specifically, we evaluate on the NL side and the PL side, respectively. To ease the effort of data collection, we collect data automatically from NL-PL pairs in both the validation and testing sets of CodeSearchNet, both of which are unseen in the pre-training phase. To evaluate on the NL side, we select NL-PL pairs whose NL documentation includes one of six keywords (max, maximize, min, minimize, less, greater), and group them into four candidates by merging the first two keywords and the middle two keywords. The task is to ask pre-trained models to select the correct one instead of the three other distractors. That is to say, the input in this setting includes the complete code and a masked NL documentation. The goal is to select the correct answer from four candidates. For the PL side, we select codes containing the keywords max and min, and formulate the task as a two-choice answer selection problem. Here, the input includes the complete NL documentation and a masked PL code, and the goal is to select the correct answer from two candidates. Since code completion is an important scenario, we would like to test the model's ability to predict the correct token merely based on preceding PL contexts. Therefore, we add an additional setting for the PL side, where the input includes the complete NL documentation and the preceding PL code. Data statistics are given in the top two rows of Table 3.
Model Comparisons
Results are given in Table 3. We report accuracy, namely the number of correctly predicted instances over the number of all instances, for each programming language. Since datasets in different programming languages are extremely unbalanced, we report the accumulated metric in the same way. We use CodeBERT (MLM) here because its output layer naturally fits probing. Results show that CodeBERT performs better than baselines on almost all languages on both NL and PL probing. The numbers with only preceding contexts are lower than those with bidirectional contexts, which suggests that code completion is challenging. We leave it as future work.
We further give a case study on PL-NL probing. We mask an NL token and a PL token separately, and report the predicted probabilities of RoBERTa and CodeBERT. Figure 3 illustrates the example of a Python code snippet. We can see that RoBERTa fails in both cases, whereas CodeBERT makes the correct prediction in both the NL and PL settings.
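A probe of this kind can be scored by comparing the MLM probabilities assigned to the candidate answers at the masked position, roughly as in the sketch below. The checkpoint name "microsoft/codebert-base-mlm" and the example documentation/code are assumptions used only for illustration.

```python
# Zero-shot cloze scoring: compare candidate probabilities at the masked position.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

name = "microsoft/codebert-base-mlm"  # assumption: the MLM variant of CodeBERT
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name)

nl = f"Return the {tok.mask_token} value of the two arguments."
code = "def pick(a, b):\n    return a if a > b else b"
enc = tok(nl, code, return_tensors="pt")

with torch.no_grad():
    logits = mlm(**enc).logits
mask_idx = (enc["input_ids"] == tok.mask_token_id).nonzero()[0, 1]

candidates = ["max", "min", "less", "greater"]
probs = logits[0, mask_idx].softmax(-1)
for c in candidates:
    cid = tok.convert_tokens_to_ids(tok.tokenize(" " + c)[0])  # first sub-token of the candidate
    print(c, float(probs[cid]))
```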
Code Documentation Generation
Although the pre-training objective of CodeBERT does not include generation-based objectives, we would like to investigate to what extent CodeBERT performs on generation tasks. Specifically, we study code-to-NL generation, and report results for the documentation generation task on the CodeSearchNet corpus in six programming languages. Since the generated documentations are short and higher order n-grams may not overlap, we remedy this problem by using a smoothed BLEU score (Lin and Och, 2004).
Figure 3: Case study on the Python language. Masked tokens in NL (in blue) and PL (in yellow) are separately applied. Predicted probabilities of RoBERTa and CodeBERT are given.
Model Comparisons
We compare our model with several baselines, including an RNN-based model with attention mechanism (Sutskever et al., 2014), the Transformer (Vaswani et al., 2017), RoBERTa and the model pre-trained on code only. To demonstrate the effectiveness of CodeBERT on code-to-NL generation tasks, we adopt various pre-trained models as encoders and keep the hyper-parameters consistent. Detailed hyper-parameters are given in Appendix B.3. Table 4 shows the results with different models for the code-to-documentation generation task. As we can see, models pre-trained on programming language outperform RoBERTa, which illustrates that pre-training models on programming language could improve code-to-NL generation. Besides, the results in Table 4 show that CodeBERT pre-trained with RTD and MLM objectives brings a gain of 1.3 BLEU score over RoBERTa overall and achieves state-of-the-art performance.
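As a hedged illustration of the metric, the snippet below computes a smoothed sentence-level BLEU with NLTK; the specific smoothing method is an assumption, since the paper only cites Lin and Och (2004).

```python
# Smoothed sentence-level BLEU for short generated documentation strings.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["returns", "the", "maximum", "of", "two", "numbers"]
hypothesis = ["return", "the", "maximum", "of", "two", "values"]

smooth = SmoothingFunction().method4          # assumed smoothing variant
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(round(100 * score, 2))
```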
Generalization to Programming Languages NOT in Pre-training
We would like to evaluate CodeBERT on a programming language that is never seen in the pre-training step. To this end, we study the task of generating a natural language summary of a C# code snippet. We conduct experiments on the dataset of CodeNN (Iyer et al., 2016), which consists of 66,015 pairs of questions and answers automatically collected from StackOverflow. This dataset is challenging since its scale is orders of magnitude smaller than the CodeSearchNet corpus. We evaluate models using the smoothed BLEU-4 score and use the same evaluation scripts as Iyer et al. (2016). Results show that our model could generalize better to another programming language that is never seen in the pre-training step. However, our model achieves slightly lower results than code2seq. The main reason could be that code2seq makes use of compositional paths in its abstract syntax tree (AST) while CodeBERT only takes the original code as input. We have trained a version of CodeBERT by traversing the tree structure of the AST following a certain order, but applying that model does not bring improvements on generation tasks. This shows a potential direction for improving CodeBERT by incorporating the AST.
Conclusion
In this paper, we present CodeBERT, which to the best of our knowledge is the first large bimodal pre-trained model for natural language and programming language. We train CodeBERT on both bimodal and unimodal data, and show that fine-tuning CodeBERT achieves state-of-the-art performance on downstream tasks including natural language code search and code-to-documentation generation. To further investigate the knowledge embodied in pre-trained models, we formulate the task of NL-PL probing and create a dataset for probing. We regard the probing task as a cloze-style answer selection problem, and curate distractors for both the NL and PL parts. Results show that, with model parameters fixed, CodeBERT performs better than RoBERTa and a model continuously trained using code only. There are many potential directions for further research in this field. First, one could learn better generators with bimodal evidence or more complicated neural architectures to improve the replaced token detection objective. Second, the loss functions of CodeBERT mainly target NL-PL understanding tasks. Although CodeBERT achieves strong BLEU scores on code-to-documentation generation, CodeBERT itself could be further improved by generation-related learning objectives.
How to successfully incorporate AST into the pretraining step is also an attractive direction. Third, we plan to apply CodeBERT to more NL-PL related tasks, and extend it to more programming languages. Flexible and powerful domain/language adaptation methods are necessary to generalize well.
A Data Statistic
Data statistics of the training/validation/testing data splits for the six programming languages are given in Table 6.
B.1 Pre-Training
We train CodeBERT on one NVIDIA DGX-2 machine using FP16. It combines 16 interconnected NVIDIA Tesla V100 GPUs with 32GB memory each. We use the following set of hyper-parameters to train models: the batch size is 2,048 and the learning rate is 5e-4. We use Adam to update the parameters and set the number of warm-up steps to 10K. We set the max length to 512 and the max number of training steps to 100K. Training 1,000 batches of data costs 600 minutes with the MLM objective and 120 minutes with the RTD objective.
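The same hyper-parameters can be summarized as a small configuration object; the field names below are illustrative rather than taken from the released training scripts.

```python
# Compact restatement of the pre-training hyper-parameters listed above.
PRETRAIN_CONFIG = {
    "optimizer": "Adam",
    "batch_size": 2048,
    "learning_rate": 5e-4,
    "warmup_steps": 10_000,
    "max_sequence_length": 512,
    "max_training_steps": 100_000,
    "precision": "fp16",
}
```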
B.2 CodeSearch
In the fine-tuning step, we set the learning rate to 1e-5, the batch size to 64, the max sequence length to 200 and the max number of fine-tuning epochs to 8. As in pre-training, we use Adam to update the parameters. We choose the model that performs best on the development set, and use it to evaluate on the test set.
B.3 Code Summarization on Six Programming Languages
We use Transformer with 6 layers, 768 dimensional hidden states and 12 attention heads as our decoder in all settings. We set the max length of input and inference as 256 and 64, respectively. We use the Adam optimizer to update model parameters.
The learning rate and the batch size are 5e-5 and 64, respectively. We tune hyperparameters and perform early stopping on the development set.
B.4 Code Summarization on C#
Since state-of-the-art methods use RNN as their decoder, we choose a 2-layer GRU with an attention mechanism as our decoder for a comparison. We fine-tune models using a grid search with the following set of hyper-parameters: batchsize is in {32, 64} and learning rate is in {2e-5, 5e-5}. We report the number when models achieve best performance on the development set.
C Learning Curve of CodeSearch
From Figure 4, we can see that CodeBERT performs better at the early stage, which reflects that CodeBERT provides good initialization for learning downstream tasks.
D Late Fusion
In Section 4.1, we showed that CodeBERT performs well in the setting where natural languages and codes have early interactions. Here, we investigate whether CodeBERT is good at working as a unified encoder. We apply CodeBERT to natural language code search in a late fusion setting, where CodeBERT first encodes NL and PL separately, and then calculates the similarity by dot-product. In this way, code search is equivalent to finding the nearest codes in the shared vector space. This scenario also facilitates the use of CodeBERT in an online system, where the representations of codes are calculated in advance. At runtime, the system only needs to compute the representation of the NL query and vector-based dot-products. We fine-tune CodeBERT with the following objective, which maximizes the dot-product of the ground truth while minimizing the dot-products of distractors.
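The late-fusion setting can be sketched as follows: code snippets are embedded once offline and queries are matched by dot-product at runtime. The checkpoint name and the toy corpus are assumptions; a real system would also apply the contrastive fine-tuning objective described above.

```python
# Late-fusion code search: encode NL and PL separately, rank by dot-product.
from transformers import AutoTokenizer, AutoModel
import torch

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")  # assumed checkpoint
enc = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text: str) -> torch.Tensor:
    ids = tok(text, return_tensors="pt", truncation=True, max_length=200)
    with torch.no_grad():
        return enc(**ids).last_hidden_state[:, 0]  # [CLS] vector

codes = ["def add(a, b): return a + b", "def read_file(p): return open(p).read()"]
code_vecs = torch.cat([embed(c) for c in codes])   # pre-computed offline
query_vec = embed("sum two numbers")                # computed at runtime
scores = query_vec @ code_vecs.T                    # dot-product ranking
print(scores.argmax().item())
```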
Results are given in Table 7. We just do this setting on two languages with a relatively small amount of data.
We can see that CodeBERT performs better than RoBERTa and the model pre-trained with codes only, and that late fusion performs comparably to the standard setting. What is more, late fusion is more efficient, and this setting could be used in an online system.
E Case Study
To qualitatively analyze the effectiveness of CodeBERT, we give some cases for the code search and code documentation generation tasks.
Considering the limited space, we only give the top-2 results of the query for the Python programming language. As shown in Figure 5, the search results are very relevant to the query. Figure 6 and Figure 7 show the outputs of different models for the code documentation generation task. As we can see, CodeBERT performs better than all baselines.
"Computer Science"
] |
Enhancing eMMC using Multi-Stream technique
Multi-stream for SSDs is a concept where the host writes data with similar expected lifetimes to contiguous blocks of NAND memory. Consequently, this data has a higher chance of being invalidated together. During garbage collection (GC), there will be minimal valid pages to copy, resulting in improved endurance and a decreased write-amplification factor (WAF). Implementing multi-stream in eMMC devices presents challenges due to the lower DRAM and computational resources available. Maintenance of stream-related information requires increased DRAM and transfer buffer usage, as well as additional computation for GC of stream-related memory areas. In this paper, we examine an implementation of the multi-stream concept in eMMC despite its low resource constraints. We have experimented with an implementation that supports up to 4 streams, uses an additional ~108 bytes of DRAM and ~104 bytes of code section, and improves WAF by ~50% on an FIO (File Input/Output) benchmarking setup where lifetimes are simulated by multiple instances of FIO.
Introduction
In this paper we suggest how to use multi-stream with eMMC to improve device endurance. eMMC (Embedded Multimedia Card) is an embedded storage solution with an MMC interface, flash memory and controller, all in a BGA package. NAND-flash-based eMMC is widely used for high-performance applications such as mobile phones, smartphones, tablet computers, notebook computers, the automotive segment, etc. eMMC provides fast, scalable performance with interface speeds of up to 400 MB/s.
Background
An eMMC chip is made up of NAND flash memory, which is architecturally partitioned into blocks, and each block contains a fixed number of pages. A page consists of a data area to store user data and a spare area to store housekeeping data such as error correction codes and status flags. Due to the hardware characteristics of flash memory, data cannot be directly updated in their residing pages unless the blocks they belong to are erased first. As a result, invalid pages (pages holding out-of-date data) should not be read and should be recycled when the number of free pages is low. Because of this out-of-place update policy, there is a need to map the logical block address (LBA) to its corresponding physical block address (PBA) over flash memory. This mapping is done by the FTL (Flash Translation Layer) that runs on the controller. Each block can survive only a limited number of program/erase (P/E) cycles.
The flash memory space of an eMMC consists of blocks, and each block consists of a fixed number of pages. A block is the smallest unit for erase operations, while a page is the basic unit of reads and writes. A basic page size is 4KB, and the size of a block is 48MB, depending on the NAND. Current single-level cell NAND (SLC) generally supports 100,000 P/E cycles (depending on the flash technology used), and multi-level cell NAND (MLC) supports approximately one tenth of what SLC can achieve.
NAND flash memory constraints can be summarized as follows: (1) Write/erase granularity asymmetry: writes are performed on pages while erase operations are executed on blocks. (2) The erase-before-write rule: one of the most important constraints, as one cannot modify data in place. A costly erase operation must be performed before data can be modified if one needs to update data in the same location. (3) A limited number of write/erase (W/E) cycles: after the maximum number of erase cycles is reached, a given memory cell becomes unusable.
Design Idea
A stream is an abstraction of eMMC capacity allocation that stores a set of data with the same lifetime expectancy. Multi-stream is a concept whereby the data stream can be grouped into multiple streams, where all the data belonging to a particular stream have almost the same lifetime. The example below describes how streams can address the eMMC aging problem. The figure below gives an example where two NAND blocks (4 pages per block) have been filled up and new data are written to fill Block 2. These examples demonstrate that an eMMC's GC overheads depend not only on the current write pattern, but also on how data have already been placed in the eMMC.
An eMMC that implements the proposed multi-stream interface allows the host system to specify the lifetime of data in the form of stream-id.A multi-streamed eMMC allocates physical capacity carefully to place data in a stream together and not to mix data from different streams.
Figure 2. Application is passing stream id for eMMC, through system stack
All the data associated with a stream is expected to be invalidated at the same time, e.g., updated, trimmed, unmapped and de-allocated.
Implementation
The challenge in implementing multi-stream in eMMC is its limited computational resources, DRAM and transfer buffers. Transfer buffers are an area of memory that holds data in transit, either from the host or from the NAND. Data is moved to transfer buffers by way of DMA.
In any implementation of multi-stream, we need to maintain stream information for each block. This stream information needs to be maintained alongside the data for as long as the data lasts, until it is unmapped or overwritten. The stream information determines which blocks in NAND the data associated with it will be written to, both for host writes and for card-initiated writes such as GC. Maintenance of this information for each block, however, increases the on-disk metadata size as well as the complexity of GC.
Our approach to addressing these requirements is to limit maintenance of the stream information to host write operations only. Here, we allocate an 'Active Block' for each stream separately and assign stream-ids to them respectively. An Active Block is a NAND block in the "write operation in progress" state. Incoming data is written into these active blocks according to its stream-id. A transfer buffer is associated with each active block to hold the data in transit. Once an active block is completely written, we add it to the list of data blocks. The data structure for an active block is part of the global context of the FTL and contains the block index, page offset, program level, etc., a total of 27 bytes, which are needed for writing data into NAND pages. The global context information is written into NAND periodically, so that in a sudden power-off scenario we can recover to the previous state.
As the number of stream-ids increases, there is a corresponding increase in DRAM usage by the same number of active block data structures, as well as in the number of transfer buffers required. This limits the number of stream-ids supported by the FTL. Based on the number of active blocks we can keep in memory, our implementation sets the number of supported streams to 4. During GC, stream-id information is ignored: data belonging to all stream-ids is considered a single pool of data blocks, and the GC victim block is chosen as if nothing had changed because of multi-stream. The same mechanism is used for the free block list as well; hence, irrespective of differences in the total amount of data belonging to any stream, we can allocate blocks as per the existing policy. Since data with similar lifetimes resides in a NAND block, it is expected that all the data will be erased at almost the same time, and hence the GC operation will have minimal overhead for copying valid pages to a new NAND block. However, this is not ideal: it is possible in this implementation that, during GC, data from different streams will get written to the same GC destination block, decreasing the efficiency of the multi-stream implementation.
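The effect described here can be illustrated with a toy simulation, shown below, that packs a synthetic workload into blocks either through one shared active block or through one active block per stream and then counts the valid pages GC would have to copy when one stream's data is invalidated. The block size and workload are illustrative and do not reflect the firmware's real geometry.

```python
# Toy simulation (not the authors' firmware) of per-stream active blocks vs. a shared block.
import random

PAGES_PER_BLOCK = 4

def fill_blocks(stream_of_page, separate_streams):
    """Pack page writes into blocks, either per-stream or in plain arrival order."""
    buckets, blocks = {}, []
    for stream in stream_of_page:
        key = stream if separate_streams else 0  # one active block per stream, or one shared
        buckets.setdefault(key, []).append(stream)
        if len(buckets[key]) == PAGES_PER_BLOCK:
            blocks.append(buckets.pop(key))
    return blocks

def copies_when_stream_dies(blocks, dead_stream):
    """Valid pages GC must relocate from blocks that contain the invalidated stream."""
    return sum(sum(1 for s in blk if s != dead_stream)
               for blk in blocks if dead_stream in blk)

random.seed(0)
workload = [random.randrange(3) for _ in range(120)]  # 3 streams, 120 page writes
for separate in (False, True):
    packed = fill_blocks(workload, separate_streams=separate)
    print("multi-stream" if separate else "single-stream",
          copies_when_stream_dies(packed, dead_stream=0))
```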
To improve efficiency, the application or host needs to use stream-ids judiciously, either by applying application-specific intelligence to know which data will have similar lifetimes or by applying machine learning to classify data into different stream-ids.
To associate data with a stream-id, JEDEC eMMC SPEC 5.1 has no support. Hence, we use the context-id field (4 bits: B25-B28) of CMD23 for passing stream-id information from the host to the device during write operations. To distinguish between context-id and stream-id, we use a reserved field in the EXT_CSD register (byte 133). With this interface, the host sends the stream-id to the device; this field can support up to 15 streams from an interface point of view, where id "0" denotes normal data that has not been assigned any stream-id.
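A small sketch of how a host driver might pack the stream-id into bits 25-28 of the CMD23 argument is given below; the block-count encoding and helper names are illustrative assumptions, and real hosts would also set the other CMD23 flag bits defined by the specification.

```python
# Packing a stream-id into bits 25-28 of a 32-bit CMD23 argument (illustrative).
STREAM_ID_SHIFT = 25
STREAM_ID_MASK = 0xF << STREAM_ID_SHIFT      # bits 25-28

def pack_cmd23_arg(block_count: int, stream_id: int) -> int:
    assert 0 <= stream_id <= 15, "4-bit field: ids 1-15, 0 means no stream"
    return (block_count & 0xFFFF) | ((stream_id << STREAM_ID_SHIFT) & STREAM_ID_MASK)

def unpack_stream_id(arg: int) -> int:
    return (arg & STREAM_ID_MASK) >> STREAM_ID_SHIFT

arg = pack_cmd23_arg(block_count=8, stream_id=3)
print(hex(arg), unpack_stream_id(arg))       # 0x6000008 3
```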
Limitations in eMMC
The performance of eMMC also depends on the fill factor of the device, which in turn depends on firmware design policies and implementation. GC operations are high when the device is almost full of data. This is mainly because all the data blocks will have many valid data pages and few invalid pages, and the number of over-provisioned blocks affects this ratio. Active blocks use these over-provisioned blocks, and hence increasing the number of streams degrades worst-case performance. The fill factor is tracked in the global context, and when it is above 90% we can de-activate the multi-stream feature, through which worst-case performance is kept intact.
The eMMC device uses transfer buffers to cache the data written by the host, which accumulate random pages and perform an interleaved program to NAND to boost performance. In the case of multi-stream, we have to accumulate the data in separate buffers attached to each stream-id, since each transfer buffer is attached to an active block. Having more stream-ids hinders performance by creating a buffer crunch for cache operations. In our experiments, we could observe a gain in performance up to 4 stream-ids, after which buffers are insufficient for read-write operations.
Benchmarking Environment
Benchmarking numbers were taken by running FIO for both the multi-streamed eMMC and the eMMC without multi-stream. For multi-stream support, FIO was run with three different stream-ids.
Real Time Use cases
The benchmarking results show that an intuitive data-to-stream mapping can lead to consistent latency and better NAND flash lifetime on the multi-streamed eMMC. We further believe that many applications and use cases can obtain similarly large benefits using multi-streamed eMMC. Consider log-based database management systems such as Cassandra, SQLite4, RocksDB and many more. Similarly, some multi-head log-structured file systems such as F2FS maintain data separation based on update frequencies. These are applications that explicitly manage data streams and orient their IO to be sequential.
Flash Friendly File System
Flash Friendly File System (F2FS) supports hot and cold data separation in the file system. At runtime, F2FS manages six active logs inside the "Main Area": hot/warm/cold for node and data, depending on their update frequencies. The detailed definition of hotness in F2FS is listed below:
Figure 3. Multi-streamed eMMC writes data into a related NAND block according to stream ID regardless of LBA. Three streams are introduced to store different types of host system data.
Figure 4. Mapping data with the same lifetime to corresponding stream-id blocks.
Table 2. Block Allocation policy | 2,573.2 | 2019-03-15T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Japanese Encephalitis Virus and Schizophyllum commune Co-Infection in a Harbor Seal in Japan
Simple Summary Japanese encephalitis virus (JEV) infections in seals are limited, with only two cases reported. Here, we report a case of meningoencephalitis and bronchopneumonia caused by the co-infection of JEV and Schizophyllum commune in a dead seal (Phoca vitulina) housed in an aquarium in Japan. The JEV isolate from the seal was classified as genotype GIb based on E gene sequences, which included recent Japanese human and mosquito isolates. Abstract The Japanese encephalitis virus (JEV), a mosquito-borne flavivirus, has a wide host range, extending from pigs and ardeid birds to opportunistic dead-end hosts, such as humans and horses. However, JEV encephalitis infections in aquatic mammals are rare, with only two cases in seals reported to date. Here, we report a lethal case of JEV and Schizophyllum commune co-infection in an aquarium-housed harbor seal in Japan. We isolated JEV from the brain of the dead seal and characterized its phylogeny and pathogenicity in mice. The virus isolate from the seal was classified as genotype GIb, which aligns with recent Japanese human and mosquito isolates as well as other seal viruses detected in China and Korea, and does not exhibit a unique sequence trait distinct from that of human and mosquito strains. We demonstrated that the seal isolate is pathogenic to mice and causes neuronal symptoms. These data suggest that seals should be considered a susceptible dead-end host for circulating JEV in natural settings.
Introduction
Japanese encephalitis (JE) is an acute, severe viral encephalitis associated with high mortality and morbidity rates.Annually, over 60,000 JE cases are reported across Asian countries, including the Southeast Asia and West Pacific regions [1].Although >99% of human infections are asymptomatic, the mortality rate reaches approximately 30% when symptoms manifest [2][3][4].
The etiological agent of JE, the Japanese encephalitis virus (JEV), is a member of the family Flaviviridae and the genus Orthoflavivirus.JEV is maintained throughout the Vet.Sci.2024, 11, 215 2 of 12 transmission cycle between the amplifying vertebrate hosts and Culex spp.mosquitoes.Pigs and ardeid birds develop high-titer viremia and serve as the amplifying hosts of JEV [5][6][7][8][9].Although JEV does not cause encephalitis in adult pigs, it triggers reproductive disorders such as stillbirth, abortion, and congenital malformations in the fetus [10,11], causing significant economic losses to the livestock industry.Considering there is no specific treatment for JE, vaccination is the only way to effectively control JEV infection in both humans and animals [12,13].
The serological evidence of JEV infection has been reported in a range of domestic and wildlife species. Although most JEV infections are subclinical, horses often develop fatal encephalitis [14,15]. Similarly, JEV infection in cattle is common in endemic regions, and infection during pregnancy is frequently associated with miscarriage [16,17]. Horses and cattle are considered dead-end hosts because of their low-titer viremia [18,19]. Additionally, JEV genome or JEV-specific antibodies have been detected in various forms of terrestrial wildlife, including bats, monkeys (Macaca fuscata), wild boars (Sus scrofa), feral raccoons (Procyon lotor), and raccoon dogs (Nyctereutes procyonoides) [20][21][22][23]. The pathophysiology of JEV infection in wild animals and their ecological role remain largely unknown. In aquatic animals, JEV infection is unlikely to occur in their natural habitats because of limited exposure to mosquito vectors. However, they may become susceptible to JEV in artificial settings where they are exposed to mosquitoes. Notably, two JE cases involving three animals were reported in aquarium-kept speckled seals (Pusa hispida: formerly Phoca hispida) in China [24] and a spotted seal (Phoca largha) in the Republic of Korea [25] in 2017.
Schizophyllum commune is a common basidiomycete fungus found on all continents except Antarctica.It typically grows on decaying vegetation such as rotten wood.Although rare, it occasionally causes diseases in humans.As of 2013, 71 cases of the disease have been reported globally.However, the actual number of cases may be considerably higher because it is often misdiagnosed as aspergillosis.Human infection with S. commune often manifests as bronchopulmonary diseases and sinusitis.Recently, the involvement of S. commune in ophthalmic disease has also been indicated [26,27].Reports of S. commune infection in animals are even rarer, with only four cases documented in dogs [28][29][30][31] and one case in seals [32] to date.The pathological manifestations in dogs differ from those in humans, which include three cases of osteomyelitis and one case of subcutaneous granuloma.Notably, a harbor seal (Phoca vitulina) in a zoo in Japan died after presenting with corneal opacity and labored breathing.Severe necrotizing and granulomatous inflammation with fungal organisms were observed in the eyes, lungs, heart, and lymph nodes [32].
In the present study, we report a lethal case of an aquarium-kept harbor seal co-infected with JEV and S. commune. The aim of our study was to determine the cause of death of this animal through a post-mortem examination.
Pathological Examination
For the necropsy, tissue samples were collected, fixed in 10% phosphate-buffered formalin, and embedded in paraffin. Formalin-fixed paraffin-embedded (FFPE) tissues were sliced into 4 µm thick sections and stained with hematoxylin and eosin. Selected tissue sections of the lungs and tracheobronchial lymph nodes were stained with Periodic acid-Schiff (PAS) and Grocott's methenamine silver (GMS) stains. An immunohistochemistry (IHC) of the brain tissue sections was performed using the primary antibodies listed in Table 1. Deparaffinized tissue sections were immersed in 10% hydrogen peroxide (H2O2) in methanol at 23 °C for 5 min. For the JEV envelope protein, antigen retrieval was performed via an autoclave pretreatment conducted at 121 °C for 10 min using a Target Retrieval Solution, at a pH of 9.0 (Dako, Santa Clara, CA, USA). Following this, the tissue sections were incubated in 8% skim milk in Tris-buffered saline at 37 °C for 40 min to prevent non-specific reactions. Each tissue section was subsequently incubated with the primary antibody at 4 °C overnight. An anti-rabbit IgG polymer labeled with horseradish peroxidase (Envision, Agilent Technology, Santa Clara, CA, USA) was applied at 37 min and tissue sections were then rinsed with Tris-buffered saline. The reactions were visualized with 0.05% 3-3′-diaminobenzidine containing 0.03% H2O2 in Tris-hydrochloric acid buffer, followed by a counterstain with Mayer's hematoxylin. For IHC of JEV, a brain tissue section from a JEV-infected pig was used as the positive control.
Virus Isolation
A brain sample from the dead harbor seal, frozen at −80 °C, was thawed and finely chopped into small pieces. The pieces were then transferred into a 2 mL tube along with 5 mm stainless steel beads (Qiagen). Subsequently, DMEM with 1% FBS was added to the tube to obtain a final brain homogenate concentration of 10%. The tube was then processed using a TissueLyser II (Qiagen). The resultant homogenate was filtered through a 450 nm membrane filter and diluted 1:100 with DMEM containing 1% FBS. BHK21 cells (70% confluent) in a 6-well plate were inoculated with 200 µL of the diluted homogenate and incubated for 60 min. The cells were then washed once with DMEM containing 1% FBS, overlaid with 2 mL of the same medium, and incubated until cytopathic effects (CPEs) were observed. The culture supernatant (designated P1) was harvested on day 4, when the CPEs became clear. The virus was cloned via two rounds of limiting dilution in BHK21 cells (P2 and P3). The viral stock was then divided into aliquots (P4).
Plaque Formation
BHK21, Vero, and HmLu-1 cell monolayers were tested for plaque formation. The cells were transferred to 6-well plates and incubated with 200 µL of viral inoculum for 1 h at 37 °C. After removing the inoculum, the cells were overlaid with DMEM supplemented with 2% FBS and 0.8% SeaKem GTG agarose (Lonza, Chiba, Japan). Subsequently, the cells were incubated at 37 °C for 3-7 days until the plaques grew to a countable size. Cells were fixed in formalin and stained with crystal violet for plaque visualization.
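The plaque counts and dilution series themselves are not reported in this section; as a reminder of the standard back-calculation used to turn counted plaques into a titer, a minimal sketch is shown below (the example numbers are hypothetical and are not measurements from this study):

```python
def plaque_titer_pfu_per_ml(plaque_count, dilution, inoculum_ml):
    """Standard plaque-assay arithmetic: titer (PFU/mL) = plaques / (dilution x inoculated volume)."""
    return plaque_count / (dilution * inoculum_ml)

# Hypothetical example: 45 plaques in a well inoculated with 0.2 mL of a 10^-4 dilution.
print(plaque_titer_pfu_per_ml(45, dilution=1e-4, inoculum_ml=0.2))  # 2,250,000 PFU/mL
```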
Mouse Study
Viral titers (plaque forming units, PFU) were determined using plaque assays in BHK21 cells. Four-week-old female ICR mice (Japan SLC, Shizuoka, Japan) were intraperitoneally injected with 10 (n = 2) or 10 3 PFU (n = 2) of the seal isolate (JEV/Seal/UT1/2020) or 10 4 PFU of the live vaccine m and at strains (each n = 4). Following injection, the body weights of the mice were recorded, and the mice were examined for any signs of disease daily for 14 days. The mice were euthanized if their body weight decreased to less than or equal to 80% of their initial weight. In another set of experiments, mice (n = 3) were intramuscularly injected with various titers of JEV/Seal/UT1/2020, and the viral titers in brain samples were subsequently measured in euthanized mice. Furthermore, the brains were examined histologically according to the procedure described above.
Sequencing
Viral particles were purified from the supernatants of infected cell cultures via ultracentrifugation at 10,986× g for 3 h using a P32ST rotor (Himac, Tokyo, Japan) with a 20% sucrose cushion. Viral RNA was extracted from the purified virus using ISOGEN-LS (Nippon Gene, Tokyo, Japan), following the manufacturer's instructions. The whole-genome sequence of the seal isolate was determined using next-generation sequencing and deposited in GenBank under accession number LC687612. For the live vaccine strains, RNAs were reverse-transcribed using ReverTra Ace (Toyobo) with random 6-mers. Two overlapping segments comprising the envelope (E) gene were PCR-amplified with KOD FX Neo (Toyobo). The amplified products were purified using agarose gel electrophoresis and subjected to Sanger sequencing. We deposited the E gene sequences for the at and m strains in GenBank under the accession numbers LC687613 and LC701057, respectively. The primer sequences for PCR amplification are available upon request.
Phylogenetic Analysis
The phylogenetic tree of E gene sequences of JEV strains since 2015 was constructed using MEGA version 7 [37] employing maximum likelihood analysis with 1000 bootstraps.Vaccine and seal strains were incorporated into the dataset.Additionally, one representative strain of a known phylogeny was chosen to ensure the inclusion of every genotype or sub-genotype.
The Case
A wild harbor seal originally captured in the sea area of Hokkaido, northern Japan, in 2016 was later transferred to an aquarium in Chiba, eastern Japan.In April 2019, the animal developed anorexia.Blood tests revealed a high white blood cell (WBC) count of 43,500/µL, a β-D-glucan value of 45.6 pg/mL, and an endotoxin value of 3.7 pg/mL, indicating a potential fungal and/or bacterial infection.Following treatment with the antifungal drug itraconazole, the seal's appetite improved.Two weeks after the recovery in September 2019, the animal developed a fever (38.8 °C) with an elevated WBC count of 23,700/µL.Another antifungal drug, voriconazole, appeared to be effective and was administered until 15 September 2020.However, on 1 October the seal once again showed a high fever (39.1 °C) and an elevated β-D-glucan value of 112 pg/mL.Although a fungal infection was suspected, the Aspergillus antigen test result was negative, suggesting the involvement of another fungal pathogen.Voriconazole treatment was resumed, but unfortunately, the animal died on 7 October.No apparent neuronal symptoms were observed before death.No other animals in the same pool exhibited any signs of illness.
Pathological Findings
Upon necropsy, nodular lesions were observed in the lungs: a single large nodule (10 cm in diameter) in the left cranial lobe, a single large nodule (6 cm in diameter) in the right cranial lobe, and multiple small nodules (<3 cm in diameter) in both caudal lobes. The nodules were firm and well demarcated (Figure 1A). Yellowish-white material was found in the center of the nodules upon sectioning. In addition, the tracheobronchial lymph nodes were enlarged (6.5 × 3.5 cm), and the cut surface showed multiple nodules similar to those in the lung. The meningeal blood vessels were severely congested (Figure 1B). Histologically, the pulmonary nodules were characterized by multifocal granulomatous inflammation with fibrosis, leading to the replacement of the alveolar structures (Figure 2A). Within these nodules, numerous epithelioid macrophages, including multinucleated giant cells, infiltrated and frequently contained phagocytosed fungi. Staining with PAS and GMS clarified the characteristic morphology of the fungi: filamentous septate hyphae (3-5 µm wide) with Y-shaped branching (Figure 2B). These findings were also observed in the tracheobronchial lymph nodes (Figure 2C). Diffuse meningeal congestion and perivascular cuffing were observed in the cerebrum (Figure 2D). The inflammatory cells included numerous mononuclear cells and small numbers of neutrophils and eosinophils. Thrombus formation was observed in several blood vessels. Glial nodules and neuronal necrosis were observed in the cortex. Necrotic neurons had eosinophilic cytoplasm and nuclear debris, and were often surrounded by phagocytes (i.e., neuronophagia). The phagocytes consisted mainly of mononuclear cells, occasionally accompanied by neutrophils immunopositive for myeloperoxidase (Figure 2E). These findings were also observed in the olfactory bulb, thalamus, midbrain, and medulla oblongata. Upon IHC, the JEV envelope protein was detected within the cytoplasm of neurons of the olfactory bulb, piriform cortex, entorhinal cortex, nucleus of the thalamus, and midbrain (Figure 2F).
Molecular Identification of Pathogens
The panfungal PCR successfully amplified a DNA fragment of approximately 400 bp from lung and tracheobronchial lymph node samples.The sequenced PCR products exhibited >99% homology with S. commune reference sequence (GenBank accession number: MT466518.1).The RT-PCR for the detection of mosquito-borne flaviviruses amplified a DNA fragment of approximately 820 bp from the cerebrum sample, and the sequenced PCR products exhibited >99% homology with the JEV reference sequence (GenBank accession number: MK095782.1).
Virus Isolation
The cell cultures were inoculated with brain homogenate samples to isolate viruses.In BHK21 cells, CPE-containing round cells were observed locally on day 2 post-inoculation, and by day 4, an expansion was noted.In Vero cells, CPE was observed as cell detachment on days 6-7.In contrast, no apparent CPE was observed in C6/36 cells until day 7, although the virus was detected through the appearance of CPE upon the inoculation of the supernatant into BHK21 cells.These findings revealed the successful isolation of seal-origin JEV.We collected the supernatant of the infected BHK21 cells and designated the strain as JEV/seal/UT1/2020.
Sequence Analysis
We determined the E gene sequence of JEV/seal/UT1/2020 and conducted a phylogenetic analysis.The analysis showed that the isolate belonged to the genotype GIb, consistent with the most recent Japanese isolates (Figure 3).A BLAST search identified strains ZJ-YW-11-15 [38] and LYG-2 [39] as the closest match, sharing a 99.3% identity, both isolated from Culex tritaeniorhynchus in China in 2015 and 2016.Moreover, two seal isolates in China and Korea also belonged to the genotype GIb, whereas the two live vaccine m and at strains and one inactivated vaccine seed Beijing-1 strain used in Japan were classified as genotype III.A previous study indicated that eight amino acid residues of the E protein-107L, 138E, 176I, 177T, 244E, 264Q, 279K, 315A, and 439K-are linked to viral neurovirulence, of which 138E is the most critical residue [40].All these eight amino acids were conserved in JEV/seal/UT1/2020, whereas live vaccine strains possessed possible attenuated mutations at E138K and I176T, suggesting that the virus obtained from the seal might exhibit a virulent phenotype.
Figure 3. Phylogenetic tree based on E gene sequence. Genotypes (Ia, Ib, II, III, IV, and V) are represented on the right. The scale bar refers to the phylogenetic distance of 0.05 nucleotide substitutions per site. The West Nile virus is adopted as an outgroup. The seal virus JEV/seal/UT1/2020 isolated in this study is shown within the red square. bo: bovine; mosq: mosquito; hu: human; sw: swine; vac: vaccine.
Mouse Study
Mouse experiments were performed to assess the pathogenicity of JEV/seal/UT1/2020.All four mice injected with the seal virus showed a significant reduction in body weight compared to those injected with the vaccine strains (Figure 4).Sick mice infected with the seal virus presented with symptoms indicative of neuronal disorders, such as dullness, abnormal posture, and excessive grooming.In addition, considerable titers of viruses (1.6 × 10 4 − 6.6 × 10 6 PFU/g) were recovered from the cerebrum samples of all three mice infected with the seal virus.The viruses were also recovered from the cerebellum (2.0 × 10 2 PFU/g) and medulla oblongata/pons (1.6 × 10 4 PFU/g) samples of an infected mouse.
The histological examination of the mice inoculated with JEV/seal/UT1/2020 revealed characteristic findings consistent with those of the brain lesions in seals. Extensive neuronal loss was observed in the external pyramidal layer of the cerebral cortex (Figure 5A). The residual neurons had eosinophilic cytoplasm and nuclear debris, indicating neuronal necrosis. JEV antigens were detected in cortical neurons via IHC (Figure 5B).
Discussion
To our knowledge, this is the first reported clinical case of co-infection with JEV and S. commune in a harbor seal leading to complex disease outcomes, including respiratory and systemic symptoms, and eventually resulting in death.In terms of S. commune infection in seals, this is the second report of a lethal case; notably, the first case occurred in Tokyo, Japan [32].The abundance of S. commune in the environment and the absence of the disease in other animals in the same cage suggested that the infection was opportunistic, probably due to the animal's compromised immune status.As for JEV, there have been two reports of infection in seals, where viruses were isolated from the brains of dead animals in China [24] and Korea [25], confirming that seals are susceptible hosts for JEV infection.Interestingly, the seal in Korea was co-infected with Dirofilaria immitis.These findings suggest that the seal is an intrinsic dead-end host for JEV, although disease progression may vary depending on existing risk factors, such as immune status, which could be compromised by co-infections with other pathogens.
In this case, the mode of JEV transmission to the seal was undetermined.Generally, Culex mosquitoes move within a range of 250 m to 1 km [41,42], which can be extended to 8.4 km with the assistance of wind [43].However, there was no pig farm within the 10 km radius of the aquarium.The surrounding area of the aquarium comprises a river, rice fields, and forests inhabited by wading birds and wild boars, suggesting the potential role of wild animals and vector mosquitoes in the JEV transmission to the seal that was occasionally housed in the open pool of the aquarium.To address this possibility, a surveillance study around the aquarium is warranted.
Among the three mammalian cell cultures tested, BHK21 cells were the most suitable for the isolation and growth of JEV/seal/UT1/2020.In BHK21 cells, the virus formed visible plaques after day 2, with the plaques becoming countable at 2-3 mm in diameter by day 3 post-infection.In contrast, in HmLu-1 cells, clear plaques of 1-2 mm were observed by day 4.In Vero cells, the plaques formed were less distinct compared to those in BHK21 or HmLu-1 cells in terms of color contrast and the sharpness of the edges.In contrast, the two live vaccine strains formed clear plaques 1-2 mm in diameter on days 7-8 in Vero cells, whereas their plaques on BHK21 cells were distorted in shape with blurred edges, albeit still countable.These findings suggest that BHK21 cells may serve as valuable tools for detecting and isolating JEVs.
JEVs are phylogenetically classified into five genotypes, GI to GV, based on their E gene nucleotide sequences [44].Southeast Asia is the only region where all five genotypes have been found and is considered the original epicenter of JEV [45].GIII was the dominant lineage in temperate and subtropical Asia until the 1990s and 2000s when GI gradually replaced it [46][47][48][49].GI is further divided into two sub-genotypes: GIa and GIb.The geographical distribution of GIa is limited to mainland Southeast Asia, Tibet, and Australia, while GIb exhibits a broader geographical distribution across temperate Asia [50,51].Our seal isolate, JEV/seal/UT1/2020, belongs to genotype GIb, which includes the other two seal strains as well as recent human and mosquito viruses in Japan.The transmissibility of JEV genotypes other than GIb to seals is unknown because there is no unique amino acid substitution in the E protein shared among the three seal strains.
All vaccine strains used in Japan, including the two live vaccines, m and at strains for pigs, and the inactivated vaccine Beijing-1 strain for humans, pigs, and horses, were classified as genotype III because these vaccines were developed when GIII was the dominant lineage in Japan.Although JEVs are known to exist as a single serotype, previous studies indicated that cross-protective immunity is only partial between genotypes GI and GIII [52,53].Therefore, the development of an alternative vaccine using a recent GIb strain is expected to provide better protection against JEV infection in humans and animals, including seals, in Japan.
Conclusions
Here, we report a lethal co-infection of S. commune and JEV in a harbor seal.Granulomatous pneumonia, lymphadenitis caused by S. commune infection, and meningoencephalitis caused by JEV infection were confirmed.Although seals are not well recognized as being susceptible to JEV infection, this is the third case of JEV infection in seals, supporting the notion that seals are dead-end hosts for JEV infection and that vaccinating seals against this infection may be warranted for zoos and aquariums.The isolated JEV strain belonged to the genotype GIb, consistent with recent isolates from Japan.The currently available JEV vaccines were developed before the genotype shift that occurred during the 1990s; therefore, they are exclusively effective against genotype GIII.Consequently, it is necessary to re-examine the efficacy of the JEV vaccine against circulating strains.
Figure 1 .
Figure 1. Gross findings of a harbor seal co-infected with Schizophyllum commune and Japanese encephalitis virus. (A) The lungs exhibited multiple whitish nodules, and the tracheobronchial lymph nodes appeared enlarged. Upon sectioning, a yellowish-white material was observed at the center of the nodule (inset). (B) The congestion of meningeal vessels was observed in the brain.
Figure 2 .
Figure 2. Histological findings of a harbor seal co-infected with Schizophyllum commune and Japanese encephalitis virus (JEV).(A) Lung.Granulomatous inflammation with fibrosis.Macrophages, including multinucleated giant cells, surrounded fungal organisms (inset).Stained with hematoxylin and eosin (HE).(B) Lung.Fungi phagocytosed by multinucleate giant cells had septate hyphae with Y-shaped branching.Stained with PAS.(C) Tracheobronchial lymph node.Fungi were stained black and displayed morphological features similar to those observed in the lung (inset).Stained with Grocott's methenamine silver.(D) Cerebrum.Congestion and inflammatory cell infiltration in the meninges and perivascular cuffing in the cortex.Stained with HE. (E) Medulla oblongata.A necrotic neuron surrounded by phagocytes including neutrophils (i.e., neuronophagia) in the magnocellular reticular nucleus.Some phagocytes were immunopositive for myeloperoxidase (inset; dorsal motor nucleus of the vagus nerve).Stained with HE. (F) Cerebrum.JEV antigens detected in the cytoplasm of cortical neurons.Note that the antigens were distributed along the neurites.Staining was observed using immunohistochemistry.
Figure 4 .
Figure 4. Pathogenicity of the seal virus in mice.Body weights were monitored during a two-week period following intramuscular inoculation with JEV/seal/UT1/2020 and live vaccine strains (m and at) and represented as change percentages compared with initial body weights (day 0).For the seal virus, two different titers (10 and 10 3 PFU) were used for inoculation.For the vaccine strains, a viral titer of 10 4 PFU was used.
Figure 5 .
Figure 5. Histological findings of mice inoculated with JEV/seal/UT1/2020.(A) Cerebrum.Loss of neurons due to neuronal necrosis in the external pyramidal layer (arrowheads).Necrotic neurons were eosinophilic and contained nuclear debris (inset; higher magnification).Stained with hematoxylin and eosin.(B) Cerebrum.Japanese encephalitis virus antigens were diffusely distributed throughout the cortex.Strong immunopositivity was observed in the cytoplasm of neurons (inset; higher magnification).Staining was observed using immunohistochemistry.
Table 1 .
Primary antibodies used for immunohistochemistry.
An Experimental and Systematic Insight into the Temperature Sensitivity for a 0.15-µm Gate-Length HEMT Based on the GaN Technology
Presently, growing attention is being given to the analysis of the impact of the ambient temperature on GaN HEMT performance. The present article is aimed at investigating both the DC and microwave characteristics of a GaN-based HEMT versus the ambient temperature using measured data, an equivalent-circuit model, and a sensitivity-based analysis. The tested device is a 0.15-μm ultra-short gate-length AlGaN/GaN HEMT with a gate width of 200 μm. The interdigitated layout of this device is based on four fingers, each with a length of 50 μm. The scattering parameters are measured from 45 MHz to 50 GHz with the ambient temperature varied from −40 °C to 150 °C. A systematic study of the temperature-dependent performance is carried out by means of a sensitivity-based analysis. The achieved findings show that, by heating the transistor, the DC and microwave performance are degraded, owing to the degradation in the electron transport properties.
Introduction
As is well known, high electron-mobility transistors (HEMTs) based on the aluminum gallium nitride/gallium nitride (AlGaN/GaN) heterojunction are outstanding candidates for high-frequency, high-power, and high-temperature applications, owing to the unique physical properties of the GaN material. Throughout the years, many studies have been dedicated to the investigation of how the temperature impacts the performance of GaN-based HEMT devices. To this end, both electro-thermal simulations [1][2][3][4][5][6] and measurement-based analyses [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26] have been developed. Although electro-thermal device simulation is undoubtedly a very powerful and low-cost tool to deeply understand the underlying physics behind the operation of the transistor in order to improve the device fabrication, the measurement-based investigation is a step of crucial importance for achieving a reliable validation of a transistor technology prior to its use in real applications. Typically, measurements are coupled with the extraction of a small-signal equivalent-circuit model, which can be used as a cornerstone for building both large-signal [27][28][29] and noise [30][31][32] transistor models that are essential for a successful design of microwave high-power [33][34][35][36] and low-noise amplifiers [36][37][38]. Compared to the effective modeling approach based on using artificial neural networks (ANNs) [39,40], the equivalent-circuit model allows a physically meaningful description [41][42][43], thereby enabling the development of a sensitivity-based investigation.
To gain a comprehensive insight, the present article focuses on the impact of the ambient temperature (T a ) on the behavior of an on-wafer GaN HEMT using DC and microwave measurements coupled with a small-signal equivalent-circuit model and a sensitivity-based analysis. The device under test (DUT) is an ultra-short gate-length HEMT based on an AlGaN/GaN heterojunction grown on a silicon carbide (SiC) substrate. The DUT has a gate length of 0.15 µm and a gate width of 200 µm. The interdigitated layout consists of four fingers, each being 50-µm long. The DC characteristics and the scattering parameters from 45 MHz to 50 GHz are measured at nine different ambient temperature conditions by both cooling and heating the device, spanning the −40 • C to 150 • C temperature range. The measured data are used for equivalent-circuit extraction and sensitivity-based analysis, enabling one to assess the impact of the variation in the ambient temperature on the transistor performance. Basically, the main goal of this work is to extend the results of a previous article focused on the same DUT [15] by developing a sensitivity-based analysis, thus enabling a quantitative and systematic investigation of the effects of changes in the ambient temperature on the DC and microwave characteristics. Nevertheless, it should be pointed out that the obtained results are not of general validity, as they may strongly depend on the selected device and operating bias condition.
The paper is structured with the following sections. Section 2 describes the DUT and the experimental characterization, Section 3 reports and discusses the achieved findings, and Section 4 presents the conclusions.
Device under Test and Experimental Details
The metal organic chemical vapor deposition (MOCVD) technique is used to grow the Al0.253Ga0.747N/GaN heterostructure on a 400-µm-thick SiC substrate. The schematic cross-sectional view and the photograph of the tested GaN HEMT are illustrated in Figure 1. The epitaxial layer structure of the device is made up of a 25-nm-thick undoped (UD) AlGaN barrier and a 1.5-µm-thick UD GaN buffer layer. A 300-nm-thick graded AlN relaxation layer was grown between the GaN buffer and the SiC substrate. The device was capped with a 5-nm-thick n+-GaN layer. The evaporation process was employed to create the source and drain ohmic contacts (Ti/Al/Ni/Au with thicknesses of 12/200/40/100 nm, respectively), followed by 30 s of thermal annealing at 900 °C. The Schottky mushroom-shaped gate was formed through Pt/Ti/Pt/Au evaporation and the subsequent lift-off process. Finally, a Si3N4 layer with a thickness of 240 nm was deposited to passivate the device. The gate length of the tested GaN device is 0.15 µm. The interdigitated architecture of the device is based on the parallel connection of four 50-µm long fingers, resulting in a total gate width of 200 µm. The source-to-gate distance (LSG) and the gate-to-drain distance (LGD) are 1 µm and 2.85 µm, respectively. The DUT was fabricated at the University of Lille, France.
The DC characteristics and the S-parameters were measured at nine ambient temperatures spanning −40 °C to 150 °C, including 100 °C, 125 °C, and 150 °C. The analysis is performed using the DC characteristics and the S-parameters at a bias point in the saturation region: Vds = 15 V and Vgs = −5 V. The device parameters were measured with a thermal probe station connected to an HP8510C vector network analyzer (VNA) and with the aid of commercially available software to guarantee that the data are free of human error. The DC and frequency-dependent measurements were performed at each temperature after the sample reached a uniform steady-state temperature. Figure 2 shows the measurement process, model extraction, and sensitivity-based analysis.
Experimental Results and Systematic Analysis
The systematic sensitivity-based analysis at the selected bias voltages is accomplished using the dimensionless relative sensitivity of each parameter (RSP) with respect to Ta, which is calculated by normalizing the relative change in P to the relative change in Ta:

RSP = [(P − P0)/P0] / [(Ta − Ta0)/Ta0]    (1)

where P0 is the value of the selected parameter P at the reference temperature (Ta0) of 25 °C. The remainder of this section is divided into two subsections: the first part is focused on the impact of the ambient temperature on the DC characteristics, whereas the second part is dedicated to the effects of the variations in the ambient temperature on the microwave performance.
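As a minimal numerical illustration of Equation (1) (temperatures taken in °C with the 25 °C reference, as stated above; the parameter values below are hypothetical placeholders rather than measured data), the relative sensitivity can be computed as follows:

```python
def relative_sensitivity(p, p0, ta, ta0=25.0):
    """Equation (1): RS_P = ((P - P0)/P0) / ((Ta - Ta0)/Ta0).

    Temperatures are in degrees Celsius with the 25 degC reference, following the text.
    """
    return ((p - p0) / p0) / ((ta - ta0) / ta0)

# Hypothetical example: a drain current falling from 100 mA at 25 degC to 80 mA at 150 degC.
print(relative_sensitivity(p=80.0, p0=100.0, ta=150.0))  # -0.04 (dimensionless, negative)
```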
Sensitivity-Based Analysis of DC Characteristics
The DC output characteristics for the tested GaN HEMT at Vgs = −4 V and −5 V under different temperature conditions are illustrated in Figure 3. As can be clearly observed, Ids is considerably reduced with increasing temperature. This might be attributed to the degradation in the carrier transport properties as a consequence of the enhancement of the phonon-scattering processes at higher temperatures. Analogously, the reduction in Ids at higher temperatures can be observed by plotting the DC transcharacteristics of the studied device at Vds = 15 V (see Figure 4). A similar degradation can be seen in the transconductance by plotting the gm-Vgs curves at Vds = 15 V (see Figure 5a). As a matter of fact, by heating the device, the transconductance is significantly reduced. However, it should be underlined that a higher temperature leads to a wider and flatter curve of gm versus Vgs, thus implying a better linearity. Over the years, many studies have been devoted to improving the flatness of gm versus Vgs, in order to yield an improved transistor linearity and hence a more linear power amplifier [44,45]. For the sake of completeness, the behavior of gm is plotted also as a function of Ids (see Figure 5b). At the selected bias point, Vds = 15 V and Vgs = −5 V, both Ids and gm are significantly degraded when the temperature is raised, as illustrated in Figure 6a. The interesting feature found in the gm-Vgs curves of Figure 5a is that, by heating the device, the peak value of gm is not only greatly reduced but also shifted toward less negative values of Vgs. As shown in Figure 6b, the value of Vgs at which the peak in gm occurs (Vgm) is increased from −5.2 V at −40 °C to −4.8 V at 150 °C. It is worth noting that the threshold voltage (Vth) also shifts toward less negative values at higher Ta. As illustrated in Figure 6b, Vth is increased from −6.24 V at −40 °C to −5.64 V at 150 °C.
Using Equation (1), the relative sensitivities of Ids, gm, Vgm, and Vth with respect to Ta are calculated and reported in Figure 7. As can be observed, RSIds, RSgm, RSVth, and RSVgm are negative for the studied device, as a consequence of the fact that an increase in Ta leads to a reduction in the values of Ids, gm, Vth, and Vgm.
Sensitivity-Based Analysis of Small-Signal Parameters and RF Figures of Merit
The equivalent-circuit model in Figure 8 was used to model the measured S-parameters of the studied device. The equivalent-circuit parameters (ECPs) were extracted as described in [15], using the well-known "cold" pinch-off approach that has been widely and successfully applied to the GaN technology over the years [46][47][48][49][50]. The effect of T a on the measured S-parameters at the selected bias point is shown in Figure 9. It should be highlighted that as the carrier transport properties deteriorate with increasing T a , the low-frequency magnitude of S 21 is reduced. This is in line with the degradation of the DC g m at higher T a (see Figure 5). As can be observed, the tested device is affected by the kink effect in S 22 . As well-known, the GaN HEMT technology is prone to be affected by this phenomenon, owing to the relatively high transconductance [51][52][53][54]. In accordance with this, the observed kink effect in S 22 is more pronounced at lower T a , due to the higher g m . The DC parameters, ECPs, intrinsic input and feedback time constants (i.e., τ gs = R gs C gs and τ gd = R gd C gd ), the unity current gain cut-off frequency (f t ), and the maximum frequency of oscillation (f max ) are reported at 25 • C in Table 1. The three intrinsic time constants (τ m , τ gs , and τ gd ), which emerge from the inertia of the intrinsic transistor in reacting to rapid signal changes, are meant to represent the intrinsic non-quasi-static (NQS) effects, which play a more significant role at higher frequencies.
The values of f t and f max are obtained from the frequency-dependent behavior of the measured short-circuit current gain (h 21 ) and maximum stable/available gain (MSG/MAG), respectively (see Figure 10).
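As a sketch of how f_t can be read off from measured S-parameters, the snippet below uses the standard two-port conversion h21 = −2·S21 / ((1 − S11)(1 + S22) + S12·S21) and interpolates the 0 dB crossing of |h21|; the S-parameter arrays are synthetic placeholders standing in for the measured 45 MHz-50 GHz data, and f_max would be obtained analogously from the MSG/MAG curve.

```python
import numpy as np

def h21_mag(s11, s12, s21, s22):
    """Short-circuit current gain magnitude from two-port S-parameters."""
    return np.abs(-2.0 * s21 / ((1.0 - s11) * (1.0 + s22) + s12 * s21))

def unity_gain_frequency(freq_hz, gain_db):
    """Interpolate (in log-frequency) the 0 dB crossing of a decreasing gain curve."""
    above = np.where(gain_db > 0.0)[0]
    if len(above) == 0 or above[-1] == len(gain_db) - 1:
        return None  # no in-band crossing; a -20 dB/decade extrapolation would be needed
    i = above[-1]
    f1, f2 = np.log10(freq_hz[i]), np.log10(freq_hz[i + 1])
    g1, g2 = gain_db[i], gain_db[i + 1]
    return 10.0 ** (f1 + (0.0 - g1) * (f2 - f1) / (g2 - g1))

# Synthetic placeholder data (the real arrays would come from the VNA measurement).
freq = np.logspace(np.log10(45e6), np.log10(50e9), 201)
s11 = np.zeros_like(freq, dtype=complex)          # hypothetical input match
s22 = np.zeros_like(freq, dtype=complex)          # hypothetical output match
s12 = np.full(freq.shape, 0.01 + 0j)              # hypothetical feedback
s21 = 30.0 / (1.0 + 1j * freq / 5e8)              # hypothetical single-pole gain roll-off

h21_db = 20.0 * np.log10(h21_mag(s11, s12, s21, s22))
print("approx. f_t =", unity_gain_frequency(freq, h21_db), "Hz")
# f_max would follow from the 0 dB crossing of MSG/MAG, e.g. MSG = |S21|/|S12|.
```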
Similarly to what was done for the DC parameters, the relative sensitivities of the other parameters in Table 1 are calculated using Equation (1) and shown in Figure 11. Because of their low dependence on temperature, the relative sensitivities of the extrinsic capacitances and inductances are almost nil, as depicted in Figure 11a,b. It can be observed in Figure 11c–e that the relative sensitivities of the extrinsic and intrinsic resistances are positive, reflecting the fact that the resistive contributions increase at higher temperatures. Figure 11f illustrates that, unlike the resistances, the transconductance has a negative relative sensitivity, as this parameter is degraded when heating the device. As illustrated in Figure 11d, the relative sensitivity of C gs is negative, while the relative sensitivities of C gd and C ds are positive. Figure 11g shows that the relative sensitivities of the intrinsic time constants are positive, indicating that they increase when the temperature is raised. This finding implies that the NQS effects occur at lower frequencies when the device is heated.
As can be observed in Figure 11f, the relative sensitivities of f t and f max are negative, implying lower operating frequencies at higher temperatures. Figure 11h shows that the relative sensitivities of the magnitude of S 21 and h 21 at 45 MHz are negative, in line with the reduction of the transconductance at higher temperatures, while the stability factor (K) shows a positive temperature sensitivity as illustrated at 1 GHz.
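The relative-sensitivity indicator itself is defined by Equation (1) earlier in the paper, which is not reproduced in this excerpt; the sketch below therefore assumes a simple finite-difference form (slope of a linear fit normalized to the 25 °C value), purely for illustration and with hypothetical g m data.

```python
import numpy as np

def relative_sensitivity(param_values, temps_c, ref_temp_c=25.0):
    """Finite-difference relative sensitivity of a parameter P(T_a).

    Assumed form (not necessarily Equation (1) of the paper):
    S = (dP/dT) / P(T_ref), i.e., fractional change per degree Celsius.
    """
    p = np.asarray(param_values, dtype=float)
    t = np.asarray(temps_c, dtype=float)
    p_ref = np.interp(ref_temp_c, t, p)   # value at the 25 °C reference
    slope = np.polyfit(t, p, 1)[0]        # dP/dT from a linear fit
    return slope / p_ref                  # 1/°C; negative if P degrades with T

# Hypothetical transconductance values over the -40 °C to 150 °C range:
temps = [-40, 25, 85, 150]
gm_ms_per_mm = [420, 390, 355, 320]       # illustrative only
print(relative_sensitivity(gm_ms_per_mm, temps))  # negative, as expected for g_m
```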
For the tested device, a good agreement between measured and simulated S-parameters was achieved. As an example, Figure 12 depicts the comparison between measurements and S-parameter simulations at two different T a for the tested GaN HEMT at the selected bias condition. The simulations are obtained using the equivalent-circuit model depicted in Figure 8 by means of the commercial microwave simulation software advanced design system (ADS). The small-signal ECPs extracted for different T a from the measured S-parameters are used as inputs to the schematic.
Conclusions
We have reported an experimental investigation on the impact of the ambient temperature on the DC and microwave performance of a transistor based on an ultra-short 0.15-µm GaN HEMT technology. Measurements have been coupled with an equivalent-circuit model and a sensitivity-based study to assess the thermal effects on device performance over the wide temperature range going from −40 °C to 150 °C. The relative sensitivity was used as the evaluation indicator for this study because it enables investigation of the effects of the ambient temperature on the device performance in a quantitative, systematic, and simple way. The measurement-based findings show that both DC and microwave performance of the studied device are remarkably degraded with increasing temperature.
Data Availability Statement:
The data presented in this study are available on request from the authors. | 6,818 | 2021-05-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Performance of Waterborne Epoxy Emulsion Sand Fog Seal as a Preventive Pavement Maintenance Method: From Laboratory to Field
To preserve the existing asphalt pavement and extend its service life, various preventive maintenance methods, such as chip seal, slurry seal, fog seal, and microsurfacing, have been commonly applied. Sand fog seal is one of such maintenance methods, which is based on the application of bitumen emulsion and sand. Thus, its performance is largely dependent on the properties of the bitumen emulsion and sand. This study aims to develop an improved sand fog seal method by using waterborne epoxy resin as an emulsion modifier. To this end, both laboratory tests and field trials were conducted. In the laboratory, the wet track abrasion and British pendulum test were performed to determine the optimum sand size for the sand fog seal, and the rubbing test was carried out to evaluate the wearing resistance of the sealing material. In the field, pavement surface regularity before and after the sand fog seal application was measured using the 3 m straightedge method, and the surface macrotexture and skid resistance were evaluated with the sand patch method and British pendulum test, respectively. The laboratory test results indicated that the optimum sand size range is 0.45–0.9 mm, and the sand fog seal with waterborne epoxy resin showed good wearing resistance and skid resistance. The field test results verified that both the pavement texture and skid resistance were substantially improved after sand fog sealing.
Introduction
Asphalt pavement suffers from a series of distresses, such as rutting and cracking, because of vehicle loading and environmental effects. These distresses, if not treated in the early stage, can negatively affect the driving safety and pavement service life. To preserve pavement from further deterioration, preventive maintenance is commonly applied. Techniques like chip seal, slurry seal, fog seal, and microsurfacing have been proven to be effective for such purpose. Among these techniques, fog seal is a convenient and economical method, which has gained widespread applications [1,2]. Many studies have shown that the application of fog seal with bitumen emulsion can help seal the small cracks, improve aggregate retention, and decrease the permeability of water and air [3][4][5]. Nevertheless, fog seal also faces some concerns, such as long curing time before traffic opening and reduction of the skid resistance [3,6,7]. Correspondingly, sand fog seal, which applies both bitumen emulsion and sand, has been developed to address these concerns [8,9]. The incorporation of sand into fog seal can increase the mechanical strength as well as the skid resistance of the pavement surface.
Cationic bitumen emulsion is commonly applied in fog seal to allow sufficient penetration of the emulsion into cracks and surface voids [10]. To further improve the inherent properties of bitumen emulsion, polymers or latexes have been added into the emulsion [10]. For mixing convenience with bitumen emulsion, latexes such as styrene-butadiene rubber (SBR) and natural rubber (NR) are commonly used [11][12][13]. These latexes can disperse easily in the water continuous phase of bitumen emulsion, and with the evaporation of water, the interconnected polymer networks can be formed within bitumen. In recent years, waterborne epoxy resin has gained increasing interest due to its various advantages, such as convenience of application and improvement of strength and adhesion [14,15]. As waterborne epoxy resin is soluble in water, it can be easily blended with bitumen emulsion. The epoxy starts to cure as it is in contact with hardener, and then the three-dimensional chemically interconnected epoxy polymer networks can be formed inside bitumen as water in emulsion evaporates. Such a polymer structure will lead to significant improvement of the bitumen mechanical properties. It has been reported that the incorporation of waterborne epoxy resin into bitumen emulsion substantially increased its high-temperature performance, adhesion with aggregate, and fatigue properties [14]. The objective of this research is to investigate the performance of the sand fog seal using waterborne epoxy modified bitumen emulsion as a preventive maintenance method for asphalt pavement. To achieve this objective, both laboratory tests and field trials were conducted. In the laboratory, the wet track abrasion and British pendulum test were performed to determine the optimum sand size for the sand fog seal, and the rubbing test was carried out to evaluate the wearing resistance of the sealing material. In the field, pavement surface regularity before and after the sand fog seal application was measured using the 3 m straightedge method, and the surface macrotexture and skid resistance were evaluated with the sand patch method and British pendulum test, respectively.
Materials.
Cationic bitumen emulsion was used in this research. Table 1 presents the main properties of the bitumen emulsion. The waterborne epoxy resin had two parts: Part A and Part B. Part A is the waterborne epoxy, and Part B is the amino curing agent. The weight ratio of Part A and Part B was 5:2 as recommended by the supplier. Table 2 presents the major properties of the waterborne epoxy resin. The black corundum sand, a very strong aggregate mainly composed of Al₂O₃, was used as the "sand" for sand fog seal. Table 3 shows the main components and properties of the black corundum. According to the previous research [16], to achieve the optimum properties of the waterborne epoxy bitumen (WEB) emulsion, the solid residual weight of the waterborne epoxy resin system was 50% by weight.
Wet Track Abrasion Test.
Wet track abrasion tests were conducted in this study to determine the optimum size of black corundum sand for the surface treatment according to ISSA TB 100. Five black corundum sand size ranges were considered, including 2.0-4.0 mm, 0.9-2.0 mm, 0.45-0.9 mm, 0.2-0.45 mm, and 0.15-0.2 mm, which are denoted as AS-1, AS-2, AS-3, AS-4, and AS-5, respectively, in this study. As Figure 1 shows, a layer of WEB emulsion was first sprayed on the specimen holding plates, followed by spreading of the black corundum sand. Both the WEB emulsion and black corundum were applied at a rate of 700 g/m². The specimens were then maintained at room temperature for 3 days to allow the evaporation of water and curing of waterborne epoxy resin. The weight and skid resistance of all the testing samples prior to abrasion were measured using a weight scale and British pendulum tester, respectively. After soaking in water at 25°C for 60 min, the wet track abrasion was performed on the samples for 300 s. In the follow-up step, the samples were heat conditioned at 60°C until the weight became constant. Finally, the weight loss due to abrasion was calculated, and the skid resistance of these samples after abrasion was measured. The British pendulum tester shown in Figure 2 was used to measure the friction of the testing specimens according to ASTM E303. The specimens were fixed on the test table and then wetted thoroughly with water. Four swings were performed on each specimen, and the British pendulum number (BPN) was recorded.
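As a small illustration of the abrasion bookkeeping described above, the following sketch computes an aggregate loss ratio from specimen weights before and after the 300 s abrasion; the weights are hypothetical, and the normalization (here, the initial specimen weight) is an assumption rather than the exact ISSA TB 100 definition.

```python
def abrasion_loss_ratio(weight_before_g, weight_after_g):
    """Aggregate loss ratio (%) of a sand fog seal specimen after wet track abrasion,
    expressed relative to the initial specimen weight (assumed normalization)."""
    return 100.0 * (weight_before_g - weight_after_g) / weight_before_g

# Hypothetical specimen weights (grams) before and after abrasion:
for label, w0, w1 in [("AS-1", 512.0, 500.7), ("AS-5", 498.0, 494.5)]:
    print(label, round(abrasion_loss_ratio(w0, w1), 2), "%")
```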
Rubbing Test.
To simulate the polishing of the pavement surface under repeated vehicle loading, a conventional rutting test machine was modified (Figure 3). An additional motor was installed to introduce a transverse movement of the tire, which is perpendicular to the original longitudinal movement. Figures 3(a) and 3(b) show the testing device and the moving track of the rubber tire, respectively. The horizontal movement was set to be 10 cm/min, while the longitudinal movement was 42 ± 1 times/min. In addition, a water bath was designed and installed to control the specimen temperature. The dense-graded bituminous mixture slabs with a size of 300 mm × 300 mm × 50 mm were prepared, followed by spraying of the WEB emulsion and 0.45-0.9 mm black corundum onto the surface, both at a rate of 700 g/m². The specimens were then kept at room temperature for 3 days. Rubbing tests were conducted at three different temperatures: 25°C, 40°C, and 60°C. During the rubbing tests, skid resistance tests were conducted using a British pendulum tester at 2 h intervals till the end of the tests.
Construction and Testing of the Trial Section.
The trial section of the sand fog seal was conducted on a two-way six-lane asphalt pavement in Fo Shan, Guangdong Province, China, and the station number ranged from K17 + 600 to K18 + 000. The construction procedure mainly consists of the following: (1) pretreatment of the old pavement, (2) grinding of the old pavement, (3) sand fog seal spraying, and (4) preservation before traffic opening (Figure 4). The pretreatment of the old pavement focused primarily on the cleaning of the pavement surface and crack sealing. Following light grinding of the pavement surface, the WEB emulsion and black corundum were sprayed with a construction speed of 6-8 km/h. The thickness of the sand fog seal was controlled at approximately 2 mm. Finally, the newly constructed sand fog seal was preserved until the surface dried.
Wet Track Abrasion.
During the wet track abrasion, the surface aggregates are subjected to both lateral shearing stress and vertical stress.
Thus, these aggregates become vulnerable and inclined to be stripped off the plate. Table 4 presents the abrasion loss of sand fog seal during the wet track abrasion test, as well as the coefficients of variation (COVs) of the test results. It shows that the larger-sized black corundum sands were more likely to get detached compared with smaller-sized ones. The aggregate loss ratio of AS-5 was 0.7%, which was much lower than that of AS-1, which was 2.2%.
This is mainly because larger-sized aggregates have smaller surface area than smaller aggregates. Thus, there is weaker bonding between the bitumen emulsion and larger-sized aggregates, leading to higher abrasion loss under the external force. Table 5 presents the surface frictions of the sand fog seal with different sand sizes before and after the wet track abrasion in terms of BPN. It is clear that, opposite to the abrasion test results, the sand fog seal with smaller-sized aggregate produced weaker skid resistance. The BPN of AS-5 was 66, which was 25% lower than that of AS-1. Comparing the BPN values before and after wet track abrasion, it can be seen that the abrasion process significantly reduced the skid resistance of all specimens, but the reduction rates of larger-sized aggregates (AS-1 and AS-2) were much more significant than those of the smaller-sized ones. Among all test groups, AS-3 showed the smallest BPN reduction value, and the best friction after abrasion. Although the smaller size corundum sand (AS-4 and AS-5) had better retention on WEB emulsion during the wet track abrasion test, the skid resistance was slightly poorer than AS-3. Thus, comprehensively considering the results of the wet track abrasion tests and the skid resistance tests, the black corundum sand with the size of 0.45-0.9 mm was selected for the following tests.
Wearing Resistance Analysis.
Under the repeated vehicle tire loading, the surface of asphalt pavement becomes polished and its skid resistance declines over time, which leads to driving safety concerns, especially under wet conditions. Therefore, at a certain stage, it is necessary to treat the pavement surface with appropriate methods for protecting the surface texture and ensuring good friction properties. Figure 5 illustrates the change of BPN values during the polishing process in the rubbing tests at different temperatures up to 8 hours. A rapid friction drop can be observed within the first 2 hours, and then the rate of decline became slow. It is worth noting that, unlike conventional sand fog seal with raw bitumen emulsion, temperature did not significantly affect the polishing resistance of the WEB sand fog seal. The possible reason is that WEB is a thermosetting material which has good thermal stability. Thus, good retention of the aggregates was achieved even at a high pavement service temperature of 60°C. This ensures that WEB sand fog seal will have good wearing resistance under different weather conditions. The underlying mechanism is that the bitumen emulsion with waterborne epoxy resin had strong adhesion to the existing pavement layer. Furthermore, the sand was also bonded together effectively by WEB. From our previous study [16], it was found that the continuous connected epoxy microstructure can be formed by the addition of 50 wt.% waterborne epoxy resin into bitumen emulsion. Such a structure provides the residual bitumen with a significant improvement in mechanical performance as well as adhesion with aggregates. Therefore, the WEB sand fog seal surface layer will not become loose easily. Thus, the WEB sand fog seal-treated pavement surface had good skid resistance and durability.
Pavement Condition Index.
The existing pavement surface of the test trial site had been in service for more than 6 years, and cracking was the major type of distress as shown in Figure 6. In addition, some minor stripping was also observed. If not treated timely, the pavement surface was expected to deteriorate rapidly. Table 6 presents the results of the pavement condition index (PCI) surveys of the existing pavement conducted according to ASTM D6433: Standard Practice for Roads and Parking Lots Pavement Condition Index Surveys. In this table, "R" and "L" denote the right lane and left lane, respectively, and the follow-up number (1 or 2) indicates the lane number. It can be seen that all PCI values are larger than 70, indicating the pavement condition was "satisfactory" (85 > PCI ≥ 70) or "good" (PCI ≥ 85).
Pavement Surface Regularity.
The pavement surface regularity was measured using the 3 m straightedge method (Figure 7). The measurement was carried out every 100 m along the traffic direction (Figure 7(a)). The surface regularities of the pavement before and after the sand fog seal were measured and compared, as shown in Figure 8. It can be seen that the surface regularities were similar before and after treatment, with the deviations less than 3 mm for all the testing sites. The reason might be that the sand fog seal layer was very thin (around 2 mm), which would not cause significant change to the pavement surface regularity, and the existing pavement was in "satisfactory" and "good" conditions, with a relatively smooth surface.
Pavement Surface Texture Measurement.
The pavement surface mean texture depth (MTD) was measured with the sand patch method according to ASTM E965. The locations for the MTD tests matched those of the pavement surface regularity tests as shown in Figure 7(a). The MTD was calculated using the following equation: MTD = 4V/(πD²), where V is the volume of the testing material (mm³) and D is the average diameter of the area covered by the testing material (mm). Figure 9 presents the MTD values of the pavement surface before and at different times after the sand fog seal. It can be seen that the average MTD values for all the sections were in the range of 0.8-0.9 mm prior to the treatment. The MTD values were increased to more than 1.0 mm right after the treatment, corresponding to an increment of 20-30%. This result indicated that the pavement surface texture was significantly improved by the sand fog seal. The MTD values then started to decrease gradually under vehicle loading. However, all the MTD values were still significantly larger than those of the old pavement surface after 1.5 years of service. As discussed in one of the previous papers [16], the incorporation of waterborne epoxy resin into the sand fog seal significantly increased the adhesion between the seal layer and the original pavement surface. Thus, the pavement surface profile becomes more robust and will not be worn out by tires easily, which helps preserve the texture of the pavement surface.
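The sand patch calculation can be illustrated with a few lines of Python; the relation MTD = 4V/(πD²) follows from spreading a known volume over a circular patch of average diameter D, and the volume and diameter readings below are hypothetical.

```python
import math

def mean_texture_depth(volume_mm3, diameters_mm):
    """Sand patch MTD (mm): a known volume spread into a circle of average diameter D.

    MTD = 4*V / (pi * D^2), with D averaged over several measurements.
    """
    d = sum(diameters_mm) / len(diameters_mm)
    return 4.0 * volume_mm3 / (math.pi * d ** 2)

# Hypothetical test: 25,000 mm^3 of sand spread to a patch of ~180 mm diameter
print(round(mean_texture_depth(25_000, [178, 182, 181, 179]), 2), "mm")  # ~0.98 mm
```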
Skid Resistance of Pavement.
The British pendulum tests were also conducted in the field trial sections. Figure 10 shows the BPN values measured before sand fog seal treatment and after treatment till up to 1.5 years of service.
The BPN values were less than 60 for all the locations of the old pavement surface, which were significantly increased to around 80 right after the WEB sand fog seal treatment, verifying the effect of the sand fog seal in enhancing the skid resistance of the pavement surface. The BPN values then decreased with time after traffic opening. Nevertheless, the BPN values after 1.5 years of service were still significantly larger than those of the old pavement surface. This indicated that the WEB sand fog seal can maintain durable skid resistance for at least 1.5 years, which was mainly because the waterborne epoxy resin increased the adhesion among the aggregates as well as the bonding between the sealing layer and the pavement surface. Figure 11 illustrates the relationship between the field measured MTD and BPN, which can be approximately fitted by a linear equation with a regression coefficient of 0.79. A larger MTD corresponds to a larger BPN, i.e., the skid resistance of the pavement surface is positively influenced by the MTD of the pavement surface.
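A linear MTD-BPN relation of the kind reported above can be fitted as in the sketch below; the data points are hypothetical stand-ins rather than the paper's field measurements, and the reported regression coefficient of 0.79 may refer to either R or R².

```python
import numpy as np

# Hypothetical field measurements (illustrative only)
mtd = np.array([0.82, 0.88, 0.95, 1.02, 1.08, 1.12])   # mm
bpn = np.array([57, 60, 68, 73, 78, 80])

slope, intercept = np.polyfit(mtd, bpn, 1)              # least-squares linear fit
pred = slope * mtd + intercept
r2 = 1 - np.sum((bpn - pred) ** 2) / np.sum((bpn - bpn.mean()) ** 2)
print(f"BPN = {slope:.1f} * MTD + {intercept:.1f}, R^2 = {r2:.2f}")
```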
Conclusions
In this study, both laboratory tests and field trials were conducted to investigate the performance of the WEB sand fog seal as a preventive maintenance method. The following points summarize the major findings of this study:
(i) The black corundum sand with the size of 0.45-0.9 mm provided the optimum WEB sand fog seal performance in terms of balanced skid resistance and abrasion resistance.
(ii) The WEB sand fog seal caused insignificant change to the surface regularity of the existing pavement in "satisfactory" and "good" conditions.
(iii) The WEB sand fog seal significantly improved the pavement surface texture and skid resistance. The MTD was increased by 20-30%, and the BPN was increased by 50% right after the WEB sand fog seal was applied. After 1.5 years of service, both the surface texture and skid resistance were maintained at a level much better than those of the old pavement surface, indicating good durability of the WEB sand fog seal.
(iv) The MTD and BPN of the WEB sand fog seal surface are positively related, and a linear relation can be developed to describe their relationship.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 4,237.8 | 2020-09-29T00:00:00.000 | [
"Materials Science"
] |
Improving Singing Voice Separation Using Curriculum Learning on Recurrent Neural Networks
Single-channel singing voice separation has been considered a difficult task, as it requires predicting two different audio sources independently from mixed vocal and instrument sounds recorded by a single microphone. We propose a new singing voice separation approach based on the curriculum learning framework, in which learning is started with only easy examples and then task difficulty is gradually increased. In this study, we regard the data providing obviously dominant characteristics of a single source as an easy case and the other data as a difficult case. To quantify the dominance property between two sources, we define a dominance factor that determines a difficulty level according to relative intensity between vocal sound and instrument sound. If a given data sample is determined to provide obviously dominant characteristics of a single source according to this factor, it is regarded as an easy case; otherwise, it is regarded as a difficult case. Early stages in the learning focus on easy cases, thus allowing rapid learning of the overall characteristics of each source. On the other hand, later stages handle difficult cases, allowing more careful and sophisticated learning. In experiments conducted on three song datasets, the proposed approach demonstrated superior performance compared to the conventional approaches.
Introduction
Single-channel singing voice separation aims to separate instrument sounds and vocal sounds from music data recorded by a single microphone. This problem has been considered a difficult separation task in comparison with multi-channel signal separation that handles data recorded by two or more microphones. In recent years, deep neural network (DNN)-based modelling approaches such as convolutional neural network (CNN) [1][2][3], recurrent neural network (RNN) [4][5][6], and U-Net [7][8][9][10] have been adopted to overcome this difficulty. Although the conventional DNN-based approaches have reported improvement of separation performance, most of them have difficulties in obtaining a reliable convergence and they require tremendous learning time.
To overcome these limitations, we propose a new single-channel singing voice separation approach based on curriculum learning [11].
Curriculum learning is a type of learning method, in which learning is started with only easy examples and then task difficulty is gradually increased. Thus, it is capable of learning a model by gradually adjusting the difficulty of training data according to learning stages. Several successful applications of the curriculum learning include image classification [12], object detection [13,14], and optical flow estimation [15]. The difficulty level of the training data can be determined either by humans [14,15] or automatically [16]. The curriculum learning used in the proposed method is implemented by adjusting the weight of the loss function in RNN according to the relative dominance of one source to the other. Giving different weights according to the dominance allows reducing learning time, as the dominant data tend to converge rapidly to a vicinity of the dominant region. In addition, it enables more sophisticated learning, as higher weights given to less dominant data lead to fine-tuning of the models. This paper is organized as follows. In Section 2, the conventional DNN-based singing voice separation approaches are addressed. Section 3 explains the proposed curriculum learning-based approach. In Section 4, several experimental results are described. Section 5 concludes this paper. Figure 1 shows a typical framework of singing voice separation using a DNN [4][5][6]. First, the input mixture in the time-domain is transformed to magnitude and phase spectra using short-time Fourier transform (STFT). DNNs take the magnitude spectra of the mixed sound as input to obtain the spectral magnitudes of both vocal and instrument sounds. The inverse STFT is performed by combining the magnitude spectra predicted by the DNN model and the phase spectra of the input mixture to get separated vocals and musical instruments. Some types of DNNs for single-channel singing voice separation are convolutional neural networks (CNNs) [1][2][3], recurrent neural networks (RNNs) [4][5][6], convolutional recurrent neural networks (CRNNs) [17,18], and U-Net [7][8][9][10].
RNN-Based Singing Voice Separation
RNN is a network created to process sequential data using memory, in other words, hidden states that are invisible to the outside of the network. A vanilla RNN conveying basic elements only is defined by the following equation: h t = g(W hh h t−1 + W hx x t + b) (1), where t is a discrete time index, h t is a hidden state output vector at time t, x t is an input vector whose components are magnitude spectra generated by STFT, b is a bias vector, and g is an activation function. The input, state variable, and bias variable represented by boldface letters are all vectors. W hh is a weight matrix from the past output (h t−1 ) to the current output vectors (h t ), and W hx is another weight matrix from the input feature (x) to the output. RNN can process sequences of arbitrary length in such a way that hidden state h t−1 summarizes the information of the previous inputs, {x 1 , . . . , x t−1 }, and combines them with the current input x t to calculate the output vector h t . According to the first-order Markovian assumption in Equation (1), the vanilla RNN assumes dependency on the previous output only, so it may not be able to handle the cases where the dependency is complicated and extends over a long time. To model multiple-level dependency in time, several vanilla RNNs are connected sequentially to build stacked RNNs as shown in Figure 2. A stacked RNN consists of several layers of multiple RNNs, and longer time dependencies are expected to be modeled by the cascaded recurrent paths. The dependency across both time and frequency axes can be modeled by convolutional neural networks (CNNs) [19,20]. The CNN architecture is not well suited to sequential data such as audio sounds because the size of its receptive field is fixed, and the convolution operation calculates the output based only on the region within the receptive field. However, CNN has its own advantage that most calculations can be implemented in a parallel manner, which, in terms of computational efficiency, gives it a great advantage over RNN that requires computation of all previous outputs to be finalized before generating the current outputs. For these reasons, CNN and its variants [1][2][3] have been widely used for singing voice separation. In addition, there have been attempts to combine an RNN and a CNN. One of their combinations is a convolutional recurrent neural network (CRNN) that has been successfully adopted to singing voice separation as well [17,18].
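A minimal NumPy sketch of the vanilla RNN update in Equation (1) is given below; the dimensions, initialization, and random inputs are illustrative only and not taken from the paper's configuration.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_hx, W_hh, b, g=np.tanh):
    """One vanilla-RNN update: h_t = g(W_hh @ h_prev + W_hx @ x_t + b)."""
    return g(W_hh @ h_prev + W_hx @ x_t + b)

# Toy dimensions (hypothetical): 512 spectral bins in, 256 hidden units
F, H = 512, 256
rng = np.random.default_rng(0)
W_hx = rng.standard_normal((H, F)) * 0.01
W_hh = rng.standard_normal((H, H)) * 0.01
b = np.zeros(H)

h = np.zeros(H)
for x_t in rng.random((10, F)):      # 10 frames of magnitude spectra
    h = rnn_step(x_t, h, W_hx, W_hh, b)
print(h.shape)                       # (256,)
```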
Loss Function of Singing Voice Separation Models
To measure the error between the ground truth and the predicted spectra, there are many distance metrics which provide scale-invariance. One of such metrics for the STFT spectra is Itakura-Saito divergence [21], and it was applied to nonnegative matrix factorization (NMF) [22,23] with successful results in music sound analysis [22]. β-divergence also provides scale-invariant distance, and it was applied to NMF as well [24,25]. However, in DNN learning, it is preferred to use simple metrics that are easy to differentiate to derive a learning algorithm and compute gradients efficiently. In our paper, we use mean squared error (MSE, squared L2 loss) between ground truth and predicted spectrum that was adopted to singing voice separation recently [7,8]. The MSE between two arbitrary STFT vectors x and y is defined as l 2 (x, y) = (1/F) ∑ f (x( f ) − y( f ))² (2), where f is a discrete frequency index, and F is the total number of frequency bins. The advantage of the squared L2 loss is that it is differentiable and has smoother convergence around 0. For the given ground truths in the powerspectral domain at time t, y 1,t and y 2,t , and their approximates, ỹ 1,t and ỹ 2,t , the prediction error is defined by the MSE averaged over all the frequency bins: E p (t) = l 2 (y 1,t , ỹ 1,t ) + l 2 (y 2,t , ỹ 2,t ) (3). The objective function should be designed to reduce the prediction error between the ground truth and the approximate at all time and frequency units [4][5][6]: E = ∑ t=1..T E p (t) (4), where T is the number of samples in time. Learning using this loss is expected to yield a high signal to interference ratio (SIR) [26].
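The per-frame loss can be written in a few lines; the sketch below assumes the MSE is averaged over the F frequency bins, which is one plausible reading of Equation (2), and uses random vectors as stand-ins for the spectra.

```python
import numpy as np

def l2_loss(x, y):
    """MSE between two magnitude-spectrum vectors, averaged over frequency bins."""
    return np.mean((x - y) ** 2)

def prediction_error(y1, y1_hat, y2, y2_hat):
    """E_p(t) for one frame: sum of the per-source MSEs (spirit of Equation (3))."""
    return l2_loss(y1, y1_hat) + l2_loss(y2, y2_hat)

# Hypothetical frame with F = 512 bins
rng = np.random.default_rng(1)
y1, y2 = rng.random(512), rng.random(512)
print(prediction_error(y1, y1 + 0.01, y2, y2 - 0.02))
```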
Proposed Curriculum Learning Technique on RNN and U-Net
In this section, the baseline models for the singing voice separation and the proposed curriculum learning framework are explained in detail, and the advantage of the proposed method is described. Our proposed method can be applied to various types of models that are based on stochastic learning. In this paper, we use a stacked RNN [4][5][6] and a U-Net [7] as reference baselines for the proposed method.
Stacked RNN-based Separation Model
Assume that x t is an STFT output obtained as a vector of dimension F at time t. In order to add temporal variation to the input feature vector, the previous and next frames are concatenated as follows: x̃ t = [x t−1 , x t , x t+1 ] (5). The stacked RNN for the singing voice separation has three RNN layers followed by a fully connected (FC) layer as shown in Figure 3. The hidden state output at time t and level l, denoted by h (l) t , is generated by passing the output of the previous level, h (l−1) t , through the RNN unit at level l, expressed as follows: h (l) t = σ(W (l) hh h (l) t−1 + W (l) hz h (l−1) t + b (l) ), where σ is a sigmoid function, and W (l) hh , W (l) hz , and b (l) are the weight matrices and bias vector of the RNN at level l. The input to the RNN at level 1 is the mixed sound described in Equation (5), that is, h (0) t = x̃ t . The outputs of the last RNN layer pass through an FC layer with a rectified linear unit (ReLU) activation function. At time t, the prediction of y for source i is expressed as ŷ i,t = ReLU(W i h (3) t + b i ), where W i and b i are the weight matrix and the bias vector of the fully connected layer. Figure 3 illustrates the data flow of the stacked RNN. To train the weights, the objective function in Equation (4) is minimized with an appropriate optimization method. Learning the stacked RNN is briefly summarized in Algorithm 1. Detailed derivation of the gradients can be found in Reference [4].
Finally, the time-frequency mask is obtained by using the ratio of the outputs of the individual FC layers, ŷ 1,t and ŷ 2,t . The magnitude spectrum vector of the separated sound is obtained by the element-wise multiplication of the time-frequency mask and the spectrogram of the mixed input: ỹ i,t = (ŷ i,t / (ŷ 1,t + ŷ 2,t )) ⊙ x t . The original sound is reconstructed by inverse STFT on the magnitude ỹ i,t and the phase components of the mixed sound.
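The soft (ratio) masking step described above can be sketched as follows; array contents are hypothetical, and in the full pipeline the separated magnitudes would be combined with the mixture phase and passed through the inverse STFT.

```python
import numpy as np

def soft_mask_separation(y1_hat, y2_hat, x_mag, eps=1e-8):
    """Ratio (soft) masks from the two FC-layer outputs, applied to the mixture magnitude."""
    denom = y1_hat + y2_hat + eps
    m1, m2 = y1_hat / denom, y2_hat / denom
    return m1 * x_mag, m2 * x_mag          # separated magnitude spectra

# Hypothetical single frame: network outputs and mixture magnitude (F = 512)
rng = np.random.default_rng(2)
y1_hat, y2_hat, x_mag = rng.random(512), rng.random(512), rng.random(512)
s1, s2 = soft_mask_separation(y1_hat, y2_hat, x_mag)
assert np.allclose(s1 + s2, x_mag)         # masks sum to one, so estimates sum to the mixture
```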
U-Net Baseline Model
Our U-Net baseline model is based on Reference [7]. For the mixed input signals, STFT extracts a magnitude spectrogram X of size T × F, where T is the number of frames in the time domain and F is the number of frequency bins. Each component of the 2-dimensional matrix X is a specific time-frequency unit, X(t, f ) = x t ( f ), where t and f are discrete time and frequency index variables, and x t ( f ) is the STFT of the input mixture sound at time t and frequency f . The spectrogram matrix X is globally scaled so that all the values of X should be within [0, 1] for fast and stable learning. The encoder consists of six convolution layers with various kernel window sizes and stride values. Each convolution output is generated according to the following formula: E (l) = g(conv(E (l−1) ; W (l) e , b (l) e )), where g is an activation function, l is the layer number expressed by parenthesized superscripts, W (l) e is a matrix whose column vectors are convolution kernel functions, and b (l) e is a bias vector. The superscript and the subscript notations, {·} (l) e , indicate that the parameter set belongs to layer l of the encoder network. The initial output is a copy of the input spectrogram, E (0) = X. The function "conv" defines a 2-dimensional convolution operator with the given set of a kernel matrix and a bias vector. The encoder extracts high-level features with resolution reduction from the input features.
The decoding part of the U-Net consists of five up-convolution layers with the initial input being the encoder output at the final convolution layer. The outputs of the encoder layers are concatenated to the inputs of the decoding layers to compensate for any information lost during encoding. The decoding operation from the lower layer to the upper layer is recursively defined as follows: where [·] is a concatenation operator, W (l) d and b (l) d are a matrix of the deconvolution kernel functions and a bias vector, and H (l) is an intermediate output at layer l. The initial input to the decoding up-convolution layer is the encoder output only, that is, H (5) = E (6) . The detailed configuration including the sizes and the numbers of the kernel functions adopted in this paper is given in Table 1. Learning the U-Net baseline is briefly summarized in Algorithm 2. Exact calculation of the gradients can be found in Reference [7].
The activation function at the final layer is a sigmoid function, and its output bounded in [0, 1] is used as a mask for the vocal sound. The size of the final output of the decoder, D (0) , is T × F, and it is element-wise multiplied with the magnitude spectrum of the mixed sound to obtain the magnitude spectra of the vocal and the instrument sounds: the vocal magnitude spectrogram is estimated as D (0) ⊙ X, and the instrument magnitude spectrogram as (1 − D (0) ) ⊙ X.
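A two-level PyTorch sketch of the encoder-decoder with skip concatenation and a sigmoid vocal mask is shown below; it is a toy stand-in for the paper's six-layer configuration in Table 1, and the channel counts, padding, and output_padding choices are assumptions made so the shapes line up.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level sketch of the encoder/decoder with skip concatenation (not the paper's full model)."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 5, stride=2, padding=2),
                                  nn.BatchNorm2d(16), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 5, stride=2, padding=2),
                                  nn.BatchNorm2d(32), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 5, stride=2, padding=2, output_padding=1),
                                  nn.BatchNorm2d(16), nn.ReLU())
        # Input channels doubled by concatenating the matching encoder output
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 1, 5, stride=2, padding=2, output_padding=1),
                                  nn.Sigmoid())

    def forward(self, x):                              # x: (batch, 1, T, F), scaled to [0, 1]
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        mask = self.dec1(torch.cat([d2, e1], dim=1))   # skip connection by channel concatenation
        return mask * x                                # vocal estimate; instruments: (1 - mask) * x

x = torch.rand(1, 1, 128, 512)
print(TinyUNet()(x).shape)                             # torch.Size([1, 1, 128, 512])
```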
Proposed Curriculum Learning for Singing Voice Separation
Stochastic models can learn more effectively by dividing the training phase and increasing the degree of difficulty in sequence according to the phase, which is known as curriculum learning [11]. Examples of applying curriculum learning to audio data include speech emotion recognition [27] and speech separation [28]. However, the proposed method differs in that the difficulty is determined using the source dominance of each time-frequency bin.
In order to apply curriculum learning to singing voice separation, the difficulty level of each data sample is defined. Our main assumption is that if one source signal is dominant over the other, it is more effective in describing the corresponding source. The dominance of source 1 is defined by the ratio of source 1 to the sum of sources 1 and 2, in terms of their powerspectral energies: γ 1 (t, f ) = P 1 (t, f ) / (P 1 (t, f ) + P 2 (t, f )) (13), where P i (t, f ) denotes the powerspectral energy of source i at time-frequency bin (t, f ); and likewise, the dominance of source 2 is computed as γ 2 (t, f ) = P 2 (t, f ) / (P 1 (t, f ) + P 2 (t, f )) (14). The relationship between the dominance factors of source 1 and source 2 is that γ 2 = 1 − γ 1 . The multiplication of the two dominance factors, γ 1 γ 2 = γ 1 (1 − γ 1 ), is considered to model the reciprocal effect of the dominances of the two exclusive sources. Figure 4 shows the value of γ 1 γ 2 according to the change of γ 1 , the dominance of a single source. The same behavior is observed for γ 2 as well. It shows that if only a single source is active, γ 1 = 1 or γ 2 = 1, γ 1 γ 2 becomes zero, which is the minimum, because the other dominance factor is zero. In that case, no separation processes are required, so we can regard this as obviously the easiest case. If both sources are active by the same degree, γ 1 = γ 2 = 0.5, γ 1 γ 2 is maximum (0.25), and it is the most difficult case. To selectively give weights according to the difficulty of the separation, we propose the weight function w(t, f ) given in Equation (15), where α is a mode selection parameter that may vary according to the curriculum. As shown in Figure 4, if α = 1, w(t, f ) grows as γ 1 increases, and has the maximum at γ 1 = 0.5, which is the most difficult case. For α = −1, the graph is upside down, so the easiest cases (γ = 0 and 1) will have the maximum weights. We apply the weight in Equation (15) to the loss function in Equation (4) to obtain a new weighted loss function (Equation (16)). In Equation (15), we can use α to choose where to focus during learning. The value of α is set differently according to the learning stages to obtain the proposed curriculum learning algorithm. Here, we can choose α ∈ {−1, 0, 1}. Figure 5 shows that when α is −1, it focuses on the parts that are easily separable, and when α is 1, it focuses on the parts that are hardly separable. In the first stage of training, α is set to −1 so that the model should focus on the easily separable parts. The model is expected to quickly learn the overall characteristics of each source early in the training. In the next stage, α is set to 0 and the model then learns the whole data evenly. In the last stage, α is set to 1 and the model focuses on time-frequency regions that are difficult to separate. Thus, the model can be trained in a more sophisticated manner with difficult samples in the later stages of the training phase to improve the final performance of the model. Algorithm 3 summarizes the detailed procedure of the proposed curriculum learning method.
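The dominance factors and the weighted loss can be sketched as follows. Since the exact weight function of Equation (15) did not survive extraction here, the curriculum_weight below is only a placeholder that reproduces the qualitative behaviour described in the text (α = −1 favours easy bins, α = 0 is uniform, α = 1 favours hard bins); it should not be read as the paper's formula.

```python
import numpy as np

def dominance(p1, p2, eps=1e-12):
    """Per-bin dominance of source 1 from power spectrograms (spirit of Equations (13)-(14))."""
    g1 = p1 / (p1 + p2 + eps)
    return g1, 1.0 - g1

def curriculum_weight(g1, g2, alpha):
    """Placeholder weight with the behaviour described in the text (NOT the paper's Equation (15)):
    alpha = -1 emphasizes easy bins, 0 is uniform, +1 emphasizes hard bins."""
    return 1.0 + alpha * (4.0 * g1 * g2 - 1.0)

def weighted_loss(y_true, y_pred, w):
    """Weighted squared error over all time-frequency bins (spirit of Equation (16))."""
    return np.sum(w * (y_true - y_pred) ** 2) / y_true.size

# Hypothetical power spectrograms (T = 4 frames, F = 6 bins)
rng = np.random.default_rng(3)
p1, p2 = rng.random((4, 6)), rng.random((4, 6))
g1, g2 = dominance(p1, p2)
for alpha in (-1, 0, 1):                       # the three curriculum stages
    w = curriculum_weight(g1, g2, alpha)
    print(alpha, round(weighted_loss(p1, p1 * 0.9, w), 4))
```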
Evaluation
We performed singing voice separation experiments and compared the result of the proposed approach with that of baseline approaches. We adopted a stacked RNN [4][5][6] and a U-Net [7,29] as baseline models. Separation experiments were carried out on the simulated vocal-instrument recordings generated by mixing the sound sources from MIR-1K [30], ccMixter [31], and MUSDB18 [32] datasets.
Separation Model Configuration
The audio file format is mono and 16 kHz PCM (pulse code modulation). To obtain spectrogram features from audio signals, we applied STFT to each analysis frame of 64 milliseconds (1024 samples), with a shift size of 16 milliseconds (25% of the frame length). After STFT, only the first half of the STFT frequency bins are used because the second half is the complex conjugate of the first one. The number of frequency bins of the STFT spectrogram in Equation (2) is half of the frame size, F = 1024/2 = 512. The extracted spectrogram features are component-wise rescaled so that all the elements belong to [0, 1]. The first baseline model is implemented by a stacked RNN similar to that in References [4][5][6]. The number of layers and the number of hidden nodes in each layer are set to 3 and 1024, respectively. ReLU activation functions are used for all layers including the output, because spectral magnitudes are nonnegative. Hence, no further post-processing is required. The second baseline is a U-Net implemented by the description given in References [7,29]. Its detailed architecture is given in Table 1. The encoder consists of six convolution layers with a kernel size of 5 and a stride of 2. Each layer uses batch normalization [33] and leakyReLU [34] with a slope of 0.2. The decoder consists of six transposed convolution layers with a kernel size of 5 and a stride of 2. As shown in Table 1, the number of channels of the encoder in each layer doubles in the next layer, except layer 0. At the decoder, the number of channels decreases by half in the next layer to reconstruct the spectrogram at the original size.
Each layer uses a batch normalization and a plain ReLU activation function [34]. The first three layers of the decoder use dropout [35] with the drop probability of 0.5. All models are trained by Adam optimizer [36] with its initial learning rate of 10 −4 and a batch size of 128. The number of training steps of each model is 10,000. Baseline models use the mean squared error (MSE) loss given in Equation (4), and the proposed method uses the weighted loss function in Equation (16) with the weights computed by Equation (15). All the experiments are performed on an Intel i5 4-core desktop computer with 64 gigabytes main memory, equipped with NVIDIA GTX 1080 GPU with 8 gigabytes memory. The batch size (128) was determined considering that the GPU memory should hold all the weights of a single batch to compute the gradients of the weights at the same time.
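The feature-extraction settings described above (1024-sample frames, 16 ms shift, 512 retained bins, rescaling to [0, 1]) can be reproduced with plain NumPy as in the sketch below; the Hann window is an assumption, as the paper does not state its window choice in this excerpt.

```python
import numpy as np

def magnitude_spectrogram(y, n_fft=1024, hop=256):
    """Magnitude spectrogram with the settings described in the text:
    1024-sample (64 ms) frames, 256-sample (16 ms) hop, first n_fft//2 = 512 bins kept."""
    window = np.hanning(n_fft)                       # window choice is an assumption
    frames = []
    for start in range(0, len(y) - n_fft + 1, hop):
        spec = np.fft.rfft(window * y[start:start + n_fft])
        frames.append(np.abs(spec[:n_fft // 2]))     # keep 512 bins
    X = np.array(frames)                             # shape (T, 512)
    return X / (X.max() + 1e-12)                     # rescale to [0, 1]

audio = np.random.randn(16000)                       # 1 s of 16 kHz audio (hypothetical)
print(magnitude_spectrogram(audio).shape)
```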
Memory and Computational Complexity Analysis
The proposed curriculum learning requires additional computation and memory space. The parameters to be computed are the dominance factors γ 1 (t, f ) and γ 2 (t, f ) and the weights w(t, f ).
To calculate γ 1 in Equation (13), it requires three square operations, an addition, and a division. γ 2 = 1 − γ 1 requires an additional subtraction. If the unit time for basic floating-point operations is identical, the additional number of calculations is 6 for each time-frequency bin, thus making the total amount of operations for T frames and F frequency bins to be 6TF. Weight calculation requires two multiplications and an addition, and loss function weighting requires a multiplication, so it totally costs 4 unit times for each time and frequency bin. The total amount of loss function update is 4TF. In total, the proposed curriculum learning approach requires 10TF of unit computation time for each iteration in training of the networks. These additive computation only updates the final loss function, so it does not depend on the network architecture. The stacked RNN has three hidden layers of 1024 output units, and they should be computed for each sample, x t . The number of weights in the stacked RNN is roughly defined by input dimension ×1024 2 × output dimension ×2, so the additional computation is negligible in training. As shown in Table 1, the U-Net also requires large number of weights but additional computation (10 units per input) slightly affects the computation time.
In terms of memory resources, the proposed learning method requires memory spaces to store variables γ 1 and w in Equations (13) and (15). Total number is 2TF, and each variable is stored as 4-byte float data. The number of frames refers to the batch size in batch gradient learning, so T = 128 in our experiments. F is the number of frequency bins of the spectrogram, which is a half of the STFT analysis size, so F = 1024/2 = 512. Therefore, the required memory space for a single batch is 2TF × 4 bytes = 2 × 128 × 512 × 4 = 512 kilobytes. We used NVIDIA GTX 1080 GPU with 8 gigabytes memory, so the additional memory space can be disregarded in training models. Because the network architecture is the same, there is no difference in computation and the number of model parameters, and a testing procedure of the proposed model is same as that of the baseline model. Although the training time varies with the model configuration, 10,000 training steps were usually conducted within an hour.
Experimental Results of MIR-1K Dataset
Singing voice separation experiments with simulated mixtures were performed on the MIR-1K dataset [30] to verify the effectiveness of the proposed methods. This dataset consists of 1000 clips extracted from 110 karaoke recordings sung by 19 Chinese amateur singers. The sound files in this dataset contain instrument and vocal sounds in the left and right channels, respectively. All the stereo sound files are converted to mono and 16 kHz PCM format. We used clips of one male ('Abjones') and one female ('Amy') singer for training set, and clips of the remaining 17 singers were assigned to the test set. Input mixture signals are generated by simple additions, y[n] = y 1 [n] + y 2 [n], for each pair of song clips. Figure 6 represents the distribution of γ 1 γ 2 on training set of MIR-1K dataset. γ 1 is distributed highly around 0 and 1, which are from the regions where only a single source is present. The performance was evaluated by global normalized source to distortion ratio (GNSDR), global source to interference ratio (GSIR), and global source to artifact ratio (GSAR) values, which are the weighted mean of the NSDRs, SIRs, SARs, respectively, using each frame length as the weight of the corresponding frame [4,26]. Normalized SDR (NSDR) is defined by an increase in SDR after separation. GSIR measures how much uninterested interference is present, GSAR measures how much irrelevant artifact sound exists in the separated results, and GNSDR considers both. The detailed definition of SIR, SDR, and SAR metrics can be found in BSS-EVAL 3.0 [26]. Table 2 shows the results of the experiments performed on the MIR-1K dataset. Subscript m in GSIR m , GSAR m , and GNSDR m means that the target source is musical instrument sound, so they measure the performances of instrument sound extraction from the input mixture. In the same way, GSIR v , GSAR v , and GNSDR v measure the performances of vocal sound extraction from the mixture. For RNN, the three measures of instrument sound extraction, GSIR m , GSAR m , and GNSDR m , were all improved by the proposed curriculum learning. The interfering vocal sound was removed quite well as shown by the GSIR m increment, from 11.72 dB to 12.30 dB. In the case of vocal sound extraction, GSAR v was increased by 0.46 dB, but GSIR v was decreased by 0.44 dB. It means that the proposed method left more music but less unwanted artifact in the separated vocal sound. The overall performance by GNSDR v showed 0.27 dB increment with the proposed. For U-Net, the improvements were similar but mostly vocal extraction was better than music extraction. This might be caused by the property of U-Net using localized convolution windows that provide positive effects to vocal sounds in which gain and frequency characteristics change more often over time than instrument sounds. Table 2. Comparison of separation performance for MIR-1K dataset by global normalized source to distortion ratio (GNSDR), global source to artifact ratio (GSAR) and global source to interference ratio (GSIR). "Instrument" columns with subscript "m" are the evaluation results of instrument sound extraction, and "Vocal" columns with subscript "v" are those of vocal sound extraction. The baseline models are stacked RNN ("RNN" row) and U-Net ("U-Net" row). The rows with "proposed" header are the models trained by the proposed curriculum learning method.
Experimental Results of ccMixter Dataset
The ccMixter dataset [31] consists of 50 songs of various genres, the total length of which is approximately 3 hours. This dataset provides a set of songs consisting of a vocal sound, an instrument sound, and their mixture. Each song is sampled at 41 kHz, so it is downsampled to 16 kHz for the separation experiments. Setups for the spectrogram extraction is the same as that of MIR-1K dataset. We used 10 songs with singer names beginning with A to J for test, and the remaining 40 songs for training the models. Table 3 shows the results of the experiments performed on the ccMixter dataset [31]. In the music results, the proposed learning improved GNSDR m by 0.27 dB with the RNN, and 0.14 dB with the U-Net. Interestingly, GSIR m was degraded with the RNN, while it was improved with the U-Net. In the case of GSAR m , opposite results were observed. One of the reasonable explanations is that, because U-Net focuses more on better reconstruction of the original input, removing interfering sound should be preferred rather than eliminating unwanted artifacts. In the vocal extraction results, most of the values were improved except slight degradation in GSIR v . The final GNSDR v increments are 0.74 dB with the RNN and 0.41 dB with the U-Net, which is much larger than the GNSDR m values. We used only two singers in the training set of MIR-1K, but there are up to 40 different singers in the training set of ccMixter. So more significant improvements were obtained with vocal extraction results. However, music extraction results were generally worse with ccMixter than with MIR-1K, because of the size of the dataset (50 songs and 110 songs).
Experimental Results of MUSDB18 Dataset
MUSDB18 [32] is a much larger dataset than MIR-1K and ccMixter. The training set consists of totally 100 songs, approximately 10 hours long, and the test set consists of 50 songs. This dataset is a multitrack format consisting of five streams that are divided into mixtures, drums, bass, rest of the accompaniment and vocals. The multitrack mixture is used as inputs, and only vocal sounds are extracted from the mixture. The sum of all the other instrument sounds is considered as instrument sounds in the experiments. Table 4 shows the results of the experiments performed on the MUSDB18 dataset [32]. The most significant increment was observed in GSIR m in RNN, 1.69 dB, but there were decrement in GSAR m and GSIR v of U-Net. The overall performance measured by GNSDR values were all increased. The smallest and the highest results were 0.38 dB and 1.06 dB. These results show that the proposed method is also effective for large datasets as well.
Discussion
In this paper, we propose a method of applying curriculum learning to singing voice separation by adjusting the weight of the loss function. In order to apply curriculum learning, it is necessary to set the difficulty level of each data. We hypothesized that the model is easy to learn characteristics when one source component is dominant. The dominance of each source can be defined by the ratio of one source to the other, which can be obtained from the training data. Using this definition of source dominance, we can apply curriculum learning to singing voice separation by learning more of the different difficulty levels for each train stage. We conducted three experiments to verify the effectiveness of this method. GNSDR was significantly improved by at least 0.12 dB and up to 1.64 dB for two models and three data sets. These experimental results show that the proposed curriculum learning is effective in hard problems such as singing voice separation.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 6,965 | 2020-04-03T00:00:00.000 | [
"Computer Science"
] |
Nanoscale Simulations of Wear and Viscoelasticity of a Semi-Crystalline Polymer
We investigate the underlying tribological mechanisms and running-in process of a semi-crystalline polymer using molecular-dynamics simulations. We subject a slab of simulated polyvinyl alcohol to a sliding contact asperity resembling a friction force microscope tip. We study the viscoelastic response of the polymer to the sliding and show both plastic and elastic contributions to the deformation, with their relative strength dependent on the temperature. As expected, the elastic deformation penetrates deeper into the surface than the plastic deformation. Directly under the tip, the polymer has a tendency to co-axially align and form a layered structure. Over time, the plastic deformation on and near the surface builds up, the friction decreases, and the polymers in the top layer align with each other in the sliding direction (conditioning).
Introduction
Many of the objects surrounding us are made of polymers. The friction we experience when walking in shoes with rubber soles, or the wear of those soles, are macroscopic properties. This scale has been studied widely for obvious practical reasons, and friction and wear in polymers show many non-trivial effects, such as non-linearities and a non-trivial temperature dependence. The origin of many of these macroscopic effects can be found at smaller scales, especially the molecular scale. Studying the contact at this nanoscale can thus provide a better understanding of the underlying tribological mechanisms leading to friction and wear on the macroscale.
The past few decades have seen rapid development of nanoscale experimental techniques such as friction force microscopy (FFM). This technique gives accurate measurements of surface properties and of the frictional behavior of a single asperity, and to some extent enables speculation about what is happening below the contact, see [1]. FFM experiments on polymers have already shown that molecular chain reorientation, due to displacement or rotation of a chain segment or a side group (relaxation), occurs during sliding [8,10,17]. This restructuring influences the friction. Not surprisingly, the biggest changes in the tribological properties of polymers observed in experiments occur at the same temperature where the polymer's bulk mechanical properties also change drastically, the glass transition temperature T_g.
Currently, theoretical understanding of the frictional behavior of polymers is still lacking. As a result, the development of novel low friction solid polymer materials can only be achieved through expensive testing. Nevertheless, tools for theoretically investigating this problem at the nanoscale exist: molecular-dynamics (MD) simulations. Massive MD simulations of this problem are however extremely challenging, due to the high level of complexity of both the material and friction phenomena. There is a large elastic response, but also plastic deformation and permanent damage. Polymers form semi-crystalline structures, with both crystalline and amorphous domains that may change during sliding. The simplest limit of this semi-crystalline structuring is the ideal, but unrealistic, single crystal, and this has been investigated numerically by Heo et al. [7]. It has also been shown that the structure of the polymers changes near the tip as a result of the high stresses during indentation and sliding [16].
The aim of the present study is to improve our understanding of polymer friction and wear, especially in relation to the structure of semi-crystalline polymers. We perform MD simulations designed to model an FFM experiment on polyvinyl alcohol (PVA), a commonly used prototype polymeric material. This approach allows us to investigate in detail what happens to the individual chains and monomers, something that is not possible in heavily coarse-grained finite element simulations or in real experiments.
Simulation Setup
We simulate an FFM experiment by rubbing a model atomic force microscope (AFM) tip against a polymer surface. The molecular-dynamics software LAMMPS [14] is used to calculate particle motions via a coarse-grained model, see [11,18]. In this study, we used the coarse-grained model for PVA (CG-PVA) developed by Meyer and Muller-Plathe [12].
Each simulation contains 200,000 coarse-grained monomers for the substrate and around 25,000 particles for the tip. The radius of the tip is 4.68 nm. The atoms of the tip are arranged in an fcc configuration with a lattice spacing of 2.08 nm and are kept fixed in this arrangement during the entire simulation; the tip is treated as a rigid body. The lowest row of atoms is removed in order to create a flat contact surface. The melt relaxation, cooling, indentation and sliding take roughly 15,000 CPU-hours for a simulation of 8 ns with a time step of 8 × 10⁻¹⁶ s.
Coarse-Grained Model
The coarse-grained model replaces a group of atoms by one coarse-grained particle while assuring that the overall structural characteristic of the polymer is preserved. In the coarse-grained model we use, this is done by assigning suitable bond, pair and angular potentials, Fig. 1.
Our simulation box is 42 nm in the sliding direction. During our simulation, the tip passes over the same point around 5-10 times (Fig. 2).
The bonded interactions act between monomers in a chain, and the potential energy is a sum of stretching and bending contributions. The stretching of a bond is described by a harmonic potential V_bond = K(r − r_0)², where K characterizes the stiffness of the spring (K = 1352 ε_0/σ_0²) and r_0 = 0.5 σ_0 is the equilibrium bond length. To account for possible bond breaking, this interaction is replaced by a Morse potential, V_Morse = D(1 − e^(−α(r − r_0)))², during the sliding simulations, where D = 95 ε_0 determines the depth of the potential well (the bond energy), α = 3.77/σ_0 is a stiffness parameter and r_0 = 0.5 σ_0 is the equilibrium bond distance. These values were chosen to preserve the equilibrium bond length and the second derivative in the minimum. The bending potential is approximated by an angular potential, which is provided in tabulated form. Because each monomer represents several carbon atoms, it also accounts for the torsion stiffness.
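As a quick sanity check on the statement that the Morse parameters preserve the equilibrium bond length and the curvature at the minimum, note that the harmonic form gives V''(r_0) = 2K while the Morse form gives V''(r_0) = 2Dα², so the parameters should satisfy K ≈ Dα². A minimal sketch, working in the reduced units ε_0 and σ_0 (the unit system is our assumption):

```python
import numpy as np

K, r0 = 1352.0, 0.5          # harmonic bond: V = K (r - r0)^2   [eps0/sigma0^2, sigma0]
D, alpha = 95.0, 3.77        # Morse bond:   V = D (1 - exp(-alpha (r - r0)))^2

def v_harmonic(r):
    return K * (r - r0) ** 2

def v_morse(r):
    return D * (1.0 - np.exp(-alpha * (r - r0))) ** 2

# Both minima sit at r0, and the curvatures should match: 2K vs 2*D*alpha^2.
print("2K      =", 2 * K)                  # 2704.0
print("2*D*a^2 =", 2 * D * alpha ** 2)     # ~2700.4, i.e. K is close to D*alpha^2

# The potentials agree near r0, but the Morse bond levels off, allowing bond breaking.
r = np.linspace(0.4, 1.5, 5)
print(np.column_stack([r, v_harmonic(r), v_morse(r)]))
```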
The non-bonded interaction is given by a Lennard-Jones 9-6 potential, V_pair(r) = 4ε[(σ/r)⁹ − (σ/r)⁶], where ε = 0.38 ε_0 is the depth of the potential well, σ = 0.89 σ_0 is the distance at which the potential vanishes, and r is the distance between the monomers.
Melt Relaxation and Cooling
In order to obtain a realistic surface for our simulations, we start from a polymer melt and cool it down. Our simulation box is periodic in x and y, but confined by impenetrable hard walls in the z direction.
We generate physical initial conditions for the melt using the DPD-push-off method [15], which is designed to efficiently obtain equilibrated polymer melts. In this approach, we start from non-physical random overlapping initial conditions and a non-physical soft hybrid interaction potential. This potential consists of a 12-6 Lennard-Jones potential for the non-bonded interactions and a spring potential for the bonded interactions. After this system is equilibrated for 0.25 ns using the DPD-push-off protocol, the non-physical soft hybrid potential is replaced by the realistic coarse-grained PVA potential described above. The system is no longer in equilibrium for the PVA potential, so it is equilibrated again for another 0.25 ns. At this point, the melt is still unphysically hot, around 5000 K. The melt is then coupled to a Nosé-Hoover thermostat at 520 K, slightly above the glass transition temperature, with a time scale of 1 in LJ units.
The system is then equilibrated for 4 ns at which point we have a physical and properly equilibrated melt at 520 K. We confirm this by checking that the radius of gyration is stable. Next, the temperature is gradually decreased to 220 K with a cooling rate of 75 K/ns. We vary the cooling rate to obtain different structural properties.
Crystallinity
The crystallinity level is calculated at various stages during the simulation. The method used to calculate the crystallinity is called Individual Chain Crystallinity, see [19]. We define this quantity as the ratio between the number of aligned bonds and the total number of bonds in our coarse-grained force field. For every bond we calculate the bond vector b_i and a directional vector d_i formed by averaging the ten neighboring bonds. If the normalized scalar product of these two vectors is higher than 0.95 (corresponding to an angle of less than 18.2°), the bond is considered aligned, i.e., a bond is deemed aligned if b̂_i · d̂_i > 0.95.
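A minimal sketch of how such a per-bond alignment criterion could be evaluated is given below; the array layout, the symmetric ±5-bond window used to collect the ten neighbours, and the function name are illustrative assumptions rather than the authors' analysis code.

```python
import numpy as np

def chain_crystallinity(positions, window=5, threshold=0.95):
    """Fraction of 'aligned' bonds in a single chain.
    positions: (N, 3) array of monomer coordinates along one chain."""
    bonds = np.diff(positions, axis=0)                      # (N-1, 3) bond vectors
    bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)   # normalize each bond
    n = len(bonds)
    aligned = 0
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        idx = [j for j in range(lo, hi) if j != i]          # ~10 neighbouring bonds
        d = bonds[idx].mean(axis=0)                         # local director
        d /= np.linalg.norm(d)
        if np.dot(bonds[i], d) > threshold:
            aligned += 1
    return aligned / n
```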
Sliding
Once we have obtained a simulated physical polymer surface, we perform indentation and sliding simulations using a simulated AFM tip. The tip is represented by a hemispherical rigid body consisting of a rigid fcc arrangement of the same PVA monomers, see Fig. 3. The interaction between the tip particles and the monomers is given by the same non-bonded pair potential as the monomer-monomer interaction. A constant load is applied to the tip in the z direction. The center of mass of the tip is tethered to a support using harmonic springs in the x and y directions with spring constant 17.8 ε_0/σ_0². During sliding, the support moves at a constant velocity of 15 m/s in the x direction. The force F_lat(t) needed to keep the support moving at constant velocity corresponds to the lateral force in an FFM experiment, and its average gives the friction.
To prevent the substrate from moving with the tip, the centers of mass of the chains in the lower quarter of the substrate are tethered to their original positions using springs with spring constant ε_0/σ_0². A Langevin thermostat with decay time 1000 τ_0 is also applied to these chains and set to the appropriate temperature for each simulation.
Collecting Statistics
In order to collect enough statistics to understand what is happening around the tip during sliding, we investigate averaged quantities in a frame that moves with the tip, so that the tip is always at the origin. The simulation box is divided into a grid that moves with the tip. At any given time, and for each individual bin, the properties of the atoms present within the bin are recorded and averaged. We note that, as the tip moves, atoms enter and leave the co-moving bins. In the cases where we investigate the displacement over finite times, we assign the entire displacement to the atom's initial bin. The density is calculated by counting the average number of particles in each bin. To obtain a map of the orientation of the chains along the sliding direction, we compute the dot product between the bond vectors, b_i, and the unit vector x̂ along the sliding direction.
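A minimal sketch of this co-moving binning is shown below; the grid resolution, the function name, and the array layout are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def comoving_profile(frames, tip_positions, box, nbins=(60, 20, 40)):
    """Average particle count on a grid that follows the tip.
    frames: iterable of (N, 3) particle positions; tip_positions: (n_frames, 3);
    box: simulation box lengths (Lx, Ly, Lz), periodic in x and y."""
    box = np.asarray(box, dtype=float)
    nb = np.asarray(nbins)
    counts = np.zeros(nbins)
    lo = -box / 2.0                         # grid spans one box length centred on the tip
    width = box / nb
    for pos, tip in zip(frames, tip_positions):
        rel = np.asarray(pos, dtype=float) - tip                 # coordinates relative to the tip
        rel[:, :2] -= box[:2] * np.round(rel[:, :2] / box[:2])   # wrap periodic x, y
        idx = np.floor((rel - lo) / width).astype(int)
        ok = np.all((idx >= 0) & (idx < nb), axis=1)             # keep atoms inside the grid
        np.add.at(counts, tuple(idx[ok].T), 1)
    return counts / len(tip_positions)      # mean occupancy per bin per frame
```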
Results and Discussion
We first discuss the equilibrium substrate. Examples of the substrates we obtain using the method described above are shown in Fig. 4, where one can see different structures depending on the chain length. The surface breaks the symmetry, and therefore the surface structure is not necessarily the same as the bulk. The shortest chains ( m = 10 ) form a layer of polymers perpendicular to the surface. For longer chains the substrate becomes more homogeneous with an increase of the amount of folded segments and entanglement of the chains. Because of limitations in computation time, we restrict the study of the sliding to one chain length, m = 50 . This chain length produces a surface that is representative of surfaces consisting of longer chains. The amount of end monomers at the surface is low, and the chains are long enough to fold back on themselves. We show the crystallinity as a function of the system temperature for a constant cooling rate of 75 K/ns in Fig. 5. In general, the glass transition temperature and melting temperature of a sample of this size are subject to finite-size effects. Nevertheless, we have estimated the glass transition temperature for our system from the temperature with maximum rate of change of the crystallinity and found it to be approximately 350.4 K, as can be seen from Fig. 5. We have estimated the melting temperature by melting a crystalline sample, and found it to be around 407 K.
Indentation
Still at 220 K, we place the tip over the slab of polymer and apply a specific load force to it. The tip is then pushed into the surface with some violence, and we wait for it to come to full rest, which takes 0.4 ns. After this, we switch the thermostat to the target temperature and equilibrate the system for 1.6 ns. We do not observe any significant further creep during this equilibration period that could be relevant for our sliding simulations.
Frictional Forces
We show the lateral force as a function of time in Fig. 6a for different loads at T = 220 K. The frictional force decreases with time during the running-in period. The force fluctuates and has a repeating pattern due to repetitive crossing of the simulation cell. It takes around 2.1 ns to cross this cell. The system has not fully reached the steady state as the friction is still going down slowly.
The average frictional force is calculated over the entire time interval and the results are shown in Fig. 6b. The friction shows a nearly linear dependence on the load. If we extrapolate the data to zero load, the frictional force is around 0.53 nN, which is due to adhesion.
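The near-linear load dependence and the adhesive offset can be extracted with a simple linear fit; the (load, friction) pairs below are placeholders, not the measured averages of Fig. 6b.

```python
import numpy as np

# Placeholder data in nN -- substitute the measured averages from Fig. 6b.
load     = np.array([0.38, 0.75, 1.5, 3.0])
friction = np.array([0.9, 1.2, 1.8, 3.1])

mu, f0 = np.polyfit(load, friction, 1)   # slope ~ differential friction, intercept ~ adhesion
print(f"differential friction coefficient: {mu:.2f}")
print(f"friction extrapolated to zero load (adhesive part): {f0:.2f} nN")
```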
In order to understand the effect of the viscoelasticity of the polymer, we investigate the dependence of the friction on temperature. We show the lateral force as a function of time in Fig. 7a for different temperatures at the same load of 0.38 nN. The lateral forces are initially very similar for the different temperatures. This is because we have used very well controlled and similar initial states; such initial similarities would not be achievable in experiments. As soon as the sliding begins, the systems at different temperatures start to diverge due to the different mechanical properties of the polymer as well as thermal activation. The lateral force decreases with time for most temperatures, but increases slightly with time at the highest temperatures. This increase is related to more and more molecules adhering to the tip and thus needing to be dragged over the surface. Since this occurs above the glass transition temperature, it is part of a purely viscous response. The mechanism by which the friction decreases with time below the glass transition temperature is more interesting and complicated, and we discuss and investigate it in more detail below.
Fig. 6 The vertical lines represent the times when the virtual atom attached to the spring returns to its initial position. The friction is calculated from the lateral force by averaging over the last two passes of the simulation. Extrapolation to zero load shows a non-zero friction at vanishing load, which indicates that there is a substantial contribution from adhesion.
Fig. 7 a Moving average of the frictional force vs. time for different temperatures. At regular intervals, the total lateral forces between the tip and the substrate are measured. The data were then smoothed by a moving average over a small interval of 1.6 ps in order to reduce the noise. b Average frictional force over the last two passes vs. temperature.
The average frictional force as a function of temperature is shown in Fig. 7b. The friction shows a non-linear dependence on the temperature with a minimum around 200 K. This is known to be the result of a competition between the local shear stress, which decreases with temperature, and the contact area, which increases with temperature [4]. Figure 8 shows a snapshot of the substrate after multiple passes at a temperature of 220 K. From this figure, we see that there are significant changes in the structure of the surface. In order to understand what is happening during the running-in period, we investigate the structure in more detail. Figure 9 shows the average density in cross sections of the surface directly under the middle of the tip for three different temperatures. We note that the gentle density gradient in the first few nanometers of the surface is due to the fact that the surface is not atomically flat, but has some nm-scale roughness. There are a small number of alternating high- and low-density lines around the tip, as can be seen from the horizontal red lines in the figure. In Fig. 10, we quantify this further and show the density in the region under the flat part of the tip as a function of the depth under the tip. There is a fast decay of the fluctuation with respect to the height. The first maximum is around 3 times the bulk density and the first minimum is half of the bulk density. Temperature somewhat reduces the effect, as is to be expected, but it is still quite pronounced, even above the glass transition. This formation of a layered structure is not unexpected; it is commonly found in strongly confined materials under high pressure.
Structure
Finally, the snapshot in Fig. 8 shows specifically that the chains in the center are aligned in the sliding direction in the wear track. Most of the adjacent chains to this wear track show partial alignment with respect to the sliding direction. The chains further away from the wear track show little to no sign of this. Such behavior has been observed experimentally for rubbers [3,6,13].
We investigate this more systematically by considering the orientation of the bonds in the chains. Figure 11 shows the average component of the bond in the sliding direction for different temperatures. In the first few nanometers from the surface, there is a strong preferential orientation of the chains in the sliding direction (combing effect). The thickness of this reorientation layer increases with the temperature (Fig. 12).
Dynamic
The viscoelastic flow of the material surrounding the tip is an important characteristic of the contact [9]. We therefore investigate the average displacement around the tip, including the elastic restoration and permanent plastic deformation; the former is removed from the system by the thermostat. Figures 13a, c, e and 14a, c, e show the vector displacements calculated after a time interval t of 0.08 ns and 0.4 ns, respectively. We first consider the temperatures below the glass transition, 50 K and 220 K. In front of the tip, the material near the surface moves downward and in the direction of the sliding. Directly below the tip, it moves upward and slightly in the direction opposite to the sliding.
The atoms near the surface at the rear of the tip move slightly downwards and also in the direction opposite to the sliding. This change of direction in front of and behind the tip indicates that elastic energy is stored in the deformation and returned. In addition, some energy is dissipated as heat and in plastic deformation. The latter can be seen from the displacements after the longer time interval in Fig. 14. At a temperature of 385 K, above the glass transition, there is a predominantly viscous response and no elastic restoration. In this case, the substrate mainly moves along the sliding direction. Figures 13b, d, f and 14b, d, f show cross sections in the yz plane of the vector displacements calculated after time intervals t of 0.08 ns and 0.4 ns. The displacements in the y and z directions are symmetrical. We also note that quite large fluctuations are visible in Fig. 14. It is nevertheless possible to discern significant displacement near the tip. Figure 15 shows the displacement in the x direction (sliding direction). There is a small zone surrounding the tip, in the first few nanometers, where the displacements are large, at least an order of magnitude higher than in the bulk material. They also do not recover, indicating that this is plastic deformation, as can be seen from the top-view displacement map after one pass of the tip, see Fig. 12. The thickness of the layer with plastic deformation increases with the temperature. This observation is in agreement with Briscoe's hypothesis [5] of a thin top layer of polymer being subjected to higher shear stresses close to the surface. It is also consistent with the alignment of the polymers on the surface in the sliding direction. The right side of the dashed line represents the front of the tip. As can be seen from Fig. 15, there are atoms counted at positions overlapping with the final position of the tip. This is because we investigate finite time intervals and assign the entire displacement to the atoms' initial bins.
In the displacements, we see no indication that the layers visible in Fig. 10 under the tip shear significantly with respect to one another. This is likely due to the fact that there are covalent bonds between the high-density planes where the polymer chains fold around from one plane to the next. The layer with high plastic displacement is roughly the same thickness as the area with more structure under the tip. Even though the precise rearrangements are different, this similarity in size is to be expected, because the monomers in this region are subjected to forces that are strong enough to cause significant structural rearrangements.
At T = 55 K and 220 K, the chains have strong cohesion and are strongly attached to the substrate. Only the very surface of the substrate sees high deformation. At the rear of the tip, the substrate detaches from the tip. Part of the energy is restored where the substrate moves backwards. One can notice the presence of transition zones going from negative to positive displacement. We suppose that this effect can be explained by the fast propagation of Schallamach waves during sliding, see [2].
At T = 385 K, the chains are not sufficiently attached to the substrate and are free to move with the tip. This free movement implies a reduction of the shear stress and an increase of the surface area (higher penetration).
Conclusion
We have performed molecular-dynamics simulations of an AFM tip sliding on a polymer substrate of PVA chains. We have investigated structural changes occurring in the surface on the atomic scale, as well as the viscoelastic response. We have investigated the system at several temperatures below and around the glass transition and relate the response to the proximity of the glass transition.
We compute the friction as well as a number of structural and dynamic properties. For low temperatures, the friction decreases with temperature, as the shear strength decreases. For higher temperatures, but still below the glass transition, the friction increases again as the contact area increases due to larger plastic deformation. At low temperatures, we see that the polymer is mostly elastic, which manifests itself in a large recovery and backwards motion of the material in the substrate behind the tip. At higher temperatures, close to the glass transition, there is a much larger viscous component. In all cases, the polymers near the surface reorient and align permanently with the sliding direction. While our simulations are for a specific polymer, the qualitative behavior is likely to be general and present in other polymers. Using MD simulations has allowed us to provide a detailed picture of the molecular behavior of sliding polymers. Nevertheless, much remains to be investigated, as there are many additional complications in many realistic polymers that can affect the structure and friction, such as stronger interchain interactions, cross-linking, or the presence of water and other contaminants. | 5,635 | 2021-01-13T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Plasma convection jets near the poleward boundary of the nightside auroral oval and their relation to Pedersen conductivity gradients
In this work, we have shown that the ionospheric azimuthal plasma velocity jets near the open-closed field line boundary on the nightside can be associated with peaks in the ionospheric conductivity gradient. Both model results and DMSP observations have been utilized in this investigation. The model tests show that when the gradient of the conductivity at the poleward boundary becomes sharper, convection peaks appear around the poleward edge of the aurora. The model results have been confirmed by DMSP observations. Hundreds of large ion flow events, with flow speeds larger than 500 m/s occurring poleward of the aurora, are identified from one year of DMSP observations. Among them, 280 (74%) events are found to be associated with conductivity gradient peaks. Most of the convection jets occur in winter, when conductivity gradients are expected to be large. The convection jets tend to occur at later local times (21:00–22:00 MLT) at 70°–72° MLat. These events are preceded by an increase of the merging electric field, suggesting that they occur after the expansion of the polar cap. Both the observations and the model results show that the conductivity gradient at the polar cap boundary is one of the important elements in establishing the convection jets.
Introduction
Electric fields from the solar wind are mapped to the high latitude ionosphere, creating horizontal E × B plasma drift, also called convection. The convection depends on the IMF direction (e.g. Heppner and Maynard, 1987; Weimer, 1996). The convection pattern can be measured using satellites, radars, and ground based magnetometers (e.g. Heppner and Maynard, 1987; Ruohoniemi and Greenwald, 1996; Richmond and Kamide, 1988; Lyons et al., 1996; Wang et al., 2008). When global patterns of the convection are compiled, they are typically smoothed and there are few localized features in the pattern. When individual traces of satellite orbits are examined, however, significant structure is often observed. For example, intense ionospheric F-region east-west flow jets or enhanced electric fields are frequently recorded by satellites at high latitude near the open-closed field line boundary (e.g. Karlsson and Marklund, 1996; Burke et al., 1994; Hamza et al., 2000; Ridley et al., 2002).
The rapid plasma flow jets on the dayside (06-12-18 MLT) have been studied widely in the literature, including their dependence on solar wind, IMF conditions and season (e.g. Clauer and Ridley, 1995; Ridley and Clauer, 1996; Provan and Yeoman, 1999; Milan et al., 2001; Ruohoniemi and Greenwald, 2005). Several mechanisms have been proposed for this phenomenon, such as the Kelvin-Helmholtz instability (e.g. Clauer and Ridley, 1995; Ridley and Clauer, 1996) or IMF B_y related reconnection (Provan and Yeoman, 1999). Furthermore, Milan et al. (2001) found that the influence of IMF B_y on the dayside flow bursts is greater in summer than in winter.
In contrast to the dayside, the rapid plasma flow jets at the nightside auroral boundary have rarely been studied statistically. Only event studies have been conducted, and it was suggested that the plasma flow burst events in the midnight sector are related to processes in the magnetotail plasma sheet boundary, which might be an element of the substorm (e.g. Burke et al., 1994; Hamza et al., 2000), or to convective transport after reconnection in the more distant tail, not directly related to substorms (Grocott et al., 2004; Grocott et al., 2008). These studies have suggested that the fast plasma flow is attributed mainly to magnetotail processes. Since the open-closed field line boundary is proximate to the poleward boundary of the auroral oval, it is worthwhile to investigate whether the flow bursts may be due to the conductivity structure.
Previous studies have shown that ionospheric conductivity has a strong effect on magnetospheric dynamics (e.g. Raeder et al., 2001). Fedder and Lyon (1987) proposed that, on a global scale, as the Pedersen conductivity increases, the electric potential will decrease and the current will change. This makes the magnetospheric dynamo look like neither an intrinsic voltage source nor an intrinsic current source. Simplistically, one expects that when the conductivity decreases, the flow velocities will increase. Therefore, the fast flows observed by DMSP may be caused by a decrease in conductivity. Previous studies have shown that gradients in the Hall conductivity can cause the potential pattern to be asymmetric from dawn to dusk (e.g. Ridley et al., 2004; Sandholt and Farrugia, 2009). However, statistical studies have not yet explicitly described the effect of the ionospheric conductivity gradient on the nightside fast flows. This paper examines whether gradients in the Pedersen conductance play a role in the development of fast flow jets.
Spatial and temporal variations of the ionospheric conductivity are primarily due to solar radiation and precipitation of magnetospheric particles. The particle precipitation-related Pedersen conductivity depends primarily on the electron average energy and energy flux (Hardy et al., 1987), while the solar radiation induced conductivity depends on both the level of extreme ultraviolet (EUV), which is approximated by the F10.7 index, and the solar zenith angle. The solar driven conductivity varies smoothly over large spatial scales, while the aurora can vary radically over small scales. It is therefore clear that if the gradient in the Pedersen conductance causes spatially confined fast flows to occur, it will most likely happen near the auroral oval.
In this paper, we investigate the dependence of the nightside convection jet on the conductivity gradient by using both model results and DMSP satellite observations.
Model results
The relationship between the electric potential Φ and the radial component of the field-aligned currents, j_R, is j_R = ∇⊥ · (Σ · ∇Φ), where Σ is the ionospheric conductance tensor. This relationship can be expanded (e.g. Goodman, 1995; Amm, 1996) in terms of Σ_C = Σ_0 cos²ε + Σ_P sin²ε, where ε is the angle between the radial direction and the magnetic field, θ is colatitude, ψ is longitude, and Σ_0, Σ_P, and Σ_H are the field-aligned, Pedersen, and Hall height-integrated conductances, respectively (for more information, the reader is referred to Ridley et al., 2004). By assuming that the potential and conductivity are homogeneous in the longitudinal direction (this is common near dawn and dusk), and that the field lines are vertical, which is valid in the high latitude region, Eq. (1) can be simplified into a one-dimensional relation in θ, Eq. (2). The northward directed electric field can then be described as E_θ = −∂Φ/∂θ. This reflects a simple relationship between the northward directed electric field, E_θ, and the Pedersen conductance, Σ_P.
Solving the above equations numerically, we have assumed that the duskside R1 and R2 FACs exhibit sine-wave-shaped latitudinal profiles in the auroral region, as shown in Fig. 1, while the auroral Pedersen conductance is shaped as a sine wave (left) or a sin^0.25 wave (right) superposed on a 1 S background conductance. Using the sine-wave and sin^0.25-wave shaped conductances, we have run the model twice.
The left figure assumes that the auroral boundaries are quite smooth, while the right figure shows an aurora with sharp boundaries, i.e. with strong gradients. The electric potential and electric field can then be solved using Eq. (2). This solution is shown as a solid line. We then determine the contribution from the gradient in the conductivity by neglecting the term ∂Σ_P/∂θ (i.e. setting it to zero) in Eq. (2). This results in the dotted curve.
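The with/without-gradient comparison can be illustrated with a toy one-dimensional calculation. Since the paper's Eq. (2) is not reproduced above, the sketch below assumes a simplified height-integrated current-continuity form, d/dθ(Σ_P dΦ/dθ) = −S(θ), with an illustrative FAC source S and conductance profile; it is meant only to show how the ∂Σ_P/∂θ term is switched off, not to reproduce the authors' model.

```python
import numpy as np

theta = np.linspace(np.deg2rad(60.0), np.deg2rad(85.0), 1000)   # latitude-like grid (rad)
dth = theta[1] - theta[0]

# Illustrative profiles: a sine-shaped R1/R2-like FAC source and an auroral conductance bump
src = np.sin(2.0 * np.pi * (theta - theta[0]) / (theta[-1] - theta[0]))
sigma_p = 1.0 + 9.0 * np.exp(-((theta - np.deg2rad(70.0)) / np.deg2rad(3.0)) ** 2)

def e_theta_full(sigma):
    """d/dtheta(sigma dPhi/dtheta) = -src  ->  sigma dPhi/dtheta = -cumulative integral of src."""
    flux = -np.cumsum(src) * dth
    flux -= flux.mean()                  # fix the free integration constant
    return -(flux / sigma)               # E_theta = -dPhi/dtheta

def e_theta_no_gradient(sigma):
    """Same source, but with the d(sigma)/dtheta term dropped: sigma d2Phi/dtheta2 = -src."""
    dphi = np.cumsum(-src / sigma) * dth
    dphi -= dphi.mean()
    return -dphi

E_with = e_theta_full(sigma_p)
E_without = e_theta_no_gradient(sigma_p)
# Peaks in E_with at the edges of the conductance bump, absent in E_without,
# mark the contribution of the conductance gradient.
```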
It can be seen that, for ∂Σ_P/∂θ = 0, the electric field structure does not change much when the conductivity gradient becomes steeper at the edges, comparing the left plot to the right. Since there is more conductivity in the sin^0.25 case, the electric field (when the conductivity gradient is neglected) is diminished. But for ∂Σ_P/∂θ ≠ 0, when the gradient of the conductivity becomes larger, the electric field structure becomes more complicated, with several peaks occurring at the edges. Since the Pedersen conductivity on the nightside can change suddenly and dramatically at the boundary of the auroral oval, the ∂Σ_P/∂θ term is expected to have an important effect on the electric field structure. From this analysis, we predict that jumps in cross-track velocity are associated with sudden changes in the ionospheric conductivity. The negative peaks in V_y (anti-sunward flow) near 75° latitude might resemble flow bursts in DMSP observations.
Satellite observations of fast flows
The DMSP satellites sample the polar regions at ∼835 km altitude along orbits of fixed local times. The orbital period is approximately 100 min. One of the satellites (F13) has a near dawn-dusk orbit and two (F14, F15) have 09:30-21:30 MLT orbits. The DMSP satellite tracks are confined to the MLT sector of 15:00-22:00 MLT, leaving a midnight gap in the MLT coverage. Among the various scientific instruments onboard, the instruments of primary interest for this study are the ion drift meter (IDM) and the electron spectrometers (SSJ/4). The ion drift velocities in the horizontal and vertical directions perpendicular to the satellite track are derived from the IDM data (Rich and Hairston, 1994). The DMSP SSJ/4 instruments monitor the energy flux of electrons and ions in the range of 30 eV to 30 keV that precipitate from the Earth's magnetosphere (Hardy et al., 1984). Robinson et al. (1987) have described the relationship relating the energy flux and the average energy of the electrons to the height-integrated ionospheric conductivity. Here we make use of this empirical relation.
For this study, rapid convection flow is defined as a clearly identifiable large ion flow poleward of the auroral oval. A threshold flow velocity greater than 500 m/s is used for selection. The poleward boundary of the auroral oval is found automatically by computing the auroral Pedersen conductance along the DMSP path, determining the peak conductance, and then stepping poleward until the conductance is reduced to 0.2 times the peak value or 1 S, whichever is smaller. These criteria are based on the finding of Troshichev et al. (1996) that the diffuse electron and ion precipitation is one order of magnitude or more weaker in the polar cap than in the auroral oval. The selected orbits are further visually inspected to fully satisfy the above criteria. One year of DMSP measurements during 2002 has been processed and 378 rapid convection flow events are identified. Among them, 280 (74%) events are associated with conductivity gradient peaks. These 280 events are considered and termed convection jet events in the following. Four examples of convection jets in the north polar region are shown in Fig. 2. From top to bottom, the figure shows latitude profiles of the Pedersen conductance due to particle precipitation, the latitudinal gradient of the Pedersen conductance, and the ionospheric convection. The enhanced conductivity due to particle precipitation represents the auroral oval. The large sunward (positive) flow in this region is the auroral plasma convection. For all events the aurora has clear poleward boundaries, that is, strong latitudinal gradients. The enhanced anti-sunward (negative) plasma drift poleward of the auroral precipitation is the convection jet. The UT, MLat (magnetic latitude), and MLT (magnetic local time) at which the velocity peak occurs are listed in the plots.
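The selection procedure described above can be summarised in a short sketch; the array names and the orientation of the latitude axis are assumptions, and the conductance-from-precipitation step (Robinson et al., 1987) is reduced to a pre-computed input.

```python
import numpy as np

def find_poleward_boundary(mlat, sigma_p):
    """Step poleward from the conductance peak until sigma_p drops below
    min(0.2 * peak, 1 S). mlat is assumed to increase along the array."""
    ipk = int(np.argmax(sigma_p))
    cut = min(0.2 * sigma_p[ipk], 1.0)
    for i in range(ipk, len(mlat)):
        if sigma_p[i] <= cut:
            return i
    return None

def detect_convection_jet(mlat, sigma_p, v_cross, v_min=500.0):
    """Flag a pass as a candidate jet if |cross-track flow| exceeds v_min (m/s)
    poleward of the auroral boundary; returns (mlat, velocity) at the peak."""
    ib = find_poleward_boundary(mlat, sigma_p)
    if ib is None:
        return None
    j = ib + int(np.argmax(np.abs(v_cross[ib:])))
    if np.abs(v_cross[j]) > v_min:
        return mlat[j], v_cross[j]
    return None
```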
The time (UT), position (MLT, MLat) and magnitude of the peak velocity of each convection jet are listed in a catalog. All these parameters describe the position where the jet velocity peaks. Only convection jet observations in the Northern Hemisphere are considered in this study; the inter-hemispheric comparison is left for future studies.
MLT and MLat distribution
The identified jet events are binned by magnetic local time (1 h bins). The number of events has been normalized to the number of DMSP passes over each MLT bin. Figure 3 (left frame) shows the occurrence frequency of jet events as a function of MLT. The bars reflect the percentage of DMSP passes that measure jets. Jets exhibit an occurrence peak around 21:00-22:00 MLT. We do not have data after 22:00 MLT, so we do not know how the distribution behaves outside this MLT sector.
The jets are then binned by magnetic latitude (2° bins). Figure 3 (right frame) shows the resulting distribution of the number of jet events as a function of MLat, reflecting a clearly preferred MLat location. The distribution peaks at 70°-72° MLat.
Solar wind and IMF condition
It would be interesting to see whether there are interplanetary configurations favorable for the jet events to occur. The solar wind and interplanetary magnetic field (IMF) conditions during which the jet events occur are therefore investigated. Figure 4 shows the occurrence frequency of jet events as a function of IMF clock angle, which is defined as θ = tan⁻¹(B_y/B_z) in GSM coordinates. The IMF and solar wind parameters have been propagated from ACE to the dayside magnetopause at 12 Re by using all three velocity components of the solar wind (the propagation has been done following the standard procedure described on the OMNI web page: http://omniweb.gsfc.nasa.gov/html/HROdocum.html#3). The solar wind and IMF parameters at the magnetopause are averaged over 20 min periods before the times of the event detection (Gérard et al., 2004). It can be seen that most of the events occur under southward IMF conditions. A superposed epoch analysis of the merging electric field, E_m, has been made, using the detection of the flow burst as the key time. E_m is defined as E_m = v_sw √(B_y² + B_z²) sin²(θ/2) (e.g. Kan and Lee, 1979), where v_sw is the solar wind velocity and θ is the clock angle of the IMF defined in GSM coordinates. Figure 5 shows the average evolution of E_m one hour before and after the jets. It can be seen that E_m shows an increase starting about 40 min before the jet. The peak value is reached at the time of the jet.
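A minimal helper for the Kan and Lee (1979) merging electric field used in the superposed epoch analysis might look as follows; the unit choice (km/s and nT giving mV/m) is the conventional one, and the function name is ours.

```python
import numpy as np

def merging_electric_field(v_sw_kms, by_nt, bz_nt):
    """Kan & Lee (1979) merging electric field in mV/m.
    v_sw in km/s, IMF B_y and B_z in nT (GSM coordinates)."""
    bt = np.hypot(by_nt, bz_nt)                  # transverse IMF magnitude
    clock = np.arctan2(by_nt, bz_nt)             # IMF clock angle
    return 1e-3 * v_sw_kms * bt * np.sin(clock / 2.0) ** 2

# Example: 450 km/s solar wind with B_y = 3 nT, B_z = -5 nT (southward IMF)
print(merging_electric_field(450.0, 3.0, -5.0))  # ~2.4 mV/m
```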
Seasonal variation
Figure 6 shows the occurrence frequency of jet events as a function of season. Most of the events occur in the winter season and the fewest during summer. This is as expected, since the conductivity pattern is dominated by solar illumination in summer, which exhibits mostly large-scale variations, while in winter the conductivity is induced by particle precipitation, generating more small-scale variations. Figure 7 shows the velocity as a function of season (blue line). Overplotted is the monthly average of the conductivity gradient (green line). As expected, the Pedersen conductivity gradient is large in winter and small in summer. The velocity also maximizes in winter and minimizes in summer. The coincidence is clear for the months of June and December (see Fig. 7), but for the other months no clear seasonal variation emerges.
When we try to find a direct correspondence between the gradient of the Pedersen conductivity and the jet velocity, we obtain a poor correlation between these two parameters, indicating that there is no simple linear relation between these two quantities.
Discussion and conclusion
In this work, we have shown that the cross-track velocity jets near the open-closed field line boundary can be associated with peaks in the ionospheric conductivity gradients. Both model results and DMSP observations have been utilized in this investigation. From the model test (Fig. 1) it can be seen that, when the conductivity gradient is neglected in the calculation (that is, ∂Σ_P/∂θ = 0), the convection in the auroral region is reduced in magnitude for larger conductivity, but its structure does not change much at the poleward edge. For ∂Σ_P/∂θ ≠ 0, when the gradient of the conductivity at the poleward boundary becomes sharper, a peak in V_y emerges near the poleward edge of the aurora (see Fig. 3, right frame), which resembles the flow bursts in DMSP observations. When one year of DMSP data is processed, hundreds of large ion flow events are identified with flow speeds larger than 500 m/s occurring poleward of the aurora. Among them, 280 (74%) events are found to be associated with conductivity gradient peaks. The large probability of coincidence between the jets and the conductivity gradients in the observations, together with the model results, indicates that the conductivity gradient at the polar cap boundary is one important element in establishing the convection jets. The important effect of the conductivity gradient on the convection jets has also been noticed previously by Sandholt and Farrugia (2009). Based on several event studies in the dawn and dusk sectors, Sandholt and Farrugia (2009) found that for a small conductivity gradient the convection jet was absent, while in the case of a sharp poleward boundary fast (2-3 km/s) flow can be observed.
When looking at the seasonal variation, it can be seen that most of the jet events occur in the winter season (Fig. 6), when conductivity gradients maximize (Fig. 7). Although we did not find a direct linear relationship between the jet velocity and the conductivity gradient, we observed that the convection jet velocity maximizes in the winter season and minimizes in the summer season (Fig. 7).
The convection jets may be one element of the substorm process (Hamza et al., 2000). We have checked the substorm onset event catalogue for 2002 reported by Frey et al. (2004) and tried to find coincidences with the convection peaks. Among the 378 rapid fast flow events, only 29 events occur less than 30 min after the onsets. We regard this as an indication that our convection jets are not related to substorms.
Supporting evidence for our interpretation of the plasma flow jets at the poleward boundary comes from the superposed epoch analysis of the merging electric field, E_m. Figure 5 clearly reveals a rising E_m during the 40 min before the appearance of the jets. It is known that for a larger E_m the polar cap expands in size. This means that the temporal change of E_m deduced from Fig. 5 implies an equatorward retreat of the auroral oval and the precipitation region. As a consequence, the conductance poleward of the new oval position will decay rapidly. This forces the plasma to drift faster in order to maintain current continuity. Sandholt and Farrugia (2009) found that the enhanced antisunward convection is mostly observed at the dusk side of the polar cap (18:00 MLT) during positive IMF B_y conditions in the Northern Hemisphere. In our work the jet events in the 18:00-22:00 MLT sectors occur (see Fig. 4) with no IMF B_y polarity preference. The difference may be due to the different MLT ranges under investigation. The preferred occurrence of the jet events in the premidnight sector (see Fig. 3) makes sense: here we expect upward FACs on the poleward side of the oval, which are known to be more efficient in enhancing the conductivity. Also, the latitude of highest event detection (∼72° MLat) corresponds well with the latitude of the R1 FAC for the average merging electric field of E_m > 1.6 mV/m prevailing before the events.
From a joint interpretation of observations and model predictions, we propose a scenario for the generation of the plasma flow jets. Before the event there is a steady plasma convection. Due to magnetic field merging, open flux is added to the tail and the polar cap expands. The equatorward retreat causes steep conductivity gradients to occur, especially on the duskside, whereas on the equatorward edge the conductivity gradients are expected to remain smooth after the polar cap expansion. For this reason the plasma flow jets form at the poleward boundary of the premidnight auroral oval.
We have tried to reproduce this situation with our model. For the setup we used a combination of the two conditions shown in Fig. 1: the conductance profile has a sine-wave shape in the lower latitude half and a sin^0.25 shape in the poleward half, while the field-aligned currents are kept unchanged. The results are presented in Fig. 8. With this setup the observed plasma convection can be reproduced: there is a broad sunward plasma flow in the auroral region and a sudden switch to antisunward flow at the steep conductivity gradient.
In summary, we have provided convincing evidence that the steep conductivity gradients at the poleward boundary of the auroral oval in the premidnight sector are the cause of the plasma flow jets. These jets form preferably in the dark ionosphere, where precipitating particles are the main cause of the conductivity. The flow jets do not seem to be related to substorm events, but they occur during times of enhanced reconnection when the polar cap expands.
Fig. 1. Numerical simulation of high latitude plasma dynamics. From top to bottom, the frames show the Pedersen conductivity (sine wave on the top, sin^0.25 wave on the bottom), the field-aligned currents (FACs, sine wave), and the electric field, potential, and azimuthal velocity (positive denotes sunward) derived from Eq. (2) (solid line). Also shown for comparison are the electric field, potential, and azimuthal velocity derived when the conductivity gradient is neglected in Eq. (2) (dashed line).
Fig. 3. Distribution of fast plasma jets. The top panel shows the occurrence frequency of convection jets as a function of MLT. The bottom panel shows the number of events as a function of MLat.
Fig. 4. Occurrence frequency of convection jets as a function of IMF clock angle.
Fig. 5. Superposed epoch analysis of the merging electric field, E_m, for the convection jets. The key time, 0, corresponds to the jet observation time. E_m has been propagated from the ACE satellite to the dayside magnetopause. The bars indicate the uncertainties of the 10 min averages.
Fig. 8. Same as Fig. 1, except that the Pedersen conductivity has a sine-wave shape in the equatorward half and a sin^0.25 shape in the poleward half, which is a combination of the two conditions shown in Fig. 1. Field-aligned currents are kept unchanged. | 4,958.8 | 2010-04-15T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
Improvement of power efficiency of hybrid white OLEDs based on p-i-n structures
In this article, hybrid white organic light-emitting diodes (WOLEDs) with p-i-n structures have been investigated in terms of power efficiency. Using tris(8-hydroxyquinolinato) aluminum (Alq3) doped with 8-hydroxy-quinolinato lithium (Liq) as the n-type layer and WHI112 doped with molybdenum trioxide (MoO3) as the p-type layer, a device with the structure ITO/WHI112: 20 wt.% MoO3 (55 nm)/HTG-1 (10 nm)/UBH15: 3 wt.% EB502 (10 nm)/EPH31: 3 wt.% EPY01 (25 nm)/3TPYMB (10 nm)/Alq3: 33 wt.% Liq (25 nm)/Al (150 nm) was fabricated. It was found that the p-i-n device showed the lowest driving voltage and the highest power efficiency among the undoped, n-type and p-i-n devices. At a current density of 20 mA/cm², the efficiency roll-off of the p-i-n device was much smaller than that of the n-type and undoped devices. The current and power efficiencies of the p-i-n device were maintained at 17.2 cd/A and 5.1 lm/W at 100 mA/cm², corresponding to reductions of only 7.5% and 21%, respectively. In contrast, the n-type device exhibited a significant reduction of efficiency (14.4 cd/A and 3.8 lm/W at 80 mA/cm²), corresponding to reductions of 20% and 39.6%, respectively. The superior performance of the p-i-n device was attributed to the high hole-injection ability of WHI112:MoO3 and the high electron mobility of Alq3:Liq, leading to high power efficiency and low driving voltage. A better balance of electrons and holes could contribute to the good current efficiency of the device. These findings strongly indicate that carrier injection ability and carrier balance have significant effects on the performance of OLEDs.
INTRODUCTION
Since the discovery of efficient organic light-emitting diodes (OLEDs), considerable interest has developed in OLEDs with high efficiency and low operating voltage for display applications [1]. Much effort has been expended on improving OLED performance by modifying the device structure to achieve effective and balanced carrier injection. The carrier injection from the electrodes depends on the energy barrier height at the interfaces between the electrodes and the organic layers [2,3]. Reasonable charge carrier control in the OLED emitting layers (EMLs) is a key factor in designing OLED structures with low driving voltage and high efficiency. Two approaches are most frequently used to overcome the driving voltage problem. The first approach involves inserting a thin layer as an anode buffer layer between the indium tin oxide (ITO) and the hole transport layer (HTL). This buffer layer reduces the energy barrier and enhances the charge injection at the interface, ultimately reducing the driving voltage and improving the device power efficiency [4][5][6][7]. The second method involves the use of strong electron acceptor and donor materials as dopants in the organic HTL and electron transport layer (ETL) [8,9]. Great efforts have been made to enhance the conductivity of n-doped electron transport layers [10]. DING et al. [11] demonstrated significantly enhanced device performance using lithium hydride (LiH)-doped 8-hydroxyquinoline aluminum (Alq3) as the electron transport layer (ETL). However, only a few studies have been reported using 8-hydroxy-quinolinato lithium (Liq) as the electron injection layer [12]. The p-doping of the HTL for enhancing the hole injection and lowering the driving voltage in OLEDs has attracted much attention [13]. The p-doped HTL is typically made by co-evaporating the hole transporting material with a strong electron acceptor such as molybdenum trioxide (MoO3) [14] or tetrafluoro-tetracyano-quinodimethane (F4-TCNQ) [15]. The p-doping can also achieve ohmic conductivity to minimize the voltage drop across the ITO/HTL interface. Judicious control of the doping level can also lead to efficient carrier injection by tunneling [16]. It is very difficult to balance the holes and electrons in the emitting layer, because the hole mobility is generally higher than the electron mobility in organic materials. In order to solve this problem, several kinds of HTL, ETL, hole blocking layers and electron blocking layers have been studied [17,18].
In this paper, we have demonstrated hybrid white organic light-emitting diodes (WOLEDs) based on the p-i-n structure, with Liq doped into Alq3 as the n-doped layer and MoO3 doped into WHI112 as the p-doped layer. By using these doped layers to reduce the driving voltage, the power efficiency and carrier balance were substantially improved. The electrical engineering and charge balance of the hybrid WOLED are developed based on these experimental results, and the mechanism of the improvement is also discussed.
MATERIALS AND METHODS
Glass coated with indium tin oxide (ITO) was used as the starting substrate. The substrate was immersed sequentially in acetone and isopropyl alcohol in an ultrasonic bath for 15 min each, followed by rinsing in DI water. The substrates were dried with nitrogen gas, and the samples were then treated with an oxygen plasma for 1 min prior to use. The devices were prepared by vapor deposition onto the ITO-coated glass substrates. Firstly, a series of electron-only devices was fabricated in order to obtain data on the electron transport ability of Alq3 doped with Liq. The structures of the electron-only devices were as follows: ITO/Alq3: x wt.% Liq (30 nm)/Al (130 nm), where x was 0 wt.% for device E-1, 10 wt.% for device E-2, 33 wt.% for device E-3, and 50 wt.% for device E-4 (Table 1). Secondly, to study the hole-injection ability of the WHI112 layer doped with MoO3, a series of hole-only devices was fabricated with the following structure: ITO/WHI112: y wt.% MoO3 (50 nm)/HTG-1 (15 nm)/Al (130 nm), where y was 0 wt.% for device H-1, 10 wt.% for device H-2, 20 wt.% for device H-3, and 30 wt.% for device H-4 (Table 2). Finally, hybrid WOLED devices were fabricated with undoped, n-type and p-i-n structures. HTG-1 was used as the HTL, the blue host UBH15 doped with the blue fluorescent dopant EB502 as the blue EML, the yellow host EPH31 doped with the yellow phosphorescent dopant EPY01 as the yellow EML, and tris(2,4,6-trimethyl-3-(pyridin-3-yl)phenyl)borane (3TPYMB) was used as the hole blocking layer (HBL), while LiF and Al were used as the electron injection layer and cathode, respectively. The energy band diagrams and molecular structures are displayed in Fig. 1. The structure of the undoped device was ITO/WHI112 (55 nm)/HTG-1 (10 nm)/UBH15: 3 wt.% EB502 (10 nm)/EPH31: 3 wt.% EPY01 (25 nm)/3TPYMB (10 nm)/Alq3 (25 nm)/LiF (0.8 nm)/Al (150 nm). Replacing the corresponding undoped layers with Alq3: 33 wt.% Liq (25 nm) and WHI112: 20 wt.% MoO3 (55 nm) yields the n-doped and p-doped layers (Table 3), respectively. All materials were purchased from e-Ray Optoelectronics Technology Co., Ltd., Taiwan (R.O.C.). The organic layers and the cathode layer were deposited in an ultrahigh vacuum chamber at 4 × 10⁻⁶ Torr. The active area of the devices was 5 × 5 mm². The thickness of the organic layers was monitored using a quartz-crystal monitor. Current-voltage characteristics were measured with a computer-controlled Keithley 2400 Source Meter and electroluminescence (EL) spectra were measured with a Spectrascan PR650 photometer. All measurements were carried out at room temperature and in ambient atmosphere.
Characteristics of electron-only and hole-only devices
For the electron-only devices, the current density versus voltage (J-V) characteristics at various Liq doping ratios in Alq3 are shown in Fig. 2. A rapid increase in the device current occurs when Liq is doped into the Alq3 layer. An increase in the current density is already seen for a small 10 wt.% Liq doping concentration, as compared with the control device. The J-V characteristics of the electron-only devices are strongly dependent on the doping ratio in the electron transport layer. At the same voltage, the current density increases with increasing doping ratio. The highest current density is observed at a 33 wt.% Liq doping ratio. The driving voltages of devices E-1, E-2, E-3 and E-4 at 100 mA/cm² are 8, 5.3, 2.6 and 6.7 V, respectively. These electron-only J-V characteristics suggest that a suitable Liq-to-Alq3 doping ratio can improve the electron transport ability of the co-deposited layer. The advantage of using Alq3:Liq as the ETL is explained by electron hopping along the Lowest Unoccupied Molecular Orbitals (LUMOs). In a single host device, electrons hop along the LUMO in Alq3. Since the LUMO-LUMO difference between Alq3 (3.1 eV) and Liq (3.24 eV) is negligible, their transport manifolds are expected to exhibit a certain extent of overlap once the mixing ratio goes beyond 33 wt.% Liq. Therefore, it is likely that a large energetic disorder between Alq3 and Liq contributes to the electron hopping, implying that electron hopping among Alq3 and Liq sites is favorable [19]. The high electron conductivity of Alq3:Liq might originate from the shorter electron hopping length as compared with the pure Alq3 ETL. However, the current conduction is reduced dramatically as the doping ratio is further increased to 50 wt.% in device E-4. This result is attributed to carrier quenching and defect effects.
For the hole-only devices, the J-V characteristics at various MoO3 ratios doped into WHI112 are shown in Fig. 3. The J-V characteristics of the hole-only devices are strongly dependent on the doping ratio in the hole transport layer.
Compared with the undoped device, even low doping strikingly decreases the driving voltage, and the J-V characteristics are strongly dependent on the doping ratio of the hole injection layer. At the same voltage, the current density increases with increasing MoO3 doping ratio. The highest current density is observed at a 20 wt.% MoO3 doping ratio, indicating that the conductivity of the p-doped HIL increases due to MoO3 doping into WHI112. The driving voltages of devices H-1, H-2, H-3 and H-4 are 5.4, 4.3, 3.8 and 4.2 V, respectively. The results indicate that doping with MoO3 reduces the potential barrier for hole injection at the ITO interface [20]. The current enhancement in the hole-only devices is attributed to the reduction of resistivity and activation energy, leading to decreased ohmic losses. It is possible that holes are transferred from the Highest Occupied Molecular Orbital (HOMO) in the WHI112:MoO3 matrix into the HOMO of HTG-1. The HOMO levels of MoO3 (5.3 eV) and HTG-1 (5.4 eV) are energetically close, making the charge transfer an energetically favorable process. The hole transfer results in an increased charge carrier concentration in the bulk HTL, which increases the film conductivity and reduces the ohmic losses in the HTL during device operation. Through this increased bulk conductivity, the current density is expected to increase with increasing doping concentration. However, our devices demonstrate reduced performance at MoO3 concentrations above 20 wt.%. It is likely that at heavy doping the MoO3 molecules saturate the layer and escape to the HIL (WHI112:MoO3)/HTL (HTG-1) interface. This thin MoO3 layer creates a dipole barrier at the interface with the HTL, which increases the driving voltage needed to operate the device. It is also possible that a high MoO3 concentration leads to significant dopant diffusion through the HTL into the EML, causing electroluminescence (EL) quenching in the emissive region. This suggests that such aggregation tends to degrade the device performance.
Comparison between undoped and n-type devices of hybrid WOLEDs
Hybrid WOLEDs are attracting significant attention because of their suitability for large-scale fabrication of solid-state lighting sources. Combining a yellow phosphorescent emitter with a blue fluorescent emitter can offer a good compromise for high-efficiency hybrid systems. In conventional devices, such as the undoped device, the number of holes is much greater than the number of electrons. A surplus of holes at the HTL/EML interface increases the probability that EML cations are formed, leading to rapid device degradation. Based on the electron-only device results, further experiments focus on a device with an Alq3: 33 wt.% Liq layer as the n-doped ETL; an n-type device with 33 wt.% Liq doped into Alq3 as the electron-transporting layer for hybrid WOLEDs is therefore studied. It is very important to balance the current supply to the emission zone. This is achieved with HTG-1 as an electron-blocking layer and 3TPYMB as a hole-blocking layer adjacent to the emission zone. These layers create additional barriers for carrier injection, so that the recombination and/or emission zone is clearly separated from the regions of high carrier concentration. The structure of this device is ITO/WHI112 (55 nm)/HTG-1 (10 nm)/UBH15: 3 wt.% EB502 (10 nm)/EPH31: 3 wt.% EPY01 (25 nm)/3TPYMB (10 nm)/Alq3: 0 or 33 wt.% Liq (25 nm)/with or without LiF (0.8 nm)/Al (150 nm), where the Liq content is 0 wt.% with LiF for the undoped device and 33 wt.% without LiF for the n-type device. Figure 4 shows the L-V and J-V (inset) characteristics of the undoped and n-type devices. The n-type device shows a lower operating voltage and steeper current density and luminance slopes than the undoped device. Under the same current density, the n-type device produces higher emission than the undoped device, reflecting its lower electron injection barrier and higher efficiency. The Alq3:Liq layer therefore provides more efficient electron injection and higher luminance than LiF. The power efficiencies of the two devices are shown in Fig. 5: 6.4 and 7.81 lm/W at 5 mA/cm2 for the undoped and n-type devices, respectively. The driving voltage of the n-type device at 5 mA/cm2 is 7.2 V, reduced from 11 V for the undoped device. This significant performance enhancement is attributed to the improved conductivity of the n-doped Alq3:Liq layer, showing that Liq incorporation into Alq3 can improve device performance by increasing the electron concentration in the Alq3 film and moving the Fermi level closer to the LUMO of Alq3 [19].
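The relationship between the quoted power efficiencies and driving voltages can be checked with the usual conversion between current and power efficiency, assuming a Lambertian emission profile; the 18 cd/A figure used below is only an illustrative current efficiency, not a value reported at 5 mA/cm2.

```python
import math

def current_efficiency_cd_a(luminance_cd_m2, current_density_ma_cm2):
    """Current efficiency (cd/A): luminance over current density (1 mA/cm^2 = 10 A/m^2)."""
    return luminance_cd_m2 / (current_density_ma_cm2 * 10.0)

def power_efficiency_lm_w(current_eff_cd_a, voltage_v):
    """Power efficiency (lm/W) for an assumed Lambertian emission profile."""
    return math.pi * current_eff_cd_a / voltage_v

# Same illustrative current efficiency at the two quoted driving voltages (5 mA/cm^2):
print(power_efficiency_lm_w(18.0, 7.2))   # n-type device, 7.2 V
print(power_efficiency_lm_w(18.0, 11.0))  # undoped device, 11 V
```

At equal current efficiency, the lower driving voltage of the n-type device directly translates into a proportionally higher power efficiency, which is the trend seen in Fig. 5.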
Comparison between n-type and p-i-n devices of hybrid WOLEDs
The power efficiency depends on carrier injection, transport and carrier balance. The p-i-n device is fabricated with the WHI112 layer doped with 20 wt.% MoO3, in the configuration ITO/WHI112: 20 wt.% MoO3 (55 nm)/HTG-1 (10 nm)/UBH15: 3 wt.% EB502 (10 nm)/EPH31: 3 wt.% EPY01 (25 nm)/Alq3: 33 wt.% Liq (25 nm)/Al (150 nm). The characteristics of the n-type and p-i-n devices are displayed in Figs. 6-8. Compared with the n-type device, the L-V and J-V (inset) curves of the p-i-n device are significantly enhanced, indicating that the device conductivity is improved by the p-i-n structure, as shown in Fig. 6. The power efficiency of the p-i-n device is also considerably increased compared with the n-type device, as shown in Fig. 7, again reflecting the improved conductivity of the p-i-n device. Table 4 summarizes the data for the n-type and p-i-n devices at 20 mA/cm2. The current efficiency, power efficiency and driving voltage of the p-i-n device are improved to 18.6 cd/A, 6.5 lm/W and 8.9 V at 20 mA/cm2, respectively, compared with 18 cd/A, 6.3 lm/W and 9.4 V for the n-type device. Moreover, the efficiency roll-off of the p-i-n device is much smaller than that of the n-type device. The current and power efficiencies of the p-i-n device are maintained at 17.2 cd/A and 5.1 lm/W at 100 mA/cm2, corresponding to reductions of only 7.5% and 21%, respectively. In contrast, the n-type device exhibits a significant reduction in efficiency (14.4 cd/A and 3.8 lm/W at 80 mA/cm2), corresponding to reductions of 20% and 39.6%, respectively. For the p-i-n device, the combination of the hole-transport character of WHI112 doped with 20 wt.% MoO3 and the electron-transport property of Alq3 doped with 33 wt.% Liq helps control the holes and electrons in the light-emitting layer and results in a hybrid WOLED with stable efficiency roll-off. Figure 8 shows the EL spectra of the n-type and p-i-n devices, which differ in their peak intensities. It is interesting to note that the blue emission peak of the p-i-n device is stronger than that of the n-type device; it appears as the electron injection increases, shifting the recombination zone toward the blue emission layer. This indicates that the p-i-n structure has a major impact on the optical characteristics of the hybrid WOLED. The CIE coordinates of the n-type and p-i-n devices are (0.45, 0.50) and (0.39, 0.48), respectively, as shown in Table 1. As a result, the p-i-n device contributes to a better hole-electron balance in the light-emitting layer.
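The roll-off percentages quoted above follow from a one-line relative-loss calculation; the sketch below simply reproduces them (to within rounding) from the efficiency values stated in the text.

```python
def roll_off_percent(eff_reference, eff_high_current):
    """Relative efficiency loss (%) between a reference and a higher current density."""
    return 100.0 * (eff_reference - eff_high_current) / eff_reference

# p-i-n device, 20 -> 100 mA/cm^2
print(roll_off_percent(18.6, 17.2), roll_off_percent(6.5, 5.1))  # ~7.5 % and ~21.5 %
# n-type device, 20 -> 80 mA/cm^2
print(roll_off_percent(18.0, 14.4), roll_off_percent(6.3, 3.8))  # 20 % and ~39.7 %
```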
CONCLUSIONS
We have presented hybrid WOLEDs based on a p-i-n structure with the novel Alq3:Liq and WHI112:MoO3 layers as the n-type and p-type layers, respectively. A current efficiency of 18.6 cd/A, a power efficiency of 6.5 lm/W and a driving voltage of 8.9 V at a current density of 20 mA/cm2 were obtained for the p-i-n hybrid WOLED. The efficiency roll-off of the p-i-n device was much smaller than that of the n-type and undoped devices. The current and power efficiencies of the p-i-n device were maintained at 17.2 cd/A and 5.1 lm/W at 100 mA/cm2, corresponding to reductions of only 7.5% and 21%, respectively. In contrast, the n-type device exhibited a significant reduction in efficiency (14.4 cd/A and 3.8 lm/W at 80 mA/cm2), corresponding to reductions of 20% and 39.6%, respectively. The superior performance was attributed to the high hole- and electron-transport ability of WHI112:MoO3 and Alq3:Liq, leading to a low driving voltage and a better electron-hole balance, and contributing to enhanced efficiency even at high current density. Effective balance between holes and electrons was achieved through the enhanced transport-layer conductivity, leading to improved device efficiency.
ACKNOWLEDGMENTS
This work is supported by King Mongkut's Institute of Technology Ladkrabang (KREF145901).
Figure 1 :
Figure 1: Energy band diagrams and molecular structures of the tested materials.
Figure 4 :
Figure 4: L-V and J-V (inset) characteristics of undoped and n-type devices.
Figure 5 :
Figure 5: Power efficiency-current density characteristics of undoped and n-type devices.
Figure 6 :
Figure 6: L-V and J-V (inset) characteristics of hybrid WOLEDs of n-type and p-i-n devices.
Figure 7 :
Figure 7: Power efficiency-current density characteristics of hybrid WOLEDs of n-type and p-i-n devices.
Figure 8 :
Figure 8: Electroluminescence (EL) spectra of hybrid WOLEDs of n-type and p-i-n devices.
Table 1 :
The parameters of electron-only devices.
Table 2 :
The parameters of hole-only devices.
Table 3 :
The parameters of undoped, n-doped and p-i-n devices. | 4,223.8 | 2018-08-15T00:00:00.000 | [
"Chemistry"
] |
Geomorphology of the topmost part of the Bistra Mountain, Mavrovo Park, North Macedonia
ABSTRACT Identification of the remnant traces of paleo-glaciers provides important proxies to understand the response of the environment to rapid climate changes. We present a 1:25,000 scale geomorphological map covering ∼12.5 km2 of the upper part of Mount Bistra (North Macedonia) on the basis of remote sensing analyses and geomorphological surveys. Particular attention is given to the description of glacial and periglacial landforms, to the reconstruction of single glacier shapes and to Equilibrium Line Altitude (ELA) value calculation. The results of the survey and the reconstructed ELAs indicate the occurrence of three glacial phases that led to the formation of frontal and lateral moraines. The age of these phases is tentatively attributed to the Late Pleistocene by comparing these ELAs with those of other Balkan mountains. This map is the first step of a wider project aimed at reconstructing the relation between climate change and geomorphic response in this area.
Introduction
Small glaciers at mid-latitudes, like those developed in the Mediterranean mountains, are very sensitive to climate changes and their responses to environmental variations are very rapid and can be recorded. This sensitivity makes it possible to analyse paleo-glacier oscillations to study paleoclimate, and also to foresee the consequences of modern climate changes (Haeberli & Beniston, 1998;Huss & Fischer, 2016;Oerlemans, 2001).
The first records of Pleistocene glaciation in the Balkan Peninsula come from Cvijić, who identified many glacial forms when he explored the Šara Mountain (Ljuboten, 2499 m a.s.l.) in 1891 (Cvijić, 1917). Various authors later reported on glacial remains throughout the Balkans, but only recently has it been possible to establish the phases of glacial advance and retreat, thanks to some chronological constraints (e.g. Gromig et al., 2018; Hughes, 2007; Hughes et al., 2006a; Kuhlemann et al., 2009; Ribolini et al., 2018; Ruszkiczay-Rüdiger et al., 2020; Temovski et al., 2018; Žebre et al., 2019). Leontaritis et al. (2020) reviewed the glacial history of the mountains of Greece and proposed some chronological correlations with glacial phases described for the surrounding regions. As concerns Balkan glaciations, after the work of Kolčakovski (1999), the mountains of North Macedonia have only recently been studied in more detail (Gromig et al., 2018; Ribolini et al., 2018; Ruszkiczay-Rüdiger et al., 2020; Temovski et al., 2018; Žebre et al., 2019). The results obtained so far have shown that, in addition to the phases of the Last Glacial Maximum (LGM), or to antecedent cold events, there were glacial readvances/standstills even during the late Pleistocene deglaciation.
On the basis of this approach, we present a geomorphological map of the topmost part of the Bistra Mountain, located in northern Macedonia. Particular attention is focussed on glacial features (e.g. moraines, cirques) and on the reconstruction of the shape of paleoglaciers for the calculation of Equilibrium Line Altitudes (ELAs). These values are then used to build a possible sequence of glacial phases in the studied area and to compare these cold events with others described in the Balkan region.
Study area
The Bistra massif, situated in the western part of North Macedonia (Figure 1), belongs to the Šara-Pelister mountain range. The massif has several peaks of over 2000 m a.s.l. and reaches its highest elevation at Medenica (2163 m a.s.l.). The alignment of the highest mountain ridges is NNW-SSE, which reflects the general Dinaric direction (Milevski, 2016). Over the interval 1960-1990, the mean annual temperature in the study area is ∼4.4°C, while the mean annual precipitation over the same interval is ∼1030 mm/yr (Milevski, 2015).
The Bistra Mountain corresponds to a syncline horst block bordered by faults belonging to the Western Macedonian Tectonic Unit, which extends from the Šara Mountains (Šar Planina) in the north to the Pelister Mountains in the south. This Unit geologically belongs to the Dinarides (Helenides) (Arsovski, 1960) and is principally composed of a Paleozoic metamorphic complex, with volcanic and sedimentary formations in the lower parts and a carbonate formation in the upper parts. Basal Paleozoic schists and marbles, which outcrop only locally, are covered by Triassic limestones with a thickness of 400 m. Marble and limestone outcrops, locally fractured, together with abundant precipitation, have favoured intense karst processes, so that the area of the Bistra Mountain appears as an extensive karst denudation surface punctuated by a number of topographic peaks. The area presents several caves and sinkholes, as well as surface karst landforms (karren, dolines, karst valleys, poljes) (Andonovski, 1977).
Alongside the dominant karst features, glacial landforms are relevant elements of the landscape, since Pleistocene glaciers affected the Bistra Mountain, leaving deposits and erosional traces such as the glacial cirques carved into the Medenica and Čaušica peaks (Manakovikj & Andonovski, 1983). However, these glacial landforms, especially the depositional ones, have received little attention despite their well-developed appearance. Since 1949, this area has been a National Park, which nowadays covers an area of ∼734 km 2 . Dense forest vegetation covers the mountain slopes in the lower part of the park, while Alpine grassland grows in the highest areas. The Park is characterized by very high biodiversity and by the presence of numerous relict and endemic species (both herbaceous plants and trees), mostly related to the last glaciation. The presence of the artificial lake Mavrovo, one of the largest lakes in North Macedonia (∼13.7 km 2 ), and of one of the major ski resorts in the uppermost parts of the area, has favoured its touristic development.
Methods
The geomorphological map of the topmost part of Bistra Mountain (Main Map) was drawn on the basis of the criteria adopted by Ribolini et al. (2011) and Isola et al. (2011, 2017). After collecting the bibliographic data, we made a preliminary analysis of the area using remote sensing tools to identify the most important geomorphological features. QuickBird imagery (QB02 sensor and Pan_MS1 band, 60 cm spatial resolution, year 2008) was used for this analysis. Three field campaigns were then performed to gather detailed geomorphological data. Owing to the lack of a high-resolution topographic map, field observations were drawn directly onto the QuickBird imagery, and many GPS ground control points were acquired in correspondence with the most relevant geomorphological features. This phase of map building is fundamental because some landforms cannot be identified clearly from the satellite imagery on account of their resolution, tone and texture.
All the data were then uploaded into a GIS environment to generate a geodatabase with both raster and vector layers. The raster layers consist of the QuickBird imagery and the reinterpolated high-quality 5-m digital elevation model (DEM) from the Agency of the Real Estate Cadastre (AREC) of North Macedonia (Milevski, 2014), used as a base for the final map. The vector layers (points, lines and polygons), manually redrawn in GIS from the original field-work maps, correspond to the geomorphological features identified. The contour lines (25 m equidistance) used for the final map are derived from the DEM. Toponyms and elevation points come from the Lazarpole (Republička geodetska uprava Skopje, 1972) and Mavrovo (Republička geodetska uprava Skopje, 1978) topographic maps at 1:25,000 scale.
Results
The geomorphological map covers ∼12.5 km 2 , and consists of ∼900 polygonal features and ∼100 linear features, in large part taking into account the criteria reported in GLCM (1994). These criteria consist in assigning a single colour to landforms belonging to the same morphogenetic domain, and different shades of the same colour to distinguish active from inactive landforms.
A shadow-relief representation of the topography (resolution 10 m, sun azimuth 315°, sun elevation 45°) was used to better appreciate surface roughness.
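A minimal numpy sketch of a Horn-style hillshade with the illumination parameters stated above is given below; the DEM array used here is synthetic and purely for demonstration, and GIS packages implement the same calculation directly.

```python
import numpy as np

def hillshade(dem, cellsize=10.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief image (0-255) from a DEM array using the standard Horn-style formula."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # geographic azimuth -> mathematical angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = np.sin(alt) * np.cos(slope) + np.cos(alt) * np.sin(slope) * np.cos(az - aspect)
    return np.clip(255.0 * shaded, 0.0, 255.0).astype(np.uint8)

# Synthetic 10 m resolution surface, purely for demonstration.
demo_dem = np.add.outer(np.linspace(1800.0, 2100.0, 200), np.linspace(0.0, 50.0, 200))
relief = hillshade(demo_dem, cellsize=10.0, azimuth_deg=315.0, altitude_deg=45.0)
print(relief.shape, relief.min(), relief.max())
```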
Glacial, periglacial (mainly fossil) and karst landforms and deposits are the most widespread, strongly characterizing the Bistra landscape at various elevations (Figures 1 and 2). The map is limited to the topmost part of the Bistra Mountain, with a focus on the glacial and periglacial landforms that are the main targets of this paper. However, the survey was extended outside the map limits to check the presence of other glacial deposits and to assess the landscape at a more regional scale. Given the aim of the paper, particular attention is devoted to the description of glacial and periglacial landforms, while those of other origin are only synthesized for the sake of completeness.
Glacial landforms and deposits
Five types of glacial landforms (cirques, glacial steps, moraine ridges, glacial deposits and cemented glacial deposits) have been identified. For their representation, purple has been used to combine linear symbols with polygons. Six glacial cirques formed at the border of a generally smooth summit relief, with aspects varying from northeast to northwest. The crest ridges of five of the cirques (Ezerište, Čaušica, Bistra III, Bistra II and Bistra I, following the terminology of Vasilevski, 2011) developed between ∼1950 and 2000 m a.s.l., while Medenica is located at ∼2100 m a.s.l. The coalescence of several cirques in the same sub-basin explains the irregularity of the cirque crest-line. Most cirques present vertical backwalls and lateral sides, whilst central over-deepened depressions and valley steps are hardly visible, since they are buried by detrital deposits of different origins. Evidence of ongoing toppling rockslides was observed in the rock wall of the Bistra I1 cirque.
Although partly covered by glacial debris and incised by torrents, the only valley step presumably shaped by glacial erosion is located at the base of the valley descending from the Čaušica cirques. As a result, the uppermost part of the relief forms a hanging valley above the fluvial plain.
Glacial deposits cover ∼2.6 km 2 in the entire mapped area, and mainly consist of poorly sorted debris with grain sizes varying from cobbles to boulders. The deposit is a clast-supported massive diamicton with a limited sandy-gravel matrix in pockets. These deposits cover the over-deepened cirque depressions down the slopes toward the valley. A glacial deposit infills the valley depression at the base of the slope descending from several cirques. The thickness of this deposit is in the 2-8 m range, and the bedrock crops out locally. Lateral and frontal moraines are evident in each sub-basin analysed. The frontal moraines exhibit both regular arcs and sinuous-shaped ridges, whereas lateral moraines consist of single ridges elongated for several hundreds of metres. Most frontal moraines are composed of single arcs/ridges, and nested frontal moraines are rarely observed. In some cases, large boulders (up to 2 m in diameter) stand on moraine ridges or on the surface of sparse glacial deposits interpreted as erratics (not represented in the map). The most relevant glacial deposits in the mapped area are described here.
The glacial deposits related to the Ezerište cirque cover ∼0.63 km 2 with several lateral and frontal moraines. The lower moraine, located at ∼1720 m a.s.l., was formed by a glacial tongue that flowed northward in the narrow valley between the Planinica and Golem Rid crests. The northern and western sides of the deposit are bounded by a long and almost continuous lateral moraine. Several moraine arcs on the southern and eastern sides mark the occurrence of retreat and/or stillstand phases of a small cirque glacier flowing eastward from Ezerište. A number of nested moraines were mapped between 1835 and 1845 m a.s.l. in the Lekočere area.
Glacial deposits from the Čaušica cirque cover a total of ∼0.36 km 2 . The lowest part of this deposit is sinuously shaped according to four lobes whose corresponding fronts are respectively at 1775, 1760, 1760 and 1730 m a.s.l. from north to south. Inner frontal moraines are present at 1800, 1830, 1840, 1850 m a.s.l (Figure 3).
Glacial deposits at Bistra IV cover only 0.05 km 2 . Here, two frontal moraine ridges bordering the cirque area are present at 1840 and 1860 m a.s.l. On the northernmost side there is the lowest frontal moraine ridge of the study area at 1690 m a.s.l.
Glacial deposits in the Manastirište area extend for 0.8 km 2 , draping a north-south oriented doline field, partially filling the depressions and giving rise to a highly irregular surface morphology. These deposits extend upvalley to partly infill the Bistra II cirque, but were probably related also to the Bistra I and III cirques at the time of their formation. Frontal moraine ridges are present at 1760, 1780, 1810, 1860 m a.s.l.
The deposits in the Bistra I cirque cover 0.06 km 2 and two frontal moraine ridges are present at 1780 and 1700 m a.s.l.
The southernmost glacial deposits of the area westward of Medenica cover 0.3 km 2 from 2100 to 1680 m a.s.l. The lowest frontal moraine ridge is at 1790 m, while the highest ridges are at 1950, 1970 and 1990 m a.s.l.
Glacial deposits on the western sector of the Bistra mountain spread for 0.1 km 2 in the Trebiška Rupa area up to 1890 m a.s.l. Five frontal moraine ridges are present between 1900 and 1930 m a.s.l.
In the Juročka Češma area the glacial deposits discontinuously cover ∼0.5 km 2 with the lowest frontal moraine at 1875 m a.s.l. and the highest at ∼1930 m a.s.l.
A very small outcrop of cemented deposits ( Figure 4) of glacial origin is located only at ∼1875 m north of Gorna Korija. It is composed of sub-rounded clasts within carbonate cement.
Periglacial and nival deposits
Two types of periglacial and nival landforms (rock glaciers and pronival ramparts) have been identified. For their representation, linear symbols with polygons have been combined using the pink colour for rock glaciers and the blue colour for pronival ramparts. Both the rock glaciers and the pronival ramparts extend for ∼0.26 km 2 .
A first group of rock glaciers, those in the Gorna Korija area, is formed by three coalescent landforms extending between 1820 and 1760 m a.s.l. In the Manastirište area, two distinct groups of rock glaciers were mapped, the former consisting of three small rock glaciers whose frontal elevations are between ∼1816 and 1810 m a.s.l.; the latter consists of three landforms whose frontal elevations are between ∼1776 and 1791 m a.s.l. The longest mapped rock glacier is located in the Medenica area, where a tongue-shaped form extends for 950 m from a front at ∼1900 m a.s.l.
The rock glaciers present tongue-shaped forms, and only in some cases have spatulate (sensu Humlum, 1982) geometry. Interestingly, the majority of rock glaciers have their rooting zone inside glacial accumulation and terminate on the inner side of frontal moraines. Furthermore, a large and multilobate rock glacier in the area of the Medenica cirque shows stratigraphic overlapping onto a glacial deposit. Here, the south-east verging ridges cut a right latero-frontal moraine along the south-west directed lobe.
Only one pronival rampart was mapped in the Manastirište area at the foot of a north-facing slope at ∼1780 m a.s.l. Alongside its sedimentological characteristics, the distal/proximal slope dip of the ramp and its development in plan form were taken into account to distinguish this feature from an embryonic single-ridge rock glacier (Hedding, 2016).
In the case of the mapped landform, the distal slope of the ramp is less inclined than the proximal one, the crest exhibits a single and slightly curved shape, and the deposit presents an open-work fabric (essentially composed of angular clasts with very low amounts of fines). Despite their non-univocal diagnostic characteristics (Hedding, 2016, and references therein), these observations can be considered reliable indicators of debris accumulation at the base of a long-lasting snow cover.
Other mountain landforms
Six alluvial types of landforms (colluvial-alluvial deposits, alluvial fans, debris flow fans, marsh deposits, gullies and surfaces affected by rill erosion) have been identified. Lines and polygons in different shades of green have been used for their representation. The areas at the base of the mountain slopes are covered with extensive alluvial deposits transported by seasonal or ephemeral creeks. These are flat or slightly inclined surfaces, locally exhibiting sinkholes and traces of channelization. Fan-shaped alluvial accumulations have formed locally where temporary stream transport reaches the slope base. The mapped alluvial fans consist of both single and coalescent landforms. In general, sandy gravel with rare decametric clasts is the dominant alluvial material.
The presence of several moraine ridges at the bottom of the valley favoured the formation of marshes via damming of surface waters. Marsh development was preceded by the existence of a lake that eventually underwent progressive silting. The mapped marshes are mainly concentrated in the Lekočere area ( Figure 5) and in the northernmost part of the study area, and are located from north to south at 1810, 1845, and 1835 m a.s.l. A small marsh at Bistra III is currently located between two moraine arcs at 1850 m a.s.l.
Three gravitational types of landforms (rockfalls, debris cones, scree slopes) have been identified at the foot of steep slopes and rock walls of the glacial cirques. Red-coloured polygons for active forms and orange-coloured polygons for inactive forms have been used for their representation.
In the Bistra Mountain, the majority of outcropping lithotypes (marbles, dolomitic marbles and limestones) have a carbonate composition. The combination of these rock types and abundant precipitation has favoured karst dissolution, shaping the relief at various scales. A karst plateau can be recognized in the long-wavelength component of the relief. Micro-scale solution features (e.g. karren) are not frequent, and meso-scale landforms such as sinkholes, dolines and small blind valleys are barely visible, since they are almost totally covered by glacial till and alluvial-colluvial deposits. Yellow-coloured polygons have been used for their representation.
Ice-surface reconstruction and ELA calculation
The survey of glacial landforms made it possible to reconstruct the glacier geometry ( Figure 6) derived either by interpolation and extrapolation of mapped glacial landforms (e.g. moraines, lateral steps in valley flanks, overall valley topography), or by inferred paleoglacial limits (i.e. freehand drawing of glacier margins) (see guidelines in Porter, 1975). The calculation of the ELA of reconstructed glaciers is a classic method used to retrieve the relative chronology of different glacial advances/retreats in the same area and across the surrounding regions. This is because glaciers respond to changes in air temperature and precipitation by adjusting their mass balance, and thereby shifting their ELA. It is assumed that similar ELAs ultimately correspond to the same climatic phase (Bakke & Nesje, 2011;Oerlemans, 2005, and references therein). We adopted the ELA calculation toolbox (Pellitero et al., 2015) to automatically derive the ELA value of the reconstructed glaciers, by applying the classic area-altitude balance ratio (AABR) method (Osmaston, 2005). A ratio value of 1.6 was used, which corresponds to the average obtained for existing glaciers in other Mediterranean mountains (Rea, 2009;Rea et al., 2020).
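A simplified stand-in for the AABR calculation performed by the Pellitero et al. (2015) toolbox is sketched below: for a reconstructed glacier hypsometry, the ELA is the altitude at which the area-weighted altitude sum above it balances the balance-ratio-weighted sum below it. The elevation bands and areas used here are hypothetical and only illustrate the procedure.

```python
import numpy as np

def ela_aabr(band_midpoints_m, band_areas, balance_ratio=1.6, step_m=1.0):
    """Area-Altitude Balance Ratio (AABR) ELA from a reconstructed glacier hypsometry.

    The ELA is taken as the trial altitude where the area-weighted altitude sum above
    it equals the balance ratio times the corresponding (positive) sum below it.
    """
    z = np.asarray(band_midpoints_m, dtype=float)
    a = np.asarray(band_areas, dtype=float)
    best_ela, best_misfit = None, np.inf
    for ela in np.arange(z.min(), z.max(), step_m):
        above = np.sum(a[z > ela] * (z[z > ela] - ela))
        below = np.sum(a[z <= ela] * (ela - z[z <= ela]))
        misfit = abs(above - balance_ratio * below)
        if misfit < best_misfit:
            best_ela, best_misfit = ela, misfit
    return best_ela

# Hypothetical hypsometry of a small cirque glacier (50 m bands, areas in km^2).
bands = np.arange(1725, 2025, 50)          # band midpoints, 1725 ... 1975 m a.s.l.
areas = [0.02, 0.06, 0.10, 0.08, 0.05, 0.02]
print(ela_aabr(bands, areas, balance_ratio=1.6))
```

With a balance ratio of 1 the same procedure reduces to the simple area-weighted mean altitude, which is why the choice of ratio (here 1.6, after Rea, 2009) matters for the comparison of ELAs across studies.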
In the northern sector of the Ezerište area, following a phase that formed a well-developed glacier (∼1500 m long), deglaciation is punctuated by some readvances/standstills marked by frontal moraines. More specifically, an older phase with an ELA of 1813 m a.s.l. can be recognized, followed by two phases with very similar ELAs (1846 m, 1847 m and 1850 m a.s.l., respectively). It is worth noting that the moraines in the area of the Ezerište marsh, grouped in a single phase, are very close to each other and locally appear as nested moraines. Nested frontal moraines are also present in the Čaušica area, where the ELA values are 1807 m, 1848-1855 m and 1863 m a.s.l.
In the area of Trebiška Rupa, four frontal moraines can be recognized along a valley sector that is ∼1200 m long and NW oriented. The ELA values from the older to the younger moraines vary from 1916 to 1935 m a.s.l. The results of this analysis are shown in Table 1.
Discussion
The geomorphological map of the Bistra Mountain is the first step of a wider project aimed at reconstructing the relationship between climate change and geomorphic response in this area. The results of this study can be compared with other data from the mountains of North Macedonia (i.e. the Jablanica, Pelister and Galičiča mountains) and other sectors of the Balkans.
Starting from the mapped frontal and lateral moraines, the paleoshape of the single glaciers was reconstructed ( Figure 6), and the calculated ELAs (Table 1) can be grouped according to three possible glacial oscillations: ca. 1750-1800 m a.s.l. (G0 phase), ca. 1850-1900 m a.s.l. (G1), and 1800-2010 m a.s.l. (G2). In the Ezerište and Čaušica areas, the G1 phase is characterized by nested moraines very close to each other, which can be interpreted as two pulses of the same glacial phase (G1a and G1b in Figure 6 and Table 1). The G2 phase appears to have formed glaciers in almost all the areas examined, and it is evident that most of the glaciers were confined inside the cirques (not overcoming the threshold) or consisted of a small ice remnant at the base of the rockwall (e.g. Čaušica).
Four frontal moraines in the area of Trebiška Rupa can be recognized along a valley sector ∼1200 m long and NW oriented. The ELA values from the older to the younger moraines vary from 1916 m to 1935 m a.s.l.. This suggests that after the oldest phase, tentatively assigned to G1, the valley deglaciation was probably characterized by standstill phases leading to the formation of recessional moraines that were very close to each other.
The geomorphological survey evidenced that after the G2 glacial phase a non-glacial cold period affected the area. In some sub-basins (Medenica I, Bistra II and Čaušica) this led to the formation of permafrost inside glacial accumulations or scree deposits, eventually forming rock glaciers instead of small glaciers. Rock glaciers most likely co-existed with the small cirque glaciers at ∼2000 m a.s.l. in the upper part of Medenica. This phenomenon suggests a cold phase characterized by drier conditions, with summer precipitation lower than the limit for the presence of glaciers (Ohmura et al., 1992). Persistent snow accumulation only took place at higher elevations, while most debris accumulations in other sectors of the slopes were exposed to atmospheric cooling. These conditions (cold air temperature and limited rainfall) are the most favourable for permafrost formation (French, 2017).
Grouping of the reconstructed ELAs was challenging because their elevation distribution is in some cases inconsistent with a clear sequence of glacial phases across the valleys examined. This probably depends on the topography and on the small size of the former glaciers, that is, mainly small and steep cirque glaciers with very short or no tongues. Owing to these characteristics, calculation of the ELAs strongly depends on the shape of the reconstructed paleo-glaciers and on the shielding from solar radiation exerted by the slopes (for example, in narrow cirques). Furthermore, the features of the Bistra mountain relief may have allowed snow to accumulate on the summit plateau and then be conveyed into the cirques through avalanches and wind-driven snow transport.
Accordingly, a morphostratigraphical approach was adopted to disentangle the ambiguous cases, considering that in the Ezerište valley there are three main advances/standstills and possible pulses in the same phase.
In order to check the reliability of our reconstructions and in the light of a potential generalization of the results, we tried to compare the ELAs of our glacial phases with data from the literature of the Balkan area.
The ELAs of the Last Glacial Maximum (LGM) and of the Lateglacial (Oldest Dryas: OD and Younger Dryas: YD) in the Balkan area are reported in Table 2.

Table 1. ELA values calculated following Pellitero et al. (2015), applying the AABR method with a ratio value of 1.6.

Glacier      Phase   ELA (m a.s.l.)
Bistra I1    G2      1784
Bistra I2    G2      1808
Bistra II1   G2      1821
Bistra II2   G2      1813
Bistra II3   G2      1890
Bistra II4   G2      1830
Bistra III1  G2      1880
Bistra IV    G0      1751
Čaušica1     G0      1807
Čaušica5     G1a     1848
Čaušica6     G1b     1855

The ELAs of the three glacial phases described for the Bistra Mountain can be compared to the values derived from previous studies (Figure 7). A recalculation of all the ELA values with a common and semi-automatic method would certainly be necessary for a more accurate comparison (Rea et al., 2020; Ribolini et al., 2018); however, the Balkan ELAs, at least those of the North Macedonian mountains, show an increase with longitude. This is particularly evident for the phases attributed to the Younger Dryas. In this respect, the ELAs retrieved from the paleo-glaciers of the Bistra mountain appear compressed in a narrow elevation range, with a good correspondence between the ELA of the G0 phase and the one attributed to the LGM in the Jablanica Mountains (ca. 50 km from Mt. Bistra). The 100-150 m of difference between the ELAs of the G0 phase and those relative to the LGM in the Galičiča Mountains and in the Šara Range could be due to the high sensitivity of the ELA, to inaccuracies in the reconstruction of very small glaciers, or to the different methods adopted for its calculation. Excluding the values of Mt Orjen and of the Greek mountains, the other ELAs do not show a coherent variation with latitude.
Accepting as valid the hypothesis of a correspondence between G0 and LGM, it is evident that the two Lateglacial phases of readvance/standstill characterized the deglaciation of the Bistra Mountains. According to the regional results, these two phases can be tentatively attributed to the cold events of the Oldest Dryas and of the Younger Dryas.
Conclusions
This map is to be intended as a basis for ongoing works where dating techniques will constrain the recognized glacial phases, improving knowledge of the environmental evolution of this sector of the Balkans. In general, this research aims at adding a new element to the complex picture of glacier evolution in the circum-Mediterranean mountains. The results obtained by comparing data from different terrestrial and marine proxies (i.e. speleothems, paleolimnology, biostratigraphy of marine cores among others) may contribute to the study of climate evolution during the Late Pleistocene.
The outcomes of the geomorphological survey of the Bistra mountain confirm that this region records surface evidence of the most striking climate variations of the Late Pleistocene, namely the formation of glaciers. Indeed, although this area receives, and has received (Rea et al., 2020), a limited amount of precipitation because of the orographic effects caused by the reliefs facing the Adriatic Sea (e.g. the Montenegro mountains), three phases of glacial advance/standstill are recognized by mapping frontal moraines. An ELA comparison across the Balkans supports a tentative chronological attribution to the Last Glacial Maximum and to the following cold events that characterized the Pleistocene termination (Oldest Dryas and Younger Dryas) (see Table 2 for details and references; see Figure 1 for locations; LGM: Last Glacial Maximum; OD: Oldest Dryas; YD: Younger Dryas). Furthermore, the results point out that Mount Bistra was, at some point in its climatic history, close to the climatic limit for the existence of glaciers. Depending on local topoclimatic factors, this equilibrium could shift towards glacial or periglacial conditions, with the consequent formation of glaciers (moraines) or permafrost features (rock glaciers).
Calculation of the ELA values for the reconstructed glaciers made it possible to implement a dataset that could be used for future regional comparisons. At first examination, a correlation seems to emerge with the ELA values of other glaciers in the Balkan region, at least for the oldest phase identified in the Bistra Mountain (G0-LGM). The distribution of ELA values on a regional scale also seems to agree with general circulation patterns, which in this Mediterranean area are dominated by sources of humidity in the Adriatic Sea and by the orographic barrier formed by the Balkan mountains.
Software
Google Earth ® was used to perform remote sensing analysis. QGIS 3.10 was used to draw the geomorphological features, to create the geodatabase and to obtain the final layout. ArcGis10 ® was used to run the Pellitero tool for ELA calculations. | 6,456.8 | 2021-06-16T00:00:00.000 | [
"Geography",
"Geology",
"Environmental Science"
] |
Business Cycles, International Trade and Capital Flows: Evidence from Latin America
This paper adopts a flexible framework to assess both short- and long-run business cycle linkages between six Latin American (LA) countries and the four largest economies in the world (namely the US, the Euro area, Japan and China) over the period 1980:I-2011:IV. The results indicate that within the LA region there are considerable differences between countries, with success stories coexisting with extremely vulnerable economies. They also show that the LA region as a whole is largely dependent on external developments, especially in the years after the great recession of 2008 and 2009. The trade channel appears to be the most important source of business cycle co-movement, whilst capital flows are found to have a limited role, especially in the very short run.
Introduction
It is well known that macroeconomic volatility generates both economic and political uncertainty, with detrimental effects on investment and consumption plans and, ultimately, on future economic growth (Acemoglu et al., 2003) and aggregate welfare (Athanasoulis and van Wincoop, 2000). There is therefore considerable interest, among academics as well as policy-makers, in shedding light on the sources of output fluctuations, especially in the new economic environment characterised by a much greater role played by emerging market economies. Stronger international financial and trade linkages have affected the relative importance of external, regional and country-specific factors in driving national business cycles, with implications for the design of effective stabilisation policies (Fatás and Mihov, 2006). Economic theory does not provide unequivocal predictions: stronger linkages could result either in a higher or a lower degree of business cycle co-movement, depending on whether or not demand- and supply-side (as well as wealth) effects dominate over increased specialisation of production through the reallocation of capital (Baxter and Kouparitsas, 2005; Imbs, 2006; Kose et al., 2003, 2012). This cannot be established ex ante: it is essentially an empirical question. A knowledge of cross-country spillover effects is especially relevant for emerging countries because of their higher degree of volatility compared to more mature economies. According to Loayza et al. (2007), both internal and external factors explain why emerging economies are so volatile: i) the intrinsic instability induced by the development process itself; ii) the lack of effective mechanisms (such as well-functioning financial markets and proper macroeconomic stabilisation policies) to absorb external fluctuations; iii) the exposure to exogenous shocks in the form of sudden capital inflows/outflows and/or large changes in the international terms of trade.
The Latin American (LA) economies in particular have experienced a remarkable sequence of booms and busts in the last three decades. After the debt crisis of the 1980s, most countries in the region benefited from huge capital inflows (with resulting high growth rates) until the Russian crisis in the late nineties led to their sudden drying up; then, in the early years of the following decade, higher liquidity, a dramatic rise in commodity prices and low risk premia created a particularly favourable macroeconomic and financial environment in the region and again generated robust growth (Österholm and Zettelmeyer, 2007; Izquierdo et al., 2008). Therefore the question has been asked whether there has been a decoupling of the business cycles of the industrialised countries and the LA region, the latter having become an increasingly autonomous source of growth for the world economy.
Most of the existing literature on international business cycles focuses on the industrial countries, specifically the Group of Seven (Bagliano and Morana, 2010), Europe (Artis et al., 2004), East Asia and North America (Helbling et al., 2007), and Western Europe and North America (Mody et al., 2007). There are, however, a few studies on LA, differing in the set of countries examined and the adopted econometric methodology, and providing mixed evidence. Focusing on the average behaviour of the aggregate LA region, Izquierdo et al. (2008) find that external shocks account for a significant share of the variance of regional GDP growth. Similar results are reported by Österholm and Zettelmeyer (2007) for both the LA region as a whole and its individual countries, whilst Aiolfi et al. (2010) identify a sizeable common component in the LA countries' business cycles, suggesting the existence of a regional cycle. By contrast, Hoffmaister and Roldos (1997) conclude that domestic country-specific aggregate supply shocks are by far the most important source of output fluctuations in the LA countries. Kose et al. (2003) find that country-specific factors explain the largest share of the variance of output in these countries, with the exception of Bolivia, where the world component is more important than the regional and country-specific ones. Finally, Boschi and Girardi (2011) report that domestic factors account for by far the largest share of domestic output variability in six major LA countries, and that regional factors are more important than international ones.
None of the aforementioned papers includes the great recession of 2008 and 2009. By contrast, the present study examines the last three decades to assess the relative role of country-specific, regional and external shocks in explaining business cycle fluctuations in six major LA economies (namely, Argentina, Brazil, Chile, Mexico, Peru and Venezuela) and the LA region as a whole, with the aim of shedding light on the role of bilateral trade flows and financial linkages in business cycle co-movements between the LA region and its main economic partners.
Building on the work of Diebold and Yilmaz (2012), we specify a very flexible empirical model enabling us to analyse the propagation of international business cycles without any restrictions on the directions of short- and long-run spillovers or the nature of the propagation mechanism itself. Using quarterly data from 1980:I to 2011:IV, we document a high degree of heterogeneity among the LA countries: while Argentina, Mexico and Peru appear to be increasingly dependent on external developments as a result of the great recession, Venezuela seems to be influenced mainly by the LA regional business cycle, with only Brazil showing a decreasing role of the external factors. As for the LA region as a whole, our results indicate that it can be characterised as a small open economy largely dependent on external developments. This applies especially to the years following the great recession of 2008 and 2009, contradicting the so-called decoupling hypothesis.
In particular, our findings imply that the goods trade channel is the most important source of these linkages. Capital flows also affect business cycle co-movements, but their role is limited, especially in the very short run. The disaggregate analysis focusing on their components (debt, portfolio equity and foreign direct investment flows) reveals a negative effect of portfolio equity flows on the degree of business cycle synchronisation, as predicted by standard international real business cycle models with complete markets. By contrast, short-term capital and foreign direct investment flows reinforce in the short run the role of the trade channel and make the LA region more vulnerable to shocks from abroad, consistently with recent empirical evidence (e.g., Imbs, 2006, 2010).
The layout of the paper is as follows. Section 2 describes the methodology used to assess the propagation mechanism of international business cycles. Section 3 describes the data and presents the empirical results based on the forecast error variance decompositions for the individual LA countries and the LA region as a whole. Section 4 provides evidence on the role of financial and trade linkages. Section 5 offers some concluding remarks.
The Empirical Framework
We focus on output growth in order to analyse the dynamic relationships between the LA region and the rest of the world as well as the intra-area linkages among countries belonging to that region.
Given the increasing degree of integration of the global economy, it is essential to consider possible linkages with a number of foreign economies. It is equally important to allow for time variation, since a fixed parameter model is not likely to capture possibly important changes in the business cycle propagation mechanisms resulting from globalisation. 1 Consequently, the modelling approach chosen here differs from previous ones in two ways.
First, it is flexible enough to accommodate possible nonlinear shifts in the propagation of international business cycles; second, it is based on analysing linkages with the output growth rate of various economies outside the LA region rather than a number of macroeconomic variables for a single foreign country (typically the US). Therefore we include the US as the main driving force behind business cycle co-movements in the LA region (see the literature on the US "backyard", e.g. Ahmed, 2003; Canova, 2005; Caporale et al., 2011), but also the Euro area, because of its historical trade linkages with the LA region, as well as Japan (given the financial linkages documented by Boschi, 2012, and Boschi and Girardi, 2011) and China, whose trade linkages with the LA region have become much stronger in recent years (Cesa-Bianchi et al., 2011).

1 Including additional variables (such as interest rates, exchange rates, consumption or investment) for a wide range of countries would result in a system whose dimensions would not be manageable in the standard Vector AutoRegression (VAR) approach followed here. Even advanced econometric approaches, such as the Global VAR (see Cesa-Bianchi et al., 2011, and Boschi and Girardi, 2011) or dynamic factor models (as in Kose et al., 2012, among others), would not be a fully satisfactory modelling strategy, since they belong to the class of (linear) time-invariant models.
As in Diebold and Yilmaz (2012), the econometric framework is based on the following covariance-stationary Vector AutoRegression (VAR) model, estimated over rolling windows: the sample initially spans the period from the first available observation to θ, and then both its starting and ending periods are shifted forward by one data point at a time. As pointed out by Granger (2008), linear models with time-varying parameters are actually very general nonlinear models, and therefore the chosen framework is ideally suited to analysing the issues of interest.
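A rolling-window VAR(p) of this kind takes the standard form

```latex
y_t \;=\; c^{(k)} \;+\; \sum_{j=1}^{p} \Phi_j^{(k)}\, y_{t-j} \;+\; u_t^{(k)},
\qquad t = k, \ldots, k + \theta - 1,
```

where y_t collects the n = 10 output growth rates (the six LA countries plus the US, the Euro area, Japan and China), k indexes the rolling estimation window, and u_t^(k) are reduced-form residuals with covariance matrix Σ_u(k); the lag order p and the exact notation here are illustrative rather than taken from the original specification.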
Innovation Accounting
Examining all the effects of the lagged variables in a VAR model is often both difficult and unnecessary for the purposes of the analysis (Sims, 1980). Rather, it is more convenient to resort to some transformations of the estimated model (1) in order to summarise the dynamic linkages among the n variables under investigation. In the business cycle literature, a metric often used to measure the extent of business cycle synchronisation is the sum of the variance shares of different classes of shocks, such as country-specific, regional or global sources of economic fluctuations (see Kose et al., 2012, among others). 2 Since the reduced-form residuals u's are generally correlated, a common practice to obtain uncorrelated shocks is to use the Choleski decomposition of Σ_u(k). Despite its straightforward implementation, this method has the drawback of being sensitive to the ordering of the variables in the system, so that all possible permutations should be considered in the dynamic simulations for a thorough assessment. A popular alternative is provided by the Generalised Forecast Error Variance (GFEV) decomposition (Pesaran and Shin, 1998). This approach estimates the percentage of the variance of the h-step-ahead forecast error of the variable of interest that is explained by conditioning on the non-orthogonalised shocks, while explicitly allowing for contemporaneous correlations between these shocks and those to the other equations, as in Pesaran and Shin (1998). In order to be able to interpret the results, we follow Wang (2002) and rescale (3) using the total variance in the generalised rather than in the orthogonal case (for all k and h), that is, the sum of the variance decompositions in (4) is normalised to unity.

2 Therefore, while we analyse the innovation (or unsystematic) part of the series, represented by the residual of the estimated model, they decompose its systematic part. The limitation of their approach, namely a Bayesian dynamic latent factor model, is that it does not allow the identification of the geographical origin of the factors affecting domestic business cycles, but rather of the world, regional and country-specific components of a series.
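The generalised variance shares underlying this procedure follow Pesaran and Shin (1998); written out as a sketch (our notation, not necessarily that of equations (3)-(5)), they read

```latex
\theta_{ij}^{k}(h) \;=\;
\frac{\sigma_{jj}^{-1} \sum_{l=0}^{h-1} \bigl( e_i' A_l \Sigma_u e_j \bigr)^{2}}
     {\sum_{l=0}^{h-1} e_i' A_l \Sigma_u A_l' e_i},
\qquad
\tilde{\theta}_{ij}^{k}(h) \;=\;
\frac{\theta_{ij}^{k}(h)}{\sum_{j=1}^{n} \theta_{ij}^{k}(h)},
```

where A_l are the moving-average coefficient matrices of the VAR estimated over window k, Σ_u is the residual covariance matrix with diagonal elements σ_jj, and e_i are selection vectors; the second expression is the rescaling that makes each row of the spillover table sum to unity.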
Measuring Spillover Effects at Country and Regional Level
By computing the decomposition (4) for all variables in system (1), for a given recursion k and a given simulation horizon h, we obtain the n × n spillover table (5) (in the terminology of Diebold and Yilmaz, 2012),3 which makes it possible to measure the extent to which two or more variables of the system are connected to each other. Diebold and Yilmaz (2012) show that a synthetic measure of the spillovers received by (transmitted to) country l from (to) all other countries can be obtained by summing by columns (rows) all the off-diagonal elements in the l-th row (column). We adapt their framework by dividing the variables of system (1) into two distinct subsets (which we label the regional and external groups), with dimensions n1 = 6 and n − n1 = 4, respectively.

We compute country-specific, δ_cs^k(h), regional, δ_rs^k(h), and external, δ_es^k(h), shock shares from the corresponding blocks of the spillover table.

3 By construction, the elements in each row of (5) sum up to unity, so that the total variance of the system is equal to n.

Moving from a single country to a regional perspective, we define the region-specific shocks in (6), whilst the aggregate external shocks (that is, the innovations originating outside the LA region) are computed in (7); both (6) and (7) are normalised so as to lie in the [0, 1] interval.

In order to identify the direction of the linkages between the two (aggregate) blocs of countries, we define the regional net spillover index, γ_net^k(h), in (11) as the difference between the variability transmitted to and received from the external bloc of the system, so that positive (negative) values of (11) indicate that the region is a net transmitter (receiver) of variability to (from) the outside. Using condition (11), it is straightforward to obtain a breakdown over the individual countries forming the external bloc, so that we can define n − n1 pairwise regional net spillover indexes, γ_cty^k(h), where cty = n1 + 1, ..., n.
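As an illustration of how the spillover table is aggregated, the sketch below computes country-specific, regional and external shares and a regional net spillover measure from a row-normalised table. The exact normalisations in equations (6)-(12) are not shown in this excerpt, so the scaling here (for example, dividing the net index by the number of regional countries) is our assumption.

```python
import numpy as np

def spillover_shares(theta, n_regional):
    """Aggregate a row-normalised spillover table (rows: affected variable, columns: shock source).

    Returns, for the regional (first n_regional) countries, the country-specific,
    regional and external variance shares, plus a regional net spillover measure.
    """
    reg_rows = theta[:n_regional, :]
    own = np.diag(theta)[:n_regional]                      # country-specific shares
    regional = reg_rows[:, :n_regional].sum(axis=1) - own  # shocks from other regional countries
    external = reg_rows[:, n_regional:].sum(axis=1)        # shocks from outside the region
    to_external = theta[n_regional:, :n_regional].sum()    # variability transmitted abroad
    from_external = external.sum()                         # variability received from abroad
    net = (to_external - from_external) / n_regional
    return own, regional, external, net

# Toy 4-variable table (2 regional + 2 external countries); each row sums to one.
toy = np.array([[0.6, 0.1, 0.2, 0.1],
                [0.2, 0.5, 0.2, 0.1],
                [0.1, 0.1, 0.7, 0.1],
                [0.1, 0.1, 0.1, 0.7]])
print(spillover_shares(toy, n_regional=2))
```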
Before discussing the empirical findings, it is worth noting that both the γ and δ indices depend on the simulation span (through the index h) and on the estimation sample (through the index k). This is motivated by the need for a sufficiently flexible model specification to analyse the sources of business cycles in a period, such as the recent one, characterised by exceptionally large fluctuations. 4 Further, considering a wide range of simulation horizons enables us to obtain a dynamic picture of how cross-country business cycle linkages evolve when moving from the short to the long run, through a sequence of GFEV decompositions in which the conditioning information becomes progressively less important as the simulation horizon widens. 5
Assessing Business Cycle Co-movements in the LA Bloc: Country-specific and Regional Evidence
Data and Preliminary Analysis
We use quarterly real GDP series for six major LA countries (Argentina, Brazil, Chile, Mexico, Peru and Venezuela). As a preliminary step, we test for the presence of unit roots in the GDP series in logarithms.
ADF tests are performed on both the levels and the first differences of the series. In each case, we are unable to reject the null hypothesis of a unit root in the levels at conventional significance levels. On the other hand, differencing the series appears to induce stationarity. Standard stationarity tests corroborate this conclusion (these results are not reported to save space). Given the nonstationarity of the time series and the lack of an economic theory suggesting the number of long-run relationships and/or how they should be interpreted, it is reasonable not to impose the restriction of cointegration on a VAR model (Ramaswamy and Sloek, 1998). Thus, we have opted for a specification in first differences, since the focus of our analysis is on (time-varying) short-run linkages rather than secular trends (as, for instance, in Bernard and Durlauf, 1995).
More specifically, we choose size 80 for the rolling windows (i.e., 20 years of quarterly observations, 80 observations in all). This can be regarded as a compromise between stability and flexibility, as it turned out that a smaller window size makes the VAR models more unstable. 8 Such a choice implies that the complete set of recursions produces 48 different sets of VAR estimates.
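One way to reconcile the 48 recursions with the 1980:I-2011:IV sample and the 80-observation window, assuming a single observation is lost to first-differencing, is sketched below.

```python
# 1980:I-2011:IV gives 128 quarterly GDP levels, hence 127 growth observations;
# with an 80-observation rolling window this yields 127 - 80 + 1 = 48 recursions.
n_levels = (2011 - 1980 + 1) * 4
n_growth = n_levels - 1
window = 80
print(n_growth - window + 1)  # 48
```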
The GFEV decomposition analysis is then conducted over a simulation horizon of 20 quarters (5 years) over the period 2000:I-2011:IV.
The Role of Country-Specific, Regional and External Factors
The γ and δ indices defined in Section 2 above are tri-dimensional structures (surfaces) whose dimensions are given by the simulation horizon, the estimation window and the strength (or direction) of the linkage among the elements of the system. 9 Figure 1 shows the decomposition of output variability of the LA countries into country-specific, regional and external sources.
Figure 1
The results reveal significant differences between countries with respect to the relative to 17 (20) percent after 20 quarters. Finally, a slowly declining pattern is observed in the case of Mexico, in contrast to Venezuela, for which a highly erratic evolution over time is found, with the Argentine crisis of the early noughties translating into a sizeable increase (from about 30 to 60 percent after 20 quarters) in the contribution of regional factors.
Finally, the average effect of external factors (graph III in Panels A-F) is within a similar range to the one for the regional components (as also in Aiolfi et al., 2010), its minimum and maximum values being those for Venezuela and Peru respectively. As for the individual countries, the observed pattern for Argentina mirrors that of the country-specific component: the lowest value corresponds to the Argentine crisis, whilst the highest coincides with the first symptoms of the global crisis. In all LA countries the role of external factors increases in the most recent years, the single exception being Brazil, where business cycles have become less synchronised with those in the industrialised economies during the years of the great recession, the evidence suggesting therefore some partial decoupling.
Evidence from the LA Region
The variance decomposition for the LA region is computed as an (equally-weighted) average of individual country-specific figures. According to equations (9) and (10), it is based on a synthetic economy which is an "average" LA country, as in Izquierdo et al. (2008). Figure 2 shows the decomposition of regional output variability between region-specific (Panel A) and external sources (Panel B).
Figure 2
Regarding the regional sources of fluctuations, their relative importance vis-à-vis the external ones appears to diminish over the simulation horizon. The dominant role of external factors in the long run is found for all quarters. In particular, external factors account for about 30 percent of the long-run (20-quarter horizon) variance of LA GDP growth, consistently with the evidence in Österholm and Zettelmeyer (2007) and in Aiolfi et al. (2010).
As for the evolution over time of the estimated effects, the relative contribution of the two types of factors is remarkably stable up to the first half of 2008. With the onset of the global crisis, external factors appear to acquire an increasing role, especially at the very bottom of the global downturn (between mid-2008 and mid-2009), accounting for more than 50 percent of total variability in 2008:IV. Subsequently, following a partial recovery, idiosyncratic factors have regained some (but not all) of their former importance. Our findings therefore give support to previous evidence according to which the LA region is still characterised by heavy dependence on external factors and does not carry sufficient weight to affect the international business cycle with its own growth dynamics (Calvo et al., 1993; Izquierdo, 2008; Cesa-Bianchi et al., 2011), thus contradicting the so-called "decoupling" hypothesis (Helbling et al., 2007).
Further evidence is presented in Figure 3, which shows the difference between the variability transmitted to and received from the external bloc of the system as defined by (11).
There is a predominance of negative values for the rolling estimates (especially when considering long-run effects), suggesting that the LA region can be characterised as a net receiver of variability from the outside world. This applies even more strongly to the recovery period after the peak of the global crisis: the long-run net effect, after reaching a minimum of -17 percent in 2008:IV, is about -8 percent at the end of the sample.
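The "net receiver" reading can be illustrated with the kind of arithmetic behind equation (11): from a (row-normalised) variance-decomposition matrix, the share of external variance explained by LA shocks minus the share of LA variance explained by external shocks gives the net effect. The numbers below are made up for illustration.

```python
import numpy as np

# theta[i, j]: share of block i's forecast error variance explained by shocks to block j
blocks = ["LA", "external"]
theta = np.array([[0.75, 0.25],    # LA: 25% of its variance comes from outside
                  [0.08, 0.92]])   # external bloc: 8% of its variance comes from LA

received_by_la = theta[0, 1]
transmitted_by_la = theta[1, 0]
net_la = transmitted_by_la - received_by_la
print(f"net spillover of the LA region vis-a-vis the external bloc: {net_la:+.2f}")
# A negative value, as in Figure 3, marks the LA region as a net receiver of variability.
```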
Figure 3
However, net spillover effects vis-à-vis an aggregate "rest of the world" could hide underlying heterogeneity, which can only be detected by a more disaggregate analysis. Figure 4 presents the net pairwise spillover effects between the LA region and the US (Panel A), the Euro area (Panel B), Japan (Panel C) and China (Panel D).
Figure 4
Both short- and long-run effects appear to be very stable, especially in the case of China. In particular, the balance between volatility transmitted to and received from the outside world is negative for the LA region in most cases. This is largely true for the years of the great recession (2007–2009), during which the LA region suffered from the recessionary impulses coming from the most advanced economies (but not from China). In the most recent years, however, the overall picture seems to have changed significantly: the impact of business cycle conditions in the US, the Euro area and Japan has diminished, whilst the influence of the Chinese economy has increased. Overall, the disaggregate results provide no evidence of de-coupling; they also indicate that bilateral linkages with China have become stronger, making the LA region vulnerable not only to economic hardship in the industrialised economies but also to future developments in China, as already pointed out by Cesa-Bianchi et al. (2011).
The Role of Trade and Capital Flows
Since the study of Frankel and Rose (1998), a considerable body of empirical research (Imbs, 2006, 2010; Kalemli-Ozcan et al., 2009) has examined the effects of trade and financial integration on business cycle synchronisation. The positive effect of bilateral trade flows on the degree of international business cycle synchronisation has been widely established in the literature (see Imbs, 2006, among others) and is consistent with the theoretical predictions of the model developed by Kose and Yi (2006). As for capital flows, several studies give support to the view that financial integration increases the degree of business cycle correlations in cross-sections and over time (Imbs, 2006, 2010), whilst other papers (e.g., Kalemli-Ozcan et al., 2009) find that financially integrated economies have negative co-movements, as posited by standard international real business cycle models with complete markets (Backus et al., 1994). The sign of ψ_3 in (13) is thus the object of empirical scrutiny. We follow Frankel and Rose (1998) in constructing the bilateral trade intensity measure (tra), whereas capital flows (fin) are proxied by differences in net foreign asset positions. The rationale behind such a proxy for capital movements is that capital should flow between countries or regions with different or even opposite external positions (Imbs, 2006). In particular, we compute the net foreign asset position as the sum of net positions in debt (dbt), equities (eqt) and foreign direct investment (fdi), as in Caballero (2012), among others (fdi is computed using data only for the US, the Euro area and Japan; the series have been seasonally adjusted using the X-12 method).
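As a sketch of how such regressors can be constructed (the exact definitions used in the paper are not reproduced above, so both functions below are only one common variant, with hypothetical inputs):

```python
def trade_intensity(exports_ab, imports_ab, gdp_a, gdp_b):
    """Bilateral trade between a and b scaled by the size of the two economies
    (a Frankel-Rose-type measure; the paper's exact normalisation may differ)."""
    return (exports_ab + imports_ab) / (gdp_a + gdp_b)

def financial_linkage(nfa_a, nfa_b, gdp_a, gdp_b):
    """Capital-flow proxy: the absolute gap in net-foreign-asset positions
    (net debt + equity + FDI) relative to GDP, since capital should flow between
    economies with different or even opposite external positions."""
    return abs(nfa_a / gdp_a - nfa_b / gdp_b)

# With made-up numbers (billions of dollars):
print(trade_intensity(12.0, 9.0, 500.0, 700.0))       # -> 0.0175
print(financial_linkage(-60.0, 140.0, 500.0, 700.0))  # -> 0.32
```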
Estimation Results
Using spillover indexes rather than standard correlation coefficients makes it possible to analyse (time-varying) business cycle correlations in a much more flexible framework by distinguishing between co-movements at different forecast horizons. To see this, we start by mapping our spillover index to the (time-varying) correlation coefficients. Following Forbes and Rigobon (2002), we consider the least squares regression between the output growth rates of countries a and b, Δy_a = α + β Δy_b + u, which leads to expression (14). The term in square brackets on the RHS of the second expression in (14) is the share of output growth variability of country a explained by b. In terms of our framework, it is expressed by γ_ext in (10). Condition (14) implies that ρ* = ρ_b = γ_ext (for a given forecast horizon).
Accordingly, equation (13) can be recast with ρ* as the measure of business cycle synchronisation. The first step of the empirical analysis is based on standard correlation measures between ρ* and its main macroeconomic determinants. Table 1 shows that the unconditional correlation coefficients between the ρ*'s and the tra variables are positive and statistically significant for all forecast horizons considered, confirming the well-established finding that higher business cycle
synchronisation is associated with stronger trade intensity. One might argue that the positive correlation is spurious owing to the existence of factors correlated with both variables. When conditioning on fin, however, the magnitude (and the statistical significance) of the partial correlation coefficients remains virtually unchanged. By contrast, the unconditional correlations between the ρ*'s and fin turn out to be statistically insignificant. The same conclusion holds when considering the partial measure of association (conditioned on tra), even though the sign of the relationship in general becomes negative.
Table 1
Correlations are only partially informative as they cannot gauge causality between the regressand and the explanatory covariates. In order to delve deeper into the effects of bilateral trade and capital flows on business cycle synchronisation, we estimate equation (15). Typical external instruments for trade intensity are spatial characteristics (e.g. geographic proximity or the presence of common borders) and, for financial integration, institutional variables related to legal arrangements; as most of these instruments are constant over time, they cannot be used in a time series framework. In order to address endogeneity concerns, bilateral trade intensity and capital flows are therefore measured at the beginning of the period and treated as predetermined variables. Table 2 presents the estimation results of the baseline specification. Single, double or triple asterisks denote statistically significant coefficients at the 1, 5 or 10 percent level, respectively. We
also report robust standard errors (in parentheses) as well as some basic diagnostics for the chosen instruments (J statistics), the serial correlation of the residuals (DW) and the goodness of fit of the regression (adjusted R²).
Table 2
Overall, these findings reinforce the disaggregate results discussed in Section 3.3 and show that, in the presence of relatively weak financial linkages, the propagation of impulses from outside to the LA bloc has happened mainly through trade flows.
The apparent de-coupling of the LA bloc from the most advanced economies thus arises not only from trade being increasingly oriented towards China rather than its historical trading partners (namely the US and the Euro area; see Cesa-Bianchi et al., 2011), but also from a low degree of financial integration with the rest of the world economy.
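A bare-bones version of such a regression (a sketch only; the GMM/instrumenting details and the exact form of equation (15) are not reproduced here) regresses the synchronisation measure on lagged, predetermined trade and financial intensities with robust standard errors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical aligned quarterly series: rho_star from the rolling decompositions,
# tra and fin built as in the earlier sketch.
rng = np.random.default_rng(1)
idx = pd.period_range("2000Q1", periods=48, freq="Q")
df = pd.DataFrame({"rho_star": rng.uniform(0.0, 0.5, 48),
                   "tra": rng.uniform(0.0, 0.2, 48),
                   "fin": rng.uniform(0.0, 0.3, 48)}, index=idx)

X = sm.add_constant(df[["tra", "fin"]].shift(1)).dropna()        # predetermined regressors
y = df["rho_star"].loc[X.index]
res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # robust (HAC) s.e.
print(res.summary().tables[1])
```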
Extensions
In this section we present the results of a disaggregate analysis based on a breakdown of capital flows into debt, equity and FDI flows, with the aim of shedding light on which types of flows are behind stronger business cycle co-movements.
We first assess the role of these components by replacing fin with the disaggregated capital flows (entering the model individually). The results in Table 3 indicate that the trade channel, albeit dominant, is not the only one: capital flows can also affect the degree of international business cycle synchronisation in the short run (namely, up to the fourth simulation step). Moreover, portfolio equity flows have a negative effect on the degree of business cycle synchronisation, whilst that of debt and foreign direct investment is positive.
Table 3
As a further step, we consider a specification where the three types of flows enter the model simultaneously (Table 4). Over the first year of the simulation horizon its explanatory power is higher than that of its (nested) counterparts in Table 2 and Table 3, suggesting that debt, portfolio equity and foreign direct investment act as additional channels of transmission of shocks from abroad.
Our findings complement previous evidence for emerging markets according to which both trade and financial variables mattered prior to the global crisis (Blanchard et al., 2010), since we document that these factors largely explain business cycle co-movements over the last decade.
However, our framework makes it possible to go further and to highlight the relative strength of the different transmission channels in the short and long run: the increasing explanatory power of trade flows over the entire simulation span is largely corroborated, whereas capital flows affect business cycle co-movement in the short term, as the adjusted R² statistics show. Moreover, in the very short term debt and foreign direct investment have an opposite effect compared to equity portfolio flows.
While the result for eqt can be rationalised within the standard international real business cycle framework with complete markets, our findings for dbt and fdi suggest that short-term capital flows and the internationalisation of production through foreign direct investment may strengthen the role of the trade channel, making the LA region more prone to suffer from the propagation of shocks from abroad.
Table 4
This also implies that the statistical significance of the coefficient on equity portfolio flows after the first four quarters of the simulation span makes only a marginal contribution to explaining how the LA bloc and the rest of the world co-move.
Conclusions
This paper presents a flexible framework to assess both short- and long-run linkages between business cycles. Specifically, we extend the econometric approach of Diebold and Yilmaz (2012). For that purpose, we decompose macroeconomic fluctuations in domestic output growth rates into the following components: i) country-specific (idiosyncratic) factors; ii) regional factors, capturing fluctuations that are common to all countries belonging to the LA region; iii) external factors, which are related to business cycle developments outside the LA region. Most importantly, we are able to determine the direction and the intensity of the propagation mechanisms and therefore to establish to what extent the LA economies and the LA region as a whole have been dependent on (or influenced by) external developments over time.
Overall, the business cycle of the individual LA countries appears to be influenced by country-specific, regional and external shocks in a very heterogeneous way. Also, the LA region as a whole is strongly dependent on external developments. This conclusion holds especially for the years after the great recession of 2008 and 2009, ruling out any decoupling of the LA region from the rest of the world.
More specifically, we find a clear dominance of trade flows over financial linkages as a determinant of business cycle co-movements between the LA region and the foreign bloc. The apparent de-coupling of the LA area with respect to the most advanced economies in recent years thus seems to have been determined not only by increasing trade flows towards China but also by a low degree of financial integration with its main economic partners.
The decomposition of capital flows into their components (debt, portfolio equity and foreign direct investment flows) shows a negative effect of portfolio equity flows on the degree of business cycle synchronisation, consistent with the predictions of standard international real business cycle models with complete markets. In contrast, short-term capital and foreign direct investment flows tend to reinforce, in the short run, the role of the trade channel and the responsiveness of the LA region to external developments.
The proposed econometric approach is of more general interest, since it does not include any variables which are highly country- or region-specific and can thus also be used to investigate the relationship between co-movement across countries/regions or financial markets and their macroeconomic determinants. For instance, it could be applied to analyse the factors that have influenced the integration of the real economies in the European Monetary Union after the adoption of the euro, to assess the historical determinants of regional convergence/divergence dynamics within countries over time, or to conduct a macro analysis of market integration in a given financial segment. These are all interesting topics for future research.
Robust standard errors are reported in parentheses. | 7,463.4 | 2012-11-01T00:00:00.000 | ["Economics", "Business"] |
Simple all-optical FFT scheme enabling Tbit/s real-time signal processing
A practical scheme to perform the fast Fourier transform in the optical domain is introduced. Optical real-time FFT signal processing is performed at speeds far beyond the limits of electronic digital processing, and with negligible energy consumption. To illustrate the power of the method we demonstrate an optical 400 Gbit/s OFDM receiver. It performs an optical real-time FFT on the consolidated OFDM data stream, thereby demultiplexing the signal into lower bit rate subcarrier tributaries, which can then be processed electronically. ©2010 Optical Society of America OCIS codes: (060.4510) Optical communications; (070.2025) Discrete optical signal processing References and links 1. M. E. Marhic, “Discrete Fourier transforms by single-mode star networks,” Opt. Lett. 12(1), 63–65 (1987). 2. K. B. Howell, Principles of Fourier Analysis (CRC Press, 2001). 3. J. W. Cooley, and J. W. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Math. Comput. 19(90), 297–301 (1965). 4. A. E. Siegman, “Fiber Fourier optics,” Opt. Lett. 26(16), 1215–1217 (2001). 5. A. E. Siegman, “Fiber Fourier optics: previous publication,” Opt. Lett. 27(6), 381 (2002). 6. S. Kodama, T. Ito, N. Watanabe, S. Kondo, H. Takeuchi, H. Ito, and T. Ishibashi, “2.3 picoseconds optical gate monolithically integrating photodiode and electroabsorption modulator,” Electron. Lett. 37(19), 1185–1186 (2001). 7. H. Sanjoh, E. Yamada, and Y. Yoshikuni, “Optical orthogonal frequency division multiplexing using frequency/time domain filtering for high spectral efficiency up to 1 bit/s/Hz,” in Proceedings of Optical Fiber Communication Conference and Exhibit, (Optical Society of America, 2002), paper ThD1. 8. C. K. Madsen, and J. H. Zhao, Optical Filter Design and Analysis: A Signal Processing Approach (WileyInterscience, 1999). 9. B. H. Verbeek, C. H. Henry, N. A. Olsson, K. J. Orlowsky, R. F. Kazarinov, and B. H. Johnson, “Integrated fourchannel Mach-Zehnder multi/demultiplexer fabricated with phosphorous doped SiO2 waveguides on Si,” J. Lightwave Technol. 6(6), 1011–1015 (1988). 10. N. Takato, K. Jinguji, M. Yasu, H. Toba, and M. Kawachi, “Silica-based single-mode waveguides on silicon and their application to guided-wave optical interferometers,” J. Lightwave Technol. 6(6), 1003–1010 (1988). 11. S. Suzuki, Y. Inoue, and T. Kominato, “High-density integrated 1×16 optical FDM multi/demultiplexer,” in Proceedings of Lasers and Electro-Optics Society Annual Meeting (IEEE, 1994), pp. 263–264. 12. N. Takato, T. Kominato, A. Sugita, K. Jinguji, H. Toba, and M. Kawachi, “Silica-based integrated optic MachZehnder multi/demultiplexer family with channel spacing of 0.01-250 nm,”,” IEEE J. Sel. Areas Comm. 8(6), 1120–1127 (1990). 13. K. Takiguchi, M. Oguma, T. Shibata, and H. Takahashi, “Demultiplexer for optical orthogonal frequencydivision multiplexing using an optical fast-Fourier-transform circuit,” Opt. Lett. 34(12), 1828–1830 (2009). 14. A. J. Lowery, L. B. Du, and J. Armstrong, “Performance of optical OFDM in ultralong-haul WDM lightwave systems,” J. Lightwave Technol. 25(1), 131–138 (2007). 15. A. Lowery, and J. Armstrong, “Orthogonal-frequency-division multiplexing for dispersion compensation of longhaul optical systems,” Opt. Express 14(6), 2079–2084 (2006). 16. W. Shieh, and I. Djordjevic, OFDM for Optical Communications (Academic Press, 2010). 17. J. Armstrong, “OFDM for optical communications,” J. Lightwave Technol. 27(3), 189–204 (2009). 18. R. P. Giddings, X. Q. Jin, and J. M. 
Tang, "First experimental demonstration of 6Gb/s real-time optical OFDM transceivers incorporating channel estimation and variable power loading," Opt. Express 17(22), 19727–19738 (2009). 19. Q. Yang, S. Chen, Y. Ma, and W. Shieh, "Real-time reception of multi-gigabit coherent optical OFDM signals," Opt. Express 17(10), 7985–7992 (2009). 20. Y. Benlachtar, P. M. Watts, R. Bouziane, P. Milder, D. Rangaraj, A. Cartolano, R. Koutsoyannis, J. C. Hoe, M. Püschel, M. Glick, and R. I. Killey, "Generation of optical OFDM signals using 21.4 GS/s real time digital signal processing," Opt. Express 17(20), 17658–17668 (2009). 21. H. C. Hansen Mulvad, M. Galili, L. K. Oxenløwe, H. Hu, A. T. Clausen, J. B. Jensen, C. Peucheret, and P. Jeppesen, "Demonstration of 5.1 Tbit/s data capacity on a single-wavelength channel," Opt. Express 18(2), 1438–1443 (2010). 22. E. Yamada, A. Sano, H. Masuda, T. Kobayashi, E. Yoshida, Y. Miyamoto, Y. Hibino, K. Ishihara, Y. Takatori, K. Okada, K. Hagimoto, T. Yamada, and H. Yamazaki, "Novel no-guard-interval PDM CO-OFDM transmission in 4.1 Tb/s (50 × 88.8-Gb/s) DWDM link over 800 km SMF including 50-GHz spaced ROADM nodes," in Proceedings of Optical Fiber Communication Conference and Exposition and The National Fiber Optic Engineers Conference (Optical Society of America, 2008), paper PDP8. 23. S. Chandrasekhar, X. Liu, B. Zhu, and D. W. Peckham, "Transmission of a 1.2-Tb/s 24-carrier no-guard-interval coherent OFDM superchannel over 7200-km of ultra-large-area fiber," in Proceedings of European Conference on Optical Communication (IEEE, 2009), paper PD2.6. 24. A. D. Ellis, and F. C. G. Gunning, "Spectral density enhancement using coherent WDM," IEEE Photon. Technol. Lett. 17(2), 504–506 (2005). 25. Y. Ma, Q. Yang, Y. Tang, S. Chen, and W. Shieh, "1-Tb/s single-channel coherent optical OFDM transmission over 600-km SSMF fiber with subwavelength bandwidth access," Opt. Express 17(11), 9421–9427 (2009). 26. W. Shieh, H. Bao, and Y. Tang, "Coherent optical OFDM: theory and design," Opt. Express 16(2), 841–859 (2008). 27. T. Kobayashi, A. Sano, E. Yamada, Y. Miyamoto, H. Takara, and A. Takada, "Electro-optically multiplexed 110 Gbit/s optical OFDM signal transmission over 80 km SMF without dispersion compensation," Electron. Lett. 44(3), 225–226 (2008). 28. D. Hillerkuss, A. Marculescu, J. Li, M. Teschke, G. Sigurdsson, K. Worms, S. Ben Ezra, N. Narkiss, W. Freude, and J. Leuthold, "Novel optical fast Fourier transform scheme enabling real-time OFDM processing at 392 Gbit/s and beyond," in Proceedings of Optical Fiber Communication Conference and Exposition and The National Fiber Optic Engineers Conference (Optical Society of America, 2010), paper OWW3.
Introduction
The fast Fourier transform (FFT) is a universal mathematical tool for almost any technical field. In practice it either relates space and spatial frequencies (spatial FFT) or time and temporal frequency (temporal FFT). While the former can most easily be implemented in the optical domain by means of a lens, the time-to-frequency conversion in the optical domain is more intricate. Yet, it is exactly this conversion that is needed for next generation signal processing such as required for OFDM in optical communications. The importance of the optical FFT for the implementation of next generation processors has been recognized in the past, and direct implementations of the FFT in integrated optics technology have been suggested by Marhic et al. [1] and others. However, to this point these schemes are difficult to implement and stabilize, and they do not scale well with increasing FFT order.
In this paper, we introduce a new and practical implementation of the optical FFT. We discuss the feasibility, tolerances towards further simplifications and practical implementations, and we demonstrate the potential of the method by an exemplary implementation for a next generation OFDM system. We show that a single 400 Gbit/s OFDM channel can be demultiplexed into its constituent subchannels by optical means. The method is based on passive optical components only and thus basically provides processing without any power consumption. The scheme is therefore in support of a new paradigm, where all-optical and electronic processing interact synergistically, each providing its respective strengths. All-optical methods allow processing at the highest speeds with little if any power consumption, and electronic processing performs the fine granular processing at medium to low bit rates.
In Section 2 we introduce the new optical FFT. In Section 3 we show how the optical FFT and IFFT can be applied to OFDM transmission systems in order to enable processing of OFDM channels at very high bit rates without the limitations of the electronic circuits. In Section 4 we present the results of an experimental implementation of the optical FFT for demultiplexing a consolidated OFDM signal into its OFDM subchannels.
The optical FFT/IFFT
In this section we introduce the new optical FFT method. We first recapitulate conventional means for performing the optical FFT and differentiate it from the commonplace FFT as done with a computer. We then show how subtle re-ordering of the various FFT circuit elements leads to a significant reduction of the complexity. It also leads to a simple and practical optical circuit with an equivalent output. Afterwards we show how this circuit can be further simplified at the cost of a small interchannel crosstalk penalty by replacing one or more stages of the FFT circuit by standard optical (tunable) filters.
Background
The fast Fourier transform (FFT) is an efficient method to calculate the discrete Fourier transform (DFT) for a number of time samples N, where N = 2^p with p being an integer. The N-point DFT is given as X_m = Σ_{n=0}^{N−1} x_n exp(−i2πnm/N), transforming the N inputs x_n into N outputs X_m. If the x_n represent a time series of equidistant signal samples of a signal x(t) over a time period T, as shown in Fig. 1(a), then the X_m will be the unique complex spectral components of the signal x repeated with period T [2]. The FFT typically "decimates" a DFT of size N into two interleaved DFTs of size N/2 in a number of recursive stages [3], so that X_m = E_m + exp(−i2πm/N) O_m and X_{m+N/2} = E_m − exp(−i2πm/N) O_m. The quantities E_m and O_m are the even and odd DFTs of size N/2 for the even and odd inputs x_{2l} and x_{2l+1} (l = 0, 1, 2, …, N/2−1), respectively. Figure 1(b) shows the direct implementation of the FFT for N = 4 using electrical time sampling and signal processing. Marhic [1] and Siegman [4,5] have independently shown a possible implementation of an optical circuit which performs an FFT. Such an implementation for N = 4 is shown in Fig. 1(c). When using an optical circuit to calculate the FFT, the outputs X_m appear instantaneously for any given input combination x_n. Thus, in order to obtain the spectral components of a time series, the N time samples in the interval T must be fed simultaneously into the circuit. This can be achieved using optical time delays as a serial-to-parallel (S/P) converter, as shown in Fig. 1(c).
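The decimation step just described is the core of the Cooley-Tukey algorithm; a compact numerical version (generic textbook code, independent of the optical implementation) makes the E_m/O_m recursion explicit.

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 decimation-in-time FFT: split into even/odd sub-DFTs E and O,
    then combine as X_m = E_m + W^m * O_m and X_{m+N/2} = E_m - W^m * O_m."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    E = fft_recursive(x[0::2])                     # DFT of the even-indexed samples
    O = fft_recursive(x[1::2])                     # DFT of the odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([E + W * O, E - W * O])

x = np.random.default_rng(0).standard_normal(8)    # N = 2**3 samples
print(np.allclose(fft_recursive(x), np.fft.fft(x)))   # True
```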
The optical FFT (OFFT) differs from its electronic counterpart by its continuous mode of operation.In an electronic implementation optimized for highest throughput, the optical signal is sampled and the FFT is computed from all samples x n .Afterwards, the next N samples are taken.In the optical domain, the FFT is computed continuously.Yet, the calculation is correct only when feeder lines 1 to N contain the time samples from within their respective interval.Sampling must therefore be performed in synchronization with the symbol over a duration of T/N.These samples subsequently can be processed in the optical FFT stage.However, it must be emphasized that proper calculation is only possible if all samples are forwarded from one stage to the next stage in synchronism.Care must therefore be taken, that not only all waveguides interconnecting the couplers have equal delay but also maintain proper phase relations, as indicated in optical FFT stage of Fig. 1(c).The optical FFT has several advantages over its electronic counterpart.First, the alloptical FFT may be used at highest speeds where electronics cannot be used.This is due to the fact that the optical sampling window sizes (e. g., with electro-absorption modulators, EAM) can be significantly shorter than electronic sampling windows of analog-to-digital converters (ADC) [6].Additionally, since all components used in the OFFT are passive (except for tuning circuitry and time gating), the power consumption is inherently low and barely increases with complexity or sampling rate.For an exemplary 8-point FFT of a 28 GBd OFDM signal, we estimate the power consumption for the optical and the electrical sampling as follows.The power requirement for the optical sampling of 8 tributaries is dominated by the EAM driver amplifiers and would be about 14 W.In addition, several watts will be required to compensate for insertion and modulation loss of the optical gates using optical amplifiers.In comparison to this, the power consumption for electrical sampling at the required sampling rate of 224 GSa/s for I and Q is estimated to be in excess of 160 W. This power value is calculated by interpolation from state of the art analog-to-digital converters (ADCs).A state of the art ADC at 28 GSa/s consumes at least 10 W of electrical power.If higher sampling rates are implemented using parallelization, the power consumption increases linearly.If no guard interval is used, a sampling rate of 224 GSa/s on two ADCs is required leading to the estimated total power of 160 W. Introducing a guard interval at the same overall bitrate would increase the power consumption of the electronic implementation as digitizing of the guard interval is also needed.In comparison to this, the power consumption of the optical implementation will not increase, as the number of required optical gates does not change with the introduction of a guard interval.It has to be pointed out, that the power consumption for the electronic sampling also includes the analog-to-digital conversion.
A disadvantage of this approach is its unfortunate scaling with size. The number of couplers grows with the FFT order, and the many optical paths of the FFT structure must be stabilized with respect to each other, thereby limiting N to a small number in practical cases. This renders the optical approach according to Fig. 1(c) impractical for large N. Electronic signal processing that could be used instead is, however, strongly limited by its power consumption and its speed. However, it is possible to significantly simplify the circuit of Fig. 1(c) without affecting its operation. It should also be mentioned that another possible scheme uses waveguide grating routers (WGR) to implement the DFT [7]. This approach, however, suffers from the need to control the relative phases of all N paths of the structure simultaneously, requiring N−1 phase shifters.
A new optical FFT scheme
A direct implementation of the circuit in Fig. 1(c) would be difficult to realize owing to its frequent waveguide crossings and the large number of waveguide phases that need to be accurately controlled. Here we show that, by re-ordering the delays and re-labeling the outputs accordingly, an equivalent but simpler implementation can be found. The simplifying steps for an example with N = 4 are shown in Fig. 2. In a first step we relocate the sampling gates to the end of the circuit. This does not change the overall operation. Next we re-order the delays in the S/P conversion stage as indicated in Fig. 2(a) and re-label the outputs accordingly. This way the OFFT input stage consists of two parallel delay interferometers (DIs) with the same free spectral range (FSR) but different absolute delays (cf. Fig. 2(b)). By moving the common delay of T/4 in both arms of the lower DI to its outputs, one obtains two identical DIs with the same input signal, see Fig. 2(c). This redundancy can be eliminated by replacing the two DIs with one DI and splitting its output. The process is illustrated in Fig. 2(d). These simplification rules can be iterated to apply to FFTs of any size N. The new optical FFT processor consists only of cascaded DIs; only the DIs themselves need stabilization, and no inter-DI phase adjustment is required. In the Appendix it is shown mathematically that the DFT of order N = 2^p as described by Eq. (1) can always be replaced by an arrangement of p DI stages as indicated in Fig. 2(d). The DI delay in each stage and the location of the necessary phase shifters are derived as well. For illustration we show in Fig. 3(a) the structure for N = 8 as derived with the optical FFT approach according to [1], and in Fig. 3(b) the new structure presented in this contribution.
An inherent advantage of this approach is that a single frequency component of the sampled signal can be easily extracted without the implementation of the complete structure. In this case all DIs that are not part of the optical path to the corresponding output port can be removed, leaving only one DI per stage and thus a total of log₂N DIs that require stabilization. By tuning the phases in each DI, any arbitrary FFT coefficient of the signal can be selected without changing the structure of the setup, as illustrated in Fig. 4. Such a reduction in complexity is not possible with the structures that have been shown previously (see Section 2.1). The ability to shape the spectral (Fourier) components of an optical signal with a structure similar to that in Fig. 4 makes it a member of the family of Fourier filters [8]. However, in order to obtain the block-wise DFT of an optical signal, a Fourier filter is not sufficient. Instead, the complete structure according to Fig. 3 is required, including the time-domain optical sampling, which is an essential part of the DFT or short-time Fourier transform (STFT), as we will show. The frequency response of the DI cascade can be easily visualized, cf. Fig. 5. As a matter of fact, the DFT of order N is able to discriminate N individual frequency components of the input signal, spaced ∆ω = 2π/T_S apart, as theoretically shown in the Appendix. Thus, by cascading a sufficient number of DIs with correct delay and phase, any arbitrary frequency component can be isolated. Figure 5 shows that switching between outputs X_0 and X_4 can be achieved by simply tuning φ_3 = 0 to φ_3 = π, thus shifting the green curve, equivalent to using the second (lower) output of the DI. The cascade of DIs has previously been proposed as a demultiplexer for FDM channels that do not overlap, functioning as a filter bank [7,9–12]. Its adaptation to OFDM, which is based on the STFT/FFT property of the structure, however, is only possible in combination with optical sampling to delineate the OFDM symbol boundaries.
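A quick numerical check that a cascade of delay-and-add stages of this kind reproduces the DFT of one symbol is sketched below. It is a discrete-time emulation with conventions of my own (ideal couplers; signs and global phases chosen so that the result matches the standard DFT rather than copied from Fig. 3): each stage adds to the stream a copy delayed by N/2, N/4, ..., 1 samples and weighted by an output-specific phase, and the result is read at the last sample of the symbol.

```python
import numpy as np

def di_cascade_dft(x):
    """Emulate a serial delay-and-add (DI) cascade acting on one symbol of N = 2**p
    samples: stage s delays by N/2**s samples and weights the delayed copy with an
    output-specific phase exp(+2j*pi*m/2**s). The value at the last sample of the
    symbol equals the DFT coefficient X_m up to a known global phase."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    p = int(np.log2(N))
    X = np.zeros(N, dtype=complex)
    for m in range(N):
        y = x.copy()
        for s in range(1, p + 1):
            d = N // 2**s
            delayed = np.concatenate([np.zeros(d, dtype=complex), y[:-d]])
            y = y + np.exp(2j * np.pi * m / 2**s) * delayed
        X[m] = y[N - 1] * np.exp(-2j * np.pi * (N - 1) * m / N)   # undo the global phase
    return X

rng = np.random.default_rng(0)
x = rng.normal(size=8) + 1j * rng.normal(size=8)        # one symbol, N = 8 samples
print(np.allclose(di_cascade_dft(x), np.fft.fft(x)))    # True
```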
The traditional optical FFT according to Marhic [1], supplemented by optical sampling, has also been implemented as an OFDM demultiplexer by Takiguchi et al. [13] for N = 4. However, due to the complexity of the approach it does not scale well for larger N.
A further simplification
Figure 5 has shown that the DFT acts as a periodic filter in the frequency domain with an FSR of N∆ω, and each DI of the cascade is also a periodic filter with FSR N∆ω/2^p, where p is the index of the FFT stage and N is the order of the FFT. In order to further simplify the optical FFT circuit, one might be tempted to replace one or more stages of the DI cascade by standard (non-DI) optical filters. Since the stages with a higher subscript (those being traversed last) have the largest FSR, it would be sensible to replace these first, because the requirements on these filters are most relaxed. In Fig. 6 we have reduced the number of DI stages of an N = 8 FFT from top to bottom and replaced them by a single Gaussian filter (for illustration purposes). Figure 6(a) shows the original optical FFT with N = 8 together with the impulse response of output X_2. With the derivation given in Eq. (10) of the Appendix, the impulse response at output X_2 and the corresponding frequency transfer function follow directly. If the third OFFT stage is replaced by a Gaussian filter (bank) as in Fig. 6(b), the FFT order is reduced to N = 4, which reduces the OFFT to two DI stages only. The impulse response is then described by the convolution of the response of the first two stages, which consists of 4 impulses according to Eq. (10) given in the Appendix, with the impulse response of a 1st-order Gaussian filter centered at ω_F with 3 dB bandwidth ω_B. As can be seen in the frequency response in Fig. 6(b), this replacement causes crosstalk from frequency component X_0 (red arrow). Furthermore, due to the significantly longer impulse response of the Gaussian filter, the total filter impulse response is lengthened (marked red) and causes inter-symbol interference (ISI). If more DI stages are replaced by optical filters, the filter passband in the frequency domain must become narrower. By the Fourier uncertainty principle [2] this results in an even longer impulse response and thus larger ISI, as shown in Figs. 6(c) and 6(d).
Implementation
The all-optical (I)FFT structure presented in the previous section can be implemented straightforwardly using free-space optics or integrated optical circuits. Previously, implementations of DI cascades for up to N = 16 have been shown using the silica-on-silicon approach [9–12]. Integration in material systems of more recent interest, such as silicon-on-insulator or InP, is expected to yield much more compact structures, but has to our knowledge not yet been published. Benefits of optical integration are the small footprint and ease of stabilization.
Application to orthogonal frequency-division multiplexing (OFDM)
OFDM is a multicarrier signaling technique that has emerged as a promising technology for ultra-high bit rate transmission. The reason lies in its potentially high spectral efficiency, which can be significantly higher than that of wavelength-division multiplexing (WDM), and its tolerance to transmission impairments like dispersion [14,15]. A detailed overview of the state of the art concerning OFDM transmission is available in [16].
In OFDM, the tributaries or subcarriers are spaced so tightly that their spectra overlap, whereas in WDM they are separated by guard bands which enable channel extraction by means of conventional optical filters, as shown in Fig. 7. The whole of the subcarriers in an OFDM channel forms a signal in time, the OFDM signal, which can no longer be demodulated by a simple filter due to the spectral overlap of its tributaries. Its shape in time is an analog signal, as was illustrated in Fig. 1. Yet if the subcarrier frequency spacing ∆ω is related to the duration T by ∆ω = 2π/T (6), then two subcarriers p and q are orthogonal with respect to integration over an interval T, i.e. ∫_T exp(i(ω_p − ω_q)t) dt = 0 for p ≠ q (7). As a consequence, an appropriate receiver will be able to distinguish them. Such receivers exist. They almost exclusively perform the FFT/STFT on the time-sampled signal in the electronic domain [17]. Such electronic real-time implementations are currently restricted to OFDM symbol rates of a few MBd due to speed limitations of the digital signal processor [18,19]. These symbol rates correspond to total bitrates of several Gb/s, as a large number of subcarriers and higher order modulation formats are used. Higher bit rate OFDM signals usually have to be processed offline, which may be practicable for laboratory experiments but not for data transmission [20].
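The orthogonality condition is easy to verify numerically: with a subcarrier spacing of Δω = 2π/T, the overlap of two different subcarriers integrated over one symbol vanishes (the numbers below are arbitrary and only serve the demonstration).

```python
import numpy as np

T = 100e-12                         # symbol duration (100 ps, arbitrary for the demo)
d_omega = 2 * np.pi / T             # subcarrier spacing satisfying the OFDM condition
t = np.linspace(0.0, T, 10_000, endpoint=False)
dt = t[1] - t[0]

def subcarrier(p):
    return np.exp(1j * p * d_omega * t)

# <s_p, s_q> over one symbol: equals T for p == q and ~0 for p != q
for p, q in [(2, 2), (2, 3), (0, 5)]:
    inner = np.sum(subcarrier(p) * np.conj(subcarrier(q))) * dt
    print(f"p={p}, q={q}: |integral| = {abs(inner):.2e}")
```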
In this section we propose our low-complexity scheme based on the optical FFT, in which the demultiplexing of the OFDM subchannel is performed in the optical domain, and only the subchannel signal processing is done in the electronic domain.This way it is only the subchannel symbol rate that is limited by the capabilities of the electronic receiver.Two possible implementations exploiting the OFFT are depicted in Fig. 8.In Fig. 8(a), the optical inverse FFT (IFFT) is used to generate the OFDM signal and the optical FFT is used for demultiplexing.Here, the pulse train at rate T −1 from the mode-locked laser (MLL) is split and modulated independently with an arbitrary subcarrier modulation format for each subcarrier.The modulated pulse trains are subsequently fed into the optical IFFT circuit, which transforms them into an OFDM signal that consists of individual pulses of a length of ~TS , not unlike an OTDM signal [21].Each of the output pulses of the optical FFT is a superposition of copies of the different input pulses with different phase coefficients.The spectrum of the generated signal is significantly wider than expected for an OFDM signal.To obtain the same waveform and spectrum as in the second approach, it is necessary to limit the bandwidth of the signal using bandpass filtering of the output signal or pulse shaping of the MLL.The main difference to an OTDM signal, which consists of a series of pulses of (ideally) equal amplitude, is that each pulse is a superposition of N subchannel samples with a corresponding variation in pulse amplitude, and the combination of N such pulses forms a single OFDM symbol.This transmitter is described in detail in Section 3.1.In Fig. 8(b), a frequency comb, here provided by a MLL is split into its (non-overlapping) Fourier components by a waveguide grating router (WGR).The Fourier components of the input signals are directly modulated at a symbol rate chosen such that condition ( 7) is fulfilled.The tributaries, or subchannels, are then recombined to obtain the OFDM signal.This transmitter is described in detail in Section 3.2.
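Functionally, the IFFT-transmitter/FFT-receiver pair corresponds to block-wise IFFT modulation and FFT demodulation; the purely numerical back-to-back sketch below (ideal channel, perfect symbol timing, no guard interval, no optical impairments) shows that the subcarrier symbols are then recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_symbols = 8, 4                                   # subcarriers, OFDM symbols
qpsk = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, (n_symbols, N)) + 1))

tx_stream = np.fft.ifft(qpsk, axis=1).reshape(-1)     # serial OFDM sample stream

# Receiver: cut the stream back into symbol-length blocks (the FFT window is
# assumed perfectly aligned with the symbol boundaries) and apply the FFT.
rx_blocks = tx_stream.reshape(n_symbols, N)
recovered = np.fft.fft(rx_blocks, axis=1)

print(np.allclose(recovered, qpsk))                   # True: ideal back-to-back recovery
```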
In both cases, the receiver part consists of the optical FFT to demultiplex the OFDM signal into its tributaries and a subcarrier receiver "subchannel Rx".The exact details of the subcarrier Rx in Fig. 8 varies with the modulation format used within the subchannels.It could be a direct detection receiver, a balanced DI receiver such as needed for DPSK signals, a DQPSK receiver, or a coherent receiver for QPSK or any other QAM signal.
Comparing OFDM and OTDM at similar bitrates, one can observe some similarities, but also significant differences. Probably the most important ones can be found in the resilience with respect to chromatic dispersion. In OFDM systems, the dispersion tolerance can be tuned by a proper choice of subchannel bandwidth and by the insertion of a cyclic prefix. As we have shown in Section 2.3, the extraction of an OFDM subband before demultiplexing can further increase the dispersion tolerance. On the other hand, OTDM requires that narrow symbols remain narrow, and thus requires higher-order dispersion compensation in order to be properly demultiplexed [21]. Also, it is not possible to access a fraction of the OTDM signal using optical filtering, as can be done with subband access in OFDM signals. Lastly, OFDM requires the various lines of the comb source spectrum to be locked relative to each other only in frequency, whereas for OTDM (and the optical IFFT transmitter of Section 3.1) the spectral lines need to obey strict phase relations in order to obtain sufficiently short pulses.
OFDM using the optical IFFT
The optical FFT can be used for both, the implementation of the transmitter and of the receiver, Fig. 8(b).At the transmitter, the IFFT creates the OFDM symbols with the configuration of Fig. 3, whereas the signals would traverse the structure in the inverse direction, coming from the right.A short pulse launched into any of the IFFT circuit inputs will result in a series of pulses with equal shapes but correspondingly lower pulse energy to appear at the output of the circuit.In order for these pulses not to interfere with one another or with pulses from neighboring OFDM symbols, the duration of these input pulses must be sufficiently short (on the order of T/N).The input pulses can then be encoded with an arbitrary modulation format (e.g.QPSK or QAM) prior to injection into the optical IFFT circuit to obtain a sequence of OFDM symbols.At the receiver, the optical FFT circuit demultiplexes the OFDM symbols continuously.Correct outputs are obtained only when the FFT window is synchronized with the OFDM symbols.Otherwise intersymbol interference (ISI) will occur.This reduces the usable width of the received signal (the open "eye") at the receiver by a factor of approximately N. To extract the usable ISI-free time slot of the received signal, an optical gate must be part of the FFT circuit, such that bandwidth-limited receiver electronics may be used [7].
In OFDM transmission, accumulated group-velocity dispersion (GVD) will introduce crosstalk by shifting the OFDM symbol boundaries in each subchannel, which causes blurring [15,17].However, since subchannel rates and spacing can be high in our scheme, the sensitivity to GVD is non-negligible.The addition of a cyclic prefix could alleviate the problem, but is difficult to realize within the optical IFFT circuit, as a part of the optical signal would need to be duplicated and delayed increasing the complexity of the scheme.
OFDM using a frequency comb
At the transmitter, the subchannel rate limitations imposed by electronics may be overcome by using a DWDM-like approach, where the possibility to optically generate precisely tuned spectral components in frequency space is exploited to directly generate OFDM subcarriers at the correct frequency separation ∆ω as required by the OFDM condition (6).A bank of frequency offset-locked laser diodes or an optical comb generator provide subchannel carriers which can be modulated individually [22].Also the concept of a recirculating frequency shifter (RFS) has been successfully applied to generation of frequency offset-locked subcarriers [23].A similar approach is used by coherent WDM systems, which forego the OFFT at the receiver in favor of standard optical filters, but require a phase-synchronized transmitter laser bank [24].
In the approach shown in Fig. 8(a), the frequency comb of a pulse source is spectrally separated into subchannel carriers.Each subcarrier, frequency-locked but not phasesynchronized, is then individually modulated and combined to form OFDM symbols.Hence, a corresponding optical FFT receiver can be used to decode the subchannels.The major difference to the IFFT transmitter is that the output corresponding to any one input is not a series of pulses with discrete phases, but a continuous signal with a corresponding optical frequency.This transmitter can be considered as the continuous Fourier transform equivalent of the discrete transform performed by the IFFT.In such a transmitter, bandwidth limitations of the modulator will cause subchannel crosstalk because the orthogonality condition ( 7) cannot be fulfilled in the presence of residual amplitude modulation of the subchannels near the symbol boundaries.The orthogonality condition actually forbids residual modulation of the subcarriers within the FFT window -the phase and amplitude of the complex signal to be encoded must be maintained throughout the symbol time T.This would require the transition from one OFDM symbol to the next to be instantaneous, requiring modulators of infinite bandwidth.Any transition region between adjacent symbols therefore would lead to a violation of the orthogonality condition.If the subchannels are not orthogonal, crosstalk will be generated when performing the FFT in the receiver.As a result, the portion of the symbol duration usable for detection at the receiverand thus the width of the gating window is shortened.This is illustrated in Fig. 9(a).Similar to traditional OFDM [16], a remedy is the insertion of a guard interval (corresponding to the cyclic prefix) between symbols so that the FFT window duration at the receiver will not be changed.By making the guard interval just long enough to contain the intersymbol transition, the remaining part of the OFDM symbol fulfils the orthogonality condition and receiver crosstalk will only be generated within the guard interval.The width of the open "eye" at the receiver increases, albeit at the cost of a slightly reduced symbol rate.This is illustrated in Figs.9(b) and 9(c).
An additional increase of the guard interval increases resilience towards accumulated GVD.For an increase of the guard interval length τ GI (beyond that necessary to achieve the required degree of orthogonality) one can keep up with a maximum accumulated GVD B 2 , where L is the system length, β 2 is the local GVD coefficient and ∆ω is the angular bandwidth of the OFDM (super-)channel.By using optical filters to extract a slice of the OFDM channel before performing an FFT of correspondingly lower order, as shown in Fig. 6, the bandwidth ∆ω over which GVD may introduce crosstalk is reduced.Thus the resilience towards accumulated GVD increases (see Fig. 10).However, as discussed earlier, it comes at a price.Conventional optical bandpass filters introduce inter-subchannel crosstalk and inter-symbol interference and thus reduce the quality of the received signal.Nevertheless, such a scheme has been successfully used in [24] to extract a single subchannel of an optically generated OFDM signal.The combination of a cascade of DIs and simple optical filters to perform OFDM demultiplexing has also been exploited in the electrical domain.In [23], cascaded delay-and-add elements were used instead of an FFT in order to extract single subchannels out of an optically filtered section of the OFDM channel spectrum.This method reduces cost in terms of time and complexity, but is still limited by the speed of electronics.Another approach to reduce the required electronics speed is multi-band OFDM [25,26], which sacrifices some spectral efficiency for the ability to extract OFDM subbands by tuning the LO laser within the receiver.Exemplary, Fig. 11 shows the dependence of the back-to-back received signal quality, determined in terms of Q, averaged over the in-phase and quadrature components of the central subchannel, versus the bandwidth of a 4th-order Gaussian filter for various simplifying implementations of the optical FFT and for a guard interval of 25% of the OFDM Symbol duration T. In equ.(9), the quantities I denote the mean signal values and the symbols σ mean the standard deviations of the received symbol levels (in this case the in-phase and quadrature values of QPSK subchannel signals).The transmitter setup shown in Fig. 8(c) was used with a modulator rise time T rise = T/8 and an optical 200 GHz bandpass filter following the modulator.Clearly, the signal quality deteriorates with the number of DI stages that have been replaced by conventional filters.However, the improvement in residual GVD tolerance may well offset the penalty in systems with a large number of subchannels.
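Equation (9) itself is not reproduced in the text above; the Q factor used for such plots is commonly computed from the means and standard deviations of the two received symbol levels, Q = (I_1 − I_0)/(σ_1 + σ_0), here averaged over in-phase and quadrature components for QPSK. The sketch below assumes that definition and uses simulated samples.

```python
import numpy as np

def q_factor(samples_one, samples_zero):
    """Q = (mean_1 - mean_0) / (std_1 + std_0); 20*log10(Q) expresses it in dB."""
    return (np.mean(samples_one) - np.mean(samples_zero)) / (
        np.std(samples_one) + np.std(samples_zero))

rng = np.random.default_rng(0)
ones  = rng.normal(1.0, 0.08, 10_000)    # simulated received samples for the "1" level
zeros = rng.normal(0.1, 0.06, 10_000)    # simulated received samples for the "0" level
q = q_factor(ones, zeros)
print(f"Q = {q:.1f}  ({20 * np.log10(q):.1f} dB)")
```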
Experimental results
To verify the feasibility of the all-optical OFDM generation, including a guard interval, and OFDM demultiplexing by means of the OFFT, we performed a back-to-back experiment which will be briefly described in this section [28].
The OFDM transmitter and receiver are shown in Fig. 12. The transmitter is based on the principles presented in Section 3.2. The frequency-locked subcarriers are generated by a 50 GHz comb generator providing 9 sufficiently strong OFDM subcarriers. The optical comb source is based on two cascaded dual-drive Mach-Zehnder modulators that are driven by an electrical clock signal, as demonstrated in [24]. The subcarriers are separated into odd and even channels by a disinterleaver. The odd channels are encoded with PRBS 2^7−1 DPSK data at 28 GBd, while the even channels are encoded with 28 GBd DQPSK data, using decorrelated PRBS 2^7−1 sequences for the in-phase and quadrature components. After combination within an optical coupler, the channel spectra overlap significantly and thus can no longer be demultiplexed by standard optical filters. This was verified with the help of simulations and experiments. We end up with a 392 Gbit/s OFDM signal.
The guard interval length was set to 15.7 ps due to the significant rise and fall times of the optical modulator used, and to increase the sampling window size at the receiver. Had we been able to use better performing components, the guard interval could have been reduced significantly without affecting the operation of the optical FFT at the receiver, thus bringing the OFDM symbol rate closer to the subchannel separation frequency ∆ω.
Fig. 12. Setup of the OFDM transmission system with (a) transmitter and (b) receiver. Two cascaded Mach-Zehnder modulators generate an optical frequency comb, which is split by a disinterleaver into 4 odd and 5 even channels. Spectrally adjacent subcarriers are modulated alternately using DBPSK or DQPSK modulation. All subcarriers are combined in a coupler and transmitted. The received OFDM signal is processed using the low-complexity OFFT circuit of Section 2.3 with 2 DIs and one standard optical filter. The resulting signals are sampled by electro-absorption modulators (EAM) and detected using DBPSK and DQPSK receivers. Bit error rates were measured with a BERT.
The receiver comprises the all-optical FFT scheme followed by a preamplified receiver with differential direct detection. The optical FFT circuit consists of a cascade of two DIs, followed by passive splitters and a bank of bandpass filters. We thus adopt the low-complexity FFT circuit of Section 2.3, partially compensating for the associated performance loss by the increased guard interval. The final elements of the OFFT are the EAM sampling gates.
To evaluate the OFFT performance, we compared subcarrier signals after multiplexing/demultiplexing with back-to-back (B2B) signals delivered by the DBPSK and DQPSK transmitters, respectively, in terms of bit error ratio (BER) versus receiver input power. To measure a comparable B2B signal performance, the outputs of the transmitters were gated using the same EAM as was used in the OFDM receiver. The results depicted in Fig. 13 show no penalty compared to the B2B performance for DBPSK and only a small penalty for the DQPSK channels, owing in part to the large guard interval. The outer channels (labelled "−4" and "4") perform comparatively worse, because these subcarriers are generated with 11 dB less optical power compared to the center channel. The capacity limit for an OFDM channel using this approach is therefore mainly determined by the performance of the frequency comb generator: the number of subcarriers which are generated and the signal-to-noise ratio with which this is done.
Conclusion
We have introduced and experimentally demonstrated a practical scheme for optical FFT processing. The implemented OFDM receiver shows no penalty compared to single-channel back-to-back performance. As the scheme is not subject to any electronic speed limitations, it will allow Tbit/s FFT processing. Also, since the scheme relies on passive optical filters, it performs processing with virtually no power consumption and in this way may help to overcome the ever-increasing energy demand that normally comes with higher speed.
Here the caret (^) denotes the Fourier transform. The sum can be split into two sums corresponding to even and odd n. Here, H_{pm,1} is the n-independent transfer function for the upper input of a delay interferometer with delay T/N. The remainder of expression (13) is the transfer function of the DFT of order N/2. Hence, a DFT of order N, with N = 2^p, can be implemented optically by cascading a DFT of order N/2 and a delay interferometer with delay T/N and output-specific phase shift φ_pm. It can easily be verified that the DFT transfer function for N = 2 is equal to both outputs of a single DI. The DI phase φ_pm for an upper-arm output X_m is the same as that for the lower-arm output jX_{m+N/2}. The term describing the N/2-order FFT in (13) is also the same for X_m and jX_{m+N/2} due to its periodicity. Thus both outputs of a single DI can be used to obtain different coefficients of the DFT, resulting directly in the optical FFT scheme of Fig. 3.
Fig. 1 .
Fig. 1. Four-point example of the traditional fast Fourier transform and its optical equivalent. (a) Exemplary signal in time sampled at N = 4 points; (b) the structure consists of a serial-to-parallel (S/P) conversion that generates parallel samples of the signal, a sampling stage to generate the time samples x_n and a conventional FFT stage that calculates the fast Fourier transform of the sampled signal; (c) the optical equivalent of the circuit uses passive splitters and optical time delays for serial-to-parallel conversion; optical gates perform the sampling of the optical waveform; afterwards the optical FFT is computed using optical 2 × 2 couplers and phase shifts as described in [1]. Right-hand sides of (b) and (c) show typical output signals for input signal (a).
Fig. 2 .
Fig. 2. Exemplary four-point optical FFT for symbol period T; (a) traditional implementation as in Fig. 1; (b) leading to a structure consisting of two DIs with the same differential delay; the additional T/4 delay is moved out of the second DI (c), which leads to two identical DIs that can be replaced by a single DI followed by signal splitters; (d) low-complexity scheme with combined S/P conversion and FFT.
Fig. 3 .
Fig. 3. (a) Direct FFT implementation versus (b) simplified all-optical FFT circuit for N = 8showing the arrangement of delays and phase shifts as derived in Appendix A. The order of the outputs is different from that of the conventional FFT scheme.The sub-circuits for the FFT of order 2 and 4 are also marked, respectively.
Fig. 4 .
Fig. 4. Optical FFT circuit based on Fig. 3 for the extraction of Fourier component Xn.By tuning the phases φ1, φ2 and φ3, any required Fourier component may be extracted without physically changing the setup.
Fig. 5 .
Fig. 5. Exemplary illustration of the intensity transfer functions of each stage (blue, red, and green) in the cascade of Fig. 3 and the total transfer function of the FFT circuit (black) for outputs X0 (left side) and X4 (right side).
Fig. 6 .
Fig.6.Inter-symbol interference and frequency crosstalk occurring when replacing (parts of) the DI filters by 1st-order Gaussian filters with appropriate passbands in order to extract frequency component X4.The left column shows the setup schematic, the middle column shows the real part of the impulse response and the right column shows the logarithmic intensity transfer functions of the involved DI stages (blue, red, and green), the optical filter (purple) and their cascade (black).In case of the approximations, the transfer function is not nulled for all outputs except X4, leading to frequency crosstalk (red arrows).Also with decreasing filter bandwidth, the impulse response exceeds the DFT summation interval T (marked red), leading to crosstalk/interference from neighboring time slots.
Fig. 7. Optical spectra of (a) an OFDM signal with 4 subchannels and (b) a DWDM signal with 4 channels. Due to the overlap, the OFDM subchannels cannot be extracted by simple optical filtering, and the whole spectrum has to be processed simultaneously.
Fig. 8. Two examples for the implementation of an all-optical OFDM transmitter-receiver pair. (a) The output of a pulse source (e.g. a mode-locked laser) is split into N copies and each copy of the pulse train is encoded individually with an arbitrary subcarrier modulation format before being combined in the optical IFFT circuit. (b) The output of a pulse or frequency comb source is split into its spectral components, each of which satisfies Eq. (7). Those spectral components are separately encoded with an arbitrary subcarrier modulation format and combined to form the OFDM signal. At the receiver, the subchannels are separated using the optical FFT circuit. The receiver is identical to the one above. Green insets show exemplary waveforms before WDM filtering at the transmitter, with OTDM-like pulses after the optical IFFT. Blue insets show exemplary eye diagrams at various locations within the receiver.
Fig. 9. Function of an OFDM guard interval using the setup of Fig. 8(b) and an 8-point FFT. On the left-hand side, the exemplary eye diagram of a single modulated subchannel is shown; on the right side, the received signal before optical gating. If the symbol duration equals the integration interval T, the signal transitions, described by the 10-90% rise/fall time T_rise (red), cause inter- and intra-subchannel crosstalk and the received eye is almost fully closed within the observation window of length T_S = T/N. With increasing length of the guard interval τ_GI, the interference vanishes during T. Further increasing the guard interval increases the duration in which the orthogonality condition is fulfilled and thus increases the duration of the open "eye."
Fig. 10. Mitigating dispersion by means of a cyclic prefix (CP). (a) An OFDM signal with data and cyclic prefix. (b) Frequency-dependent delay of the subcarriers after transmission due to dispersion. The amount of dispersion that can be tolerated is limited by the length of the cyclic prefix (CP), as all subcarrier symbols must stay within the FFT window. (c) By means of optical filtering, an OFDM subband has been extracted, as discussed in Fig. 6. This way, only the extracted symbols must stay within the FFT window, and thus a larger amount of dispersion can be tolerated.
Fig. 11. Achievable quality of the received signal for various FFT filter schemes performed on an 8-channel OFDM signal with 20 GBd on a 25 GHz subcarrier spacing. The solid red curve shows the signal quality if the FFT is performed with a Gaussian filter, as a function of the Gaussian filter bandwidth. The blue and green curves show the signal qualities if one and two DI cascades are used. The black curve shows the signal quality obtained when a perfect FFT is performed with DIs only.
cascade of directional couplers and a delay line in the lower arm, | 10,283 | 2010-04-26T00:00:00.000 | [
"Business",
"Physics"
] |
Construction and Comparative Analyses of Highly Dense Linkage Maps of Two Sweet Cherry Intra-Specific Progenies of Commercial Cultivars
Despite the agronomical importance and high synteny with other Prunus species, breeding improvements for cherry have been slow compared to other temperate fruits, such as apple or peach. However, the recent release of the peach genome v1.0 by the International Peach Genome Initiative and the sequencing of cherry accessions to identify Single Nucleotide Polymorphisms (SNPs) provide an excellent basis for the advancement of cherry genetic and genomic studies. The availability of dense genetic linkage maps in phenotyped segregating progenies would be a valuable tool for breeders and geneticists. Using two sweet cherry (Prunus avium L.) intra-specific progenies derived from crosses between ‘Black Tartarian’ × ‘Kordia’ (BT×K) and ‘Regina’ × ‘Lapins’(R×L), high-density genetic maps of the four parental lines and the two segregating populations were constructed. For BT×K and R×L, 89 and 121 F1 plants were used for linkage mapping, respectively. A total of 5,696 SNP markers were tested in each progeny. As a result of these analyses, 723 and 687 markers were mapped into eight linkage groups (LGs) in BT×K and R×L, respectively. The resulting maps spanned 752.9 and 639.9 cM with an average distance of 1.1 and 0.9 cM between adjacent markers in BT×K and R×L, respectively. The maps displayed high synteny and co-linearity between each other, with the Prunus bin map, and with the peach genome v1.0 for all eight LGs (LG1–LG8). These maps provide a useful tool for investigating traits of interest in sweet cherry and represent a qualitative advance in the understanding of the cherry genome and its synteny with other members of the Rosaceae family.
Introduction
All cherry species belong to the Cerasus subgenus of the Prunus genus, within the Rosaceae family. Sweet cherry (Prunus avium L.) is an economically important crop that includes cherry trees cultivated for human consumption and wild cherry trees used for their wood, also called mazzards [1]; [2]. The majority of cultivated cherry trees belong to the diploid (2n = 2x = 16) sweet cherry and allotetraploid (2n = 4x = 32) sour cherry (P. cerasus L.) species. Sweet cherry and the tetraploid ground cherry (P. fruticosa Pall., 2n = 4x = 32) are believed to be the parental species that gave rise to sour cherry [1]; [2]; [3]; [4].
Traditionally, the main sweet cherry breeding objectives have been to select for large, attractive and good-flavoured fruits, a short juvenile phase, abundant and consistent yields, reduced susceptibility to fruit cracking, self-compatibility and improved resistance or tolerance to diseases such as bacterial canker induced by Pseudomonas syringae pv. morsprunorum and P. syringae pv. syringae [3]; [5]; [6]; [7]; [8].
Due to rapid climate change and the noteworthy reduction of chilling accumulation observed and/or predicted for several temperate zones [9], concern about climatic adaptation in temperate fruit trees has arisen [10]. Sweet cherries are cultivated commercially around the world in temperate, Mediterranean and even subtropical regions. In order to break bud dormancy, sweet cherries need to meet minimum chilling requirements in the autumn and winter [11]. Most fruit tree species are long-lived perennials, lasting more than 20 years, which implies that successful cultivars must be able to perform well despite changing climatic conditions. For this reason, climatic adaptation has become one of the major breeding objectives for many fruit crops and climatic adaptation of the parental plant material precedes breeding for commercial fruit qualities [12]. However, the long juvenile period (four to five years) in sweet cherry represents a drawback for quick and efficient breeding. Reducing the time needed to breed temperate fruit trees will be important to develop new cultivars that are able to face the challenges associated with temperate fruit production in a changing global environment. Marker-assisted breeding approaches based on genetic maps have the potential to save time and resources needed to select new sweet cherry cultivars.
The recent release of the peach genome v1.0 by the International Peach Genome Initiative [13]; [14] (GDR database: http://www.rosaceae.org/peach/genome) and the high synteny among Prunus species [15] will facilitate the characterization of genes responsible for agronomically important traits within this family. In cherry, maps from an inter-specific cross, P. avium 'Napoleon' × P. nipponica, have been developed [16]. From intraspecific crosses, Dirlewanger et al. [17] have developed a map from the elite cultivars 'Regina' × 'Lapins', whereas Olmstead et al. [18] have constructed a map from the great-grandparent of 'Lapins' (cultivar 'Emperor Francis') and the wild forest cherry 'New York 54'. More recently, Cabrera et al. [19] presented a consensus sweet cherry map constructed from four progenies using Rosaceae Conserved Orthologous Set (RosCOS) SNP and SSR markers. Despite the potential usefulness of genetic linkage maps for cherry, highly dense linkage maps for commercial sweet cherry cultivars have not yet been constructed. With the exception of the SNP-based map that we have recently reported [19], the majority of the cherry linkage maps have been based on low-throughput molecular markers such as SSRs, AFLPs and isoenzymes. High-throughput SNP genotyping technology has become available for some plant species. BeadChip or Infinium II technologies have been developed by consortiums or commercially for apple [20], peach [21], cherry, maize, tomato and Vitis [22], dramatically reducing genotyping costs and facilitating the production of high-density SNP maps. The RosBREED cherry 6K SNP array v1 for use with the Illumina Infinium® system, developed by the RosBREED consortium [23] for both diploid sweet cherry and allotetraploid sour cherry, provides a new avenue for genome-scanning capability in sweet and sour cherry. This publicly available genomics resource represents a significant advance for marker-locus-trait discovery and further research on the genome structure and diversity in this diploid-tetraploid crop set [23].
In this study, we develop highly dense linkage maps of two intraspecific progenies generated from crosses of commercial cultivars, 'Black Tartarian' × 'Kordia' (designated BT×K) and 'Regina' × 'Lapins' (designated R×L), carried out in Chile and France, respectively, using the new RosBREED cherry 6K SNP array v1. To our knowledge, these are the first genetic maps using this new SNP genotyping resource developed in sweet cherry. The maps were compared with the Prunus bin map [24] and the peach genome v1.0 to assess synteny and co-linearity between the studied progenies, the bin map and the peach genome v1.0. These maps provide a new framework of sweet cherry genomic information for the international community and new opportunities to discover QTLs associated with agronomical traits in this and related species, based upon the haplotype segregation in the progeny.
The field studies performed did not involve endangered or protected species, but cultivated sweet cherry trees. In addition, all the trees used in this study belong to public institutes and do not need any specific permit to be used for the analyses described in this paper.
DNA Extraction
Leaf material was frozen in liquid nitrogen and stored at −80°C for later use. Genomic DNA was extracted from the frozen tissue using the DNeasy® Plant Kit (Qiagen) according to the manufacturer's instructions. For the BT×K progeny, genomic DNA was quantified using the AccuBlue™ dsDNA quantification kit (Biotium) according to the manufacturer's instructions, whereas for the R×L progeny, genomic DNA was quantified by spectrophotometry (NanoVue, GE Healthcare) and fluorimetry (Quant-iT™ PicoGreen®, Invitrogen) according to the manufacturers' instructions. Fifteen μl of DNA with a concentration between 50 and 75 ng/μl was used for subsequent analyses.
Genetic Marker Analysis
SNPs. Both sweet cherry mapping populations and their respective parents were genotyped using the RosBREED cherry 6K SNP array v1, which contains 5,696 SNPs, of which 76% and 24% were chosen to target the sweet and sour cherry genomes, respectively [23]. SNPs were obtained from re-sequencing of a detection panel containing 16 sweet and eight sour cherry accessions. The accessions were founders, intermediate ancestors or important parents used in U.S. breeding programs [23]. SNPs were detected using SoapSNP [28] (http://soap.genomics.org.cn/soapsnp.html) and validated by GoldenGate® assay [29]. Genotype differences were recorded on the iScan platform and SNP genotypes were determined using the GenomeStudio Genotyping Module (Version 1.8.4, Illumina, [30]; [31]; [32]) as described in Peace et al. [23]. All DNA samples were above the GenCall threshold of 0.15 and were therefore all used in further analyses [33]. Initial clustering was done using GenTrain2, a GenomeStudio built-in clustering algorithm ([30]; [31]; [32]). Following the clustering by GenTrain2, all SNPs were visually examined for appropriateness of clustering, cluster separation, number of clusters, and presence of null alleles and paralogs, along with a check on whether the progenies followed the expected genetic ratios and whether genotyping errors were present. A SNP was considered 'failed' if it had overlapping or ambiguous clusters that could not be improved even by manual clustering, more than three clusters (suggesting the presence of paralogs), a very low call frequency, or if it was a positive outlier in terms of Mendelian inheritance error rates (e.g. parent-child or parent-parent-child) [33]. The failed SNPs were not used in the mapping process.
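As a rough illustration of this filtering logic, the sketch below flags SNPs from a hypothetical per-SNP quality table; the column names and numeric thresholds (other than the spirit of the criteria described above) are illustrative, not values used in the study or defined by GenomeStudio.

```python
import pandas as pd

# Hypothetical per-SNP quality summary; column names and thresholds are illustrative.
snps = pd.DataFrame({
    "snp_id":            ["ss0001", "ss0002", "ss0003", "ss0004"],
    "call_freq":         [0.99, 0.80, 0.98, 0.97],  # fraction of samples with a genotype call
    "n_clusters":        [3, 4, 3, 2],              # genotype clusters observed
    "mendel_error_rate": [0.00, 0.00, 0.01, 0.12],  # parent-child inconsistency rate
})

failed = (
    (snps["call_freq"] < 0.95)             # very low call frequency
    | (snps["n_clusters"] > 3)             # more than 3 clusters suggests paralogs
    | (snps["mendel_error_rate"] > 0.05)   # outlier in Mendelian inheritance errors
)

print(snps.loc[~failed, "snp_id"].tolist())  # SNPs retained for mapping
```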
The RosBREED cherry 6K SNP array v1 markers used in this work were deposited in NCBI's dbSNP repository, available at www.ncbi.nlm.nih.gov/projects/SNP [34], and each SNP was given a unique accession number that starts with the prefix ss (SNP NCBI ss# database names). More information associated with these SNPs is available at the Genome Database for Rosaceae (GDR; www.rosaceae.org [35]). The locations of the markers on the physical map of the peach genome v1.0, as well as their positions on the genetic maps of BT, K, BT×K, L, R and R×L, are available in Table S1.
The MAF (minor allele frequency) distribution of SNPs heterozygous in each of the four mapping parents ('Black Tartarian', 'Kordia', 'Regina' and 'Lapins') was determined and then compared with that of a germplasm collection of 269 sweet cherry selections used by Peace et al. [23]. Similarly, the distribution and physical spacing (Mbp) of SNP heterozygosity along the eight cherry chromosomes for the four linkage mapping parents was compared with the polymorphic SNP markers identified in the germplasm collection of the 269 sweet cherry selections [23].
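For readers unfamiliar with the metric, a minor allele frequency can be computed from diploid genotype calls as in the minimal sketch below; the A/B allele coding and the sample data are hypothetical.

```python
from collections import Counter

# Hypothetical genotype calls for one SNP across individuals (A/B allele coding).
genotypes = ["AA", "AB", "AA", "BB", "AB", "AA", "AB", "AA"]

counts = Counter(allele for g in genotypes for allele in g)
n_a, n_b = counts.get("A", 0), counts.get("B", 0)
maf = min(n_a, n_b) / (n_a + n_b)   # frequency of the less common allele
print(f"MAF = {maf:.3f}")
```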
Firstly, parental maps were constructed using only markers heterozygous in one parent (the <nn × np> and <lm × ll> classes, which are less ambiguous for tracing the origin of alleles than the <hk × hk> class). The 'Suspect Linkage' (recombination frequency > 0.6) and 'Genotype Probabilities' (−Log10(P) > 2.0) tools were used to identify misgrouped markers and double recombination events, respectively. Spurious double crossovers were treated as missing values while revising the map. Chromosome painting was performed for each linkage group of the four parents to verify the double recombination events. Double recombination events within narrow genetic distances were omitted from subsequent mapping analyses. The χ² (chi-square) goodness-of-fit test was used to evaluate the discrepancy with the expected segregation ratios. In a first step, all segregating markers were used for initial mapping. In a second step, isolated markers showing segregation distortion from the expected Mendelian ratios with a probability lower than p = 0.01 were excluded from further analyses. The grouping and ordering of markers was done using JoinMap® 4.0's build algorithm for cross-pollinated species, a likelihood-based weighted least squares procedure (with the squares of the LODs as weights) as described by Stam [38]. The proper assignment of a marker to a group was inspected by calculating the Strongest Cross Link (SCL) parameter. The mapping procedure implemented in JoinMap is iterative: (a) loci are added one by one, starting from the most informative, and the best position of each added marker locus is searched by comparing the goodness-of-fit of the calculated map for each tested position; (b) when the goodness-of-fit measure decreases sharply (a difference known as a 'jump') or gives negative estimates in the map, the locus is removed and the process is continued until the first round is completed [38]. The second and third rounds accommodate the marker loci that were previously removed, without the constraints of the maximum allowed jump level or negative values. Maps were calculated iterating different LOD values in order to minimize the mean chi-square contributions. LOD thresholds of six and three were used for grouping and mapping, respectively. Kosambi's mapping function [39] was used to convert recombination frequency into map distance. Although the peach physical map based on the peach genome v1.0 provided a reference for comparisons of the linkage map output, no fixed order was forced based on peach physical positions. MapChart [40] was used to draw the linkage maps and make group-wise comparisons between maps. Secondly, an integrated map of each progeny was produced including markers heterozygous in both parents (<hk × hk>) to decrease large gaps in the parental LGs. New parental maps were developed including the <hk × hk> markers as previously described and integrated using the 'Combine Groups for Map Integration' function of JoinMap® 4.0 to produce the combined maps.
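Two ingredients of this procedure can be written down compactly: the chi-square goodness-of-fit test against the expected segregation ratio, and Kosambi's mapping function, d(cM) = 25·ln[(1 + 2r)/(1 − 2r)], which converts a recombination frequency r into map distance. The sketch below illustrates both with made-up counts; it is not the JoinMap implementation.

```python
import numpy as np
from scipy.stats import chisquare

# Chi-square goodness-of-fit against an expected 1:1 segregation (e.g. an <nn x np> marker).
observed = [52, 37]                                  # hypothetical allele counts in the progeny
stat, p = chisquare(observed, f_exp=[sum(observed) / 2] * 2)
keep = p >= 0.01                                     # markers with p < 0.01 were excluded

def kosambi_cM(r):
    """Kosambi map distance (cM) from recombination frequency r, 0 <= r < 0.5."""
    return 25.0 * np.log((1 + 2 * r) / (1 - 2 * r))

print(f"p = {p:.3f}, keep = {keep}, d(r=0.10) = {kosambi_cM(0.10):.2f} cM")
```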
The MAF distribution of SNPs heterozygous in each of the four mapping parents was similar to the distribution observed within a germplasm collection of 269 sweet cherry selections used by Peace et al. [23] (Figure 1). A similar proportion of SNPs was observed for each of the MAF categories. The majority of the heterozygous SNPs had MAFs above 0.2.
Table 1. Origin and characteristics of the four parental cultivars used to generate the two mapping progenies.
Cultivar | Parental cultivars | Country of origin | Main characters | Reference
Black Tartarian | U × U | United Kingdom | Medium chill, early flowering, self-incompatible (S1S2), low firmness, low sugar content, small fruit, acid fruit. |
Intra-specific segregating population linkage maps. To reduce gaps in the parental linkage maps, two linkage maps were constructed for each intra-specific segregating population, including markers heterozygous in both parents (<hk × hk> markers). Each map consisted of eight Prunus LGs (Figure 2a, 2b; Figure S4). The maps covered 752.9 and 639.9 cM, containing 723 and 687 markers, respectively (Table 3). Each LG contained multiple markers, ranging from 55 to 133 markers. The average distance between markers was 1.1 cM for BT×K and 0.9 cM for R×L.
LG1 was the longest group in both maps, covering 171.9 cM with 133 markers in BT×K, and 136.2 cM with 98 markers in R×L.
LG5 was the shortest group in both maps, covering 67.4 cM with 95 markers in BT×K, and 56.8 cM with 91 markers in R×L. The maximum gap size in each chromosome ranged from 4.2 to 12.3 cM (Table 3). In BT×K the largest gap (12 cM) was found in LG5, between the ss490554476 and ss490554529 markers, whereas in R×L it was in LG1 (12.3 cM), between the markers ss490547111 and ss490547271. A total of 3.8% (28/723) and 3.9% (27/687) skewed markers (chi-square probability p < 0.01) were mapped in BT×K and R×L, respectively. Similarly to the parental linkage maps, clusters of skewed markers were found on LG1 (16/28 in BT×K), LG2 (5/27 in R×L), LG3 (12/28 in BT×K) and LG6 (20/27 in R×L) (Figure 2a, 2b; Figure S4). This last region on LG6 corresponds to the S locus location in Prunus [14]. In the R×L progeny, with the 'Regina' genotype for the S alleles being S1S3 and that of 'Lapins' being S1S4 (Table 1), all the individuals inherited the S4' allele from 'Lapins', as S1 pollen is incompatible in an S1 style (Figure 3). Using the markers framing the S locus described in Peace et al. [23], the segregation of a 6 cM segment containing the S locus was followed in the two mapping progenies. For three individuals, a recombination was detected between the S1 and S3 alleles from 'Regina'. Within the BT×K progeny, 92.1% (82/89) of individuals were characterized by non-recombinant haplotypes including the four combinations: S1S6 (25/89), S1S3 (24/89), S2S6 (19/89) and S2S3 (14/89). The other seven individuals were detected with one recombination event, only in the 'Black Tartarian' parent (Figure 3).
Comparative analyses among the linkage maps. As parental maps were constructed with markers heterozygous for only one parent of the cross (i.e. either <nn × np> or <lm × ll>), markers were in common between just one parent of each cross and the two parents of the other cross. The comparison of the parental maps was based upon the position of 277 markers: 88 common markers between 'Black Tartarian' and 'Regina', 74 between 'Black Tartarian' and 'Lapins', 87 between 'Kordia' and 'Regina', and 28 between 'Kordia' and 'Lapins' (Figure S3). Perfect co-linearity of the common markers was found between the compared parental linkage maps.
Comparison between the BT×K and R×L linkage maps was performed using a total of 379 common markers, representing 52.4% of the markers mapped in BT×K and 55.2% of the markers mapped in R×L (Figure S4).
Table 3. Number of markers, map size and density of each parental map and of the two consensus maps (BT×K and R×L), for LG1-LG8 and in total.
The marker order was conserved between the two sweet cherry linkage maps constructed, and only small rearrangements of order were found (Figure S4).
Comparison between the sweet cherry genetic linkage maps and the physical peach map. Anchored markers between the intra-specific sweet cherry maps (BT×K and R×L) and the peach genome revealed that the relative positions of 91.8% (944/1,028) of the SNP markers are conserved between sweet cherry and peach (Table S1), demonstrating a high level of synteny between these two species. Only a small number of markers differed in their relative position when compared to the peach physical map (Table 4). The SNPs in this table have been ordered according to the position (cM) of the SNP markers on the sweet cherry parental maps. Within the LGs, the majority of SNP markers are oriented such that the position increases, similar to the orientation seen on the peach physical map. The regions represented in this table show differences in the orientation of the SNP markers on the LG when compared to the peach physical map. Black boxes represent regions of the LG outside of the region in question. Green boxes represent regions in which the orientation of the SNP markers is inverted in relation to the rest of the LG as well as the peach physical map. Comparison of the marker positions (cM) on the parental maps with the physical positions in the peach genome v1.0 revealed a small percentage of local discrepancies (8.2%). There was only partial co-linearity of markers within the distal regions of LG1 and LG4, as well as the proximal regions of LG5, LG6 and LG7, between sweet cherry and peach (Table 4; Figure S3; Table S1). In the region from ∼42.3 Mb to ∼46.6 Mb of LG1, a total of 45 markers differed in their relative position on the parental maps when compared to the peach genome ('Black Tartarian' = 25, 'Kordia' = 3, 'Regina' = 12, and 'Lapins' = 10); five of these markers show conservation of their relative position in at least two sweet cherry parental maps. In the region between ∼26.7 Mb and ∼26.8 Mb on LG4, there were local discrepancies for two markers on the 'Regina' map and one on the 'Kordia' map. The local discrepancy detected in 'Kordia' lies only 0.7 cM from the nearest marker. This difference may be due to imprecision associated with the small population size used in creating this map (89 individuals). However, the marker was conserved between the 'Kordia' and 'Regina' maps. It was not possible to confirm this discrepancy in the other two parental maps, due to the limited number of polymorphic markers in this region in the other two cultivars. Similarly, on LG5, discrepancies were observed between ∼0.7 Mb and ∼2.7 Mb for eight and seven markers in 'Black Tartarian' and 'Lapins', respectively. Three markers also showed discrepancies in 'Kordia' between ∼3.5 Mb and ∼4.8 Mb. On LG6, a discrepancy between ∼0.0 Mb and ∼1.3 Mb was observed only in the 'Regina' map (one marker). Another discrepancy, in the region between ∼1.3 Mb and ∼4.3 Mb of the same linkage group, was found for 13 and five markers in 'Kordia' and 'Lapins', respectively. On LG7, in a region from ∼0.0 Mb to ∼4.5 Mb, three markers in both the 'Regina' and 'Lapins' maps were placed in an 'inverted order' when compared with the peach physical map. However, in 'Black Tartarian' only two of the three markers showed a discrepancy with the peach genome. This 'inverted order' was not found in the 'Kordia' map. Another region on LG7 shows discrepancies in two of the maps ('Black Tartarian' and 'Kordia'), from ∼5.5 Mb to ∼6.6 Mb, for two markers in each parent (Table 4; Figure S3).
Only partial co-linearity of markers within the proximal regions of LG5, LG6 and LG7, and the distal regions of LG1 and LG4, of the intra-specific sweet cherry maps (BT×K and R×L) and peach was detected, similar to what we observed in the parental maps. On the other hand, 1% (10/1,028) of the SNP markers were located on different LGs in the BT×K and R×L maps than predicted based upon the peach genome (v1.0) [14] (Table 5). A comparison between the position of these ten markers in the peach genome and the sweet cherry high-density linkage maps revealed a clustering of four of the SNP markers in two narrow zones on LG8 in both progenies, situated at 45.0 cM and 62.6 cM in BT×K and at 44.5 cM and ∼58.4 cM in R×L (Table 5).
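A simple way to screen for such order discrepancies is to compare, within each linkage group, the marker order on the genetic map (cM) with the physical order in the peach assembly (Mb), e.g. via a rank correlation and a check for locally inverted neighbours. The sketch below uses hypothetical marker data and is not the procedure used in the study.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical anchored markers for one linkage group: genetic (cM) vs physical (Mb) position.
lg = pd.DataFrame({
    "marker": ["m1", "m2", "m3", "m4", "m5"],
    "cM":     [0.0, 3.1, 7.8, 12.4, 20.0],
    "Mb":     [0.2, 1.1, 2.9, 2.5, 4.8],   # m3/m4 swapped relative to the genetic order
})

rho, _ = spearmanr(lg["cM"], lg["Mb"])
print(f"Spearman rho = {rho:.2f}")          # values below 1 indicate local order discrepancies

# Flag markers whose physical position decreases relative to the previous marker.
inverted = lg["Mb"].diff().lt(0)
print(lg.loc[inverted, "marker"].tolist())  # ['m4'] in this made-up example
```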
Syntenic regions between the sweet cherry high-density linkage maps and the Prunus bin map [24] were determined by comparing the location of RosCOS markers on the maps (Figure S5). The relative positions of the markers in the BT×K and R×L intra-specific sweet cherry maps (55 and 51 RosCOS markers, respectively) and the Prunus bin map are highly conserved, with the exception of one marker, which is in a different position in both the BT×K and R×L maps when compared to the Prunus bin map (Figure S5; marker ss490559286, located on LG6). Interestingly, this marker is located in the region of LG6 in which there are discrepancies between the 'Kordia' map and the peach genome.
There is also an almost perfect conservation of the relative map positions of these RosCOS markers when comparing the BT×K and R×L intra-specific sweet cherry maps with the consensus map reported by Cabrera et al. [19]. Of a total of 39 and 45 RosCOS markers conserved between the map reported by Cabrera et al. [19] and the BT×K and R×L intra-specific sweet cherry maps, respectively, there is only one RosCOS marker (RosCOS1551), located on LG5, whose relative position is not conserved in the R×L map.
Discussion
The number of heterozygous markers observed using the RosBREED cherry 6K SNP array v1 in each of the four parents (515-634, Table 2) represents 9-11% of all SNPs on the array (28-35% of the 1,825 SNPs reported to be polymorphic among the 269 sweet cherry accessions evaluated by Peace et al. [23]). These results are consistent with the heterozygosity estimate of 400-700 markers for any given sweet cherry cultivar suggested by Peace et al. [23]. The majority of these heterozygous SNPs were SNPs that would more likely be heterozygous, given that their MAFs were >0.2. Additionally, these results support the choice of the accessions used for the detection panel of the RosBREED cherry 6K SNP array v1 to efficiently represent cherry breeding germplasm from different origins (Table 1) (Scalabrin, personal communication), as already reported for peach [21]. These regions correspond to low recombination frequency, as shown in Figures S1 and S2.
A linkage map based on an intra-specific cross between 'Regina' and 'Lapins' has been previously reported [17]. Using the progeny of this intra-specific cross, as well as a newly developed intraspecific segregating population produced from a cross between 'Black Tartarian' × 'Kordia', we have constructed two high-density sweet cherry linkage maps. Over 650 SNPs have been placed on these high-density linkage maps, using the RosBREED cherry 6K SNP array v1. The vast majority of these SNP markers have not previously been positioned on sweet cherry genetic linkage maps. Additionally, analyses of the SNP markers in the intra-specific segregating populations enabled us to develop linkage maps specific to the parental lines used in these crosses. These results demonstrate that the RosBREED cherry 6K SNP array v1 may be used to efficiently conduct genome-wide genotyping of sweet cherry cultivars and their progeny.
The lengths of the maps of the four parental sweet cherry cultivars (719.4 cM for 'Black Tartarian', 788 cM for 'Kordia', 619.4 cM for 'Regina' and 610.1 cM for 'Lapins') are similar to those of other sweet cherry cultivar maps published previously: 799.4 cM [19]; 711 cM in 'EF' and 565.8 cM in 'NY' [18]. Similarly, the lengths of the BT×K and R×L maps (752.9 and 639.9 cM, respectively) were similar to those of the interspecific 'Napoleon' (P. avium) × P. nipponica consensus map (680 cM) [16] as well as the sweet cherry consensus linkage map developed by Cabrera et al. [19] (779.4 cM). The genetic distances determined for LG1, LG4, LG5, LG7 and LG8 are equivalent to those reported in the Prunus bin map (Figure S5). However, LG2, LG3 and LG6 are longer in BT×K and R×L when compared to the Prunus bin map.
A high co-linearity between the BT×K and R×L maps was observed and only minor rearrangements of markers in small regions of the maps were detected. Due to the relatively small sizes of the progeny populations, minor variation in marker order between the BT×K and R×L high-density linkage maps may be due to the lack of precision in positioning markers located less than 2 cM from each other. Alternatively, these minor variations between the maps may be due to imprecision in the mapping of SNP markers that are heterozygous in both parents (<hk × hk>). The distance between the flanking markers and the heterozygous marker may alter the exact positioning of the heterozygous marker.
Among the large number of SNPs located on the high-density sweet cherry linkage maps, the segregation of a small number of markers was skewed (28/723 in BT×K and 27/687 in R×L). Interestingly, most of these markers showed skewed inheritance in both segregating populations and mapped to similar regions of LG1 and LG6. The clustering of the skewed markers suggests that there may be zones within the LGs that contain deleterious genes. The cluster at the top of LG6 (∼4 Mb in the peach v1.0 genome) in both mapping populations coincides with the position of the peach locus for male sterility [15]; [41]. In R×L, there was also a clustering of skewed markers near the bottom of LG6 (20 Mb in the peach v1.0 genome), a zone that coincides with the position of the Prunus self-incompatibility (S) locus [15]. The S locus genotypes for 'Regina' and 'Lapins' are S1S3 and S1S4', respectively; therefore, progeny of this cross contain only S1S4' and S3S4' allele combinations. Pollen with the S1 genotype cannot germinate and fertilization cannot occur [42], resulting in the absence of S1S1 and S1S3 allele combinations in the progeny. This explains the skewed segregation of the markers surrounding the S locus located near the distal end of LG6. By analyzing haplotypes in the region of the S locus, it was possible to predict the S-allele combination for all individuals of the two progenies (Figure 3). This is an example of the applications offered by the cherry 6K SNP array that will be useful in breeding programs. The high co-linearity of the maps between the two non-related intra-specific progenies increases the likelihood that the marker order conserved in these maps is indicative of the sweet cherry genome organization. Additionally, the resulting maps (four parents and two progenies) showed high co-linearity with the Prunus bin map (Figure S5). Previous genomic comparison studies have shown a high level of synteny within the Prunus genus [15]. As expected, we found high synteny between the four sweet cherry parents studied and the peach genome (v1.0), supporting the use of peach as the model species for other Prunus members [13]; [43]; [44]. In addition, the reported discrepancies found on LG1, LG6 and LG7 (from ∼0.0 Mb to ∼4.5 Mb) coincide with minor assembly errors that have been detected by the International Peach Genome Initiative. Further divergences were found on LG4, LG5 and part of LG7; these could be attributable to mapping inaccuracy due to the low number of individuals analysed. Moreover, each of these divergences has only been found on two of the four parental maps. Thus, our results confirm not only the high synteny of peach and sweet cherry, but also the high reliability of the peach genome sequence for comparative studies within the Prunus species [13]; [43]; [44].
In addition to the within-linkage-group discrepancies mentioned above, a few markers were not located on the same linkage group in both sweet cherry and peach. One marker in the BT×K map and six markers in the R×L map mapped to different LGs than predicted based upon the peach genome. In addition, three markers, ss490555352, ss490556354 and ss490550342, were mapped to LG8 in both sweet cherry high-density maps, whereas these markers are located on chromosome 6 of the peach genome. These markers are located at ∼45 cM and ∼60 cM on LG8, suggesting that this region may have undergone a duplication event during the divergence between peach and sweet cherry. Sweet and sour cherry are considered the cultivated Prunus fruit species most distant from peach, as predicted by their divergent evolutionary origin [43]. Further studies would be required in order to verify the local discrepancies between the sweet cherry and peach genomes. This is the first report of a linkage map using the RosBREED cherry 6K SNP array v1. As the four cultivars used in this work derive from different origins, the polymorphism observed in their progenies from genome scanning with the RosBREED cherry 6K SNP array v1 indicates its utility for sweet cherry breeding programs with diverse germplasms. The high proportion of monomorphic SNP markers (Table 2) agrees with the monomorphic rate (58%) obtained for the 6K SNP array. The monomorphic rate of this array was higher than those found for the recently published SNP arrays for peach [21] and apple [20], probably due to the lower sequencing depth used in the 6K SNP array [23]. However, in this study we mapped ∼700 SNPs in each of the unrelated sweet cherry progenies examined.
The large number of common markers mapped in both progenies (379 markers) provides an excellent framework for comparative QTL discovery in sweet cherry. In conclusion, we have constructed two high-density sweet cherry linkage maps from two progenies using the RosBREED cherry 6K SNP array v1, which are highly co-linear with the previously published consensus sweet cherry linkage map, the Prunus bin map and the peach genome. The high density of heterozygous markers on these maps has the potential not only to enable high-resolution identification of QTLs in sweet cherry, but also to predict the genotypes at a specific locus linked to agronomically interesting characters, as illustrated for the self-incompatibility S locus. Figure S4 Comparison of the two consensus sweet cherry high-density linkage maps of two intraspecific progenies (BT×K and R×L). Anchored markers are indicated by connecting lines and are represented in green. Markers in black are unique to each map. Distance between markers is represented in cM. Markers grouped in a different LG in comparison with the peach genome v1.0 are underlined. Skewed markers are represented by asterisks indicating the distortion level (* for p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.005; ***** p < 0.001; ****** p < 0.0005; ******* p < 0.0001). (TIF) Figure S5 Comparison between the sweet cherry high-density linkage maps (BT×K and R×L) and the Prunus bin map. Positions of anchor loci between maps are indicated by connecting lines and are represented in green. Distance between markers is represented in cM. Markers grouped in a different LG in comparison with the peach genome v1.0 are underlined. Skewed markers are represented by asterisks indicating the distortion level (* for p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.005; ***** p < 0.001; ****** p < 0.0005; ******* p < 0.0001). Bin map markers are situated at the bottom of their corresponding bin. (TIF) | 7,484.4 | 2013-01-31T00:00:00.000 | [
"Agricultural and Food Sciences",
"Biology"
] |
Testing capital structure theories using error correction models: Evidence from China, India, and South Africa
The objective of this study is to empirically examine the capital structure theories that can explain the capital structure choice made by the firms that are operating in China, India, and South Africa. The study tests the capital structure theories as a stand-alone basis as well as an integrated framework of nested models using advanced dynamic panel data methods with a data-set of 1,183 firms with 12,187 firm-year observations spanning the period 1999–2016. Findings suggest that the firms adjust toward target leverage very quickly and trade-off theory explains the firms’ capital structure choice better than pecking order theory in the stand-alone model as well as the model nesting these two theories. This study contributes to the empirical literature of capital structure in the following way. First, this study uses error correction framework as a general specification of the widely used partial adjustment model. Second, the study uses advanced panel data estimators to estimate partial adjustment model and error correction model. Finally, the different specifications are tested using a large data-set of firms in China, India, and South Africa that has not been done so far. *Corresponding author: M. Kannadhasan, Indian Institute of Management Raipur, GEC Campus, Old Dhamtari Road, Sejbahar, Raipur, Chhattisgarh, India E-mail<EMAIL_ADDRESS>Reviewing editor: David McMillan, University of Stirling, UK Additional information is available at the end of the article ABOUT THE AUTHORS M. Kannadhasan is working as the associate professor of Finance at Indian Institute of Management, Raipur, India. His area of interest includes corporate finance, emerging stock market, and behavioral finance. He has published papers in international and national journals of repute. Bhanu Pratap Singh Thakur is a research scholar at Indian Institute of Management, Raipur, India. He is interested in Fixed Income Securities, Valuation, and Corporate Finance. His published work includes in the area of Fixed Income Securities and Valuation. C.P. Gupta, is a professor at the Department of Financial Studies, University of Delhi, Delhi, India. His areas of research, teaching, and consultancy include Investment Decisions, Risk Analysis, Project Appraisal, Security Analysis, Fuzzy Decision-Making, and Financial Modeling. Parikshit Charan is a faculty in the Operations Management area at IIM Raipur. His area of research includes operations strategy, sustainable operations, and supply chain management. He has published papers in international and national journals of repute.
Received: 23 November 2017 Accepted: 15 February 2018 First Published: 23 February 2018 © 2018 The Author(s). This open access article is distributed under a Creative Commons Attribution (CC-BY) 4.0 license.
PUBLIC INTEREST STATEMENT
Over the past several years, many theories have been developed and tested to explain the capital structure decisions taken by firms. The main theories in the forefront are the trade-off theory and the pecking order theory. These theories have been empirically tested across various countries using different methodologies. Yet, there is no consensus on which theory better explains the financing decisions made by firms. Using advanced dynamic panel data methods, we test these capital structure theories on a stand-alone basis as well as in an integrated framework of nested models in Chinese, Indian, and South African firms. The results suggest the presence of a high adjustment speed toward the target leverage in firms and a better ability of the trade-off theory to explain capital structure choices when compared to the pecking order theory.
Introduction
Modigliani and Miller published their pioneering work on the capital structure in 1958. In their article, they demonstrate that, in a frictionless world where the capital markets are perfect and there are no corporate taxes, the value of a firm is unaffected by its capital structure. In other words, capital structure is irrelevant (Modigliani & Miller, 1958). Since then, researchers have attempted to establish the relevance of corporate capital structure in the presence of capital market frictions and imperfections such as gains from leverage-induced tax shields (Modigliani & Miller, 1963), bankruptcy costs (Bradley, Jarrell, & Kim, 1984;Kraus & Litzenberger, 1973), agency cost (Jensen & Meckling, 1976), and information asymmetry (Myers & Majluf, 1984). Over the past six decades, a number of theories have been developed such as trade-off theory (TOT), pecking order theory (POT), and free cash flow theory to explain the variation in debt ratios across companies, and across countries by relaxing the perfect market assumptions systematically.
According to the TOT, every company seeks to find a judicious mix of debt and equity in its capital structure, i.e. an optimum capital structure that strikes a balance between the possible costs of financial distress and the benefits of the tax advantages associated with additional debt capital (Myers & Majluf, 1984;Warner, 1977). Myers (1984) and Myers and Majluf (1984) found inconsistencies in the TOT, which led them to propose a theory called POT. According to this theory, firms use external financing only when internal funds are insufficient to finance their investments. When faced with the external financing choice, firms prefer debt to equity because of asymmetric information and signaling problems that increase the cost of external equity (Myers, 1984).
The free cash flow theory posits that despite the threat of financial distress associated with the high level of debt, a firm uses a high level of debt when its operating cash flow exceeds its profitable opportunities (Myers, 2001). In addition to the market imperfections, floatation costs (Marsh, 1982) and adjustment costs and constraints may prevent a firm from maintaining its target/optimal debt ratio (Jalilvand & Harris, 1984). However, every firm tries to adjust toward the optimal ratio.
Existing studies (e.g. Antoniou, Guney, & Paudyal, 2008;Byoun, 2008;Dang, 2013;Flannery & Rangan, 2006;Huang & Ritter, 2009;Naveed, Ramakrishnan, Ahmad Anuar, & Mirzaei, 2015;Ozkan, 2001;Shyam-Sunder & Myers, 1999) have focused on the dynamic behavior of the adjustment process using partial adjustment model. This model captures the actual change in capital structure as a part of required change toward the target leverage of a firm. These studies found that the speed of adjustment varies from country to country, and period to period. As pointed out earlier, POT considers the problem of information asymmetry. The problem of information asymmetry arises when managers have better knowledge about the value of their firm than the rest of the market does.
In such a situation, potential investors are unable to differentiate between high-quality firms and low quality firms. As a result, potential investors price the shares of high-quality firms at a discount to protect themselves against making a worthless investment. Firms issue securities that carry the smallest adverse selection cost, i.e. they issue securities that are less risky and less responsive to valuation mistakes and subsequently least likely to be mispriced by imperfectly informed outside investors. In other words, firms issue securities that are least likely to be priced at a discount by investors. These issues lead a firm to prefer internal funds to external funds, and debt financing to equity.
The extant literature exhibits inconclusive support for the POT (Adair & Adaskou, 2015;de Jong, Verbeek, & Verwijmeren, 2011;Frank & Goyal, 2003;Lemmon & Zender, 2010;Seifert & Gonenc, 2008;Shyam-Sunder & Myers, 1999). The TOT makes a prediction about the target leverage of a firm, whereas the POT does not. It is also clear from the existing literature that most studies tested the TOT or POT in isolation. Fama and French (2005) suggested discontinuing empirical studies that test these theories on a stand-alone basis.
These theories cover some aspects of financing decisions that could guide the firms in designing and maintaining capital structure. Understanding the importance of the issue, recent studies have attempted to test both the theories simultaneously (e.g. Dang, 2013;Flannery & Rangan, 2006;Shyam-Sunder & Myers, 1999). Yet, there is no clarity on which theory (TOT or POT) can better explain the financing decisions made by a firm (Allini, Rakha, McMillan, & Caldarelli, 2017;Dang, 2013;Mai, Meng, & Ye, 2017;Serrasqueiro & Caetano, 2015). Therefore, there is a need for an integrated framework that incorporates the elements of POT and TOT (Dang & Garrett, 2015;Zhou, Tan, Faff, & Zhu, 2016).
Further, these studies examined the capital structure of firms operating in developed countries such as the United States of America (USA), the United Kingdom (UK), France, and Germany. Although the developed countries are a natural ground for testing the capital structure theories, it is equally important to test the applicability of the capital structure theories in emerging economies. There is a significant gap in this regard. Therefore, this study attempts to test these theories on firms located in countries that are considered emerging economies, namely China, India, and South Africa. This study examines the firms in China, India, and South Africa for mainly two reasons. First, these countries represent the biggest economies among emerging market economies and thus are the most obvious sample for testing the applicability of capital structure theories. Second, these countries belong to two different economic systems, with India and South Africa being primarily market-based economies as compared to China, which was predominantly regulated by the state during the study period.
This study contributes to the existing empirical literature of capital structure in the following way. First, this study uses error correction framework as a general specification of the widely used partial adjustment model. This framework captures the firm's adjustment toward target leverage in a better way than the existing models. Further, the study also tests the TOT and POT simultaneously by augmenting the partial adjustment model and error correction model (ECM) in a unifying framework. Second, the study uses advanced panel data estimators to estimate partial adjustment model and ECM. Specifically, the study uses the Anderson and Hsiao (1982) instrumental variable estimator (hereafter AH), Arellano and Bond's (1991) generalized methods of moments estimator (hereafter GMM) and Blundell and Bond's (1998) system generalized methods of moments estimator (hereafter SYS-GMM). Finally, the different specifications are tested using a large data-set of firms in China, India, and South Africa. Hence, the results are expected to provide fresh insights regarding capital structure theories in emerging market context.
The results provide clear evidence that the TOT outperforms the POT for firms in China, India, and South Africa. The firms tend to respond very quickly to changes in target leverage as compared to firms in developed countries. The study also reveals the benefit of using the ECM over the partial adjustment model to examine a firm's dynamic capital structure behavior. Lastly, the study also suggests that the firms in China, India, and South Africa utilize debt financing to offset a small proportion of the deficit. Overall, the nested models used in the study reveal that the firms' financing decisions are better explained by the TOT rather than the POT.
Literature review
The primary focus of testing trade-off models is to determine the extent and speed of rebalancing of leverage ratios toward the optimal level of capital structure. The results of earlier studies are mixed. Several studies support the trade-off models, i.e. firms do rebalance their leverage ratios toward the optimal level (e.g. Flannery & Rangan, 2006;Harris & Raviv, 1991;Hovakimian, Opler, & Titman, 2001;Leary & Roberts, 2005;Welch, 2004). Fama and French (2002) pointed out that firms adjust their leverage ratios only very slowly toward their optimal level/target range. They also noted that firms do not adjust every period.
Subsequently, studies estimated the speed of adjustment using dynamic models that incorporate the presence of transaction and issuance costs. Debt ratios appear to be mean-reverting, adjusting toward a target debt ratio in dynamic partial adjustment models (e.g. Auerbach, 1985;Drobetz, Schilling, & Schröder, 2015;Marsh, 1982;Opler & Titman, 1994;Robert & Taggart, 1977). The speed of adjustment is relatively high in UK firms (about 50%) (Ozkan, 2001) compared to US firms (about 7-18%) (Fama & French, 2002). Marsh (1982) and Opler and Titman (1994) used a logit regression model to study the mean reversion in debt ratios in the long run, and Auerbach (1985) used a target adjustment model with firm-specific and time-varying targets to support the above findings. Oino and Ukaegbu (2015) also used a dynamic adjustment model and found evidence in support of the POT in Nigerian firms. However, recent studies using advanced dynamic panel data methods strongly support the TOT (e.g. Antoniou et al., 2008;Flannery & Rangan, 2006;Wojewodzki, Poon, & Shen, 2017). Flannery and Rangan (2006) found that US firms have a target leverage in the long run and follow partial adjustment at a relatively fast rate of 30% a year to achieve the target leverage. Recently, Tao, Sun, Zhu, and Zhang (2017) have also provided evidence supporting the TOT using China's mergers and acquisition deals.
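These adjustment speeds are easier to compare when expressed as half-lives: with a partial adjustment coefficient b, a fraction (1 − b)^t of a leverage deviation remains after t periods, so the half-life is ln(0.5)/ln(1 − b). The snippet below simply applies this identity to the speeds cited above.

```python
import math

def half_life(speed):
    """Years needed to close half of the gap to target leverage at adjustment speed b."""
    return math.log(0.5) / math.log(1.0 - speed)

for label, b in [("US, lower bound", 0.07), ("US, upper bound", 0.18),
                 ("US (Flannery & Rangan)", 0.30), ("UK (Ozkan)", 0.50)]:
    print(f"{label}: b = {b:.2f}, half-life ~ {half_life(b):.1f} years")
```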
The POT assumes that there is no target or optimal ratio. Instead, the debt ratio is the cumulative result of hierarchical financing over a period of time. Shyam-Sunder and Myers (1999) found strong support for the POT with a sample of 157 firms. Similarly, Hovakimian et al. (2001) found support for the POT only for the short run. Conversely, Frank and Goyal (2003) found inconclusive support with 768 firms operating in the USA, adopting the model used by Shyam-Sunder and Myers (1999). However, Seifert and Gonenc (2008) investigated firms in developed countries (the US, the UK, and Germany) and observed support for the POT. Early studies attempted to test these theories individually using various methodologies.
The unified framework that incorporates the elements of the POT and TOT is called the Modified Pecking Order Theory (MPOT) (Myers, 1984). The MPOT or nested model comprises all the variables used to test the POT and the dynamic trade-off models. This nested model can be cast as an error correction mechanism using panel data. The model was first tested using small companies in Italy by Bontempi (2002), and it can be applied to firms that operate in either bank-based or market-based economies. Subsequent studies have confirmed that the trade-off model explains financing behavior much better than the POT (Dang, 2013;Frank & Goyal, 2003). Other findings are that the TOT was most relevant to repurchase decisions for larger firms (de Jong et al., 2011), whereas the POT was most relevant for smaller firms (Cotei & Farhat, 2009).
It is evident from the extant literature that a firm's borrowing (financing) decision is influenced by its characteristics. Myers (1984) tested the TOT by examining the relationship between debt ratios and firm characteristics such as size, asset risk, profitability, asset type, and tax status. The target leverage of the sample companies is estimated using five commonly used variables, namely the collateral value of assets, non-debt tax shields, profitability, growth opportunities, and firm size. These variables are chosen based on the studies of Frank and Goyal (2009), Rajan and Zingales (1995), and Titman and Wessels (1988). The following section discusses these determinants of leverage.
Collateral value of assets (CVA)
CVA is measured as the ratio of fixed assets to total assets. The TOT advocates that firms with a more tangible value of assets can borrow more debt than firms with a less tangible value of assets, because such assets can serve as security to alleviate the risk-shifting and asset-substitution problem (Jensen & Meckling, 1976). In addition, these firms have lower financial/bankruptcy costs, which reduces the agency costs of debt. A positive relationship between the collateral value of assets and target leverage is therefore expected (Johnson, 1997). However, the POT suggests that the collateral value of assets and target leverage are negatively related, as firms that have less collateral prefer to use debt over equity (Harris & Raviv, 1991).
Profitability
Modigliani and Miller (1963) pointed out that profitable firms have good reason to use debt capital as they want to maximize the tax advantages, i.e. debt tax shields. Therefore, a positive relationship between profitability and leverage is expected. Further, profitable firms could face a free cash flow problem that can be mitigated using debt capital (Jensen, 1986). Although static trade-off models predict a positive relationship, dynamic trade-off models predict a negative relationship once time is accounted for: a firm could increase its retained earnings as profit increases and thereby have less incentive to use debt. The POT predicts a negative relationship between profitability and leverage (Myers, 1984), as do dynamic trade-off models. Profitability is measured as the ratio of EBITDA to total assets.
Growth opportunities
Myers (1977) pointed to the conflicts between bondholders and shareholders. He argues that firms underinvest when using risky debt, because shareholders receive only a fraction of the benefits as compared with bondholders. The cost of underinvestment increases with growth opportunities. Therefore, firms facing high growth opportunities are likely to accept risky projects, which increases the costs associated with debt financing. Consequently, these firms rely on equity rather than debt. Conversely, low-growth firms operating in mature businesses may use debt to ease the free cash flow problem (Jensen, 1986). Therefore, the TOT expects a positive relationship between leverage and low growth opportunities and a negative relationship between leverage and high growth opportunities. The proxy for growth opportunities is the market-to-book ratio.
Non-debt tax shields
DeAngelo and Masulis (1980) pointed out that firms that benefit more from non-debt tax shields (tax shields on depreciation) have less incentive to exploit the tax advantage of debt financing, because non-debt tax shields may substitute for debt tax shields. This indicates a negative relationship between non-debt tax shields and target leverage, which is in line with the TOT. The proxy for non-debt tax shields is the ratio of depreciation to total assets.
Firm size (size)
Larger firms (i.e. those with higher assets) have the ability to borrow more than smaller firms, as they have a lower default risk. In addition, older firms have more debt capacity and a good reputation in the debt market, which leads them to borrow more in order to maximize the interest tax shields. This also decreases the agency costs associated with the asset-substitution and underinvestment problems (Chung, 1993). Conversely, smaller firms tend to have a lower leverage ratio, which is due to higher agency costs. Another feature of smaller firms is that they can be liquidated when they are in financial distress (Ozkan, 2001). Therefore, larger and older firms use more debt than small and newer firms. This suggests that firm size has a positive effect on target leverage (e.g. Booth, Aivazian, Demirguc-Kunt, & Maksimovic, 2001;Frank & Goyal, 2009;Rajan & Zingales, 1995;Titman & Wessels, 1988). This study uses the natural logarithm of total assets adjusted for inflation as a proxy of firm size.
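Putting the five proxies together, a minimal construction from firm-year accounting data could look like the sketch below; the column names and figures are hypothetical, while the ratio definitions follow the text.

```python
import numpy as np
import pandas as pd

# Hypothetical firm-year panel with the raw accounting items needed for the proxies.
df = pd.DataFrame({
    "fixed_assets": [120.0, 150.0],
    "total_assets": [400.0, 520.0],
    "ebitda":       [60.0, 45.0],
    "market_value": [500.0, 430.0],
    "book_value":   [260.0, 310.0],
    "depreciation": [18.0, 22.0],
    "cpi_deflator": [1.00, 1.04],   # to express assets in constant prices
})

df["cva"]    = df["fixed_assets"] / df["total_assets"]          # collateral value of assets
df["prof"]   = df["ebitda"] / df["total_assets"]                # profitability
df["growth"] = df["market_value"] / df["book_value"]            # market-to-book ratio
df["ndts"]   = df["depreciation"] / df["total_assets"]          # non-debt tax shields
df["size"]   = np.log(df["total_assets"] / df["cpi_deflator"])  # inflation-adjusted log assets
print(df[["cva", "prof", "growth", "ndts", "size"]])
```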
Various studies have investigated the capital structure choices faced by firms using international data (e.g. Antoniou et al., 2008; Booth et al., 2001; Dang, 2013; Deesomsak, Paudyal, & Pescetto, 2004; de Jong, Kabir, & Nguyen, 2008; Rajan & Zingales, 1995). However, only a few studies have tested the capital structure theories using a nested model, and those mostly in the developed-country context. These findings motivated us to test the theories simultaneously in emerging economies.
Research model
Early studies used various econometric techniques, such as logit and probit regressions, two-step approaches, structural equation models, nonlinear methods, cross-sectional regression, panel data regression, generalized methods of moments (GMM), and dynamic panel threshold models. Some studies used pooled regression with and without fixed effects (Byoun, 2008; Frank & Goyal, 2003; Shyam-Sunder & Myers, 1999). The primary objective of this paper is to test the TOT and POT simultaneously, so it is important to formulate a model that incorporates the elements of both theories. Since this study examines a nested model that embeds both theories and tests them simultaneously, it uses ECMs as tested by Dang (2013). This approach avoids several problems, such as misspecification of the dynamics, estimation of the target leverage using historical means, and estimation of free cash flows (Bontempi, 2002).
Dynamic models of TOT and POT
Both static and dynamic trade-off models use firm-specific characteristics as proxies for the benefits and costs of debt financing, and hence as determinants of leverage. The POT, by contrast, has typically been examined through event-study analysis of stock price reactions to announcements of security issues (debt or equity). Neither of these approaches alone can be used to test the nested model, so it is necessary to merge the characteristics of the two into a single model. Under the TOT, firms seek an optimal capital structure and try to adjust toward it, so the model should have a dynamic specification: a partial adjustment model that encapsulates a long-term target leverage with a lag in adjustment toward that target. The partial adjustment model is
ΔD_it = a + b (D*_it − D_it-1) + e_it (1)
or, equivalently,
ΔD_it = a + b TLD_it + e_it (2)
where TLD_it = D*_it − D_it-1. In Equation 1, leverage variations ΔD_it are explained in terms of the deviation of the past debt ratio D_it-1 from the target debt ratio D*_it; the parameter b measures the speed of adjustment of the actual debt ratio toward the target. This study assumes no simple mean reversion, as the costs and benefits of debt financing vary with their determinants, and with them the target leverage ratio. If a = 0 and b = 1, the TOT holds rather than the POT (e.g. Flannery & Rangan, 2006). A value of b between 0 (no adjustment at all, i.e. adjustment costs are prohibitively high) and 1 (full adjustment in the current period) indicates partial adjustment. This study estimates D*_it from the leverage determinants described above:
D*_it = β'X_it + η_i + λ_t + u_it (3)
where X_it contains the collateral value of assets, profitability, growth opportunities, non-debt tax shields, and firm size. The error term is composed of an unobservable individual firm and/or industry fixed effect (η_i), an unobservable time effect (λ_t), and white noise (u_it). The firm- and/or industry-specific effects control for time-invariant unobservable characteristics such as managerial skill, the firm's and product's life cycle, competitiveness, and strategy (Ozkan, 2001), which are not otherwise observed in Equation 3.
This study uses a two-stage estimation procedure (Fama & French, 2002; Shyam-Sunder & Myers, 1999). First, we estimate Equation 3 in order to obtain fitted values as a proxy for the target leverage (D*_it), and then we estimate Equation 2. Equation 2 is estimated with the AH estimator, the GMM estimator, and the SYS-GMM estimator; these estimators provide less biased estimates than pooled OLS and fixed-effects estimators. The partial adjustment model has been extensively used to test the TOT (e.g. Dang, 2013; Fama & French, 2002; Flannery & Rangan, 2006; Ozkan, 2001; Shyam-Sunder & Myers, 1999). However, this approach assumes that the cost of adjustment is independent of the adjustment of leverage toward its optimal or target level. This study instead considers the important role of adjustment costs, which reduce the speed of adjustment (Maddala, 2001). The ECM is the logical extension of the partial adjustment model: it considers explicitly the changes in the target leverage (i.e. changes in the determinants of leverage) and their influence on the adjustment cost and process. The error correction model is
ΔD_it = a + b_0 TLC_it + b_1 LECM_it + e_it (4)
where TLC_it = D*_it − D*_it-1 is the change in the target leverage and LECM_it = D*_it-1 − D_it-1 is the deviation from the target leverage in the previous period, so that TLD_it = TLC_it + LECM_it. To estimate the error correction model (Equation 4), this study applies the same two-stage procedure used for the partial adjustment model: fitted values are first used to proxy target leverage, and Equation 4 is then estimated using the AH, GMM, and SYS-GMM estimators.
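For concreteness, the following sketch illustrates the two-stage procedure in Python under simplifying assumptions: the first stage estimates the target leverage of Equation 3 with firm fixed effects, and the second stage runs a simple pooled regression of Equation 2 on the fitted target. The data layout and column names (firm, year, D, CVA, PROF, GROW, NDTS, SIZE) are hypothetical, and the pooled second stage is only a stand-in for the AH, GMM, and SYS-GMM estimators actually used in the study.

```python
import numpy as np
import pandas as pd

# Hypothetical panel: one row per firm-year with the leverage determinants.
df = pd.read_csv("panel.csv").sort_values(["firm", "year"])

# ---- Stage 1: target leverage D*_it from Equation (3) with firm fixed effects ----
X_cols = ["CVA", "PROF", "GROW", "NDTS", "SIZE"]
demean = lambda g: g - g.mean()                       # within (fixed-effects) transformation
y_w = df.groupby("firm")["D"].transform(demean)
X_w = df.groupby("firm")[X_cols].transform(demean)
beta, *_ = np.linalg.lstsq(X_w.values, y_w.values, rcond=None)
# Fitted value = firm mean of D plus the within-fitted component; used as proxy for D*_it.
df["D_star"] = df.groupby("firm")["D"].transform("mean") + X_w.values @ beta

# ---- Stage 2: partial adjustment, Equation (2): dD_it = a + b * TLD_it + e_it ----
df["dD"] = df.groupby("firm")["D"].diff()
df["TLD"] = df["D_star"] - df.groupby("firm")["D"].shift(1)
est = df.dropna(subset=["dD", "TLD"])
A = np.column_stack([np.ones(len(est)), est["TLD"].values])
coef, *_ = np.linalg.lstsq(A, est["dD"].values, rcond=None)   # pooled OLS for illustration only
print(f"intercept a = {coef[0]:.3f}, adjustment speed b = {coef[1]:.3f}")
```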
To test the presence of the POT, the study uses the empirical model of Dang (2013), Frank and Goyal (2003), and Shyam-Sunder and Myers (1999):
ΔD_it = a + b_0 CFD_it + e_it (5)
where CFD_it is the cash flow deficit or surplus for firm i in year t and the error term is composed, as described above, of an unobservable individual firm and/or industry fixed effect, an unobservable time effect, and white noise (u_it). The cash flow deficit or surplus is calculated as
CFD_it = I_it + ED_it + Δc_it − CF_it (6)
where CF is the cash flow from operating activities after tax and interest; I is net investment (CAPEX plus acquisitions and disposals); ED is the equity dividend paid; and Δc is the net change in cash, including the change in working capital. If a = 0 and b_0 = 1, firms raise (retire) debt to offset the deficit (surplus) (e.g. Shyam-Sunder & Myers, 1999).
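Continuing the same hypothetical data frame from the previous sketch, the financing-deficit test of Equations 5 and 6 can be sketched as follows; the component column names (cf, inv, div, dcash) are placeholders for the items defined above.

```python
import numpy as np

# df as constructed in the previous sketch, here assumed to also carry the cash-flow items.
# CFD_it = net investment + equity dividends + change in cash/working capital - operating cash flow;
# a coefficient near 1 on CFD would favour the POT.
df["CFD"] = df["inv"] + df["div"] + df["dcash"] - df["cf"]
est = df.dropna(subset=["dD", "CFD"])
A = np.column_stack([np.ones(len(est)), est["CFD"].values])
coef, *_ = np.linalg.lstsq(A, est["dD"].values, rcond=None)
print(f"a = {coef[0]:.3f}, b0 on CFD = {coef[1]:.3f}")
```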
The nested model used in this study follows the spirit of Bontempi (2002), Dang (2013), Frank and Goyal (2003), and Shyam-Sunder and Myers (1999). The augmented partial adjustment model is
ΔD_it = a + b_0 TLD_it + b_1 CFD_it + e_it (7)
In this model, if a = 0, b_0 = 0, and b_1 = 1, the POT holds.
Finally, this study augments the error correction model (Equation 4) as follows:
ΔD_it = a + b_0 TLC_it + b_1 LECM_it + b_2 CFD_it + e_it (8)
If a = 0, b_2 = 1, and b_1 = b_0 = 0, this indicates the presence of the POT; otherwise the TOT.
Again, we use the same two-step estimation procedure and use AH, GMM, and SYS-GMM estimators to estimate the augmented partial adjustment model (Equation 7) and augmented error correction model (Equation 8).
Data, variables, and sample
This study examines a large panel data-set of companies from China, India, and South Africa. The data are collected from the Bloomberg database and span the period 1999-2016. This study applies the standard procedure in the literature (e.g. Ozkan, 2001) to restrict the data-set. First, we discard financial and utilities firms because these firms face distinctive regulatory restrictions. Second, we delete firm-year observations with missing data for the required variables. Third, we retain only firms having data for five or more consecutive years, in order to apply the AH, GMM, and SYS-GMM estimators. Fourth, all variables are winsorized at the 1st and 99th percentiles to alleviate the effect of outliers. This leaves us with a final sample of 1,183 firms and 12,187 firm-year observations: 412 firms (5,053 observations) for China, 675 firms (6,008 observations) for India, and 96 firms (1,126 observations) for South Africa. Table 1 reports the descriptive statistics for Chinese, Indian, and South African firms and shows some noteworthy facts. Firms in South Africa have lower market leverage (0.273) than firms in India (0.514) and China (0.398). Tangibility (i.e. the collateral value of assets) is almost identical across the three countries: China (0.346), India (0.369), and South Africa (0.343). Growth opportunities are higher for Chinese (1.846) and South African (1.241) firms than for Indian (0.907) firms. High-growth firms tend to prefer equity to debt finance, which is consistent with the finding that market leverage for Indian firms is much higher than for Chinese and South African firms.
Target leverage estimation
Table 2 shows the results of the target leverage estimation using Equation 3. Overall, the results are comparable for firms in China, India, and South Africa; they are statistically significant and in line with the expectations of the trade-off theory. The variable collateral value of assets is positively related to leverage; it is significant for China at the 10% level, though not significant for India and South Africa. This result is in line with the TOT, which states that firms with a high collateral value of assets face lower bankruptcy costs, which in turn helps them borrow more. This finding is in line with the existing empirical literature (de Jong et al., 2008; Rajan & Zingales, 1995).
The variable non-debt tax shields shows mixed results. It is negatively related to leverage and significant for firms in China at the 10% level, in line with the TOT prediction that firms use non-debt tax shields as a substitute for debt tax shields (DeAngelo & Masulis, 1980); the variable is not significant for firms in India and South Africa. The results further show that profitability is negatively related to leverage and significant at the 1% level (except for South African firms). These findings support the POT, indicating that profitable firms are less inclined to use debt financing. Overall, the relationship between leverage and profitability is consistent with the existing literature (Antoniou et al., 2008; Rajan & Zingales, 1995).
The variable growth opportunities is negatively related to leverage and significant at the 1% level for firms in all three countries. This result is in line with the TOT, under which firms with high growth opportunities tend to use less debt financing to overcome the underinvestment problem, and it is strongly consistent with the existing empirical literature (Antoniou et al., 2008; de Jong et al., 2008; Rajan & Zingales, 1995).
The variable firm size is positively related to leverage and significant at the 1% level (except for South African firms). This is in line with the TOT, which states that large firms tend to face lower distress and bankruptcy costs, which in turn gives them an incentive to lever up and exploit tax shields. Overall, the regression results provide plausible explanations of the relationship between target leverage and its determinants, and the empirical results are in line with the TOT.
Partial adjustment model
Table 3 reports the results of the TOT tested with the partial adjustment model of Equation 2. Results for Chinese, Indian, and South African firms are reported in columns (1)-(3), (4)-(6), and (7)-(9), respectively. Columns (1), (4), and (7) use the AH estimator; columns (2), (5), and (8) use the GMM estimator; and columns (3), (6), and (9) use the SYS-GMM estimator. To test the validity of these methods, we also report the Sargan test and the AR2 test.
Overall, the results of the partial adjustment model are satisfactory. The Sargan and AR2 tests further validate the findings, suggesting no evidence of second-order correlation and confirming the appropriateness of the instruments used. The coefficient of TLD, which represents the speed of adjustment, is significant in all specifications for all countries. Overall, the SYS-GMM estimates are the smallest, followed by the AH and GMM estimates, and this pattern is consistent across countries. In economic terms, Chinese firms appear to adjust toward their target leverage most quickly, followed by Indian and South African firms. Empirically, these results are in line with the TOT, which predicts that firms actively adjust toward their target leverage.
Compared with previous empirical evidence from US firms, the adjustment speeds are much faster for Chinese, Indian, and South African firms. Fama and French (2002) reported a much lower adjustment speed of between 7 and 10% for US firms, and Antoniou et al. (2008) and Flannery and Rangan (2006) also reported a slower speed of adjustment, of about 0.30, than that found for firms in China, India, and South Africa. The presence of financially constrained firms with high growth rates and large investment is the main reason for the high adjustment speed in Chinese, Indian, and South African firms (Drobetz et al., 2015). Further, the above-target leverage and financing deficits of Indian and Chinese firms, relative to South African firms, cause the adjustment speed to differ across the three countries. Overall, the results are statistically and economically significant and in line with the TOT.
Error correction model
Table 4 reports the results of the TOT tested with the error correction model of Equation 4. Results for Chinese, Indian, and South African firms are reported in columns (1)-(3), (4)-(6), and (7)-(9), respectively. Columns (1), (4), and (7) use the AH estimator; columns (2), (5), and (8) use the GMM estimator; and columns (3), (6), and (9) use the SYS-GMM estimator. To test the validity of these methods, we also report the Sargan test and the AR2 test, as well as F-statistics under the null that the coefficients on TLC_it and LECM_it are equal.
The AR2 and Sargan tests reveal no problem with the model specification. The results show that both TLC_it and LECM_it are statistically and economically significant, although the SYS-GMM estimates of the LECM_it coefficient are comparatively lower in all specifications. By definition, the deviation from the target leverage, TLD_it, consists of TLC_it, the change in target leverage, and LECM_it, the deviation from the target leverage in the previous accounting period. The results show that both of these variables matter for the firm's leverage adjustment. The F-test shows that their effects are significantly different at the 1% level: the speed of adjustment with respect to TLC_it is significantly faster than the speed with respect to LECM_it. In China, India, and South Africa alike, firms adjust more rapidly to a change in target leverage than to a past deviation from the target.
Overall, the results indicate that firms in China, India, and South Africa make dynamic but asymmetric adjustments toward their target leverage. They also highlight the benefit of the error correction model over the partial adjustment model in analyzing the firm's dynamic adjustment toward target leverage. Table 5 reports the results of the POT tested by fixed-effects estimation of Equation 5. Overall, the models have very little explanatory power, as revealed by their R2 values. The variable CFD_it is statistically significant for China and India at the 1 and 5% levels, respectively, but its magnitude is small, which undermines its economic significance. These results reveal only a weak association between changes in firms' debt levels and the financing deficit. The F-test, moreover, shows that the coefficient on CFD_it is statistically less than 1 in all models at the 1% significance level. This is not in line with the POT, which requires the coefficient to equal 1. Overall, the results contrast with Shyam-Sunder and Myers (1999) but are consistent with Flannery and Rangan (2006).
Augmented partial adjustment model and augmented error correction model
Table 6 reports the results of the TOT and POT tested with the augmented partial adjustment model of Equation 7. To test the validity of the models, we report the Sargan test and the AR2 test, as well as F-statistics under the null that the coefficient on CFD_it equals one. Overall, the results reveal that the models are well specified. As in the previous results, the SYS-GMM estimates are the smallest, followed by the AH and GMM estimates. The results reveal that the TOT performs better than the POT for firms in China, India, and South Africa: the statistical and economic significance of TLD_it remains similar to Table 3 even in the presence of CFD_it, and the speed of adjustment toward the target leverage remains fast. The F-test also reveals that the coefficient on CFD_it is significantly less than 1, and CFD_it becomes insignificant in some specifications (columns 2, 6, 7, and 8). These results imply that the TOT dominates the POT in the nested model. Table 7 reports the results of the TOT and POT tested with the augmented error correction model of Equation 8. Overall, the results are in line with the TOT. The speeds of adjustment reflected by TLC_it and LECM_it are significant and only slightly affected by the presence of CFD_it, and their magnitudes are similar to those of the error correction model (Table 4). In addition, CFD_it becomes insignificant in many specifications (columns 2, 6, 7, and 8). Overall, the results of the augmented error correction model and the augmented partial adjustment model are in line with the TOT: the firm's financing decisions are better explained by the TOT than by the POT.
Robustness checks
This study has used market-based measure of leverage as the main variable that is also consistent with the existing empirical literature (Titman & Wessels, 1988;Welch, 2004). However, some empirical studies have used alternate variable, i.e. book-based measure of leverage (Myers, 1984;Shyam-Sunder & Myers, 1999). Hence, the robustness check in this study includes the alternative measure of leverage, i.e. book-based measure of leverage, defined as the ratio of book value of total debt to book value of total assets. Table 8 reports the results of the target leverage estimation for book leverage and Table 9 reports the results of augmented error correction model of book leverage.
Overall, the results for book leverage, shown in Table 8, are similar but less significant. The variable growth opportunities becomes insignificant for South African firms and carries a positive (but still significant) sign for Indian firms; this is inconsistent with the earlier findings for the market measure of leverage, though consistent with the TOT. The variable non-debt tax shields becomes significant for Indian and South African firms, which is not in line with the TOT prediction that firms use non-debt tax shields as a substitute for debt tax shields. The other variables carry the expected signs and are generally significant, consistent with the results for market leverage. Table 9 shows the results of the augmented error correction model for book leverage and reveals that the speeds of adjustment toward target leverage remain significant. The variable CFD_it also behaves similarly to the market measure of leverage. Overall, the results are qualitatively similar to those obtained earlier for the market measure of leverage (Table 7). Thus, the findings of this study appear to be robust to the choice of leverage measure.
Conclusion
Prior research on capital structure has mostly tested the TOT or the POT in isolation; as noted by Fama and French (2005), few studies have attempted to test both theories simultaneously. There is thus no clarity on which theory (TOT or POT) better explains the financing decisions made by a firm, and the studies that do test both have examined firms operating in developed countries only. This study endeavors to fill that gap by analyzing which theory better explains the capital structure choices of firms operating in China, India, and South Africa, using an error correction model (ECM). The study contributes to the existing literature in the following ways. First, it uses the error correction model of leverage to examine the firm's dynamic leverage adjustment process in emerging countries. Second, it provides consistent and efficient estimates of the speeds of adjustment toward target leverage using advanced econometric procedures. Third, to the best of our knowledge, it is one of the first empirical studies to test the TOT and POT using a data-set of firms from China, India, and South Africa.
The results provide clear evidence that the TOT outperforms the POT for firms in China, India, and South Africa. Firms adjust quickly toward their target leverage, and they respond more quickly to changes in target leverage than to past deviations from the target, and more quickly than firms in developed countries. The study also reveals the benefit of using the error correction model rather than the partial adjustment model to examine the firm's dynamic capital structure behavior. Lastly, the study suggests that firms in China, India, and South Africa use debt financing to offset only a small proportion of their financing deficit. Overall, the nested models used in the study reveal that the firm's financing decisions are better explained by the TOT than by the POT. Future studies could focus on the role of the institutional and macroeconomic framework in the choice of capital structure, or analyze the stability of the speed of adjustment in emerging countries. The links with corporate governance, corruption, and ownership structure can also be studied further.
"Economics",
"Business"
] |
Emergent unitarity from the amplituhedron
We present a proof of perturbative unitarity for planar N = 4 SYM, following from the geometry of the amplituhedron. This proof is valid for amplitudes of arbitrary multiplicity n, loop order L and MHV degree k.
Introduction
Unitarity is at the heart of the traditional, Feynman-diagrammatic approach to calculating scattering amplitudes; it is built into the framework of quantum field theory. Modern on-shell methods provide an alternative way to calculate scattering amplitudes. While they eschew Lagrangians, gauge symmetries, virtual particles and other redundancies associated with the traditional formalism of QFT, unitarity remains a central principle that needs to be imposed. It has allowed the construction of loop amplitudes from tree amplitudes via generalized unitarity methods [1-5] and the development of loop-level BCFW recursion relations [6, 7]. These on-shell methods have been particularly fruitful in planar N = 4 SYM and led to the development of the on-shell diagrams in [8] and the discovery of the underlying Grassmannian structure. Locality and unitarity seemed to be the guiding principles dictating how the on-shell diagrams glued together to yield the amplitude. The discovery of the amplituhedron in [9, 10] revealed the deeper principle behind this process: positive geometry. Positivity dictated how the on-shell diagrams were to be glued together, and the resulting scattering amplitudes were miraculously local and unitary! The discovery of the amplituhedron was inspired by the polytope structure of the six-point NMHV scattering amplitude, first elucidated in [11] and expanded upon in [12]. This motivated the original definition of the amplituhedron, which is analogous to the definition of the interior of a polygon. The tree amplituhedron A_{n,k,0} was defined as the span of k planes Y^I_α living in (k+4) dimensions, with I = {1, . . . , k+4} and α = {1, . . . , k}:
Y^I_α = Σ_{a=1}^{n} C_{αa} Z^I_a (1.1)
where Z^I_a (a = 1, . . . , n) are positive external data in (k+4) dimensions and C_{αa} ∈ G_+(k, n). In this context, positivity refers to the conditions det(Z_{a_1}, . . . , Z_{a_{k+4}}) ≡ ⟨Z_{a_1} . . . Z_{a_{k+4}}⟩ > 0 for a_1 < · · · < a_{k+4}. G_+(k, n) is the positive Grassmannian, defined as the set of all k × n matrices with ordered, positive k × k minors; for more details on the properties of the positive Grassmannian, see [8, 13-15] and the references therein. The scattering amplitude can be related to the differential form with logarithmic singularities on the boundaries of the amplituhedron. The exact relation, along with the extension of eq. (1.1) to loop level, can be found in [9].
The amplituhedron thus replaced the principles of unitarity and locality by a central tenet of positivity. Tree-level locality emerges as a simple consequence of the boundary structure of the amplituhedron, which in turn is dictated by positivity. The emergence of unitarity is more obscure: it is reflected in the factorization of the geometry on approaching certain boundaries. This was proved for A_{4,0,L} in [10]. The extension of this proof to amplitudes of arbitrary multiplicity using (1.1) is cumbersome and requires the use of the topological definition of the amplituhedron introduced in [16]. In the following section, we review this definition in some detail, along with some properties of scattering amplitudes relevant to this paper, and we expound the relation between the amplituhedron and scattering amplitudes. The rest of the paper is structured as follows. In section 3, we present a proof of unitarity for 4-point amplitudes of planar N = 4 SYM, using the topological definition of the amplituhedron. This serves as a warm-up to section 4, in which we provide a proof valid for MHV amplitudes of any multiplicity. Finally, in section 5 we show how the proof of the previous section can be extended to deal with the complexity of higher k sectors.
Figure 1. A representation of the relationship between momenta and momentum twistors, taken from [19].
Review of the topological definition of A_{n,k,L}
The scattering amplitudes extracted from the amplituhedron defined as in (1.1), using the procedure outlined in [9], reproduce the Grassmannian integral form of scattering amplitudes presented in [13-15, 17, 18]. These necessarily involve the auxiliary variables C_{αa}. In contrast, the topological definition of the amplituhedron can be stated entirely in terms of the 4D momentum twistors (first introduced in [11]). Consequently, this yields amplitudes that can be thought of as differential forms on the space of momentum twistors. In this section, we review the basic concepts involved in the topological definition of the amplituhedron. We begin with a review of momentum twistors and their connection to momenta in section 2.1 and proceed to the topological definition of the amplituhedron in section 2.2. We then explain how amplitudes are extracted from the amplituhedron in section 2.3 and, finally, in section 2.4, we set up the statement of the optical theorem in the language of momentum twistors. This is the statement we will prove in the main body of the paper.
Momentum twistors
Momentum twistor space is the projective space CP^3. A connection to physical momenta can be made by writing a twistor in the coordinates of an embedding C^4 as Z_a = (λ_a^α, μ_a^α̇). Here (λ, λ̃) are the spinor-helicity variables which trivialize the on-shell condition, p_a^{αα̇} ≡ λ_a^α λ̃_a^α̇, so that p_a^2 = det(λ_a^α λ̃_a^α̇) = 0 automatically, and μ_a^α̇ = x_a^{αα̇} λ_{aα}, where the dual momenta x_a are defined via p_a = x_a − x_{a−1} and trivialize momentum conservation. Thus the point x_a in dual momentum space is associated to a line in momentum twistor space. Scattering amplitudes in N = 4 SYM involve momenta p_a which are null (p_a^2 = 0) and conserved (Σ_a p_a = 0); momentum twistors are ideally suited to describe these momenta because they trivialize both constraints. Figure 1 summarizes the point-line correspondence between points in dual momentum space and lines in momentum twistor space: the momentum p_a is associated to the twistor Z_a, on which the lines corresponding to x_{a−1} and x_a intersect. Note that these Z_a are different from the calligraphic Z_a used in (1.1) (the connection between the two is that the Z_a are obtained by projecting the calligraphic Z_a through the k-plane Y in (1.1)). Thus, a set of on-shell momenta {p_1, . . . , p_n} satisfying Σ_{a=1}^{n} p_a = 0 can be represented by an ordered set of momentum twistors {Z_1, . . . , Z_n}. Each line Z_a Z_{a+1} corresponds to the point x_a in dual momentum space, as shown in figure 1.
Each loop momentum ℓ_a can also be associated to a line in momentum twistor space. We denote these lines by (AB)_a, where A and B are any two representative points on the line; this distinguishes loop momenta from the external momenta. Lorentz invariants can be expressed in terms of determinants (four-brackets) of momentum twistors, ⟨abcd⟩ ≡ det(Z_a, Z_b, Z_c, Z_d); in particular, an invariant of the form (x_i − x_j)^2 is proportional to the four-bracket ⟨i i+1 j j+1⟩ [14, 19]. Finally, the four-brackets of the 4D twistors are connected to the invariants involving the (k+4)-dimensional data Z^I_a via ⟨abcd⟩ = ⟨Y Z_a Z_b Z_c Z_d⟩, where Y is the k-plane of (1.1). We will utilize this connection later in section 5.4.
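As a concrete illustration (not taken from the paper), the four-bracket is just a 4 × 4 determinant, and external data generated on the moment curve is automatically positive; the small Python sketch below, with an arbitrary choice of n, checks that all ordered brackets are positive, as required for k = 0 external data.

```python
import numpy as np
from itertools import combinations

def bracket(*zs):
    """Four-bracket <abcd>: determinant of the 4x4 matrix whose rows are the twistors."""
    return np.linalg.det(np.stack(zs))

# Moment-curve external data Z_a = (1, t_a, t_a^2, t_a^3) with t_1 < ... < t_n:
# every ordered 4x4 minor is a Vandermonde determinant and hence positive.
n = 7
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1.0, n))
Z = np.stack([t**0, t, t**2, t**3], axis=1)

assert all(bracket(Z[a], Z[b], Z[c], Z[d]) > 0
           for a, b, c, d in combinations(range(n), 4))
print("all ordered four-brackets positive -> valid k = 0 external data")
```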
Topological definition
The amplituhedron A_{n,k,L} is a region in momentum twistor space which can be cut out by inequalities. The region depends on the integers n, k and L which specify the n-point, N^k MHV, L-loop amplitude. n is the number of external legs of the amplitude and, correspondingly, the number of momentum twistors involved in the definition of the amplituhedron; we denote these by {Z_1, . . . , Z_n}. L is the number of loops, and we have the lines (AB)_1, . . . , (AB)_L corresponding to the L loop momenta ℓ_1, . . . , ℓ_L. k appears below in the inequalities that define A_{n,k,L}.
Tree level conditions
The first set of conditions that define the amplituhedron involve only the external momentum twistors Z a and we refer to these as the "tree-level" conditions. They are listed below along with some comments about each condition.
• The external data must satisfy the positivity conditions ⟨i i+1 j j+1⟩ > 0. We adopt an ordering (1, . . . , n) in all definitions and define a twisted cyclic symmetry for this ordering, Z_{i+n} = (−1)^{k−1} Z_i. This definition is required to ensure that ⟨i i+1 n 1⟩ > 0 for odd k and ⟨i i+1 n 1⟩ < 0 for even k. We will see below that this is crucial to obtain the right number of sign flips.
• We require that the sequence {⟨1 2 3 i⟩} with i = 4, . . . , n has k sign flips. The use of (2.4) is crucial in arriving at this conclusion. Since all the sequences {⟨i i+1 i+2 j⟩} with j = i+3, . . . , i−1 have the same number of flips (with the appropriate twisted cyclic symmetry factors), we can use any of them in place of the sequence {⟨1 2 3 i⟩}; in the rest of the paper we will use whichever sequence is most convenient to the situation. (A small numerical illustration of flip counting is given below.)
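Since sign-flip counting recurs throughout the argument, the following helper (an illustration, not part of the paper) counts flips in a sequence of brackets; applied to the tree-level sequence {⟨123i⟩} for the moment-curve data generated above, it returns zero flips, as required for k = 0.

```python
def sign_flips(seq):
    """Count sign changes between consecutive (nonzero) entries of a sequence."""
    signs = [1 if x > 0 else -1 for x in seq if abs(x) > 1e-12]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# Tree-level sequence <1 2 3 i> for i = 4..n, using Z and bracket from the sketch above.
tree_seq = [bracket(Z[0], Z[1], Z[2], Z[i]) for i in range(3, n)]
print(sign_flips(tree_seq))   # -> 0 flips, as required for k = 0 (MHV) external data
```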
Loop level conditions
The next set of conditions involve both the external data and the loops (AB) a and we refer to these as "loop level" conditions.
• Each loop (AB)_a must satisfy a positivity condition analogous to (2.3): ⟨(AB)_a i i+1⟩ > 0. Note that we must include the twisted cyclic symmetry factor (−1)^{k−1} here as well. Once again, this implies that ⟨(AB)_a n 1⟩ > 0 for odd k and ⟨(AB)_a 1 n⟩ > 0 for even k.
• We require that the sequences {⟨(AB)_a i i+1⟩, ⟨(AB)_a i i+2⟩, . . . , ⟨(AB)_a i i−1⟩}, with i = 1, . . . , n (and the appropriate twisted cyclic factors), all have the same number of sign flips, namely k + 2. We will make use of these sequences as convenient in the rest of the paper.
Mutual positivity condition
The final condition is a relation involving multiple loop momenta (AB)_a: we must have ⟨(AB)_a (AB)_b⟩ > 0 for all pairs of loops. For multi-loop amplitudes, the conditions above amount to demanding that each loop (AB)_a is in the one-loop amplituhedron (i.e. it satisfies conditions (2.6) and (2.7)) and also the mutual positivity condition (2.8). Finding a solution to all these inequalities is tantamount to computing the n-point N^k MHV amplitude. The complexity of solving the mutual positivity condition shows up even in the simplest case of n = 4; indeed, its solution is at the heart of the four-point problem, as explained in [10]. The topological definition is well suited to exploring cuts of amplitudes, which correspond to saturating some of the inequalities in (2.3)-(2.8) by setting them equal to zero. This formalism has been exploited to investigate the structure of some cuts of amplitudes that are inaccessible by any other means; the results of some classes of these "deep" cuts are obtained to all loop orders in [20, 21].
Amplitudes and integrands as canonical forms
The inequalities (2.3)-(2.8) define a region in the space of momentum twistors. The goal of the amplituhedron program is to obtain the amplitude from purely geometric considerations; more precisely, we can obtain the tree-level amplitude and the loop-level integrand for planar N = 4 SYM. In contrast to generic quantum field theories, the planar integrand in N = 4 SYM is a well-defined rational function, as shown in [7, 22]. The conjecture is that the canonical form associated to the amplituhedron is the loop integrand. The canonical form associated to a region is the differential form with logarithmic singularities on all the boundaries of that region; for more details on canonical forms, their properties and precise definitions, see [23]. The discovery of amplituhedron-like geometric structures in other theories (e.g. [24-27]) lends further support to the idea that amplitudes can be thought of as differential forms on kinematic spaces; some consequences of this are explored in [28]. It is interesting to note that a topological definition of the amplituhedron has also been found directly in momentum space [29], which allows for the possibility of expressing N = 4 SYM amplitudes as differential forms in momentum space rather than momentum twistor space.
It is illustrative to show the calculation of the canonical form in the simple case of the one-loop four-point amplituhedron. Fixing the redundancy in the choice of the points A and B on the loop line, we arrive at a parametrization of (AB) in terms of four coordinates α_1, α_2, β_1, β_2.
The solution to the inequalities in (2.9) is simply α_1, α_2, β_1, β_2 > 0. The boundaries are located at α_1 = α_2 = β_1 = β_2 = 0, and the differential form with logarithmic singularities on all the boundaries is just dlog α_1 ∧ dlog α_2 ∧ dlog β_1 ∧ dlog β_2. This is the integrand for the 1-loop four-point MHV amplitude, as conjectured.
At higher points, the situation is more complicated. There are multiple ways in which the sequence S_loop (2.7) can have k + 2 sign flips, and it is useful to triangulate the complete region by enumerating all such patterns. This procedure works extremely well for n-point MHV amplitudes and is fleshed out in section 7 of [16]: the canonical form is obtained as a sum of contributions labelled by pairs i < j. For another derivation of this integrand, please refer to [7].
Finally, an important property of these forms is that they are all projectively well defined. They are invariant under the re-scaling Z i → t i Z i of each external leg. We will make use of this property in section 5.
Unitarity and the optical theorem
The relationship between the singularity structure of scattering amplitudes and unitarity has been the subject of much work [1-5, 22, 30-32]. It is well known that the branch cut structure of amplitudes is intimately tied to perturbative unitarity. This is encapsulated in the optical theorem, which relates the discontinuity across a double cut to the product of lower-loop amplitudes.
The presence of branch points in loop amplitudes is due to the pole structure of the integrand. This is governed by the boundary structure of the amplituhedron. The structure of boundaries and their relation to branch points has been studied extensively in [33][34][35][36][37][38][39][40]. The discontinuity across a branch cut is calculated by the residue on an appropriate boundary of the amplituhedron. The optical theorem thus translates into a statement about the factorization of the residue on this boundary. We expect this factorization to emerge as a consequence of the positive geometry.
Let us begin by rewriting the optical theorem, specifically for N = 4 SYM, in the language of momentum twistors. For now, we will focus on MHV amplitudes. We are interested in the case where one of the loops, AB, cuts the lines i i+1 and j j+1 while all other loops (which we denote by (AB)_a) remain uncut; that is, we are calculating the residue of the n-point MHV amplitude on the cut ⟨AB i i+1⟩ = ⟨AB j j+1⟩ = 0. It is convenient to parametrize AB in terms of variables (x, y, w, z) and an arbitrary reference twistor Z, such that the cut corresponds to setting the last two variables to zero. The terms in the L-loop integrand which contribute to this cut (i.e. which have the required poles) can then be written in terms of a function f(x, y, w, z, (AB)_a), whose dependence on the external twistors has been suppressed. The residue of M^L_n on the cut ⟨AB i i+1⟩ = ⟨AB j j+1⟩ = 0 is g(x, y) f(x, y, 0, 0, (AB)_a), where g(x, y) is a Jacobian which is irrelevant to our purposes. Unitarity predicts that the function f(x, y, 0, 0, (AB)_a) is related to lower-point amplitudes (see figure 2) and is of the form
f(x, y, 0, 0, (AB)_a) = Σ_{L_1 + L_2 = L−1} M_L^{L_1} × M_R^{L_2}. (2.13)
We will show that this structure emerges from the geometry of the amplituhedron. We will first present a proof for the four-point case, which is just a rewriting of the proof found in [10] using the topological definition; this proof will then admit a generalization to amplitudes of higher multiplicity.
Proof for 4 point amplitudes
In this section we will examine the unitarity cut AB12 = AB34 = 0, at four points and show that the residue can be written as a product of lower loop, 4-point amplitudes.
Specifically, we will show that the defining conditions of the amplituhedron (2.3)-(2.7) can be replaced by two disjoint sets of conditions which define a "left" amplituhedron with external data {Z_1, A, B, Z_4} and a "right" amplituhedron with external data {A, Z_2, Z_3, B}.
We will show that the mutual positivity conditions in (2.8), which seemingly connect the "left" and "right" amplituhedra, are automatically satisfied once the defining conditions for the "left" and "right" amplituhedra are met. This suffices to prove that the canonical form on the cut is a sum over L_1 + L_2 = L − 1 of products M_L^{L_1} × M_R^{L_2}, where M_L^{L_1} and M_R^{L_2} are the canonical forms of the "left" and "right" amplituhedra respectively. A suitable parametrization of (AB) is A = Z_1 + x Z_2, B = Z_3 + y Z_4 (3.1), which ensures that the cut conditions ⟨AB12⟩ = ⟨AB34⟩ = 0 are satisfied. To compute the canonical form on the cut, we must solve the remaining inequalities, which include the tree-level constraints and the conditions on the uncut loops (AB)_a; here we have used the parametrization (3.1) for A and B. The consequences of these conditions are best understood by considering the related quantity ⟨(AB)_a 2B⟩⟨(AB)_a A3⟩. Using (3.1) for A and B, we can rewrite this as follows.
The above equation implies ⟨(AB)_a 2B⟩⟨(AB)_a A3⟩ > 0, as each term on the right-hand side is individually positive due to (3.2) and (3.4). The two possible solutions are ⟨(AB)_a 2B⟩ > 0, ⟨(AB)_a A3⟩ > 0 (3.6) or ⟨(AB)_a 2B⟩ < 0, ⟨(AB)_a A3⟩ < 0 (3.7). A particular loop (AB)_a may satisfy either (3.6) or (3.7). In a generic case, there will be L_1 loops (AB)_{a_1} which obey (3.6) and L_2 = L − L_1 − 1 loops (AB)_{a_2} which obey (3.7). There are no restrictions on the values L_1 and L_2 can take; consequently, the complete region satisfying the inequalities (3.2) and (3.4) is a sum over all values of L_1 and L_2 with L_1 + L_2 = L − 1. We will now show that the canonical form for a region with fixed L_1 and L_2 can be written as a product of forms for the amplituhedra A_{4,0,L_1} and A_{4,0,L_2}.
Both of these follow from (3.1) and (3.2). The flip condition is the only one left to be verified, and it follows from the Plücker relation (3.9). This can be derived by noting that the five twistors {B, A_{a_1}, B_{a_1}, Z_1, Z_2} are linearly dependent, which leads to a linear relation among them; contracting this relation with (AB)_{a_1} and Z_3 yields (3.9). Note that the r.h.s. of (3.9) is negative, since ⟨(AB)_{a_1} B3⟩ = −⟨(AB)_{a_1} 34⟩ < 0, while the signs of the terms on the l.h.s. then force ⟨(AB)_{a_1} 1B⟩ < 0, ensuring that the sequence in (3.8) has 2 flips. Similarly, the external data for the right amplituhedron is the set {A, Z_2, Z_3, B}, and a loop (AB)_{a_2} which belongs to it satisfies the following conditions.
Clearly, the conditions (3.8) and (3.11) define the amplituhedra A_{4,0,L_1} and A_{4,0,L_2}, with canonical forms M_L^{L_1}(Z_1, A, B, Z_4) and M_R^{L_2}(B, A, Z_2, Z_3) respectively. To complete the proof that the canonical form on the cut is just the product of these forms, we must show that mutual positivity between the loops (AB)_{a_1} and (AB)_{a_2} imposes no further constraints. To see this, we can expand the loop (AB)_{a_1} in terms of the external data {Z_1, A, B, Z_4} and compute ⟨(AB)_{a_1}(AB)_{a_2}⟩. The positivity of all the resulting terms except those involving ⟨(AB)_{a_2} A4⟩ and ⟨(AB)_{a_2} B1⟩ immediately follows from (3.11); for these two, a short computation using (3.1) and (3.11) shows that they too are positive. Therefore ⟨(AB)_{a_1}(AB)_{a_2}⟩ > 0 imposes no new constraints, and the canonical form on the cut factorizes into M_L and M_R.
Proof for MHV amplitudes of arbitrary multiplicity
We will extend the above results to amplitudes of arbitrary multiplicity. However, the existence of higher k sectors, beginning at n = 5, complicates the proof. In this section we focus on a proof of unitarity for MHV amplitudes; this allows us to sketch the essentials of the proof without additional complications. In the next section, we modify the proof to account for the higher k sectors. We are examining the residue of the MHV amplituhedron A_{n,0,L}(Z_1, . . . , Z_n) on the cut ⟨AB i i+1⟩ = ⟨AB j j+1⟩ = 0. For the rest of the paper, we will assume j ≠ i + 1. The defining conditions for the amplituhedron A_{n,0,L}(Z_1, . . . , Z_n) are the tree-level positivity ⟨ijkl⟩ > 0 for i < j < k < l, together with the loop-level conditions of section 2.2.2, including the requirement that the loop sequence S have two sign flips (4.1). There are clearly many patterns of signs for which the sequence S has two sign flips; we refer to each pattern as a configuration of the amplituhedron. A configuration for the MHV amplituhedron is specified by giving the signs of all entries of the sequence S. We would like to show that, for each configuration, the canonical form can be written as the product of canonical forms of a left and a right amplituhedron. To begin, we parametrize the cut loop AB so that the cut conditions are satisfied. To show that the canonical form on this cut can be written as a product of canonical forms for lower-loop "left" and "right" MHV amplituhedra A^L_{n_1,0,L_1} and A^R_{n_2,0,L_2} (with L_2 = L − L_1 − 1), we need precise definitions of these objects. These are provided in the following sections.
The left amplituhedron
The external data for the left amplituhedron A^L_{n_1,0,L_1} is the set {Z_1, . . . , Z_i, A, B, Z_{j+1}, . . . , Z_n}, and its defining sequence (4.2) lends itself to easy comparison with the sequence S in (4.1). However, for consistency, we must also verify that the related family of sequences all have the same number of flips. This ensures that the definition of the amplituhedron is independent of the choice of sequence, similar to section 2.2.2. The positivity conditions on the loop data ensure that the first and last entries of all of these sequences are positive. Furthermore, any two sequences in the above set, all of which are of the form {⟨ak⟩} and {⟨a+1 k⟩}, satisfy
⟨ak⟩⟨a+1 k+1⟩ − ⟨a k+1⟩⟨a+1 k⟩ = ⟨a a+1⟩⟨k k+1⟩ > 0. (4.3)
The equality of the numbers of sign flips now follows from the analysis in appendix A. This shows that the left amplituhedron can be consistently defined at tree level. The mutual positivity and the loop-level positivity conditions for all the loops in the left amplituhedron are automatically satisfied because of (4.2) and (4.1). The flip condition defines the criterion for any uncut loop (AB)_a to be in the left amplituhedron; we will present a detailed analysis in section 4.2.
The right amplituhedron
The external data for the right amplituhedron A^R_{n_2,0,L_2} is R = {A, Z_{i+1}, . . . , Z_j, B}, and the defining inequalities are listed below, with a, b, c, d ∈ R and ⟨ij⟩ ≡ ⟨(AB)_a ij⟩, where (AB)_a is an uncut loop.
Tree level: ⟨abcd⟩ > 0, ⟨A i+1 a b⟩ > 0, ⟨a b j B⟩ > 0. (4.4)
In order to compare S_L to S, we introduce another sequence, S'_L (4.5), and call the number of flips in this sequence k'_L. The motivation for introducing it is that S_L and S'_L are connected by the Plücker relation (similar to (4.3))
⟨ik⟩⟨i+1 k+1⟩ − ⟨i k+1⟩⟨i+1 k⟩ = ⟨i i+1⟩⟨k k+1⟩ > 0. (4.6)
Following the analysis in appendix A, the relation between k_L and k'_L is determined entirely by the signs of the first and last elements,
where k'_L is the number of sign flips in S'_L. If ⟨i−1 i+1⟩ > 0, then k_L = k'_L − 1; otherwise k_L = k'_L. S now looks almost like a juxtaposition of S_R and S_L. Each flip pattern of S determines whether the corresponding loop (AB)_a belongs to the left or the right amplituhedron, as shown below.
Once again, it is simple to show that S R has two sign flips and S L has 0 sign flips in this configuration.
Trivialized mutual positivity
We have shown that for every configuration of the amplituhedron, each loop belongs either to the left or to the right. While we can consistently define left and right amplituhedra, it remains to be shown that the mutual positivity between a loop (AB)_L (L = 1, . . . , L_1) in the left amplituhedron and a loop (AB)_R (R = 1, . . . , L_2) in the right amplituhedron does not impose any extra constraints. This is easiest to see if we expand each loop (AB)_L and (AB)_R using (2.10) in terms of pairs of external twistors, with r_1 < r_2 ∈ {A, Z_{i+1}, . . . , Z_j, B} and l_1 < l_2 ∈ {Z_1, . . . , Z_i, A, B, Z_{j+1}, . . . , Z_n}. On expanding ⟨(AB)_L (AB)_R⟩, every term is of the form ⟨l_1 l_2 r_1 r_2⟩ with l_1 < l_2 < r_1 < r_2.
Since the external data are positive, i.e. ⟨ijkl⟩ > 0 for i < j < k < l, we are assured that ⟨(AB)_L (AB)_R⟩ > 0. This completes the proof of factorization for MHV amplituhedra on the unitarity cut. In the next section, we extend this proof to the higher k sectors.
Proof for higher k sectors
The proof of unitarity for higher k is similar in spirit to that for the MHV sector, but there are many additional details to take into account. Firstly, we must modify (2.13) to include products of "left" and "right" amplituhedra with different k. Suppose the left amplitude has g_L negative-helicity gluons and the right amplitude has g_R negative-helicity gluons; then g_L + g_R = g + 2. With the MHV degrees defined as k_L = g_L − 2, k_R = g_R − 2 and k = g − 2, this equation reads k_L + k_R = k. Recall that we introduced the function f(x, y, 0, 0, (AB)_a) in (2.12) and stated the optical theorem in terms of it. Including sectors of different k, this becomes
f(x, y, 0, 0, (AB)_a) = Σ_{k_L + k_R = k} Σ_{L_1 + L_2 = L−1} M_L^{L_1, k_L} × M_R^{L_2, k_R}. (5.1)
We expect that unitarity emerges from a factorization property of the geometry in a manner similar to the MHV case. In order to make this statement more precise, we will have to define analogues of the left and right MHV amplituhedra for N^k MHV external data.
Loop level (AB) a ii + 1 > 0, ABn1 (−1) k−1 > 0 and the sequence It is worth re-emphasizing that we wish to prove that the canonical form on the cut (which is computed by solving the inequalities (5.2) for the uncut loops (AB) a along with ABii + 1 = ABjj + 1 = 0) can be written as in (5.1). For this to happen, we want to show that the set of inequalities in (5.2) can be replaced by two sets of inequalities which define lower loop amplituhedra, A L n 1 ,k L ,L 1 and A R n 2 ,k R ,L 2 . It is not essential that the external data for these is a subset of {Z 1 , . . . Z n }. In particular they can be rescaled by factors Z i → σ(i)Z i and still yield the same canonical form due to projective invariance as discussed in section 2.3. In fact, as we show below, this rescaling plays a crucial role in ensuring that the left and right amplituhedra have k of both even and odd parity.
On the unitarity cut ( ABii + 1 = ABjj + 1 = 0 with i + 1 = j), there is a natural division of the external data into "left" and "right" sets, {Z 1 , . . . , Z i , A, B, Z j+1 , . . . , Z n } and {A, Z i+1 , . . . , Z j , B}. However, insisting that this be the external data for the left and right amplituhedra imposes too many constraints. To see this, suppose that the "left" set has MHV degree k L . We must have ABn1 (−1) k L −1 > 0. But (5.2) implies that ABn1 (−1) k−1 > 0. This forces (−1) k+k L > 0 and restricts k L to be the same parity as k. Similarly for the right set, we have j − 1jBA (−1) k R −1 > 0 and again (5.2) implies ABj−1j > 0 which forces (−1) k R > 0. In order to avoid these extra constraints on k L and k R , we must allow for arbitrary signs on the Zs and define two the sets of external data as where σ(k) = ±1. These signs will be determined by conditions like (5.2) which define the left and right amplituhedra along with the appropriate twisted cyclic symmetry. We will then show that the canonical form for every configuration in A n,k,L can be mapped into a product of canonical forms on suitably defined left and right amplituhedra A L n 1 ,k L ,L 1 and A R n 2 ,k R ,L 2 . We must demand that the set L satisfies all the conditions in (5.2). In addition, this must also be compatible with the fact that the Z i are the external data for A n,k,L .
AB and the Zs automatically satisfy ⟨a a+1 b b+1⟩ > 0 and ⟨AB a a+1⟩ > 0. Thus we must have σ_L(a)σ_L(a+1)σ_L(b)σ_L(b+1) > 0 and σ_L(A)σ_L(B)σ_L(a)σ_L(a+1) > 0. Furthermore, there are new constraints on A and B coming from the remaining positivity conditions. Finally, since the set L is the external data for A^L_{n_1,k_L,L_1}, it must satisfy a twisted cyclic symmetry. These requirements divide into two cases according to the sign of ⟨iABj⟩, and the solutions to these constraints are given below.
This again requires σ_L(A)σ_L(B) > 0 and turns (5.5) into a condition with the following solutions. Each of the resulting regions is characterized by particular signs of ⟨i A k k+1⟩ and ⟨B j+1 k k+1⟩, along with a pattern of sign flips for the sequence S^tree_L. Each region allows a parametrization of the line (AB) as A = ±Z_i ± xZ_{i+1} and B = ±yZ_j ± Z_{j+1}, with x > 0 and y > 0. In Table 1 we list the different possibilities.
Table 1. Parametrization of (AB) in the four regions.
It is crucial to remember that the canonical form is independent of the choice of σ(i) and parametrization of A and B. In all these cases the canonical form is that of A n 1 ,k L ,L 1 .
The right amplituhedron
A similar analysis of the effects of (5.2) on the set R yields the following constraints on {σ R }.
Once again, each region is characterized by a different pattern of sign flips of the sequence S^tree_R, where we have ignored an overall factor of σ_R(A)σ_R(i+1)σ_R(i+2). We list the various parametrizations and sign patterns of S^tree_R in Table 2. The canonical form is independent of the choice of σ(i) and of the parametrization of A and B.
Factorization of the external data
We will show that, on the unitarity cut, for every allowed sign-flip pattern of the sequence S^tree there exist regions (L_i, R_i) such that S^tree_L and S^tree_R have the flip patterns necessary for A^L_{n_1,k_L,L_1} and A^R_{n_2,k_R,L_2}. The analysis that follows is similar to the one in section 4.2. The sequence S_R is similar to the left part of S^tree and can be compared directly. In order to compare S^tree_L with S^tree, it is necessary to introduce another sequence, S'^tree_L; this is analogous to what we did in (4.5).
Let k_L, k'_L, k_R and k be the numbers of flips in S^tree_L, S'^tree_L, S^tree_R and S^tree, respectively. k_L and k'_L are related to each other through Plücker relations of the same type as before.
Table 3. Relation between k_L and k'_L, determined according to appendix A.
It is easy to see that these relations hold in all regions (L_i, R_i). As shown in appendix A, we can conclude that the relation between k_L and k'_L depends only on the signs of the first and last terms, which are encoded in the matrix below.
The relation between k_L and k'_L is tabulated in Table 3. It is helpful to label the allowed flip patterns of S^tree as S^tree_ab, where a and b are the signs of ⟨i i+1 i+2 j⟩ and ⟨i i+1 i+2 j+1⟩ respectively. The different possibilities are shown below.
Table 4. (k_L, k_R) in all regions for the configuration S_++.
Table 6. (k_L, k_R) in all regions for the configuration S_−+.
For each configuration S^tree_ab, the sequences S_L and S_R have particular signs depending on the region (L_i, R_i). For the configuration S_++, we have k_1 + k_2 = k, and the regions satisfying k_L + k_R = k can be read off from Table 4. For the configuration S_−+, we have k_1 + k_2 = k − 1, and the corresponding regions can be read off from Table 6. For the configuration S_−−, we have k_1 + k_2 = k, and the regions which satisfy k_L + k_R = k are (L_1, R_3), (L_1, R_4), (L_2, R_3), (L_2, R_4), (L_3, R_2), (L_4, R_1).
We see that for every configuration S tree ab , there are regions (L i , R i ) that satisfy k L + k R = k. Thus every configuration in the original amplituhedron can be covered by these regions consistent with the expected factorization. The remaining regions exist because they are related to amplitudes via inverse soft factors and have identical canonical forms. However, these are not necessary to cover all regions of the original amplituhedron.
Factorization of loop level data
At loop level, we need to show that each loop (AB)_a belongs either to the left or to the right amplituhedron. The relevant sequences are S^loop_L and S^loop_R (denoting ⟨(AB)_a ij⟩ simply as ⟨ij⟩). As before, it is convenient to introduce an auxiliary sequence S'^loop_L. Let the numbers of flips in S^loop_R and S^loop_L be k_r and k_l respectively; these are not k_R and k_L, which are the numbers of flips in the tree-level sequences S^tree_R and S^tree_L. The flip patterns of S^loop can be organized as follows.
S^loop: ⟨i+1 i+2⟩, ⟨i+1 i+3⟩, . . . , ⟨i+1 j⟩, ⟨i+1 j+1⟩, . . . , ⟨i+1 i⟩(−1)^{k−1}.
We showed in the previous section that, on the unitarity cut, the external data factorizes such that k_L + k_R = k with k_L, k_R ∈ {0, . . . , k}. It is trivially true that each loop belongs either to the left or to the right amplituhedron; we must show that if a loop (AB)_a belongs to the left amplituhedron, then it cannot also belong to the right amplituhedron. First, note that in each configuration we have k_l = k_2 + l and k_r = k_1 + r with r, l = 1 or 2. Now suppose that (AB)_a belongs to both the left and the right amplituhedra. Then we must have k_l = k_L + 2 and k_r = k_R + 2. Expressing k_l and k_r in terms of k_1, l, k_2 and r, and using k_L + k_R = k, we get l + r = k + 4 − (k_1 + k_2), i.e. l + r = 4 or 5 depending on the configuration.
Clearly, l + r = 5 is impossible, since l, r = 1 or 2, so we only need to show that l + r = 4 is impossible. Note that this can happen only if l = r = 2. In that case one finds that we must have σ_R(A)σ_R(B)σ_L(A)σ_L(B)(−1)^{k_R} < 0, and it is easy to verify from section 5.1 that this is always false. Thus each loop belongs solely to the left or to the right amplituhedron.
Mutual positivity
To complete the proof of factorization, we need to show that the mutual positivity between a loop in A^L_{n_1,k_L,L_1} and one in A^R_{n_2,k_R,L_2} is automatically satisfied. This is easier to see while working with the higher-dimensional data: we can rewrite all the four-brackets using the Zs and the k-plane Y, as described in section 2.1 (for more details, see section 7 of [16]). A loop in the left amplituhedron can be parametrized as a (k_L + 2)-plane Y_L. Similarly, a loop in the right amplituhedron can be thought of as a (k_R + 2)-plane (Y^R_1 . . . Y^R_{k_R} A_b B_b) and parametrized as
Y^R_μ = (−1)^{μ−1} σ_R(A) A + α_μ σ_R(i_μ) Z_{i_μ} + β_μ σ_R(i_{μ+1}) Z_{i_{μ+1}}, (5.10)
with μ ∈ {1, . . . , k_R}, Z_{i_μ} ∈ {A, Z_{i+1}, . . . , Z_j, B} and the labels ordered as j_1 < j_2 < . . . < j_{k_R+2}. This reduces the mutual positivity condition ⟨Y_L (AB)_a Y_R (AB)_b⟩ > 0 to a condition involving (k+4)-brackets of the form ⟨i_1 . . . i_{k+4}⟩. It is easy to see that with positive (k+4)-dimensional data (⟨i_1 . . . i_{k+4}⟩ > 0 when i_1 < i_2 < . . . < i_{k+4}), mutual positivity is guaranteed. The signs σ_L(k) and σ_R(k) are crucial in making this work.
Conclusions
We have shown that unitarity can be an emergent feature. The positivity of the geometry inevitably leads to amplitudes identical to those derived from a unitary quantum field theory. This lends further support for the conjecture that the amplituhedron computes all the amplitudes of N = 4 SYM. It also suggests that the notion of positivity is more fundamental than those of unitarity and locality which are the cornerstones of the traditional framework of quantum field theory.
A Restricting flip patterns
Consider a pair of sequences {a_1, . . . , a_n} and {b_1, . . . , b_n} with an equal number of terms. Further suppose that they are connected by the Schouten identity and satisfy a positivity condition, i.e. there exists a relation a_i b_{i+1} − a_{i+1} b_i = ⟨ab⟩ > 0. We will show that the numbers of sign flips in these sequences, k_1 and k_2 respectively, are related, and that the relation depends only on the signs of a_1, a_n, b_1 and b_n.
Firstly, we note that positivity forces each block (a_i, a_{i+1}; b_i, b_{i+1}) in the pair of sequences to take one of a restricted set of sign patterns.
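This claim can be checked numerically. The sketch below (an illustration under an assumed construction, not part of the paper) generates pairs of sequences obeying a_i b_{i+1} − a_{i+1} b_i > 0 from two-component vectors via the Schouten identity, and verifies that the difference in flip counts is fixed by the signs of the endpoint entries alone.

```python
import numpy as np
from collections import defaultdict

def flips(seq):
    s = np.sign(seq)
    return int(np.sum(s[:-1] != s[1:]))

det2 = lambda x, y: x[0] * y[1] - x[1] * y[0]

# Construction: a_i = det(u, v_i), b_i = det(w, v_i) with det(u, w) > 0 and the v_i
# ordered on a half-circle, so that a_i b_{i+1} - a_{i+1} b_i = det(u, w) det(v_i, v_{i+1}) > 0
# by the two-dimensional Schouten identity.
table = defaultdict(set)
rng = np.random.default_rng(0)
for _ in range(2000):
    u, w = rng.normal(size=2), rng.normal(size=2)
    if det2(u, w) <= 0:
        u, w = w, u
    angles = np.sort(rng.uniform(0.0, np.pi, size=8))
    v = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    a = np.array([det2(u, vi) for vi in v])
    b = np.array([det2(w, vi) for vi in v])
    key = (np.sign(a[0]), np.sign(a[-1]), np.sign(b[0]), np.sign(b[-1]))
    table[key].add(flips(b) - flips(a))

# Each observed endpoint-sign pattern corresponds to a single difference in flip counts.
assert all(len(diffs) == 1 for diffs in table.values())
print({key: diffs.pop() for key, diffs in table.items()})
```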
"Materials Science"
] |
Aerodynamic analysis of several insect-proof screens used in greenhouses
Insect-proof screens constitute a physical means of protecting crops and their use has become widespread in recent years. There is no doubt as to their efficiency in controlling insects, but they do have a negative influence on greenhouse ventilation as they obstruct air-flow. It is therefore necessary not only to evaluate their efficiency as a means of protecting crops, but also to estimate the degree to which they obstruct airflow. To this end the present work analyses the aerodynamic characteristics of these screens, carrying out experiments with two devices which force a flow of air through them, thus providing data of the pressure drop as a function of air velocity. The analysis of these data has provided simple ratios of the permeability and the inertial factor as a single function of porosity. Additional key words: agrotextiles, airflow resistance, climate control, ventilation.
Introduction
The installation of plastic screens in greenhouse vents has become the most widespread means of keeping insects out in recent years. In some parts of the Mediterranean basin, plastic meshes were first used in order to limit damage to the crops at the edges of the greenhouse caused by strong winds, and also to prevent birds from entering. At that time these textiles had a very low thread density (6 × 6 or 6 × 9 threads cm -2 ). Their great potential as a physical means of protecting crops was soon discovered, and textiles with a greater density of threads started to appear, capable of preventing or limiting the entrance of certain species of insect pests. Several authors have studied the efficiency of agrotextiles as a means of controlling whitefly, aphids and viruses in different crops. Results show that agrotextiles may be beneficial during the coldest part of the year. Nevertheless, if not removed in time they can have a negative effect, provoking premature flowering and reducing weight per plant (Nebreda et al., 2005). Barrier crops have been investigated by several authors (Ross and Gill, 1994; Kittas et al., 2002), resulting in a wide range of divergent conclusions on their effectiveness. These can be an effective crop management strategy to protect against virus infection, but only under specific circumstances (Fereres, 2000). At present, in areas with a high density of greenhouses, where high losses can be incurred due to the activity of insects, the use of insect-proof screens is compulsory.
The main advantage to be obtained by using protective screens is the reduction in insect populations inside the greenhouse, and therefore a lower incidence of diseases and the possibility of reducing the amount of phytosanitary treatments (Baker and Jones, 1989; Berlinger et al., 1991). On the whole, it may be said that any means of physical protection which leads to a reduction in the amount of treatments with phytosanitary products will be of both financial and environmental benefit, as well as reducing the health risk to those workers applying the treatments and increasing trust in the market.
The present study is a contribution to the determination of the airflow characteristics of insect-proof screens by means of the permeability and inertial factor coefficients of Forchheimer's equation. Some commercial screens were tested using two experimental devices that force a flow of air through the porous mesh (one for velocities lower than 1 m s -1 and the other for higher velocities). The results allow ratios between the coefficients of Forchheimer's equation and porosity to be obtained, which can then be used to calculate the airflow resistance of these screens.
Material and Methods
The efficiency of insect-proof screens as a physical barrier depends on the dimensions of the pores between the threads of the mesh. The screens are often named according to the number of threads per surface unit. However, the density of the threads alone does not suffice to determine the average dimension of the pores; the diameter of the fibres must also be known. With the above information it is possible to calculate the average length of the pores in the two main directions of the mesh: [1] where L px and L py are the average lengths (in m) of the pores in the two main directions; D h is the average diameter (in m) of the threads which make up the mesh; and ρ x and ρ y represent the number of threads per unit of length (threads m -1 ) in each of the two main directions.
The main disadvantage of protective screens is that they reduce the surface area of the greenhouse devoted to ventilation. Installing screens on greenhouse vents impedes the air-flow, reducing the ventilation rate and therefore affecting climatic variables. The reduction in ventilation surface is inversely related to the porosity of the mesh. The porosity α (m 2 m -2 ) expresses the relationship between the surface area of the pores (S p ) and that of the total mesh (S t ): α = S p /S t [2]. The analysis of commercial protective screens is therefore of interest on two counts. Firstly, it is important to know the efficiency of these materials as a physical barrier preventing the entrance of insects to the greenhouse; secondly, there is a need to ascertain to what extent the mesh affects the ventilation, and by extension the microclimate inside the greenhouse. The present work focuses on the latter. The analysis of the efficiency of these screens as a means of protecting crops basically consists of the geometrical study of the pores and the comparison of the measurements obtained with the characteristic sizes of the most harmful pest species. Analysis of the mesh's resistance to airflow is carried out using devices which force a current of air through the porous meshes, thus allowing the measurement of the pressure loss caused by the porous material as a function of air velocity.
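Eqs. [1] and [2] amount to simple arithmetic on the thread counts and the thread diameter. The sketch below is a minimal illustration under the usual assumption that the pore side in each direction equals the thread spacing (1/ρ) minus the thread diameter; the thread counts and diameter used here are invented for illustration, not taken from Table 1.

```python
# Sketch (assumed formulas): pore dimensions and porosity of a woven screen.
# L_px = 1/rho_x - D_h,  L_py = 1/rho_y - D_h   (pore side lengths, m)
# alpha = pore area / cell area = (L_px * L_py) * rho_x * rho_y

def pore_geometry(rho_x, rho_y, d_h):
    """rho_x, rho_y: threads per metre in each direction; d_h: thread diameter (m)."""
    l_px = 1.0 / rho_x - d_h             # average pore length along x (m)
    l_py = 1.0 / rho_y - d_h             # average pore length along y (m)
    alpha = l_px * l_py * rho_x * rho_y  # porosity S_p / S_t (m2 m-2)
    return l_px, l_py, alpha

# Example with made-up values: a 10 x 20 threads/cm mesh, 0.25 mm thread diameter.
l_px, l_py, alpha = pore_geometry(rho_x=10e2, rho_y=20e2, d_h=0.25e-3)
print(f"L_px = {l_px*1e6:.0f} um, L_py = {l_py*1e6:.0f} um, porosity = {alpha:.2f}")
```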
Several authors have studied the flow-resistance characteristics of various screening materials and tried to find their effect on greenhouse ventilation. In those studies, the resistance to airflow caused by the screens was determined either by using equations derived for free and forced fluid flow through porous materials or by means of a coefficient of discharge incorporated into Bernoulli's equation (Teitel, 2001). Miguel et al. (1997) studied the characteristics of several types of mesh used in greenhouses by means of experiments in a wind tunnel. The data obtained related the pressure drop due to the mesh with air velocity. This approach, since used by several authors (Dierickx, 1998; Miguel, 1998a,b; Muñoz et al., 1999), is based on Forchheimer's equation, which describes the airflow through a porous medium: ∂P/∂x = (µ/K) u + (ρY/√K) u 2 [3] where P is the pressure (in Pa), x is the screen thickness (in m), µ is the dynamic viscosity (in Pa s), K is the permeability of the screen (in m 2 ), u is the air velocity (in m s -1 ), ρ is the air density (in kg m -3 ), and Y is the inertial factor (dimensionless).
For velocities of airflow which imply Reynolds numbers over 150 (Teitel, 2001), the viscous forces do not dominate the flow, and therefore the first term of the second member of Eq. [3] can be discarded, obtaining the following Bernoulli equation: ∆P = F m (ρ/2) u 2 [4] where F m is a pressure loss coefficient due to the presence of the screen, related to the discharge coefficient C dm through the pores of the mesh as follows: F m = 1/(C dm ) 2 [5]. In any case, as Muñoz et al. (1999) stated, based on the empirical relationships obtained by Miguel et al. (1997), the values of pressure drop given by Eqs. [3] and [4] show very small differences for the wind speed interval of between 0 and 3 m s -1 .
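Whether the viscous term can actually be discarded depends on the K, Y and thickness of the particular screen. The sketch below shows how one would check this over the 0-3 m s -1 interval; the screen parameters are hypothetical placeholders, not values from Table 2, and the Forchheimer form is the one given above.

```python
# Sketch: Forchheimer pressure drop vs. its inertial-only (Bernoulli-type) simplification.
# Assumed form of Eq. [3]: dP/dx = (mu/K)*u + (rho*Y/sqrt(K))*u**2
import numpy as np

mu = 1.8e-5    # dynamic viscosity of air (Pa s)
rho = 1.2      # air density (kg m-3)
K = 1.0e-9     # permeability (m2)       -- hypothetical placeholder
Y = 0.2        # inertial factor (-)     -- hypothetical placeholder
dx = 0.4e-3    # screen thickness (m)    -- hypothetical placeholder

u = np.linspace(0.1, 3.0, 30)                      # approach velocity (m s-1)
dp_full = dx * (mu / K * u + rho * Y / np.sqrt(K) * u**2)   # Eq. [3]
dp_inertial = dx * (rho * Y / np.sqrt(K) * u**2)            # viscous term dropped (Eq. [4])

viscous_share = 1.0 - dp_inertial / dp_full
print(f"viscous share of the pressure drop at 3 m/s: {viscous_share[-1]:.1%}")
```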
There are different expressions to estimate the value of the pressure loss coefficient. Brundrett (1993) proposed the following for metallic mesh: [6] where σ mo and σ c are the kinetic energy and momentum correction factors and Re is the Reynolds number. Based on this equation, Bailey et al. (2003) obtained a similar expression to the previous one for the specific case of insect-proof screens: [7] Linker et al. (2002) also proposed an expression to determine the pressure loss coefficient as a function of porosity and the Reynolds number: [8]
Wind tunnel
The experimental devices used force a current of air through the porous mesh, allowing the resulting pressure drop to be measured as a function of air velocity. The first experiments were carried out in a 4.74 m long low-speed, open-circuit wind tunnel (Fig. 1) of circular cross-section (38.8 cm diameter). This device is able to produce an airflow of up to 10 m s -1 , and is divided into the following parts: flow conditioner, contraction, test section, diffuser and fan. The contraction section is the most important element of the wind tunnel (Fang, 1997). Conical in shape, its purpose is to accelerate the airflow as it advances towards the test section. Generally, the contraction area ratio is the most important factor to bear in mind, as it affects the uniformity of flow, the possibility of flow separation and the downstream turbulence level (Fang et al., 2001). Once the contraction ratio is determined, the nozzle shape and length also play an important role in the design, affecting the uniformity of the speed profile and the boundary layer (Morel, 1975). The tunnel used in this experiment has a contraction ratio of 1:5.32, and the ratio between the entrance diameter and the length of the contraction section is 0.92. The airflow was propelled by an HCT-45 axial fan (Sodeca S.A., Sant Quirze de Besora, Barcelona, Spain) which can reach a velocity of 2,865 rpm and a maximum flow volume of 12,800 m 3 h -1 . A Micromaster 420 AC Inverter (Siemens España S.A., Madrid, Spain) was used to regulate the fan speed at an output frequency of between 0 and 50 Hz and at a resolution of 0.01 Hz.
Control of air velocity in the test section of the wind tunnel was carried out through the analogue inputs of the inverter. These inputs are fed by a continuous current of between 0 and 10 V, and there is a linear relationship with the response of the inverter (from 0 to 50 Hz). The inputs are controlled by means of an electronic circuit incorporating a microprocessor which receives instructions from a PC. In this way the inverter can be operated remotely and automatically.
Two 4 mm diameter Pitot tubes (Airflow Developments Ltd, Buckinghamshire, England) were used to measure the static pressure. These were placed 450 mm upstream and downstream from the central axis of the test section. The static pressure outlets of both Pitot tubes were connected to an SI 727 differential pressure transducer (Special Instruments, Nörlingen, Germany). The pressure transducer range was 200 Pa and its accuracy ±0.25% of full scale. Hysteresis and reproducibility were ±0.1% of full scale, and the temperature error was ±0.025% K -1 , with a 0-10 V signal output.
Air velocity was measured 950 mm upstream from the test section using an EE70-VT32C5 directional hot-film anemometer (Elektronik, Engerwitzdort, Austria). This sensor has a range of between 0 and 10 m s -1 and a precision of 0.1 m s -1 . It can also measure the temperature of the flow in a range of between 0 and 50ºC and with an accuracy of ±0.5ºC.
Suction device
For speeds below 1 m s -1 a suction device was used to analyse the mesh (Fig. 2). This consists of a circular test duct (115 mm diameter, 220 mm length) made of transparent PVC. Twenty samples of the same mesh were placed in the test duct, separated by PVC rings (10 mm thick, 115 mm external diameter and 70 mm internal diameter). By testing several samples simultaneously, this device allows pressure drops to be produced which can be detected by the measurement apparatus.
Downstream from the test duct, the device was connected to a water reservoir by means of a flexible tube. The measurements were based on the pressure drop caused by the natural suction of air through the samples as a result of water flow induced by gravity (Miguel et al., 1997). Air velocity can be controlled by regulating the flow of water. Nevertheless, with the water reservoir this device is both complicated to assemble and difficult to regulate. As a result, an alternative was considered: the airflow through the mesh could be produced using a small NMB-4715KL fan (NMB Technologies Inc., Chatsworth, USA) connected directly downstream from the test duct. In order to vary the air velocity, an electronic circuit was designed which allows direct control of the speed of a DC motor. This was achieved by switching the power supply by means of pulse width modulation (PWM). The circuit generates a square wave with a variable on-to-off ratio, enabling the motor speed to be modified. No statistical differences were observed between tests made with the fan or the water reservoir. For the sake of simplicity, therefore, the fan was used for the air supply.
In order to measure the drop in air pressure on passing through the mesh, two Pitot tubes and a pressure transducer were used, similar to those used in the wind tunnel. The Pitot tubes were placed 150 mm upstream and 90 mm downstream from the test duct. Air velocity was measured with an EE70-VT31C3 hot-film anemometer (Elektronik, Engerwitzdort, Austria) with a range of between 0 and 2 m s -1 and a precision of 0.05 m s -1 . This sensor was placed 200 mm downstream from the test duct and also allowed the temperature of the flow to be measured, offering values ranging from 0 to 50ºC with an accuracy of ±0.5ºC.
Results and Discussion
Eleven different types of mesh were tested. The experimental designs described above allowed pairs of data to be obtained relating the pressure drop caused by the mesh to the approach speed of the airflow. At speeds below 1 m s -1 the values were measured with the suction device, whereas at higher speeds measurements were recorded in the wind tunnel. Fig. 3 shows pressure drop versus air velocity. Good agreement was found, with high coefficients of determination (R 2 ).
It can be observed that the best-fit equation for the relationship between the two variables is a second-order polynomial: ∆P = au 2 + bu + c [9]. Equating the second- and first-order terms, respectively, of the experimental polynomial [Eq. 9] and Forchheimer's equation [Eq. 3], the following expressions are obtained, allowing the permeability K and the inertial factor Y to be determined. The presence of the free term in the fits [Eq. 9] and its negative sign are due to the tendency of the curve between the maximum airflow velocity in the assays and the minimum measurable velocity threshold, which is conditioned by the accuracy of the anemometer. The value of the free term is neglected.
K = µ∆x/b and Y = a√K/(ρ∆x) [10] where ∆x is the thickness of the mesh (in m). The values of ∆x and the remaining geometrical parameters were obtained using the Euclides v.1.4 software designed for this purpose (Department of Rural Engineering, University of Almería, Spain). Table 1 summarises the geometrical characterisation analyses.
Once the thickness of the mesh (∆x) was known, as well as the coefficients a and b of the second- and first-order terms in Eq. [9], and bearing in mind the density ρ and dynamic viscosity µ of air for the experimental conditions, the permeability and inertial factor could be calculated for the 11 types of mesh assayed (Table 2).
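The workflow of Eqs. [9]-[10] is a quadratic least-squares fit of the ∆P-u pairs followed by identification of the fitted coefficients with the Forchheimer terms. A minimal sketch follows; the data points are invented stand-ins for the wind-tunnel and suction-device measurements, and the identifications K = µ∆x/b and Y = a√K/(ρ∆x) are the ones implied by equating Eq. [9] with Eq. [3].

```python
# Sketch: extract permeability K and inertial factor Y from pressure-drop data.
# Assumed identifications (equating Eq. [9] with Forchheimer's equation):
#   b = mu*dx/K            ->  K = mu*dx/b
#   a = rho*Y*dx/sqrt(K)   ->  Y = a*sqrt(K)/(rho*dx)
import numpy as np

mu, rho = 1.8e-5, 1.2        # air viscosity (Pa s) and density (kg m-3)
dx = 0.4e-3                  # screen thickness (m) -- hypothetical value

# Invented (u, dP) pairs standing in for measured data.
u = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0])        # m s-1
dp = np.array([1.1, 2.6, 6.3, 17.0, 32.0, 78.0, 142.0])   # Pa

a, b, c = np.polyfit(u, dp, 2)     # dP = a*u**2 + b*u + c (free term c neglected)
K = mu * dx / b
Y = a * np.sqrt(K) / (rho * dx)
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}  ->  K={K:.2e} m2, Y={Y:.3f}")
```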
As Table 2 shows, permeability values tend to increase as the porosity of the mesh increases. The inertial factor, on the other hand, tends to decrease as porosity increases. If the geometric characteristics of the mesh are accepted to have a very slight influence on the values of K and Y (Miguel et al., 1997), both parameters can be obtained as a function of porosity.
Figs. 4 and 5 show the permeability and inertial factor of the mesh versus porosity. The best-fit equations for the experimental data are as follows: in other words, for the porosity range 0.29 to 0.48 (which includes the characteristic α values of these protective meshes), the best-fit equation describing the relationship between permeability and porosity of the mesh is a second-order polynomial, and in the case of the inertial factor and porosity the best fit is a power-law equation.
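Correlations of this kind are what a designer would actually use: given only the porosity of a screen, K and Y can be estimated and fed back into Forchheimer's equation to predict the pressure drop. The sketch below illustrates that chain; the correlation coefficients are placeholders with roughly the right behaviour (K increasing and Y decreasing with porosity), not the fitted values of Figs. 4 and 5.

```python
# Sketch: estimate screen pressure drop from porosity via fitted correlations.
# K(alpha): second-order polynomial; Y(alpha): power law (forms as in the text,
# but the numerical coefficients below are placeholders, not the paper's fits).
import numpy as np

def permeability(alpha):                  # m2, placeholder polynomial
    return 1.0e-8 * alpha**2 - 4.0e-9 * alpha + 1.0e-9

def inertial_factor(alpha):               # dimensionless, placeholder power law
    return 0.05 * alpha**-1.5

def pressure_drop(u, alpha, dx=0.4e-3, mu=1.8e-5, rho=1.2):
    K, Y = permeability(alpha), inertial_factor(alpha)
    return dx * (mu / K * u + rho * Y / np.sqrt(K) * u**2)

for alpha in (0.30, 0.40, 0.48):
    print(f"alpha={alpha:.2f}: dP at 1 m/s = {pressure_drop(1.0, alpha):.1f} Pa")
```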
The pressure drop-velocity data pairs fit second-order polynomial functions, with coefficients of determination very close to unity, as described by Forchheimer's equation. The permeability and inertial factor of the meshes tested have been obtained from these fits. Considering that the geometrical characteristics of the mesh have a negligible bearing on the values of these parameters, expressions for K and Y have been obtained as functions of porosity. The results obtained show that in the range of porosity assayed the best fit for the permeability-porosity relationship is a second-order polynomial, whereas for the relationship between inertial factor and porosity it is a power law. These results do not agree with those obtained by other authors, since Miguel et al. (1997) found that the best fits for both parameters corresponded to power-law equations valid for porosity values between 0.04 and 0.90.
Figure 1. Diagram of the wind tunnel (all figures are in cm).
Figure 4. Permeability K of the mesh versus porosity α for all the screen samples tested.
Figure 5.
Table 1. Geometrical characteristics of the 11 meshes analysed.
Table 2. Coefficients a, b and c for the best-fit equation (∆P = au 2 + bu + c), coefficient of determination R 2 , permeability K and inertial factor Y calculated for the 11 types of mesh analysed | 4,016.6 | 2006-12-01T00:00:00.000 | [
"Engineering"
] |
Correctness conditions for high-order differential equations with unbounded coefficients
We give some sufficient conditions for the existence and uniqueness of the solution of a higher-order linear differential equation with unbounded coefficients in the Hilbert space. We obtain some estimates for the weighted norms of the solution and its derivatives. Using these estimates, we show the conditions for the compactness of some integral operators associated with the resolvent.
Since the coefficients ρ and r j (j = 0, k) are smooth functions, the operator L 0 is a closable operator (see [1,Sect. 6 of Chap. 2]). We denote by L the closure of L 0 .
A function y(x) is called a solution of differential equation (1) if there exists a sequence {y m (x)} ∞ m=1 ⊆ C (k+1) 0 (R) such that y m → y and Ly m → f in the norm of L 2 as m → ∞. It is clear that y ∈ L 2 .
A number of problems of stochastic analysis and stochastic differential equations lead to singular elliptic equations and ordinary differential equations and their systems with unbounded intermediate coefficients. Specific representatives of such equations are the stationary equations of Ornstein-Uhlenbeck (see [5]) and Fokker-Planck-Kolmogorov (see [6]). In the case k = 1, equation (1) is the simplest model of Brownian motion of particles with a covariance matrix determined by the function ρ(x), and r 0 (x) is called the drift coefficient.
For applications of equation (1) to various practical processes, it is important to investigate the correctness of equation (1) with coefficients ρ(x) and r j (x) (j = 0, . . . , k - 1) from wider classes. In the case that the intermediate coefficients do not depend on the potential and the diffusion coefficient and can grow as a linear function, the correctness of second-order singular elliptic equations was studied in [7-10]. The correctness conditions for second-order and third-order one-dimensional differential equations with rapidly growing intermediate coefficients were obtained in [11-16]. However, in [11-16] the condition of weak oscillation is imposed on the intermediate and leading coefficients. In this paper, sufficient conditions for the existence and uniqueness of a solution y(x) of (1) are obtained. Moreover, for the solution, we proved the following inequality: Using this estimate, we obtained compactness conditions for the operators θ(x)L -1 and θ(x) (d α /dx α ) L -1 (α = 1, . . . , k - 1). The difference between this result and the results in [7-16] is that equation (1) is of high order, the coefficients r j (x) (j = 0, . . . , k - 1) can grow rapidly, and all coefficients can be fluctuating (see Example 4.1). In addition, the leading coefficient can tend to zero at infinity. In other words, the cases of some degenerate equations are covered. Note that criteria for the existence of positive periodic solutions for differential equations with indefinite singularity and of pseudo almost periodic solutions of iterative functional differential equations, respectively, were found in [17] and [18]. We introduce the following notation: The following statements are the main results of this paper.
be a continuous function, and the following conditions be satisfied: Then, for any f ∈ L 2 , equation (1) has a unique solution y and Changing variable in Theorem 1.1, we obtain the following result.
times continuously differentiable, and r k (x) be a continuous function, and the following conditions be satisfied: Then, for any f ∈ L 2 , equation (1) has a unique solution y and holds.
Some auxiliary statements
Let C (s) . The next lemma is a particular case of Theorem 2.1 in [19].
Lemma 2.1 Suppose that the functions g(x)
, v(x) ≠ 0 (x > 0) are continuous, and for a natural number s, holds. Then, for any y ∈ C (s) where Remark 2.1 If s = 1 and C is the smallest constant for which inequality (5) is valid, then, instead of (6), the inequalities T g,v,s ≤ C ≤ 2T g,v,s hold (see [20]).
holds. Moreover, if C 1 is the smallest constant for which (7) is valid, then Proof Changing variable in Lemma 2.1, we obtain the desired result.
Remark 2.2 If s = 1 and C 1 is the smallest constant for which inequality (7) is valid, then the inequalities M u,h,s ≤ C 1 ≤ 2M u,h,s hold.
The following statement is proved by application of Lemmas 2.1 and 2.2.
Lemma 2.3 Let continuous functions u(x)
, v(x) ≠ 0 (x ∈ R) satisfy the conditions T u,v,s < ∞, M u,v,s < ∞ for some natural number s. Then, for any y ∈ C (s) 0 (R), Moreover, if C 2 is the smallest constant for which inequality (8) is valid, then Remark 2.3 If s = 1 and C 2 is the smallest constant for which inequality (8) is valid, then
On a two-term differential operator
Let l 0 be a differential operator from the set C (k+1) 0 (R) to L 2 , which is defined by We denote its closure by l. holds.
Since the operator l and the generalized differentiation operator are closed, we have y ∈ D(l), z = y (k) ∈ D(L) and Thus,L is a closed operator. The proof follows from the following equalities:
Lemma 3.4 Suppose that the functions ρ(x) and r 0 (x) satisfy conditions (a) and (b) of Theorem 1.1. Then l is invertible and its inverse l -1 is bounded.
Proof By Lemma 3.1, l has an inverse l -1 . Since l is a closed operator, using (9), we deduce that R(l) is a closed set. By Lemma 3.3, it suffices to prove R(L) = L 2 . If R(L) ≠ L 2 , then, according to [1, p. 284], there is a nonzero element v(x) ∈ L 2 such that (Lz, v) = (z, L * v) = 0 (where L * is the adjoint of L) for any z ∈ D(L). Since C (1) 0 (R) ⊆ D(L), the set D(L) is dense in L 2 . Therefore, Since v ≠ 0, we have C ≠ 0. Taking into account condition b) of Theorem 1.1, we have that |v(x)| ≥ |C| Hence v ∉ L 2 . This is a contradiction.
Proofs of the main results
Proof of Theorem 1.1 Set x = mt, ŷ(t) = y(mt), ρ(t) = ρ(mt), r j (t) = r j (mt) (j = 0, . . . , k), f (t) = f (mt)m -(k+1) (m > 0). Then equation (1) changes to Let l be a closure of l 0 , where l 0 : D(l 0 ) → L 2 is defined by By the conditions of the theorem, we can choose a number m so that Then, according to condition c) of the theorem, Lemma 2.3, and estimate (14), we obtain that, for any ŷ ∈ D(l), By (19) and Lemma 3.1, we get that According to Lemma 3.4 and Remark 3.1, the operator l is invertible, and its inverse l -1 is defined on the whole of L 2 . Then, by inequality (20) and the well-known statement on small perturbations [21, Chap. 4, Theorem 1.16], the following operator is also closed and invertible, and the inverse operator P m -1 is defined on the whole space L 2 . So, it follows that, for each f ∈ L 2 , ŷ = P m -1 f ∈ D(P m ) and ŷ is a solution of equation (17). By (19), we deduce that Using the substitution t = m -1 x, we obtain that the function y(x) = ŷ((1/m)x) is a solution to equation (1). Inequality (21)
implies (3).
Proof of Theorem 1.3 Let the conditions of Theorem 1.1 be satisfied. Without loss of generality, we assume that θ(x) is a real function. Let By Theorem 1.1 and (3), for any y ∈ C (k+1) 0 (R) with ‖Ly‖ 2 ≤ 1, we obtain These inequalities are valid for any y ∈ D(L) such that ‖Ly‖ 2 ≤ 1, since L is a closed operator. Therefore, Q j is bounded in L 2 . Let us show that Q j is compact in L 2 . By the Fréchet-Kolmogorov theorem, it suffices to show that, for each ε > 0, there is a number N ε such that, for any y ∈ C (k+1) 0 (R) with ‖Ly‖ 2 ≤ 1, and N ≥ N ε , the following inequality holds. We have that According to Lemma 2.1, Similarly, using Lemma 2.2, we obtain Set A s,r 0 ,j (N) = max { sup t≥N T θ,√r 0 ,j (x), sup τ ≤-N M θ,√r 0 ,j (τ ) }. | 2,002 | 2021-05-01T00:00:00.000 | [
"Mathematics"
] |
The academic condition and its enemies
In the name of the certification of ‘quality’ and ‘excellence’, the University of Minho now seems condemned only to carry out procedures which, in education and research, certify routine and conformity, efficiency and utility, thus confirming the hegemony of instrumental reason. It is, however, my purpose in this study to reflect on academic freedom in the university. This issue demands that one should address one’s questions to the nature of the university itself, to the academic profession, as well as to its vocation and mission. What is the university today? What are the forces that traverse it? What blows has it sustained? What are the threats it is exposed to? What are its contradictions? What demands must it comply with? What should its response be?
Introduction
Although reflection on the university's vocation and mission is currently the order of the day, as was recently demonstrated by Zara Pinto Coelho and Anabela Carvalho (2013: 4-14) 1 , my perspective resumes a debate introduced by Max Weber almost one century ago on the occasion of his two conferences, one in 1917, the other in 1919: "Wissenschaft als Beruf" and "Politik als Beruf" 2 . I have returned to Max Weber since, in the discussion about the university, I believe it is interesting to understand both what divides science and politics and what unites them. As was pointed out by Raymond Aron (1974: 8) in the introduction to the book Le savant et le Politique, by Max Weber: "One cannot simultaneously be a man of action and a man of study, without compromising the dignity of the one or the other profession, without failing in the vocation of the one or the other. One can, however, assume political positions outside the university, as well as the possession of objective knowledge; though this is perhaps not indispensable, it is certainly favourable to reasonable action".
A theory of action always constitutes "a theory of risk and also a theory of causality" (Ibidem); it is precisely for this reason that "the real has not previously been recorded in writing", and that the course of history depends on actual people and on specific circumstances (Ibid.: 9).Yet, the need to make contextual options does not force thought to depend on "essentially irrational" decisions; neither is existence fulfilled in a freedom which "refuses to submit itself to Truth" (Aron, 1959: 52) 3 .
Our era has been traversed by a dominating and shaping force. I am referring to the technological mobilization directed at the market. These world kinetics were called, firstly by Jünger (1930 [1990]) and then by Sloterdijk (2000), a 'total' and 'infinite' mobilization aimed at the market. On the other hand, undone was the myth that constituted a foundation for the western world, the myth of the word, a myth associated with a space of promise 4 .
Promise projected an idea of a future and provided guarantees for it. It launched a forward purpose and gave us a sure footing (essence, substance, God, transcendence, subject, man, existence, consciousness…) (Derrida, 1967: 410-411), a familiar territory (between a genesis and an apocalypse, a story of salvation, for example, the Kingdom of God, a classless society, a society enlightened by the Lights of Progress, with reason imposing itself on superstition) 5 and a stable identity (that we are created in God's image, or rather, that we aspire to fraternity, that man is no longer a wolf-man…).
In contrast, technologies have deployed us to the urgency of the present - these are the kinetics of the world, a mobilization directed towards the present (Martins, 2010). In a technological civilization, a civilization centred on numbers, the words of promise are followed by the numbers of promise, which are always the numbers of economic growth, those of the Gross Domestic Product (GDP) and the numbers of exports, namely the surplus figures of the Trade Balance. What now constitutes promise is provided by economists, engineers and managers; it is they who are the current wizards, and no longer the politicians, priests and jurists.
3 However, the view we hold concerning truth removes the basis for the concept of truth, which still makes its presence felt in Raymond Aron's text. Indeed, we subscribe to the principles of historicity and hermeneutics where, due to the complete invasion of the field of knowledge by discourse, the truth is a mere discursive function (see Martins, 1994: 5-18).
4 The word, quintessentially, constitutes the great myth of western civilization. This is the perspective I defend in "Ce que peuvent les images. Trajet de l'un au multiple" (Martins, 2011a). In effect, our reason is discursive, both in the Greco-Latin tradition and in the Judaeo-Christian tradition. For Aristotle, for instance, man is defined by language. And since language is the path which leads us to the other, man is a "political animal", an expression found both in Politics and in the Nicomachean Ethics. Yet, those before Socrates already considered the word to be something that saved. Consider, for example, what is stated by Roland Barthes (1970) in "L'ancienne rhétorique", regarding the drafting of the first treatise on argumentation by Corax and Tisias. As for the Judaeo-Christian tradition, one is immediately confronted with a proclamation of discursive reasoning at the beginning of St. John's Gospel (1, 1): "In the beginning was the Word and the Word was God". This inheritance has always accompanied us and it is with it that we have reached Modernity. This can be seen in Nietzsche (1887, II, paragr. 1), for whom we are animals of promise, the only animals capable of promising. This is also visible in Jorge Luís Borges, with promise being fulfilled in the illocutionary dimension of language. In his poem The Unending Gift, Borges states that "in a promise there is something that does not die". And George Steiner (1993: 127) does not say this differently in Real Presences: "Language exists […] because 'the other' exists". In other words, the word is the path leading to an encounter with the other and constitutes our fate.
Current world kinetics and the university
It is in this context that one finds universities. They are subjected to the same world kinetics, that of the mobilization of technologies for the market, which is translated into a response to the demands of a civilization of numbers (Martins, 2013, 2003, 1993).
Traditionally, the promise of the university was to serve the Truth 6 . This is where its main objective - research - ensued from, since truth can only be reached by those who systematically seek it. Yet, the truth was beyond science, to the extent that it was from this domain that the university derived the following objective: to serve culture, showing that it was capable of educating man as a whole. Furthermore, the truth is transmitted and, to this end, the university had to consecrate itself to education. Even the teaching of professions was ordered by the principle of comprehensive training.
Nevertheless, what we now observe is the idea of applying marketing to the education system. This means that the university is placing products on the market which are highly likely to be purchased. It is thus that education has become a business, that lecturers have become service professionals and consultants, with its commercial directors - namely, the directors of Schools and Faculties - at the centre of the management of this business. The assessment of the product, its 'profile', is determined from above in accordance with bureaucratic criteria; these are dependent on the laws of the market, business and marketing, as well as on their newsworthy visibility. And the education projects deemed to be more 'fragile', those which are directed at restricted groups of 'consumers', are mercilessly eliminated.
And the same can be said of fundamental research. From the beginning of the 90s there has been a constant and increasing tendency to make the scientific validity of research projects depend on their positive contributions to practical social needs. Even in the case of the social and human sciences, research projects have not been able to evade the market's pressures, being directed at 'quality', 'excellence', 'competitiveness', 'efficiency', 'relevance', 'entrepreneurship', 'employability', 'economic development and the generation of employment', as well as the use of English as the only language of science (Shore & Wright, 1999; Power, 2000; Martins, 2008, 2012a, 2012b, 2013; Martins & Oliveira, 2013; Nóvoa, 2014).
6 As previously referred, in his introduction to Le Savant et le Politique, Raymond Aron provides the framework for Max Weber's thoughts by relating them to the greater category of Truth, when he advocates that existence cannot be fulfilled in a freedom "that refuses to submit itself to Truth" (Aron, 1959: 52). Our perspective, however, advocates the deconstruction of the concept of truth, thus moving away from Aron. As was pointed out by Derrida (1967: 412), who acts as our reference, the deconstruction of the concept of truth constitutes an achievement of our time. One of the most noteworthy names associated with this achievement is that of Nietzsche (and his criticism of metaphysics, namely his idea of the game, interpretation and signs without a present truth); as well as Freud (and his criticism of self-presence, namely the criticism of consciousness, the subject, one's sense of identity, proximity and self-propriety); and, further, Heidegger (and the destruction of metaphysics, the destruction of ontotheology, the destruction of the being's determination as a presence).
Indeed, our world does not seem to have any other world beyond the needs of the market and its financial demands.This is also the conclusion one reaches when one is confronted with the new European Union Framework Programme for Research and Innovation, Horizon 2020.The key challenge stated is that of "stabilizing the financial and economic systems, while measures are taken to create economic opportunities" (European Commission, 2013).In fact, what is being dealt with now is the complete submission of European scientific policy to corporate strategy.This dependence is reinforced in a recent document issued by the Commission, entitled Research and innovation as sources of renewed growth (COM(2014)339 final).The section "Increasing impact and value for money", is precise in its objectives: "Raising the quality of public spending on research and innovation".And, among the conclusions pointed to by the document, one that deserves to be highlighted is the following: "Investment [in R&I] must be accompanied by reforms that enhance the quality, efficiency and impacts of public R&I spending, including the leverage of business investment in R&I" (p.12)7 .
Besides the European Commission, other financing Agencies (I am referring to the Brazilian CAPES and CNPq, and the Portuguese FCT), as well as businesses, no longer condone what they do not consider to be of social interest.Civil society undoubtedly does so too, and the same can be said of editors, who will not hear of publishing fundamental research, arguing that they will have no readership.It is a fact that all the sectors of collective life have today placed the university under surveillance, under the guise of 'accountability', which is measured as an "economic value" (Barr, 2012: 438-508).
In sum, what is occurring due to this technical explosion is that our times have accelerated and been deployed for the market. And the very same process is happening at universities, through their current policies for education and research, accompanied by the technological control of the science of information. I am referring, for instance, to the constant demands they are subjected to by means of computer platforms, and to the hastened mobilization of lecturers and students directed at the market and rankings. As is well highlighted by Hermínio Martins, we have been trapped by the discourse of "The University of Excellence-as-a-business" with a maximum "Throughput" (H. Martins, 2004), a description of the university which is now far removed from that given by Eliot Freidson (1986: 436): universities as "notable social inventions to support the work that has no immediate commercial value".
Since our time is one of technological deployment, a new type of teacher and student is now required, as well as a new type of researcher. With increasingly fewer social rights, lecturers, researchers and students are currently confronted with the condition of permanent mobility, thus crossing over to the market's needs. 8 And there they are, the new researchers, included in programmes of mobility, from country to country and from university to university. They have to be competitive and enterprising; they must promote self-employment, or general employment; they are required to create spin-offs, for example. Additionally, they have to be productive and achieve success 9 .
It is then that the legion of doctorate and post-doctorate graduates emerges: youngsters seeking the redemption of a research grant which, at most, will allow them to move from congress to congress, from one research project to another, knocking on the doors of scientific journals and running after some ranking or other, or the mirage of a scientific award. In order to justify this crossing, characteristic of a nomadic condition and without social rights, the official discourse has taken on new arguments: it is added that the economy, namely businesses, does not absorb them; that lecturers and students are a surplus in the labour market and thus expendable; that they are suitable for emigration 10 .
The academic day-to-day and the governance of universities
Since this is the present context, one should reflect on everyday academic activities, as well as on the governance of universities.
What constitutes the nature of universities today is commercial ideology: universities are businesses; education is a service area; teaching and research are business opportunities; lecturers are service professionals or consultants; students are customers. And with the financial and labour markets rumbling fantastically above its head, the university breaks into the headlines with the 'excellence' of its courses and lecturers; namely, it advertises its 'quality'. Yet, what is this 'excellence' that everyone is talking about? Excellence is measured through the institution's demand indexes. It is also quantified by the entrance marks required for a given university. It is, furthermore, related to success rates in school education. And, additionally, to the employment indexes of former students, as well as to an established and extended network of successful alumni.
8 I have taken the image of a 'crossing' from João Guimarães Rosa (2001 [1967]), in O Grande Sertão: Veredas. One can, for example, make the passage of a river from one bank to the other. During this experience, one does not expect to have to overcome great incidents or obstacles; one expects a smooth trip, unless one has to swim across, as mentioned by João Guimarães Rosa (2001: 51). A passage is, indeed, synonymous with a habitual and familiar path. However, the experience of a crossing is somewhat different, since its danger always produces some anxiety. Danger is what fundamentally characterizes it: we undertake the crossing of the ocean, of a sea of temptations, of a desert… (see also Martins, 2011b: 60-61). 9 Universities themselves are becoming business incubators, supporting their former students, using their own non-refundable capital in the development of business activities.
10 According to the "Diagnosis of the research and innovation system: Challenges, strengths and weaknesses en route to 2020", carried out by the Fundação para a Ciência e a Tecnologia (FCT) -the Foundation for Science and Technology, the percentage of graduate students in national companies is 2,6% [http://www.fct.pt/esp_inteligente/diagnostico].The official gazette, Diário de Notícias, dated 13th May 2013, reported the public conference in which the results of this Diagnosis were presented, pointing out that "From a group of 10 European countries, Portugal has the lowest graduate employment rate, with 2,6%, in comparison, for example, to 34% in Holland and Belgium".http://www.dn.pt/inicio/portugal/interior.aspx?content_id=3216596&page=-1 (consultation undertaken on 13th May 2013).
The university's 'excellence' can no longer do without a position in the ranking of the best 100 universities, according to Times Higher Education, or in the ranking of the best 100 universities under 50 years old. Nor can universities forgo a place in the Academic Ranking of World Universities (also known as the Shanghai Ranking), being listed among the one thousand best universities in the world, or a place in the more recent CWTS Leiden Ranking, established on the basis of ISI citations.
Nevertheless, academic 'quality' does not end here. It is also measured by ISI articles, indexed by Thomson Reuters, or by Scopus articles, indexed by Elsevier, or, still, by citations on Google Scholar. Additionally, it cannot ignore the importance of awards received by lecturers, as well as citations in journals with an impact factor, the capacity to attract funding and the obtaining of international projects. Also essential is the university's visibility in the public space, which is established by the news published about it in the media.
An example of this is the institutional site of the Universidade do Minho in the north of Portugal, which dedicates eight rubrics to the University's presence in the public space: (1) Nós - an on-line newspaper. The editors present it thus: "UMinho in review monthly. Here you will find news features, interviews, life paths, opinions and an agenda of the main events"; (2) On agenda. Namely, "Everyday activities, the academic calendar and all other events - congresses, seminars, campaigns, ceremonies, awards and various events"; (3) Current affairs: "At every moment, up-to-date information about the most important happenings at the Universidade do Minho"; (4) Clipping: "The media's perspective of UMinho. All that is broadcast on TV and the radio and published in the press and on the internet is available here"; (5) Profile: "Here you will get to know the stories of students, lecturers, researchers and employees at UMinho who have distinguished themselves in the most varied areas"; (6) Photo gallery: "The pictures that show UMinho"; (7) Press Area: "This area is dedicated to professionals in the field of communication. Contact us and present your issues, doubts, suggestions"; and, finally, (8) What the media says about us. The University as news: weekly, a repository of information which marks the presence of lecturers on news sets: in studios, on radio and television, as well as in newspaper editorial offices. 11 In the meantime, it is in the governance of universities that "managerial and economic" models have prevailed over the "classic collegial models" (Ruão, 2008: 15). It is in this way that the university's identity has taken on a format which is merely instrumental and that the communication strategies developed at the university have become increasingly concerned with the production of strategic effects (Ruão, Ibid.: V).
These circumstances - the control of communication to produce strategic effects - are today the task of the universities' Press Offices, which are also called Communication and Image Departments. Most universities now have a Pro-Rectory for Communication and Image, and its objective is that of administering university policies in the public space.
Since everything is asked of them, universities have nonetheless proved more and more incapable of responding to the mounting pressure of social demands. Universities are asked to provide a response to: the needs of economic development; the creation of jobs; the country's modernization; technological innovation; international competitiveness; the need to foster social cohesion; the fight against ethnic and gender disparities; the promotion of minority inclusion; and even the need to combat media and digital illiteracy.
And we have resigned ourselves to the fact that university policies are today restricted to management strategies and that the need for growth has been adjusted to responses of a merely techno-instrumental nature. Indeed, nothing at the university today points to learning and teaching to see; nor is there learning and teaching to think, as Nietzsche taught in The Twilight of the Idols (Nietzsche, 1988/1888: 67-68). Learning or teaching to see, that is: getting the eyes used to calmness, to patience, allowing things to draw closer to us; learning to postpone judgement, skirting around and approaching the particular case from all angles. Additionally, learning and teaching to think means learning and teaching a technique, a study plan, a will to master - that thinking must be learnt as one learns to dance, like a type of dance… Readings, however, expresses some concern: how can one envisage an institution "whose development tends to make thought increasingly difficult and less necessary?" (Readings, 1996: 175). And yet, the academic ideal has been unable to meet the present operative, finance-oriented and economic mobilization without resorting to thought, without social and political commitment, and without the ethical criteria of the disquiet of criticism.
I believe the university should be seen as a place of unrestricted freedom.The university's mission is that of safeguarding the possibilities inherent to the adventure of thought.It holds the responsibility of making teaching and science an idea, which embodies the principle of the resistance of criticism and the force of dissidence, both commanded by what Jacques Derrida (2001: 21) once called, "the justice of thought".
Nevertheless, it is within this framework that academic policies have found themselves confined to management strategies and that the need for growth has accommodated itself to responses which are merely of a techno-instrumental nature. It is also in this context that Portuguese universities have set up their Vice-Rectories for Quality and Excellence.
In Portugal, the Universidade do Minho was one of the first universities to have a SIGAQ (Sistema Interno de Garantia da Qualidade - Internal System of Quality Assurance) and a Vice-Rectory which ensures the operation of this system, with the institutionalization of a Quality Plan and a Quality Manual 12 . This Internal System of Quality Assurance was audited in October 2012 by the A3ES - Agência de Avaliação e Acreditação do Ensino Superior (the Agency for Assessment and Accreditation of Higher Education) in Portugal, and was certified by this Agency in January 2013 for a period of six years.
From a strictly academic perspective, I would however say that the practical effect of the SIGAQ is one of exercising dominance over lecturers, namely that of technological control and a mobilization directed towards the market (and towards ranking, which is a consequence of the market).
What the SIGAQ produces in everyday academic activity is the enthronement of corrective and orthopaedic procedures, which certify the existence of routines and conformity, efficiency and utility in teaching and research. Furthermore, with regard to projects and university extension, they record and file information and thus secure institutional overheads, which are crucial to the self-financing policy of a university at a time when public funding seems to have entered a phase of irreversible restriction.
The regulations for the performance assessment of lecturers
The creation of the RAD (Regulamentos de Avaliação do Desempenho dos Docentes - Regulations for the Performance Assessment of Lecturers) is linked to the SIGAQ but is not set exactly within its framework. It ensues from a general decree, a Law of the Portuguese State (Law n. 205/2009, dated 31st August), which is still in place and has resulted in a thorough overhaul of Higher Education, the Legal Framework for Institutions of Higher Education (RJIES) (Law n.º 62/2007, dated 10th September). The RAD have customized this Law to meet the specific conditions of each of the universities in the country, and even those of each Faculty or School within a university.
The Regulations for the Performance Assessment of Lecturers at the Universidade do Minho (RAD-UM) were approved in the Official Gazette on 18th June 2010. The process includes the lecturers' self-assessment, which is expressed quantitatively, as well as a vast set of questions established by an assessment board of the university. Each of the Schools tailors the general requirements to its context. Full professors also intervene in this process by approving it; they can, however, change the marks when they consider the self-assessment to be inaccurate.
In accordance with this Law, all the Regulations for Performance Assessment cover four rubrics: research, teaching, university extension and university management. Academic performance consists of the lecturer's compliance with the set of requirements for each of the rubrics, which are established by an assessment board 13 .
Two models of the Regulations for the Performance Assessment of Lecturers will be considered. The first is that of the Institute of Social Sciences at the Universidade do Minho. This is a model which has allowed all its lecturing staff to assess themselves, without great effort, as having demonstrated excellent performance (above 80 points out of 100) in each of the areas: research, teaching, academic extension and academic management. I would say that this constitutes a bureaucratic model, which meets administrative purposes and is thus a model which does not present demanding academic criteria 14 . 13 The assessment process itself is indexed to a remuneration system which will determine progression on the scale of academic categories. However, this remuneration system was not actually implemented, owing to the freezing of careers in the civil service in Portugal from the spring of 2011 onward, when the country was governed by an austerity programme decreed by the international institutions from whom "financial assistance" was sought. It was on 3rd May 2011 that the Prime Minister of Portugal, José Socrates, announced the austerity measures decided by the European Commission, the European Central Bank and the International Monetary Fund (Troika), within the framework of a programme of "financial assistance".
14 I have included an annex which presents the parts that constitute the assessment form for lecturers at the Institute of Social Sciences at Universidade do Minho, in compliance with the Regulation for the Performance Assessment of Lecturers at Universidade do Minho (RAD-UM), approved in the official gazette Diário da República, 2nd series, n. 117, dated 18th June 2010.
The second model is that of the Universidade da Beira Interior. I will focus on the parts which are common to all the Faculties, as well as on the specific aspects that the model includes for the Faculties of Human and Social Sciences, and Arts. It is anchored in the principle of a "qualitative differentiation of scientific production", a principle which determines that the "higher evaluation of scientific performance corresponds to more demanding levels of scientific production, to the detriment of massified scientific production, the levels of which are considered to be scientifically less relevant".
Although this proposal has generated great academic concern, it is didactic in nature.It stipulates the following: "the successive levels of stringency must be reached through worthy and moderate scientific activity".Yet, what it requires is that "lecturers, especially those who are still weaker in terms of scientific production, are not forced to waste much of their time with those levels"; instead, they should be "motivated to reach the next level until they reach category A, which is obviously demanding but not unattainable, otherwise this would tend to be ignored".
Still in the same line, and both of great academic and didactic concern, the Regulation proposes that it should be possible to "saturate the sum of points given to categories D, C and B", attributing "relatively high scores to tasks which are fundamental to lecturing activities" but that "if considered to be on an equal footing with more relevant international activities, they would have to be calculated rather parsimoniously as internal and national scientific activities".
Four classification categories are proposed, with category A being the most demanding.In these circumstances, the Regulation's proposal is as follows: "category A is the most visible 'face' of the University's strategic options and of the stringent level of the Universidade da Beira Interior".It is for this reason that the matter "will be decided centrally by the evaluation coordination board, which will standardize the same level of rigor in all the faculties".
I shall analyze category A of the academic performance assessment, focusing on the rubric for research. In accordance with a university ideal, shared by all the Faculties in the University, this ideal is considered in all its complexity and scope; namely, it includes the stringent criteria of the internationalization of science, the criteria of international comparability and, still, the criteria of funding, which points to the importance of scientific projects:
Chapter of a book in a study of international reference (maximum of two authors): 25 points
Coordinator of an H2020 European project or of an international project which includes a minimum of two universities or research centres in three different countries and funding above the sum of 150 000 Euros: 40 points
National coordinator of a European project or of an international project which includes the universities or research centres of at least three different countries and funding above the sum of 150 000 Euros: 20 points
Individual international bursary obtained in a competitive context: 15 points
Technical reports in large projects of international cooperation (involving more than three countries): 15 points
Exhibition or presentations at international events (congresses, museums, art galleries, festivals, displays, etc.), individual or collective, evaluated by an appraisal requested by the board of assessment: 50 points
Table 1 * - Variable assessment with a maximum score of up to 100 points, proposed by the Assessment Board and approved by the Coordinating Board of Assessment.
-The proposal for the classification of an author book in Category A must be accompanied by an appraisal requested by the Assessment board.
-Work of international reference consists of work published abroad by a publisher of reference, acknowledged as such by the assessment board.
-International exhibition or presentation refers to an exhibition or presentation undertaken abroad or, in the case of Portugal, with the participation of at least 50% of foreign artists or organized in conjunction with a foreign entity. 15
Final note
Our modernity has seen instrumental reasoning become hegemonic. In fact, it was the hegemony of the epistemological paradigm that led to technical rationality and to economicism (Martins, 1993: 345). The University then became a simultaneously local and total reality. It is either a heterogeneous and specific reality or, in turn, a homogeneous and global reality. The university has undoubtedly become a fragmented reality, which is a consequence of the crisis of foundational theories and theories of truth. Yet, at the same time, it is a reality which has been enriched by a translocal condition, since this has always been its condition and mission. Nevertheless, let us hope that the rampant technological mobilization directed at markets, statistics and rankings, as well as the enthronement of the corrective and orthopaedic procedures that certify the routine and conformity of education and research, do not submerge thought nor drown out the very idea of the university.
Internationally relevant scientific award *
A scientific book, published by an author/group of authors, of comparable merit, pointed out by an independent appraisal requested by the board of assessment: 70 points
Edition and/or translation of sources and of classics, with an introduction and critical commentary, evaluated by an appraisal requested by the board of assessment: 50 points
In the event of having carried out one of these activities during the year concerned, write the number 1 | 7,137.2 | 2015-06-23T00:00:00.000 | [
"Education",
"Philosophy"
] |
Plasma-induced non-equilibrium electrochemistry synthesis of nanoparticles for solar thermal energy harvesting
Abstract Rapid plasma-induced non-equilibrium electrochemistry (PiNE) at atmospheric pressure was used to prepare surfactant-free gold nanoparticles and copper oxide quantum dots. A suite of chemical and physical characterisation is carried out to assess the as-prepared materials. Nanofluids comprised of these nanoparticles in ethylene glycol have been prepared. The energy absorptive properties of the prepared nanofluids were investigated as a potential additive to the traditional working fluids used in solar thermal collectors. The application feasibility has been assessed by calculating a value of power which could be transferred to the thermal fluid. This work demonstrates an alternative and rapid method to produce nanofluids for solar thermal conversion.
Introduction
With a continuing increase in demand for energy and the concomitant environmental pollution caused by the use of fossil fuels for its production, it is essential to obtain as much energy as possible from zero-carbon, renewable sources. Amongst these renewable options, solar energy represents a significant potential source for future energy needs and recently advanced methods of solar energy harvesting have been modelled. Examples include non-equilibrium plasma-assisted solar absorption for the thermochemical decomposition of CO 2 (Elahi et al., 2020) and also concentrated photovoltaics/solar thermoelectric generators/Stirling engine hybrid systems (Mohammadnia et al., 2020), however, simple photoelectrical or photothermal processes are much more common. Alongside photovoltaic cells, solar thermal collectors are the most commonly used solar energy collection devices, converting absorbed solar energy to thermal energy. A typical collector consists of a selectively coated surface to absorb solar radiation, which is transferred to a working fluid in the form of heat by conduction and convection. The solar thermal conversion efficiency of this multi-step process is an issue, as not only does the effectiveness of the heat exchange process between the absorber and the fluid affect the amount of actual heat supplied, but the rate of heat transfer into and out of the fluid also plays a crucial role in the efficiency (Otanicar et al., 2010;Phelan et al., 2013).
As an attempt to reduce heat loss in traditional solar collectors, directly absorbing solar collectors (DASCs) were developed in the 1970s, where the absorption medium is the heat transfer fluid itself (Minardi and Chuang, 1975). In order to overcome the poor absorption in the visible range of typically used working fluids such as water, ethylene glycol (EG) and diathermic oils (Bertocchi et al., 2004; Minardi and Chuang, 1975), additives must be incorporated in the fluid to trap the incident solar energy. With around 90% of the total incident radiation in the visible range transmitted through water and EG (Otanicar et al., 2009), the criticality of altering the fluid properties to absorb this significant portion of incident energy is apparent (ASTM International and ASTM, 2012). Micrometre-sized carbonaceous particles were previously used, yielding improved absorptive properties over the base fluid. Nonetheless, issues arose with poor stability, clogging of pumping equipment and fouling of the transparent cover, which hindered the overall system efficiency and lifetime (Bertocchi et al., 2004; Minardi and Chuang, 1975). As such, focus has shifted towards nanoparticle-laden fluids, known as nanofluids, which have been proposed as potential absorbing fluids to improve the absorption across this high-energy area of the solar spectrum, with significantly reduced pump clogging compared to micron-sized particles (Chen et al., 2016a; Otanicar et al., 2009; Tyagi et al., 2009; Wang and Mujumdar, 2007).
It is well known that the optical properties of nanoparticles, such as scattering and absorption, can be tailored through careful selection of nanoparticle material, shape, size and concentration (Khlebtsov et al., 2005). The plasmonic resonance peak of metallic gold nanoparticles (Au NPs) in particular is a well-documented phenomenon which can greatly enhance scattering and absorption of incident light (Beicker et al., 2018; Chen et al., 2016a; El-Sayed, 2000, 1999; Patel et al., 2013; Sharaf et al., 2019), though many other materials have been investigated, such as gold nanorods (Jeon et al., 2016), aluminium (Tyagi et al., 2009), carbon nanotubes (Beicker et al., 2018; Delfani et al., 2016; Karami et al., 2014), carbon nanohorns (Bortolato et al., 2017; Gimeno-Furio et al., 2019), graphene/graphene oxide (Rose et al., 2017; Vakili et al., 2016; Chen et al., 2017), silver (Asmussen and Vallo, 2018; Mallah et al., 2018; Otanicar et al., 2010; Taylor et al., 2011), copper oxide nanoparticles (CuO NPs) (Karami et al., 2015; Menbari et al., 2016) or other metallic nanoparticles (Pustovalov and Astafyeva, 2018), as well as blended or mixed nanomaterials (Mehrali et al., 2018; Qu et al., 2019; Ulset et al., 2018). As such, it is possible to produce nanofluids tailored in terms of absorptive properties within the visible light range by careful selection of additive nanomaterials. Whilst it stands to reason that increasing the concentration of nanoparticles would increase the amount of energy harvested, this is not always true, and the concentration must also be carefully selected, as both too high and too low a concentration can be detrimental to performance (Beicker et al., 2018; Taylor et al., 2011). This can be attributed to the reduced penetration depth of the incident light (in the case of too great a concentration) due to the added nanoparticles. This trapping of energy in the top few millimetres of the fluid leads to a higher average surface temperature of the receiver, which presents favourable conditions for loss of thermal energy to the surroundings (Hewakuruppu et al., 2015; Khullar et al., 2014; Taylor et al., 2011). Ultimately, this re-emitted energy constitutes a system loss and results in a less efficient nanofluid.
Au NPs are a well-studied material, with a size-tunable plasmonic absorption peak in the visible region of the spectrum (Beicker et al., 2018; Chen et al., 2016a; El-Sayed, 2000, 1999; Patel et al., 2013; Sharaf et al., 2019) and good chemical stability (Patel et al., 2013), and as such lend themselves as a model material to add to our nanofluids. Whilst gold is a costly material, the plasmonic absorption effect is of such magnitude that even at a very low concentration of Au NPs a significant attenuation of visible light between 520 nm and 580 nm can be achieved (Haiss et al., 2007). Many works have focused on producing Au NPs with a very narrow size distribution (±5 nm) (Lu et al., 2008; Steinigeweg et al., 2011; Suchomel et al., 2018), often requiring time-consuming post-processing steps, which results in dispersions that absorb very strongly in a very narrow range. For this application it is vital to absorb as many of the high-energy photons in the visible light region as possible, and as such a narrow size distribution may hinder the applicability. To this end, a larger size distribution is acceptable, even beneficial. As such, an extremely rapid PiNE system can be used to fully reduce gold salts, synthesising surfactant-free Au NPs within 10 min.
In addition, CuO quantum dots (QDs) are investigated as a complementary additive to Au NPs, as these particles have been demonstrated to greatly increase the attenuation of light (up to four-fold) at the shorter-wavelength end of the visible region into the UV region (Karami et al., 2015; Velusamy et al., 2017). Furthermore, these particles can be created at very low cost from a copper foil submerged in ethanol and exposed to a cathodic microplasma discharge of only a few watts of power. These nanoparticles have been shown to be very small, with a narrow size distribution (Velusamy et al., 2017), as well as remaining stable for a period of over 1 year.
Herein we report the synthesis of surfactant-free Au NPs and CuO QDs by PiNE. The produced particles are physically and chemically characterised by a suite of techniques. These particles are then dispersed into ethylene glycol and the optical properties are assessed. Finally, the stability of the nanofluid over a period of 11 weeks is assessed.
Gold nanoparticle synthesis
The synthesis of the Au NPs was performed on the basis of the work by Patel et al. (Patel et al., 2013). Briefly, Au NPs were synthesized by PiNE where a plasma was generated above the surface of an aqueous solution with various concentrations of HAuCl 4 (Fig. 1a). The process yields Au NPs within a few minutes, with the concentration of particles increasing with the processing time until the precursor salt is depleted. In the previous work (Patel et al., 2013) it was found that the absorbance peak of the synthesised Au NPs was greatest at a salt concentration of 0.6 mM. As such a stock solution of 6 mM HAuCl 4 in 15 MΩ cm −1 deionised water was prepared and mixed with additional deionised water to produce an electrolyte of 0.6 mM for processing. No other reagents, such as reducing agents or surfactants were added to this gold stock solution, and all glassware was cleaned with aqua regia solution prior to usage. A direct current (DC) powered helium microplasma was generated across a gap of 1 mm between a nickel capillary tube (inner diameter of 0.7 mm and an outer diameter of 1 mm) and the surface of a 10 mL aqueous solution of gold (III) chloride trihydrate (HAuCl 4 ·3H 2 O). The helium gas flow of 25 sccm (controlled by a mass flow controller, MKS Instruments, UK) ensures that the plasma is largely formed in helium gas, although a degree of turbulent mixing is expected with the surrounding air. The power supply (Matsusada AU-10*15) delivers an initial voltage of 900 V, which was applied to a carbon rod (5 mm diameter) submerged in the solution~1 cm away from the capillary to complete the circuit and allow generation of a microscale discharge. All samples of Au NPs were synthesized at a constant current of 5 mA and for 10 min, with the applied voltage dropping during the process as the solution became more conductive. During synthesis, the solution is seen to change colour from pale yellow to a red/purple as the salt is reduced and Au NPs form. This colour change can be measured by ultraviolet-visible spectroscopy (UV-Vis) as a peak in the red/purple portion of the visible light range and was consistently used to verify the production of Au NPs throughout the study and for the quick screening of a larger number of samples by ensuring the optical characteristics were reproduced accurately.
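As a rough consistency check (not part of the original report), Faraday's law can be used to estimate whether the charge delivered at 5 mA for 10 min is enough to reduce all of the Au3+ in 10 mL of 0.6 mM precursor. The sketch below assumes a three-electron reduction of Au3+ to Au0 and ideal faradaic efficiency, which a real plasma-liquid interface will not reach; on these numbers the delivered charge comfortably exceeds what is needed, which is at least consistent with the observed full reduction.

```python
# Back-of-the-envelope check (illustrative only): can 5 mA for 10 min
# reduce all Au3+ in 10 mL of 0.6 mM HAuCl4? Assumes 100% faradaic
# efficiency, which real plasma-liquid systems will not reach.
F = 96485.0          # Faraday constant, C/mol
I = 5e-3             # discharge current, A
t = 10 * 60          # treatment time, s
n_electrons = 3      # Au3+ + 3e- -> Au0

charge = I * t                           # total charge delivered, C (= 3 C)
mol_au_max = charge / (n_electrons * F)  # charge-limited Au yield, mol (~1.0e-5)

conc = 0.6e-3        # precursor concentration, mol/L
volume = 10e-3       # solution volume, L
mol_au_salt = conc * volume              # Au3+ available in solution, mol (6.0e-6)

print(f"charge-limited Au: {mol_au_max:.2e} mol")
print(f"Au3+ in solution:  {mol_au_salt:.2e} mol")
print("charge is sufficient" if mol_au_max >= mol_au_salt else "charge-limited")
```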
Copper oxide nanoparticle synthesis
The synthesis of CuO QDs was carried out on the basis of previous work (Velusamy et al., 2017), where the synthesis mechanisms have been discussed in more detail. This is also a PiNE process, which uses a plasma generated at the surface of an ethanol solution. A 0.1 mm thick, 1 cm wide sacrificial copper ribbon was used as the anode and submerged in 5 mL of ethanol in a Pyrex cylinder, where the QDs are synthesized. The cathode, in this case, was the helium gas discharge from a nickel capillary (50 sccm). This is shown schematically in Fig. 1b. Initially, the voltage read 3 kV with less than 0.05 mA of current, with the current rising as the solution conductivity increased. After reaching 0.5 mA, the voltage was reduced to maintain a constant current for the remainder of the discharge; this is controlled by the power supply, which enters constant-current mode once the set current is reached. The solution was processed for a total of 30 min, with interruptions every 10 min to allow ethanol vapour to disperse and with additional ethanol being added to refill the cylinder to its volume before processing.
Preparation of the nanofluids
Weighing the nanomaterials
Photographs were taken during the steps to prepare the nanofluids and are shown in Fig. 2. In Fig. 2a, four vials with 10 mL of processed 0.6 mM salt solution are shown as prepared. Likewise, in Fig. 2b, five vials of CuO sol are presented as-synthesized. The samples were dried in pre-weighed glass containers placed on top of a hotplate set to 75 °C (Stuart, Hotplate Stirrer SB162-3). A total of 100 mL of Au NP sol and 50 mL of CuO sol were dried in sample containers separately. As the solvents for the Au NP and CuO QD sols are water and ethanol respectively, they evaporated at different rates, with the ethanol evaporating more rapidly. As soon as the solvent had evaporated, the vial was removed from the hotplate. This heating process took approximately 3 h for the CuO QD vial and 8 h for the Au NP sols. While Au NPs have been shown to be stable after synthesis, it was observed that Au NPs settled to the bottom of the vial during heating within several hours, as can be seen in Fig. 2c (left). The CuO sols do not appear to settle to the same degree as the Au NP sols, but there is a small amount of material settling out of the solution. This could imply that they are agglomerating and hence falling out of suspension (Fig. 2c, right). The mass of the dried material was measured again, allowing for the determination of the mass of the sample (see Supporting Information) and thus the volume. Both materials form a dry powder-like coating on the base of the glass container, as can be seen in Fig. 2d. The equation V = m/ρ was used to calculate the volume of the additive particles, where V is the volume, m is the mass and ρ is the density. Using the values of 19.32 g cm−3 for Au and 6.315 g cm−3 for CuO, volumes of 0.55 µL and 0.17 µL were determined for the Au and CuO particles respectively (National Center for Biotechnology Information, 2018a, 2018b).
Mixing the nanofluid
10 mL of EG was added to the dried material and a vol:vol percentage was calculated. The same process was followed for forming the blended nanofluids, where the volume of both additives was calculated from mass and density and added to 10 mL of EG to form sols with vol:vol percentages of each component (see Supporting Information). The vials containing 10 mL of EG and the dried material were then vortexed at 1000 revolutions per minute (RPM) for 1 min (Fisher Scientific, TopMIX FB15013), yielding the nanofluids in Fig. 2e. From Fig. 2f it can be seen that whilst most of the dried particles can be redispersed into EG to form nanofluids, some dried mass remains on the base of the vials, which constitutes a loss of NPs in the process.
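To make the mass-to-volume-fraction conversion explicit, the short sketch below reproduces the V = m/ρ step and the resulting vol:vol percentage in 10 mL of EG. The dried masses used here are back-calculated from the quoted particle volumes (0.55 µL Au, 0.17 µL CuO), so they are illustrative rather than the authors' measured values; the result nevertheless reproduces the 5.5 × 10−3 % and 1.7 × 10−3 % concentrations quoted later.

```python
# Illustrative reconstruction of the volume-fraction calculation.
# Dried masses are back-calculated from the quoted volumes; the measured
# masses themselves are reported in the paper's Supporting Information.
RHO_AU = 19.32    # g/cm^3
RHO_CUO = 6.315   # g/cm^3
V_EG_ML = 10.0    # base fluid volume, mL (1 mL = 1 cm^3)

def vol_fraction(mass_g, density_g_cm3, v_base_ml=V_EG_ML):
    """Return (particle volume in uL, vol:vol percentage relative to the base fluid)."""
    v_cm3 = mass_g / density_g_cm3              # V = m / rho
    v_ul = v_cm3 * 1000.0                       # 1 cm^3 = 1000 uL
    phi_percent = 100.0 * v_cm3 / v_base_ml     # vol:vol %
    return v_ul, phi_percent

m_au = RHO_AU * 0.55e-3    # ~10.6 mg of dried Au (back-calculated)
m_cuo = RHO_CUO * 0.17e-3  # ~1.1 mg of dried CuO (back-calculated)

print(vol_fraction(m_au, RHO_AU))    # -> (0.55 uL, 5.5e-3 %)
print(vol_fraction(m_cuo, RHO_CUO))  # -> (0.17 uL, 1.7e-3 %)
```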
Nanoparticle characterization
To assess our as-prepared materials we have employed a suite of characterisation equipment, utilising X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD) and transmission electron microscopy (TEM) analysis, including selective area electron diffraction (SAED).
The as-made Au NP dispersion contains a range of shapes and sizes (Fig. 3a), with the majority of particles consisting of small spherical nanoparticles; however, some nanorods and larger triangular particles were observed. In order to measure the size of the different shapes of NPs, the size of rods was taken as the longest dimension, and the size of triangular particles was taken as the largest distance from the tip to the opposite side (see Supporting Information). When the particle size is assessed inclusive of all the different shapes, a mean particle size of 27 nm is obtained from a log-normal fitting (see Supporting Information). The CuO particles are found to have a mean particle size of 3.3 nm, to be nearly spherical and to show little to no agglomeration (Fig. 3b and Supporting Information). The mean diameter was determined by TEM imaging of ~300 particles and fitted with a log-normal distribution. As the Bohr exciton radius for CuO is expected to lie between 6.6 nm and 28.7 nm (Borgohain and Mahamuni, 2002), the CuO particles produced in this work are expected to fall within the strong quantum confinement regime and are therefore referred to herein as QDs. This mean size of 3.3 nm is slightly larger than the mean size of 1.9 nm in our previous work (Velusamy et al., 2017) and can be attributed to the change in the electrode spacing in the present work. It has been observed that, on reducing the electrode spacing from 3 cm to 1 cm (this work), the time required to reach the set current of 0.5 mA is reduced. This has the effect of enhancing the absorbance of the CuO dispersion formed for a given treatment time, as seen in Fig. 4, and is likely the cause of the increased particle size.
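For readers wishing to reproduce the size analysis, a minimal sketch of fitting a log-normal distribution to TEM-measured diameters is given below; the diameter list is placeholder data, since the measured values are only available in the Supporting Information.

```python
# Minimal sketch: log-normal fit of TEM particle diameters (placeholder data).
import numpy as np
from scipy import stats

diameters_nm = np.array([2.1, 2.8, 3.0, 3.2, 3.4, 3.5, 3.9, 4.2, 4.8, 5.5])  # placeholder

# Fix loc=0 so the fit is a standard log-normal in diameter.
shape, loc, scale = stats.lognorm.fit(diameters_nm, floc=0)

mean_d = scale * np.exp(shape**2 / 2)   # mean of a log-normal with loc=0
mode_d = scale * np.exp(-shape**2)      # most probable diameter
print(f"mean diameter: {mean_d:.2f} nm, mode: {mode_d:.2f} nm")
```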
The crystallinity of the produced materials was verified with XRD and SAED measurements (Fig. 5 and Supporting Information). For the Au NP samples (Fig. 5a), both of these techniques show five features which could be assigned on the basis of face-centred cubic gold: the {1 1 1} plane appears to be predominant, with four other peaks corresponding to the {2 0 0}, {2 2 0}, {3 1 1} and {2 2 2} planes (Biao et al., 2018; Kumar et al., 2014; Verma et al., 2010). For the CuO QDs, the XRD diffractogram (Fig. 5b) can be fitted on the basis of monoclinic CuO in accordance with JCPDF 89-5896. A total of 10 peaks can be fitted, and several of these match the Miller indices obtained from the electron diffraction pattern. Several of these planes can be observed in the SAED, notably {−1 1 1}, {2 0 2} and {2 2 0}, which corroborates that monoclinic CuO has been produced.
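As an illustration of how the fcc Au reflections can be indexed, the sketch below computes the expected 2θ positions from the lattice constant and Bragg's law. It assumes Cu Kα radiation (λ ≈ 1.5406 Å), which is not stated in this section and is used here only as a typical laboratory source.

```python
# Expected 2-theta positions for fcc Au reflections (illustrative).
# Assumes Cu K-alpha radiation; the actual instrument wavelength may differ.
import math

A_AU = 4.078          # fcc Au lattice constant, angstrom
WAVELENGTH = 1.5406   # Cu K-alpha, angstrom (assumption)

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1), (2, 2, 2)]:
    h, k, l = hkl
    d = A_AU / math.sqrt(h**2 + k**2 + l**2)                       # cubic d-spacing
    two_theta = 2 * math.degrees(math.asin(WAVELENGTH / (2 * d)))  # Bragg's law
    print(f"{hkl}: d = {d:.3f} A, 2theta = {two_theta:.1f} deg")
# Prints roughly 38.2, 44.4, 64.6, 77.5 and 81.7 degrees for the five planes.
```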
The chemical state of the produced particles was assessed by XPS, presented in Fig. 6. To confirm the reduction of the gold salt, a sample of the untreated salt is also compared with the spectrum of the Au NP sample (Fig. 6a). It is the Au 4f region that is of significant interest in the analysis, as deconvolution of this region can elucidate any presence of Au 3+ or Au + ions within the sample, which would suggest that the Au salt is not fully reduced. For the Au NP sample, only two peaks could be fitted in the spectra, Au 4f 5/2 at 87.76 eV and Au 4f 7/2 at 84.09 eV, which correspond to metallic Au 0 and indicate that the treatment by microplasma is capable of fully reducing the HAuCl 4 precursor (Anthony and Seah, 1984; Turner and Single, 1990). The spectrum of the untreated sample exhibits two peaks shifted to higher binding energy (84.5 eV and 88.2 eV), suggesting the oxidation state Au + (Abad et al., 2005; Corma and Garcia, 2008). Whilst the expected oxidation state of Au in HAuCl 4 is Au 3+ , the photoreduction of Au 3+ to the lower oxidation state Au + by the XPS instrument (Abad et al., 2005) could explain the absence of Au 3+ in the untreated salt sample. Regardless, it is clear that no peak can be attributed to the Au 0 state in the untreated salt sample, and the Au NP sample does not show any unreduced or partially reduced Au; as such, a clear distinction can be made between the salt sample and the plasma-treated sample. The Cu 2p narrow-region spectrum for the CuO QDs is presented in Fig. 6b, which shows a strong peak at 933.2 eV, a typical position for Cu 2+ . This peak is shifted by at least 0.5 eV from the peak centres expected for either metallic Cu 0 (932.6 eV) or Cu + (932.7 eV), and thus these two states are not present in the sample. In addition to the main Cu 2+ peak, two shake-up satellites appear at 940.9 eV and 943.4 eV, which are characteristic of CuO and do not appear strongly for Cu 2 O or at all for Cu 0 (Hassanien et al., 2016; Hernández et al., 2007; Jiang et al., 2013; Wang et al., 2015). Therefore, we have confirmed that the material produced is copper(II) oxide.
Optical properties of the nanofluid
Within a nanofluid, incident power from solar radiation can interact in two ways in addition to being transmitted with no interaction: firstly, the radiation can be absorbed by the fluid or the NP additive; alternatively, the light may be scattered by the NPs, which results either in the light scattering out of the fluid body and subsequently being lost from the system, or in the light being absorbed along the scattered path. The absorption coefficient is of great interest to this work; it has been calculated from the UV-Vis data (see Supporting Information) and illustrates how a collimated beam of light is attenuated due to absorption by the sample within a given path length. Likewise, the scattering coefficient is calculated in order to assess the attenuation due to scattering of light by the nanofluid. A number of nanofluids with Au NPs only (volume:volume percentages of 2.2 × 10−3 %, 3.3 × 10−3 % and 5.5 × 10−3 %), with CuO QDs only (0.7 × 10−3 %, 1.0 × 10−3 % and 1.7 × 10−3 %) and with mixed Au NPs and CuO QDs (5.1 × 10−3 % Au NPs and 2.0 × 10−3 % CuO QDs) have been considered, and the corresponding absorption and scattering coefficients are reported in Fig. 7. The concentrations of NPs/QDs give the expected behaviour, with increasing coefficient values for higher volume concentrations of the same nanofluid (Fig. 7). It is noted that the Au NP nanofluid presents an absorption coefficient dominated by the plasmonic absorption between 540 nm and 560 nm (Fig. 7a, red curves) that is well above the corresponding scattering coefficient (Fig. 7b, red curves) for most of the spectral range. Contrarily, nanofluids with CuO QDs exhibit scattering and absorption coefficients that are comparable throughout (blue curves in Fig. 7a-b). The multicomponent nanofluid also presents interesting features, and it can outperform single-component nanofluids for a limited spectral range. Below 588 nm, the blended nanofluid seems to benefit from the simultaneous presence of both Au NPs and CuO QDs, while the opposite is observed above 588 nm. This result shows that both positive and negative synergies can occur for blended nanofluids and that understanding the combined mechanisms is not a simple additive process. The high absorption (< 588 nm) can be attributed to a degree of complementarity between the Au NP and CuO QD absorption coefficients, where the plasmonic absorption peak of the Au NPs is still dominant; however, above ~588 nm, the combined scattering of Au NPs and CuO QDs is clearly attenuating the light to a level that negatively impacts absorption. It is not coincidental that the absorption coefficient of the blended nanofluid becomes lower than that of the Au NP-only nanofluid within the same wavelength range (see where the green curve crosses over the full red curve at ~590 nm in Fig. 7a) in which the scattering coefficient of the CuO QDs becomes stronger than their absorption coefficient (~588 nm); for instance, compare 1.7 × 10−3 % CuO QDs (full blue curves) in Fig. 7a and b and note that the absorption coefficient decreases to 0.2 cm−1 just below 600 nm. On the basis of the nanofluid physical parameters, we will now calculate the absorbed power for different fluid depths under solar irradiation (see Supporting Information).
While a fraction of scattered light in the nanofluid can be re-absorbed, in our calculations we have considered scattering as a loss, and therefore our results provide a lower bound to the absorbed power. Fig. 8 reports the calculated absorbed power for the different nanofluid concentrations and for varying fluid depths. We report these results both in absolute values of power per surface area (Fig. 8a) and as a percentage of the total irradiated power (Fig. 8b). For the Au NP nanofluids, a small increase in the fluid depth from 0.1 cm to 1 cm gives a dramatic increase in power absorbed to over 300 W m−2, with higher volume concentrations giving a greater power absorbed for a given fluid depth (Fig. 8a). Beyond this depth, tending towards 3 cm, the increase of absorbed power with fluid depth begins to plateau. This can be considered an effect of the high absorption and scattering coefficients observed for the Au NP nanofluids, which result in very large portions of incident radiation being attenuated at small fluid depths. Fig. 8b, however, shows that the nanofluid is not able to absorb all the solar power available; the reason is attributed to the strong scattering coefficient at longer wavelengths, which will always ensure that a fraction of the incident light escapes the nanofluid. A similar trend can be observed for the CuO nanofluids, with an increase in power absorption between 0.1 cm and 3 cm, albeit less rapid, and with higher volume concentration nanofluids absorbing more power. The performance of CuO as a nanofluid additive in the calculations appears to be inferior to that of Au NPs, where, even for comparable concentrations (2.2 × 10−3 % Au NPs and 1.7 × 10−3 % CuO QDs), the Au NP nanofluid absorbs more than double that of the CuO nanofluid. Interestingly, the blended nanofluid of 5.1 × 10−3 % Au NPs and 2.0 × 10−3 % CuO QDs is outperformed by the nanofluid consisting of 5.5 × 10−3 % Au NPs alone. As an example, at a fluid depth of 1.5 cm, the blended nanofluid can absorb ~50% of the incident power (Fig. 8b), whilst the 5.5 × 10−3 % Au NP nanofluid can harvest close to 60%. The performance of the multi-component nanofluid in Fig. 8 is the outcome of competing mechanisms, which have been shown to benefit absorption at shorter wavelengths (< 588 nm) but are counterproductive at longer wavelengths, as per our discussion of Fig. 7. Our calculations of absorbed power indicate that, overall, the negative impact of the scattering introduced by the CuO QDs is significant to the point that nanofluids with Au NPs only are preferable.
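The exact absorbed-power model is given in the Supporting Information; the sketch below shows one simple way to implement the stated assumption that scattered light is lost, using wavelength-resolved absorption and scattering coefficients together with a solar spectral irradiance array (all placeholder inputs here). The absorption coefficient itself is commonly obtained from UV-Vis absorbance A over path length L as α = ln(10)·A/L, although the authors' exact procedure may differ.

```python
# One possible implementation of the "scattering counts as loss" model
# (placeholder spectra; the paper's exact model is in its Supporting Information).
import numpy as np

wavelength_nm = np.linspace(300, 1100, 801)
solar_irradiance = np.full_like(wavelength_nm, 1.0)   # W m^-2 nm^-1, placeholder
alpha_abs = np.full_like(wavelength_nm, 0.5)          # absorption coefficient, cm^-1, placeholder
alpha_sca = np.full_like(wavelength_nm, 0.1)          # scattering coefficient, cm^-1, placeholder

def absorbed_power(depth_cm):
    """Lower-bound absorbed power per unit area, treating scattered light as lost."""
    alpha_ext = alpha_abs + alpha_sca
    attenuated = 1.0 - np.exp(-alpha_ext * depth_cm)        # Beer-Lambert attenuation
    absorbed_fraction = attenuated * alpha_abs / alpha_ext  # share of attenuation that is absorption
    return np.trapz(solar_irradiance * absorbed_fraction, wavelength_nm)  # W m^-2

for d in (0.1, 1.0, 3.0):
    print(f"depth {d} cm: {absorbed_power(d):.0f} W m^-2 (placeholder inputs)")
```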
Nanofluid stability
The ability of a nanofluid to retain its absorptive properties throughout its working life is paramount to its industrial uptake. As such, the stability of the prepared nanofluids, in terms of power absorption capability, was assessed over time for the CuO QD and Au NP nanofluids with the highest concentrations, as well as for the nanofluid with blended NPs. The samples were stored in a dark cupboard and were not shaken, sonicated, vortexed or otherwise disturbed during the storage period of 11 weeks. The stability of each of the nanofluids was investigated by re-measuring the optical properties of the samples after a few seconds of shaking and calculating the potential to absorb power. A reduction in the absorption coefficient for each nanofluid is seen in Fig. 9a, with the decrease most severe for the CuO nanofluid and blended nanofluid and very slight for the Au nanofluid. This would suggest that some of the NPs and/or QDs may have agglomerated and sedimented to the point where they are no longer active in absorbing incident power. In Fig. 9b the scattering coefficient is observed to remain constant for the Au NP nanofluid and to increase substantially for the CuO QD nanofluid. Interestingly, the scattering coefficient is seen to decrease in the case of the blended nanofluid. Whilst the behaviours of the Au NP nanofluid and the blended nanofluid are as expected for a stable nanofluid and for one where some material is sedimenting, the behaviour of the CuO nanofluid is more complex, as the scattering coefficient is larger after 11 weeks. The increase of the scattering component can possibly be explained through the Rayleigh expression for the scattered light intensity, Eq. (1) (Seinfeld and Pandis, 2006), and the Rayleigh scattering efficiency, Eq. (2):

I_S = I_0 (1 + cos^2 θ) / (2 R^2) × (2π/λ)^4 × ((n^2 − 1)/(n^2 + 2))^2 × (d/2)^6   (1)

Q_S = (8/3) α^4 |(m^2 − 1)/(m^2 + 2)|^2,  α = π d / λ   (2)

where I_S is the intensity of scattered light, I_0 is the intensity of the incident (unpolarised) light, R is the distance from the source to the particle, θ is the scattering angle, λ is the wavelength, n is the refractive index of the particle, d is the diameter of a spherical particle, α is the size parameter and m is the complex refractive index. Equation (1) highlights the sixth-power relationship between the diameter d of a sphere and the scattered light intensity I_S. Additionally, the scattering efficiency, Q_S, is known to be linked to the fourth power of the nanoparticle size parameter α, equation (2) (Chen et al., 2016b; Saidur et al., 2012; Taylor et al., 2011). Therefore, if the CuO QDs were to grow even slightly by agglomeration, this enhanced scattering could be accounted for. We now consider the effect of time on absorbed power and compare the power absorbed by a nanofluid of 3 cm depth over the 11-week period considered earlier (Fig. 10). Whilst the absorbed power of the CuO QD nanofluid has diminished, both the Au NP and blended nanofluids are able to retain their absorptive properties over the 11 weeks of storage. This is of course very interesting in terms of a DASC, as it suggests that these surfactant-free Au NPs could maintain high absorptive properties over extended periods of operation, with no additional mixing required beyond that of the pumping system. This stability will result in lower maintenance costs, making the solution much more economically attractive. It is interesting to note that, in this specific case, the performance degradation of the CuO QDs does not affect that of the blended nanofluid, which indicates no detrimental physical interactions between Au NPs and CuO QDs. While this specific combination of absorber and scatterer could outperform single-component nanofluids only in a limited spectral range, a more careful selection of NPs with more complementary properties could bring greater benefits.
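To put numbers on the size sensitivity implied by Eqs. (1) and (2), the short sketch below compares the Rayleigh scaling factors for a 3.3 nm QD and a hypothetically agglomerated 8 nm cluster; it uses only the d^6 and α^4 proportionalities, not absolute cross-sections.

```python
# Relative Rayleigh scaling with particle size (proportionalities only).
d_initial_nm = 3.3   # as-made CuO QD diameter
d_grown_nm = 8.0     # hypothetical size after mild agglomeration

intensity_ratio = (d_grown_nm / d_initial_nm) ** 6          # Eq. (1): I_S ~ d^6
efficiency_ratio = (d_grown_nm / d_initial_nm) ** 4         # Eq. (2): Q_S ~ alpha^4 ~ d^4
print(f"scattered intensity ratio:   {intensity_ratio:.0f}x")   # ~200x
print(f"scattering efficiency ratio: {efficiency_ratio:.0f}x")  # ~35x
```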
Conclusion
Au NP and CuO QD nanofluids have been produced by PiNE. The produced nanofluids are capable of increasing the amount of energy captured by the fluid from the light incident on a solar thermal collector when compared to the EG base fluid alone, with larger particle concentrations yielding greater absorption. A strong plasmonic absorption centred between 540 nm and 560 nm is present for the Au NP nanofluid, which, when coupled with the strong scattering coefficient at longer wavelengths, leads to a large fraction of the incident light escaping the nanofluid for a given path length. However, as the absorption coefficient is much larger than the scattering coefficient, a significant proportion of the attenuated photons is absorbed by the nanofluid. The performance of CuO as a nanofluid additive in the calculations (where scattering is considered a system loss) is demonstrably lower than that of Au NPs: even for comparable concentrations, the Au NP nanofluid absorbs more than double that of the CuO nanofluid, which can be attributed to the comparable scattering and absorption coefficients of the CuO nanofluid. Interestingly, the blended nanofluid is outperformed by the nanofluid consisting of a comparable concentration of Au NPs alone. As an example, at a fluid depth of 1.5 cm, an additional 10% of the incident power can be absorbed by the Au NP-only nanofluid. However, the blended nanofluid possesses superior absorption in the < 588 nm region, which highlights a degree of complementarity between the two additive materials. Contrarily, at wavelengths longer than ~588 nm, the combined scattering of Au NPs and CuO QDs attenuates the light to a level that negatively impacts absorption.
The stability of the dispersions was tested after storage for 11 weeks, with the Au NP and the blended nanofluid retaining their full absorptive capabilities. The CuO-based nanofluid only managed to retain approximately one-third of its power absorption capability. This serves to highlight that a nanofluid based on Au NPs is a viable option for DASC applications. While these CuO nanofluids, or a mixture of the two particle types, did not produce better results than the Au NP nanofluid, there is scope to investigate combinations of "optically complementary" NPs in blended nanofluids. Importantly, a full understanding of the optical interactions among different NPs in a nanofluid requires more than a simple additive treatment.
The PiNE system used in this work has been demonstrated to be highly suitable for producing additives for nanofluids with attractive optical properties. The flexibility of reagents in the PiNE system can further facilitate a range of alternative additives, such as titanium-based NPs or other plasmonic NPs such as silver, with no additional equipment requirements. | 7,586.6 | 2020-06-01T00:00:00.000 | [
"Environmental Science",
"Materials Science",
"Engineering",
"Chemistry"
] |
Application of the CRISPR/Cas9 System to Study Regulation Pathways of the Cellular Immune Response to Influenza Virus
Influenza A virus (IAV) causes a respiratory infection that affects millions of people of different age groups and can lead to acute respiratory distress syndrome. Currently, host genes, receptors, and other cellular components critical for IAV replication are actively studied. One of the most convenient and accessible genome-editing tools to facilitate these studies is the CRISPR/Cas9 system. This tool allows for regulating the expression of both viral and host cell genes to enhance or impair viral entry and replication. This review considers the effect of the genome editing system on specific target genes in cells (human and chicken) in terms of subsequent changes in the influenza virus life cycle and the efficiency of virus particle production.
Introduction
Influenza A virus (IAV) causes acute respiratory infections in humans and remains a continuous and severe threat to public health. Defining and understanding mechanisms for regulating IAV replication is an effective strategy for developing new ways to fight the virus. Currently, there are various strategies for regulating gene-directed replication of influenza virus in mammalian cells, including human cells: utilization of antisense oligonucleotides, application of the RNA interference methods, and clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein (Cas) system.
Antisense oligonucleotides (ASOs) are small-sized single-stranded nucleic acids, which can be employed to modulate gene expression. There are two main mechanisms of ASO action: one is an RNase H-dependent pathway, and the other is a pathway for steric blocking of splicing or translation [1]. Currently, in the fight against viruses such as IAV, the viral RNAs are the main target of ASOs [2][3][4]. On the other hand, regulation of the host-cell genes expression to reduce viral titers can be another target for the application of antisense derivatives [5][6][7]. The main advantages of the ASO strategy are the fast degradation of the target transcript and the simplicity of oligonucleotide design, allowing for rapid gene delivery [8]. However, the use of ASOs is associated with serious problems: high propensity of oligonucleotides to degradation, lack of cell specificity, cytotoxicity, etc. In addition, ASOs can be used to induce transcript degradation, inhibit translation, cause aberrant splicing, or interfere with miRNA maturation [9].
Another way to regulate transcript levels is the RNA interference (RNAi) method. The mechanism of RNAi is realized through the delivery of small synthetic RNAs, such as small interfering RNAs (siRNAs), which trigger the mechanism of single-stranded RNA cleavage in eukaryotic cells [29]. Recently, Abbott et al. [33] developed a prophylactic antiviral CRISPR in human cells (PAC-MAN) strategy based on the CRISPR/Cas13 system, which can degrade SARS-CoV-2 RNA fragments and live influenza A virus. The authors found that six crRNAs could target more than 90% of all coronaviruses. The PAC-MAN strategy could be used for a possible treatment for COVID-19. However, the further application of this strategy requires additional experiments on the choice of an effective in vivo delivery method, evaluation of off-target effects, and testing of antiviral efficiency and specificity in relevant preclinical models [33]. Currently, the most studied CRISPR/Cas system is CRISPR/Cas9, which provides efficient knockout of target genes and allows for investigating the mechanisms of gene regulation during viral infection. Additionally, the CRISPR/Cas9 genome screening technology is the most promising approach to finding targets because this method offers almost unlimited flexibility, fast and efficient targeting of individual genes, and the ability to target multiple genes simultaneously [34]. This review is devoted to the regulation of influenza virus replication through "point-like" CRISPR/Cas9-mediated control of gene expression in the host cell. It considers the genes of the innate immune response, pattern recognition receptors, cell receptors, the genes responsible for viral penetration into the nucleus, and RNA processing factors as targets for the regulation of influenza virus replication (Figure 1).
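To make concrete how Cas9 "targets individual genes", the sketch below scans a DNA sequence for candidate SpCas9 target sites, i.e. 20-nt protospacers followed by an NGG PAM. This is only an illustration of the targeting rule used by the knockout studies discussed in this review; real guide design additionally scores on-target efficiency and genome-wide off-targets, and the example sequence is hypothetical, not a fragment of any gene named here.

```python
# Illustrative SpCas9 guide-site search: 20-nt protospacer followed by an NGG PAM.
# Forward strand only; real guide design also checks the reverse strand,
# on-target efficiency scores and genome-wide off-target matches.
import re

def find_sgrna_sites(sequence: str):
    """Return (start, protospacer, PAM) for every 20-mer followed by NGG."""
    sequence = sequence.upper()
    sites = []
    for match in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", sequence):
        sites.append((match.start(), match.group(1), match.group(2)))
    return sites

# Hypothetical example sequence (not a real gene fragment).
example = "ATGCTTGACCGTAAGGCTAGCTAGGACTTACGGATCCGATCGTTGGAAGG"
for start, spacer, pam in find_sgrna_sites(example):
    print(f"pos {start}: {spacer} | PAM {pam}")
```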
Figure 1.
Possible target genes for the CRISPR/Cas9 system to adjust the sensitivity of cells to influenza virus. In the figure, the lightning symbol indicates genes whose knockout leads to changes in the virus's life cycle and affects the susceptibility of cells. RNA marked in red is viral mRNA; RNA marked in blue is viral negative-sense RNA.
IRF
An important factor that affects the immune response against the influenza virus is interferon (IFN), which helps to eliminate viruses at the early stages of infection. The genes of the interferon regulatory factor (IRF) family play a key role in the regulation of the interferon activation cascade.
In a recent study, Tuerxun et al. showed the IRF family to be involved to varying degrees in response to influenza virus infection [35]. During IAV infection, the largest change was observed in the IRF7 and IRF9 expression levels, which were significantly increased. However, in influenza B virus (IBV) infection, increased IRF2, IRF7, and IRF9 and decreased IRF4 and IRF5 expression levels were found [35].
In work published by Komissarov et al., the CRISPR/Cas9 system was used to knock out the IRF7 gene, and the impact of this gene on the regulation of the antiviral response was demonstrated. The IRF7 gene knockout was shown to increase the sensitivity of human HEK293FT cells to IAV [36]. It was also shown that knockout of the IRF7 gene alters the expression levels of the IFITM (interferon-induced transmembrane) and IFIT (interferon-induced proteins with tetratricopeptide repeats) family genes, which play an important role in the inhibition of viral infection.
In a clinical study [37], it has been demonstrated that among 22 patients with severe influenza infections, one patient had compound heterozygous mutations in IRF7 and that both of her IRF7 alleles fail to upregulate IFN type I and type III. Thus, in case of mutation in both IRF7 alleles, the type I IFN pathway fails to upregulate, leading to an increase in the virus level in fibroblasts and an abolition in type I and III IFNs expression in pDCs (plasmacytoid dendritic cells) [37].
The same pattern is observed for homozygous loss-of-function IRF9 gene mutations. It has been shown that the IRF9-deficient patient was nevertheless able to control infections with respiratory syncytial virus (RSV) and human rhinovirus (HRV), which indicates an ability to control various common viruses in vivo, including respiratory viruses other than IAV [38]. As a result of IRF9 deficiency, cells of all types demonstrate an inability to respond fully and effectively to type I IFNs and, by inference, to type III IFNs. The connection between the IRF7 and IRF9 deficiency mechanisms suggests the essential role of type I and III IFN responses, driven by IRF7 amplification and mediated by IRF9 signaling, in influenza virus defense.
Unlike mammals, birds only have IRF7 as a modulator of the type I IFN pathway, missing IRF3. However, they also have other protection systems. In their recent study, Tae Hyun Kim et al. examined the ability of chicken IRF7 to regulate the host response against avian influenza virus (AIV). The authors used a CRISPR/Cas9 system to completely knock out (KO) the IRF7 gene in the DF-1 cell line and performed RNA sequencing on the knocked-out and wild-type cells after H6N2 infection. It was found that the replication rate of AIVs in vitro increased significantly in IRF7−/− cells. Additionally, the effect of IRF7 KO on mTOR (mammalian target of rapamycin) and MAPK (mitogen-activated protein kinase) signaling pathways was studied. The mTOR signaling pathway coordinates metabolic processes, and the MAPK pathway controls a wide range of cellular responses such as proliferation, differentiation, and immune response. The study showed that there was a negative correlation between mTOR and IRF7, whereas p38 MAPK (MAPK13) was upregulated due to IRF7 KO. However, in the case of MHC II family genes, there was a positive correlation to IRF7 expression. Based on the findings, Tae Hyun Kim et al. suggested that IRF7 plays a significant role in the host antiviral response against avian influenza virus (AIV) infection, and the MAPK and mTOR signaling cascades allow modulation of the antiviral response against this virus [39].
IFIT
Previous studies using the siRNA or ASO methods suggested that the interferon-induced proteins with tetratricopeptide repeats (IFIT) suppress the translation and replication of various viruses, including IAV [40][41][42][43][44]. However, using a CRISPR/Cas9 system, Tran et al. [45] have shown that IFIT2 promotes the gene expression of the influenza virus. Moreover, due to the binding of IFIT2 to viral mRNAs, IAV replication is enhanced. In addition, the authors have demonstrated that IFIT3, like IFIT2, has proviral activity and is able to regulate the expression of viral proteins during IAV infection.
IFITM
The interferon-induced transmembrane (IFITM) family consists of a number of proteins located in cytoplasmic and endosomal membranes [46]. One of the main functions of these proteins is to inhibit the release of the viral genome from endosomes into the cytosol [47,48].
The influence of the IFITM family genes on influenza virus infection has been established in vitro using the CRISPR/Cas9 system [49]. The infection of HeLa cells with influenza virus showed that a decrease in the expression of IFITM2 or IFITM3 increased the cell susceptibility to the virus by about two or four times, respectively, compared with control cells. At the same time, HeLa clones with a complete knockout of both IFITM2 and IFITM3 or the three isoforms simultaneously demonstrated the increase in the infection level by more than 20 times. No significant difference in influenza virus infection of IFITM2/3 knockout clones or clones lacking all three genes (IFITM1-3) was observed in both intact and IFN-α treated cells, suggesting that IFITM1 does not play a significant role in the prevention of IAV infection. Moreover, it has been demonstrated that increasing the amount of IFITM3 protein in IFITM2/3 knockout cells reduces the infection of cells with the influenza virus, depending on the level of the added IFITM3 protein. In addition, a decrease in infection was also observed when the level of IFITM2 was restored. Thus, the authors have found that IFITM3 acts in a dose-dependent manner to restrict a broad spectrum of viruses [49]. Earlier, in the study by Bailey and colleagues, it was shown that the IAV susceptibility in genetically modified mice lacking genes equivalent to human IFITM family members was increased [48]. A similar effect was obtained for mice lacking only IFITM3. These results demonstrate that within the IFITM family, IFITM3 plays the predominant role in the regulation of IAV infection.
The ability of the IFITM family members to limit influenza infection has also been demonstrated in a mouse model. Infected mice lacking IFITM3 displayed fulminant viral pneumonia due to uncontrolled replication of the virus in cells. A similar effect has been confirmed in vitro. It has been shown that antiviral functions are restored after the addition of the IFITM3 protein. The involvement of the IFITM3 protein in influenza virus infection has also been confirmed in humans. It was found that a statistically significant number of hospitalized individuals have the C allele of the rs12252 single nucleotide polymorphism (SNP rs12252-C) that alters a splice acceptor site [50].
Pattern Recognition Receptors (PRRs)
During influenza infection, the innate immune system recognizes viral antigens using a range of pattern recognition receptors (PRRs) such as Toll-like receptors (TLRs) and RIG-like receptors (RLRs). The RLR family is composed of three cytoplasmic RNA-recognizing proteins: LGP2 (Laboratory of Genetics and Physiology 2), MDA5 (Melanoma Differentiation-Associated protein 5), and RIG-I (Retinoic acid-inducible gene I). It is also known that RIG-I and MDA-5 are involved in the activation of interferon in response to the influenza virus [51].
TLR
Toll-like receptors (TLRs) recognize a large number of structurally conserved molecules derived from bacteria and viruses and activate cellular immune responses. Moreover, TLR3 and TLR7/8, both located in endolysosomes, recognize double-stranded RNA (dsRNA) and single-stranded RNA (ssRNA) ligands, such as those of influenza virus and others [52]. Han and colleagues created TLR7, TLR8, and TLR3 knockout human-induced pluripotent stem cell lines (hiPSCs) using the CRISPR/Cas9 system and characterized the morphology and karyotype of these cells, the level of the corresponding knocked-out TLR protein, and the differentiation potential of these cell lines in vivo. The obtained knockout cell lines are promising models for further study of the specific role of TLRs in various viral infections [53][54][55]. Lee and colleagues performed siRNA-mediated gene silencing of cMDA5, cTLR3, and cTLR7 as well as CRISPR/Cas9-mediated individual and double knockout of cMDA5 and cTLR3 in chicken embryonic fibroblast DF-1 cells to determine their roles in IFN-mediated innate immunity in response to avian influenza virus infection. It was shown that the activity of the IFN-β promoter fell by about 10-fold, 3-fold, or 1.8-fold after infection with AIV if cMDA5, cTLR3, or cTLR7, respectively, was silenced. These results suggested that, in contrast to cMDA5 and cTLR3, cTLR7 was neither involved in IFN-β signaling in response to AIV in chicken nor played a significant role in sensing RNA ligands. In addition, as a result of studying the knockout lines, the authors found that cMDA5 played a pivotal role in the capture of RNA ligands in DF-1 cells, while cTLR3 played only a complementary role. It is important to note that the AIV titer in the clone with the cMDA5 knockout was shown to be about 30-fold higher than that in intact DF-1 cells [56].
MDA-5
MDA-5 is a member of the evolutionarily conserved RIG-like helicase family of PRRs (pattern recognition receptors). To date, RIG-I is thought to be the chief booster of the immune response during influenza infection in mammals. Since RIG-I is absent in chicken and ChMDA5 (chicken MDA5) knockdown does not appear to affect influenza proliferation, it is assumed that ChMDA5 could compensate for this function [57]. However, the knockout of this gene using the CRISPR/Cas9 system resulted in significant growth of AIV in cMDA5-lacking cells. These data indicate that cells become more permissive to virus growth due to reduced IFN-mediated antiviral activity and, in particular, because of the impaired production of IFN-β [56].
RIG-I
The role of RIG-I in the regulation of the immune response to the influenza virus was confirmed in a mouse model [58]. RIG-I-KO mice, which were generated by homologous recombination of ES cells [59], showed failure in the production of IFN type I and the subsequent activation of the adaptive immune response (T-cell responses) against influenza viruses [58].
The study conducted by Thulasi Raman et al. has identified a RIG-I analog located in the nucleus. The nuclear arrangement of the RIG-I protein makes it significant for the restriction of nuclear replicating viruses, such as the influenza virus [60]. Recently, the RIG-I-dependent signaling pathway was explored by Yap et al. using the CRISPR/Cas9 mediated knockout of RIG-I and ANXA1 [61]. Knockout of ANXA1 in A549 cells led to a decrease in the expression of interferon-stimulated genes, such as IFIT1 upon IAV infection and a decrease in the expression of RIG-I basally and post-infection. At the same time, the RIG-I knockout cells demonstrated the basal level of ANXA1 compared to control A549 cells. For both knockout cell lines, no change in cell viability was observed upon IAV infection, while infected control cells exhibited a significant reduction in cell viability. The authors concluded that ANXA1 plays a role in RIG-I-mediated IAV-induced apoptotic cell death [61].
OAS Family
Proteins of the OAS family (2′-5′-oligoadenylate synthetases) constitute another mediating factor responsible for the activation of interferon in response to influenza virus; they induce the activation of ribonuclease L (RNase L), leading to the degradation of viral RNAs [62].
In a recent study [63], the authors knocked out the OAS1 and OAS3 genes in a human leukemia monocytic cell line and the HT-1080 fibrosarcoma cell line using the CRISPR/Cas9 system. OAS1 and OAS3 were first shown to inhibit the expression of chemokines and interferon-responsive genes in response to viral infection. However, stimulation of the two OAS proteins yielded different results, indicating that the TLR3 and TLR4 receptors served as the stimulus for OAS1, while RIG-I and MDA5 stimulated OAS3. These data demonstrated the OAS-dependent regulation of innate immune signaling in human macrophages, suggesting wide possibilities for viral infection treatment [63]. Knockout of the OAS family genes in the A549 cell line by the CRISPR/Cas9 system increased viral replication as compared with control cells. Additionally, inhibition of the OAS3 gene caused the highest increase in influenza virus titer, which suggests that OAS3 is the main OAS isoform responsible for the activation of RNase L [64].
Sialic Acids
Sialic acids (Sia) are nine-carbon monosaccharides typically found in vertebrates. They are commonly presented as terminal residues of cell surface glycans and are known to underlie diverse biological events ranging from cell adhesion to immunity and inflammation. It was demonstrated earlier that IAV, IBV, ICV, IDV, and enterovirus D68 use sialic acids to penetrate cells [65]. To identify the main host factors affecting the replication of IAV (avian strain H5N1), Han et al. knocked out genes involved in the biosynthesis of sialic acids throughout the genome (GeCKO screening) in human lung epithelial cells (A549) [66]. This method showed that the SLC35A1 sialic acid transporter is an important factor for the penetration of IAV into cells and that capicua protein (CIC) is a major negative modulator of cell-mediated immunity.
Another actively studied feature of sialic acids is their O-acetyl modifications. The effect of Sia-O-acetyl modifications on IAV, IBV, ICV, and IDV infection has been investigated in various human and canine cells [67]. Using CRISPR/Cas9 technology, the sialate O-acetyltransferase gene (CasD1) was knocked out or overexpressed, and the sialate Oacetyl esterase gene (SIAE) was knocked out. Modulations in CasD1 and SIAE expression revealed that these genes partially regulate the level of O-acetyl modifications; however, there are other ways of regulation. Low levels of O-acetyl modifications of sialic acids are obstructive for ICV or IDV to penetrate the cell, whereas they had no obvious effect on IAV and IBV infection.
Genes Responsible for Viral Penetration into the Nucleus
The study by Luo et al. has shown the PLSCR1 (Phospholipid Scramblase 1) protein to be an interacting partner of the influenza nucleocapsid protein (NP) in mammalian cells. This interaction is important for the regulation of IAV replication, as siRNA knockdown or CRISPR/Cas9-mediated knockout of PLSCR1 expression increases virus replication. It was found that the inhibitory effect of PLSCR1 involves the formation of its complex with the viral NP and importin α, which prevents the incorporation of importin β into the complex, impairs the nuclear import of NP, and suppresses the virus [68].
Splicing Regulating Genes
Splicing factors are essential for influenza virus replication. IAV uses alternative splicing to produce several of its proteins. The functional roles of the dedicator of cytokinesis 5 (DOCK5) protein [69] and CLK1 protein (CDC Like Kinase 1) in the influenza virus life cycle were studied using the CRISPR/Cas9 genome editing system [70].
Dock5 protein is a large polypeptide implicated in an intracellular signaling network and located in the plasma membrane to stimulate cell proliferation and migration [71]. To elucidate the role of DOCK5 in the replication of influenza A (H1N1 and H3N2) viruses, this gene was knocked out in A549 human lung epithelial cell lines using the CRISPR/Cas9 system. The knockout led to a decrease in the titer of the influenza virus. It has been shown that DOCK5 is a host factor capable of regulating the traffic, replication, and splicing of influenza virus genes in the cell. Another possible function of this protein may be to suppress host defense responses [69].
CLK1 is a human protein kinase that plays the role of essential host factor in IAV replication and can affect the splicing of viral mRNA. CLK1 gene knockdown in A549 cells resulted in reduced IAV replication and decreased virus titer. This effect was observed in both A549 human cells and mouse models. Moreover, it was shown that among the CLK family kinases, only the CLK1 isoform is required for the successful replication of IAV in vitro and in vivo [70].
Genes of RNA Modification
RNA modifications are another factor affecting the life cycle of IAV. Courtney's research (2017) is the main study of the effect of RNA modifications on IAV replication and pathogenicity [72]. In this work, the effect of an N6-methyladenosine (m6A) modification on the expression and replication of the influenza A genome has been examined. The expression of the methyltransferase METTL3 gene, which is responsible for introducing m6A modifications, was blocked by the CRISPR/Cas9-mediated knockout in the A549 human lung epithelial cell line. METTL3 knockout was found to cause a decrease in viral structural protein levels as well as in virus titer and viral mRNA levels. In contrast, overexpression of YTHDF2 (YTH domain family 2) induced a significant increase in virion production in A549 cells, but this effect was not observed with overexpression of YTHDF1 or YTHDF3. Thus, the study conducted by Courtney et al. has provided the first evidence that m6A RNA modifications upregulate the replication of IAV; however, the underlying mechanism of the effect has not been presented and justified.
Later, Courtney's data were confirmed in a study aimed at exploring the mechanism of m6A control via modification of the innate immune response to infection. In their work, Winkler et al. (2019) described the pathway of m6A-mediated destabilization of interferon β (IFNβ), which can affect the propagation of viruses [73]. IAV infection of cells with CRISPR/Cas9-mediated knockout of METTL3 or YTHDF2 was accompanied by an increase in the IFNβ level, a higher expression of interferon-stimulated genes, and inhibition of the expression of IAV genes. Thus, CRISPR/Cas9 technology aimed at the pathway of RNA modification allows the regulation of influenza virus replication. Further use of the CRISPR/Cas system would help to understand the mechanism of the effect of RNA modifications, including m6A, on the expression of viral genes.
CRISPR/Cas9 Genome Screening
CRISPR/Cas9 screening technology is now being actively developed and applied in many scientific studies. Genome-wide screening clarifies which specific gene should be knocked out to increase or suppress viral replication in each particular case; a minimal sketch of the typical hit-calling step is given below.
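In pooled knockout screens of this kind, hits are usually called by comparing sgRNA read counts between a selected (e.g. infected and surviving) population and a control population, then ranking genes by their guide-level log2 fold changes. The sketch below illustrates that step under stated assumptions: the gene names, counts, and thresholds are placeholders, not data from the cited studies, and real analyses typically use dedicated tools (e.g. MAGeCK-style statistics) rather than this simplified ranking.

```python
import math
from collections import defaultdict

# Hypothetical sgRNA read counts: (gene, guide) -> (control, selected); values are illustrative only.
counts = {
    ("GENE_A", "sg1"): (520, 2100), ("GENE_A", "sg2"): (480, 1900),
    ("GENE_B", "sg1"): (610, 590),  ("GENE_B", "sg2"): (550, 640),
}

def log2_fold_changes(counts, pseudocount=1.0):
    """Per-guide log2 fold change (selected vs. control), normalized to library size."""
    total_ctrl = sum(c for c, _ in counts.values())
    total_sel = sum(s for _, s in counts.values())
    lfc = defaultdict(list)
    for (gene, _guide), (ctrl, sel) in counts.items():
        f_ctrl = (ctrl + pseudocount) / total_ctrl
        f_sel = (sel + pseudocount) / total_sel
        lfc[gene].append(math.log2(f_sel / f_ctrl))
    return lfc

def rank_genes(lfc):
    """Rank genes by the median of their guide-level log2 fold changes."""
    def median(xs):
        xs = sorted(xs)
        n = len(xs)
        return xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    return sorted(((g, median(v)) for g, v in lfc.items()), key=lambda t: -t[1])

if __name__ == "__main__":
    for gene, score in rank_genes(log2_fold_changes(counts)):
        print(f"{gene}\tmedian log2FC = {score:+.2f}")
```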
One such study is the research by Heaton et al., in which the authors performed a genome-wide CRISPR/dCas9 overexpression screen in the A549 cell line to identify host factors that block IAV infection. They found that the B4GALNT2 (beta-1,4-N-acetyl-galactosaminyltransferase 2) gene, which encodes a glycosyltransferase and is not normally expressed in the lungs, can inhibit various strains of avian influenza virus (AIV). In addition, B4GALNT2 overexpression caused a significant reduction in both initial virus infection and subsequent spread. Thus, altering B4GALNT2 expression is a promising way to prevent IAV infection [74].
In the work by Karakus et al. [75], a genome-wide CRISPR/Cas9 screen was used to study the genes involved in the entry of the novel bat IAVs (H17N10 and H18N11). The combination of CRISPR/Cas9 technology and transcriptomic profiling of susceptible versus non-susceptible cells revealed that the major histocompatibility complex class II (MHC-II) human leukocyte antigen DR isotype (HLA-DR) is an important entry mediator for bat IAVs. The authors also generated MHC-II-deficient mice and confirmed that these mice are resistant to bat IAV infection. Another study using genome-wide CRISPR/Cas9 screening identified the underexplored host factors WDR7, CCDC115, and TMEM199 as necessary for influenza virus entry but dispensable for cell viability [76]. These genes are important during the early stages of IAV infection, and their knockout reduces viral infection. In addition, the human mRNA cap methyltransferase CMTR1 was revealed as a new host dependency factor of IAV; knockout of this gene increases expression of the IFIT gene family in infected cells and inhibits viral replication [76].
In [77], the authors presented a novel pooled genome-wide CRISPR/Cas9 screening strategy that uses a replication-defective reporter virus and fluorescence-activated cell sorting (FACS). The approach was applied to identify restriction factors in a vaccine production cell line, as a validation of the method for its potential use in generating high viral yields in cell-based vaccine production systems. Using the HEK-293SF cell line, the authors discovered 64 putative influenza restriction factors, including RAE1, NUPL2, TSC1, DDX6, DPC2, SMG9, and UPF2. In contrast, only a relatively minor impact on virus replication was found for well-known factors such as interferon and RIG-I. The authors also suggest that the metabolic state of the host cell plays a key role in sustaining influenza virus reproduction.
In a recent study [78] using a genome-wide CRISPR/Cas9 gene knockout screen, Song et al. identified a novel host factor, the single-pass type I transmembrane protein immunoglobulin superfamily DCC subclass member 4 (IGDCC4), which facilitates endocytosis of the lethal H5N1 influenza virus. In vitro and in vivo experiments showed that knockout of IGDCC4 reduced virus replication in A549 cells and that IGDCC4-knockout mice had increased resistance to H5N1 virus infection. The authors specified, however, that IGDCC4 is not involved in the initial attachment of the influenza virus to the cell surface, and they suggest that influenza virus attachment and endocytosis may be mediated by different cellular proteins.
The experimental application of CRISPR/Cas9 screening raises several issues worth mentioning. One is the presence of false-positive and false-negative results, which are difficult to analyze and interpret. Moreover, the results of different screens often do not agree with each other. The gene sets from several previous screens were therefore compared in the supplementary information of the study by Sharon et al.; the comparison revealed practically no overlapping genes (a simple way of quantifying such agreement is sketched below). This confirms that the results obtained with the CRISPR/Cas9 screening approach depend on the experimental conditions, including the type of genomic library, the virus strain used to infect cells, the cell line in which the experiment is carried out, and other factors.
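One simple way to make the poor agreement described above quantitative is to compute pairwise overlaps (for example Jaccard indices) between the hit lists of different screens. The sketch below assumes placeholder gene sets; in practice the sets would come from the published screens' supplementary tables, and the gene names used here are illustrative only.

```python
from itertools import combinations

# Placeholder hit lists; real sets would be loaded from the screens' supplementary data.
screens = {
    "screen_1": {"SLC35A1", "CMAS", "ST3GAL4", "ATP6AP1"},
    "screen_2": {"SLC35A1", "WDR7", "CCDC115", "TMEM199"},
    "screen_3": {"B4GALNT2", "CMTR1", "RAE1"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B|; 0 means no shared hits."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

for (name_a, hits_a), (name_b, hits_b) in combinations(screens.items(), 2):
    shared = sorted(hits_a & hits_b)
    print(f"{name_a} vs {name_b}: Jaccard = {jaccard(hits_a, hits_b):.2f}, shared = {shared}")
```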
Applications of the CRISPR/Cas9 System
At present, the CRISPR/Cas9 system is the most promising and widespread technology for detecting viruses, including IAV [79], and for studying their assembly [80][81][82], budding [83], release [84], and spread [85]. In one study [85], the authors used CRISPR/Cas9-mediated knockout to generate tumor protein 53 (p53)-null A549 cells and explored their susceptibility to IAV infection; p53-null cells showed significantly reduced viral spread after influenza A infection compared with control cells [85]. Another research group demonstrated that tumor susceptibility 101 (Tsg101) gene-deficient A549 cells, created by the CRISPR/Cas9 method, were resistant to IAV infection [84]; the authors suggested that Tsg101 plays a critical role in the transport of HA to the cell surface prior to the release of IAV [84]. Zhao et al. [83] investigated the relationship between IAV and two genes, cytidine monophosphate N-acetylneuraminic acid synthetase (CMAS) and ST3 beta-galactoside alpha-2,3-sialyltransferase 4 (ST3GAL4), by creating CRISPR/Cas9 knockouts of these genes in newborn pig tracheal epithelial (NPTr) cells. The results showed that the loss of both genes significantly inhibits IAV adsorption and that CMAS and ST3GAL4 are required for IAV attachment and entry [83]. The CRISPR/Cas9 method can also be used to study IAV assembly; for example, Han et al. found that Rab11 is important for influenza virus genome assembly and the production of infectious virus particles [80].
Another promising direction for the application of the CRISPR/Cas9 system is the creation of DNA base editors and prime editors. Compared with the original CRISPR/Cas9 system, these technologies offer reduced cytotoxicity, lower off-target effects, and no serious long-term immune complications [86,87]. Both tools have great therapeutic potential; base editing technologies have already shown promising preclinical results in devastating genetic disorders such as Hutchinson-Gilford progeria [86,88].
Moreover, CRISPR/Cas systems can be used as tools for antiviral therapy [89] and for identifying drug on- and off-targets [90].
Conclusions
A major problem in the fight against IAV is the steady evolution of its surface antigens under pressure from the host immune system. Identification of host factors required for IAV reproduction and study of the mechanisms of IAV regulation will make it possible to suppress viral replication at different stages of the viral life cycle.
CRISPR/Cas9 screening shows limited sensitivity but excellent specificity in detecting host factors that act very early in viral replication. Another important property of this approach is multiplexing (targeting multiple loci), which helps counteract the possible evolution of viruses and permits simultaneous inhibition and overexpression of distinct genetic targets [91]. However, off-target effects limit the application of the CRISPR/Cas9 system [32]. Approaches for reducing the off-target activity of the system are currently being widely developed; the main ones are the introduction of modifications into guide RNAs [92,93], the creation of mutant Cas proteins [94,95], and the development of new genome-editing methods based on the CRISPR/Cas9 system [96,97].
The CRISPR/Cas9 system enables precise knockout of individual genes in the host cell, thereby suppressing or enhancing viral replication at different stages of the viral life cycle, and studies using this approach show that viral infection can be modulated in either direction. However, knockout of a single gene usually does not produce an effect large enough to change virus viability significantly. Creating a cell line with simultaneous knockout of multiple targets could be a possible solution, provided that these targets are not involved in any vital processes in human cells (a rough filtering step of this kind is sketched below). The genes considered in this review can act as potential targets: IRF, IFIT, IFITM, TLR, MDA-5, RIG-I, the OAS family, genes of sialic acid synthesis, the PLSCR1 gene, splicing-regulating genes, and genes of RNA modification.
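As a rough illustration of the multi-target strategy suggested above, the sketch below filters a candidate target list against a hypothetical essential-gene list before proposing a knockout combination. Both the candidate list and the essentiality calls are placeholders (a real pipeline would query curated resources such as DepMap-style essentiality data), so this is only an outline of the selection logic, not a recommendation of specific genes.

```python
# Candidate host-factor targets drawn from the families named in this review (not exact loci).
candidates = ["PLSCR1", "IFITM3", "TLR3", "RIG-I", "MDA-5", "OAS1", "ST3GAL4", "CMAS"]

# Hypothetical essentiality calls; placeholder only.
essential_in_human_cells = {"CMAS"}  # assumption for illustration: sialic-acid synthesis too costly to lose

def propose_combination(candidates, essential, max_targets=4):
    """Keep only targets whose loss is (assumed) tolerated, then cap the combination size."""
    tolerated = [g for g in candidates if g not in essential]
    return tolerated[:max_targets]

if __name__ == "__main__":
    combo = propose_combination(candidates, essential_in_human_cells)
    print("Proposed simultaneous-knockout set:", ", ".join(combo))
```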
Conflicts of Interest:
The authors declare no conflict of interest.
"Biology"
] |
Superradiant Searches for Dark Photons in Two Stage Atomic Transitions
We study a new mechanism to discover dark photon fields, by resonantly triggering two photon transitions in cold gas preparations. Using coherently prepared cold parahydrogen, coupling sensitivity for sub-meV mass dark photon fields can be advanced by orders of magnitude with a modified light-shining-through-wall setup. We calculate the effect of a background dark photon field on the dipole moment and corresponding transition rate of cold parahydrogen pumped into its first vibrational excited state by counter-propagating laser beams. The nonlinear amplification of two photon emission triggered by dark photons in a cold parahydrogen sample is numerically simulated to obtain the expected dark photon coupling sensitivity.
Using two stage atomic transitions, this paper proposes a new method to enhance the detection of dark photons in light-shining-through-wall experiments. Our proposal involves a dark photon field produced in a laser cavity and then passed through a sample of quasi-stable, coherently excited atoms whose E1 dipole transitions are parity-forbidden. Under these conditions, during the brief time that the excited atoms are coherent, the diminutive field of the dark photon can resonantly trigger two-photon electronic transitions. Compared to traditional light-shining-through-wall experiments, we project a large gain in sensitivity to dark photons with µeV-meV masses. These sensitivity gains appear within reach using preparations of parahydrogen (pH2) coherently excited by counter-propagating nanosecond laser pulses [32,33].
The use of two stage superradiant atomic transitions for the production and detection of weakly coupled particles was proposed and studied extensively by Yoshimura et al. [32][33][34][35][36][37][38][39][40]. These authors have pointed out that macroscopic quantities of coherently excited atoms may be employed to measure neutrino properties [36,39,41]. The use of multi-stage atomic transitions for the discovery of axion dark matter has recently been considered in [42,43]. In contrast, the experiment we propose is sensitive to any U(1) vector bosons kinetically-mixed with the Standard Model photon, whether or not dark matter is comprised of a dark photon.
Coherent superradiant emission by atomic systems was formalized by Dicke in [44]. However, the possibility that superradiance might be observed in macroscopic amalgams of material has received increased attention in the last decade after being proposed as a method to measure certain neutrino properties [35,36]. Before proceeding further, we will develop some physical intuition about classic (aka Dicke) versus macro (aka Yoshimura) superradiance. A formal derivation can be found in Appendix A. Let us consider a group of atomic emitters with number density n occupying volume V, which have been prepared in excited states, such that each excited atomic emitter is indistinguishable from the next (we want them to have the same phase). Let us suppose that some atom in volume V de-excites and emits a photon with momentum k_1. Then if a single, isolated atom has a photon emission rate Γ_0, and for the moment neglecting superradiant effects (superradiant effects would indeed be negligible if the spacing between atomic emitters is much greater than the wavelength of the emitted photons, n^(-1/3) ≫ k_1^(-1)), the emission rate of photons from volume V follows trivially, Γ_tot = nV Γ_0.
However, if the wavelength of the emitted photon is larger than the inter-atomic spacing, there will be a superradiant enhancement to the rate of photon emission. In fact, in the case that the emitted photon's wavelength is much greater than the volume itself (k_1^(-3) ≫ V), the total rate of photon emission will be Γ_tot = n²V²Γ_0, because of superradiance. This superradiant enhancement can be understood from basic quantum principles as follows. Firstly, momentum conservation tells us that an emitting atom in its final state will have a momentum of size ∼ k_1. The uncertainty principle tells us that the atom is only localized over a distance ∼ 1/k_1. Altogether these imply that it is not possible to determine which atom in the volume k_1^(-3) emitted the photon. But we know that quantum mechanics tells us that the probability for an event to occur is the squared sum of all ways for the event to occur, and if we cannot distinguish between atoms, then we must sum over all atoms in the coherence volume k_1^(-3), and square that sum to obtain the probability for emission. From this we obtain an extra factor of nV in our superradiantly enhanced emission rate. This is illustrated in Figure 1.
[Figure 1 (panels: Dicke Superradiance; Macro Superradiance): Illustration of single photon emission, aka Dicke superradiance, and two photon emission, aka Yoshimura superradiance. Crucially, the volume for superradiant emission is determined by the final state momentum Δk of the atomic emitters. For classic Dicke superradiance, this is just the momentum of the emitted photon, Δk = |k_1|. For two photons emitted back-to-back, the final state momentum can be tiny, Δk ≈ |k_1 − k_2| → 0. Of course, superradiant emission will also depend on the linewidth of the lasers used to excite the atoms, material dephasing effects, and other factors; see Section 2.]
In the preceding heuristic argument, it is crucial to realize that it is the final state momentum, and corresponding spatial uncertainty, of the emitting atoms that determines the volume over which superradiant emission can occur. Therefore, a process that somehow reduces the final state momentum of the atomic emitters can potentially result in superradiant emission over volumes larger than the wavelength of the emitted photons. Perhaps the simplest such process is two photon emission. In two photon emission, the final state momentum of the emitting atom will be the sum of the emitted momenta, Δk = k_1 + k_2. For back-to-back two photon emission, where k_1 ≈ −k_2, the superradiant emission volume Δk^(-3) can in principle be arbitrarily large; it is only limited by the difference in momentum of the emitted photons. This is illustrated in Figure 1. In practice, dephasing of atoms, the related decoherence time of the atomic medium, and the linewidth of the lasers used to excite the atoms will also limit the superradiant emission volume. It is interesting to note that much of the preceding logic about cooperative emission of photons could equally be applied to cooperative absorption.
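A small numerical sketch of the heuristic above: check whether the inter-emitter spacing is smaller than the (reduced) emitted wavelength, and compare the incoherent rate nVΓ_0 with the fully coherent scaling (nV)²Γ_0 in the ideal macro limit. The density, wavelength, volume, and single-emitter rate below are illustrative values, not a specific experimental benchmark.

```python
import math

# Illustrative numbers only.
n = 1e21          # emitter number density [cm^-3]
lam = 4.8e-4      # emitted photon wavelength [cm] (roughly a 0.26 eV photon)
V = 1.0           # emitting volume [cm^3]
gamma_0 = 1e-3    # single-emitter rate [s^-1], placeholder

spacing = n ** (-1.0 / 3.0)      # typical inter-emitter distance [cm]
k1_inv = lam / (2.0 * math.pi)   # reduced wavelength 1/k_1 [cm]

print(f"spacing = {spacing:.2e} cm, 1/k_1 = {k1_inv:.2e} cm, "
      f"coherent regime: {spacing < k1_inv}")

rate_incoherent = n * V * gamma_0        # Gamma_tot = n V Gamma_0
rate_coherent = (n * V) ** 2 * gamma_0   # Gamma_tot = (n V)^2 Gamma_0, ideal macro limit
print(f"incoherent rate ~ {rate_incoherent:.2e} s^-1, ideal coherent rate ~ {rate_coherent:.2e} s^-1")
```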
The remainder of this paper proceeds as follows. In Section 2, we calculate coherence in two stage electronic transitions, and study how much coherence is attained in cold preparations of parahydrogen excited by counter-propagating lasers. We find that the coherence necessary to begin realizing our proposal has been obtained in a number of experiments. The interaction of a dark photon with coherently excited two stage atomic systems is derived in Section 3.2. Enthusiastic readers may wish to skip to Section 4, which includes a schematic description of the experiment, along with its sensitivity to a kinetically mixed dark photon. Determining this dark photon sensitivity requires numerically integrating the dark and visible photon field equations in a background of coherently excited atoms. Conclusions are presented in Section 5. Throughout we use natural units where ℏ = c = k_B = 1.
Coherence in two stage atomic transitions
Pulses from high-power lasers allow for the preparation of atoms in coherent excited states, from which they can be cooperatively de-excited. Before investigating how the weak electromagnetic field sourced by a dark photon can be detected by cooperatively de-exciting coherently prepared atoms, it will be useful to examine under what conditions counter-propagating lasers excite highly coherent atoms in the first place. After deriving the coherence of atomic states excited by counter-propagating pairs of photons, we will examine how laser power, atomic density, and temperature alter this coherence. The derivation given below can be found in many prior references [45,46]. Our aim here is to quantify the experimental capability, in terms of coherently excited atoms, that will be needed to detect dark photons.
Quasi-stable excited states
We first consider an atomic system with ground state |g⟩ and excited state |e⟩. For the atomic systems we are interested in, for example vibrational modes of parahydrogen and electronic states of ytterbium or xenon [41], both states |g⟩ and |e⟩ will have even parity, meaning that E1 dipole transitions between the two states are forbidden. However, it will be possible to excite state |g⟩ to state |e⟩ through multiple E1 dipole transitions, and similarly de-excite |e⟩ to |g⟩. So besides states |g⟩ and |e⟩, we consider intermediate states |j_+⟩ and |j_−⟩, where +(−) indicates excitation into a positive (negative) angular momentum state by a circularly polarized photon. Figure 2 illustrates the basic setup.
[Figure 2: Illustration of the energy levels of an atomic system, with ground state |g⟩ and excited state |e⟩. E1 dipole transitions between |g⟩ and |e⟩ are forbidden; the two-step process of transitioning from |g⟩ to |e⟩ through a virtual state |j_±⟩ is shown.]
In physical realizations, there will be many j states to transition through, for example the ℓ = 0, 1, 2, 3, ... electronic angular momentum states of hydrogen. Since by design these excited states will lie at energies beyond those provided by the input lasers, transitions
through these states will be virtual. Defining our atomic Hamiltonian as H = H_0 + H_I, where H_I is the interaction Hamiltonian and H_0 is defined by H_0|g⟩ = ω_g|g⟩, H_0|e⟩ = ω_e|e⟩, H_0|j_±⟩ = ω_j|j_±⟩, we write the wavefunction of this simplified atomic system as a superposition of |g⟩, |e⟩, and |j_±⟩ with amplitudes c_g, c_e, and c_{j±}. We have added a phase δ to account for detuning of the lasers; in other words, the laser beams exciting the atoms will be off resonance by a factor ∼ δ. The laser-atomic interaction Hamiltonian will depend on the orientation and quality of the impinging laser beams. Experimental setups similar to the one we are outlining (for example [33]) employ counter-propagating beams which have been circularly polarized. Therefore, we will consider two counter-propagating laser beams propagating along the z direction, with electric fields written in terms of ϵ_r and ϵ_l, the unit-normalized right- and left-handed polarization vectors of the laser beams. The laser-atom interaction Hamiltonian is then the dipole coupling of these fields to d, the polarization of the atom. The actual dipole coupling and transition rate are experimental inputs in these formulae. Here we define the expectation value of the dipole transitions, with the assumption that both counter-propagating pump lasers have left-handed circular polarization, using the convention that left-handedness is defined along the direction of beam propagation; Ẽ_2 is the electric field of the laser beam propagating in the +z direction, and the same relations hold for Ẽ_1, except with ϵ_l ↔ ϵ_r, since Ẽ_1 is the electric field of the laser beam propagating in the −z direction. Finally, assuming that the two lasers carry the same frequency (up to a detuning factor δ), we define ω ≡ ω_1 = ω_2 = ω_eg/2, and we use the convention throughout this paper that ω_ik for any states i, k is defined as ω_ik = ω_i − ω_k, to arrive at the Schrödinger equations for this multi-state atomic system, where we have incorporated the spatial part of the electric fields into "barred" quantities. The sum over all intermediate states j is implicit in Eqs. (8)-(11). To find the time evolution of this system, we first integrate Eq. (8) and Eq. (9) over t. We will be using the so-called Markov approximation, treating c_g, c_e as constant in the resulting integral. This standard approximation is justified so long as we expect virtual transitions through c_{j+} and c_{j−} to be sufficiently rapid compared to changes in c_g, c_e, which should be satisfied so long as the frequency of the |g⟩ → |e⟩ transition is substantially smaller than the frequency of higher energy atomic states, ω_eg ≪ ω_je, ω_jg. For example, in the case of pH2 the frequency of the first vibrational state, ω_eg ∼ 0.5 eV, can be compared to the lowest lying electronic excitations, ω_je, ω_jg ∼ 10 eV, from which we conclude that the Markov approximation is justified. Using similar logic, we approximate the electric fields of the laser beams as being constant in this integral, since the laser frequency is also small compared to the transition frequencies to the intermediate j states. Setting the initial condition c_{j±,0} = 0, we find the time evolution of c_{j+} and c_{j−}. Substituting these solutions for c_{j+} and c_{j−} into the Schrödinger equations for c_g and c_e, we invoke the slowly varying envelope approximation, i.e. we assume that, since the development of the electric fields around the atoms is slow compared to the frequencies of all transitions, all rapidly oscillating time-dependent exponentials can be set to zero.
With the slowly varying envelope approximation, the two state system can be compactly expressed in terms of an effective Hamiltonian H_eff, where Ω_ge is the Rabi frequency of the system and the interstate dipole couplings (a_ee, a_gg, a_eg) are defined as in [46]. Applying the density matrix of the atomic system,
ρ = ( |e⟩⟨e|  |e⟩⟨g| ; |g⟩⟨e|  |g⟩⟨g| ) = ( ρ_ee  ρ_eg ; ρ_ge  ρ_gg ),   (23)
to the von Neumann equation i∂_t ρ = [H_eff, ρ] leads to the Maxwell-Bloch equations, Eqs. (24)-(26). The final terms in Eqs. (24), (25), and (26) have been added to account for spontaneous |e⟩ → |g⟩ transitions and the decoherence time of the mixed state. As such, T_1 is the excited state lifetime and T_2 is the decoherence time.
To better quantify the coherence of this system, we define the Bloch vector r = Tr(σρ), where σ are the Pauli matrices. By construction, r_1 and r_2 quantify the degree to which the atoms of the system are coherently excited, with maximum coherence attained when r_1 = 1 and r_3 = 0. The component r_3 of the Bloch vector indicates the population difference between the excited and ground states. Note that r_1, r_2, and r_3 are all real numbers. Applying the Bloch vector basis to Eqs. (24), (25), and (26), we obtain Eqs. (30)-(32), where we note that a_eg is assumed to be real.
Quantifying coherence in quasi-stable excited states
Using the Bloch vector time evolution given by Eqs. (30)-(32), we can now determine the degree and duration of coherence in cold atomic preparations excited by two counter-propagating lasers with electric fields Ẽ_1 and Ẽ_2. We consider an excited set of atoms with an expected spontaneous deexcitation time (not including superradiant enhancement) of T_1 and a decoherence time T_2. In the case of the first vibrationally excited state of pH2, the total lifetime has been observed to be T_1 ∼ 10 µs at ∼10 K temperatures [47][48][49], which is appreciably longer than the decoherence time of the first pH2 vibrational excitation at these temperatures, of order ∼1-100 ns [50]. In more detail, the decoherence time (T_2) of pH2 has been studied extensively for a variety of temperatures and densities [33,36,40,50]. In some regimes, it is accurate to use the mean interaction time of hydrogen as an estimate of the decoherence time resulting from pH2 collisions at number density n; this expression, Eq. (33), approximates the velocity of pH2 using the temperature (T) and mass (m_H) of hydrogen, and the cross-section using the Bohr radius, σ_H ≈ π r_bohr². While Eq. (33) is remarkably close to the measured decoherence time for a sample of pH2 prepared at T ∼ 80 K and density n ∼ 3 × 10^19 cm^-3, this approximation will break down for sufficiently cold and dense pH2, which will not behave like an ideal gas. In addition, we should note that the Raman linewidth, or full width at half maximum of pH2's first vibrational emission line, is often used to determine the decoherence time. However, this linewidth also has a contribution from Doppler broadening of pH2 [33,38]; the decoherence time is then an estimate using results from Ref. [50].
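As a cross-check of the mean-interaction-time estimate quoted above (Eq. (33)), the sketch below computes T_2 ≈ 1/(n σ_H v) with σ_H ≈ π r_bohr² and a thermal velocity scale. The specific choice v = sqrt(k_B T / m_H2) and the use of the molecular hydrogen mass are our assumptions, so the result should only be trusted at the order-of-magnitude level; it nonetheless lands near the ∼10 ns value quoted for T ∼ 80 K and n ∼ 3 × 10^19 cm^-3.

```python
import math

k_B = 1.380649e-23      # J/K
m_H2 = 3.347e-27        # kg, molecular hydrogen mass (assumption)
r_bohr = 5.29177e-11    # m

def collisional_T2(n_cm3: float, T_kelvin: float) -> float:
    """Order-of-magnitude decoherence time T2 ~ 1/(n*sigma*v), in seconds."""
    n = n_cm3 * 1e6                        # cm^-3 -> m^-3
    sigma = math.pi * r_bohr ** 2          # geometric, Bohr-radius-sized cross-section
    v = math.sqrt(k_B * T_kelvin / m_H2)   # thermal velocity scale (assumption)
    return 1.0 / (n * sigma * v)

print(f"T2 ~ {collisional_T2(3e19, 80.0) * 1e9:.1f} ns at n = 3e19 cm^-3, T = 80 K")
print(f"T2 ~ {collisional_T2(1e21, 80.0) * 1e9:.2f} ns at n = 1e21 cm^-3, T = 80 K (ideal-gas estimate only)")
```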
where here ω_0 is the first vibrational mode frequency. A total decoherence determination for the first vibrational mode of pH2, for temperatures ranging from 77-500 K, was approximated by fitting a phenomenological formula [50], in which, for example, it was found that for T = 80 K the collisional term is A ≈ 100 MHz cm^-3 and the broadening term is B ≈ 20 MHz cm^3, implying a 10 ns decoherence time for n ∼ 10^19 cm^-3, as previously noted. Given the theoretical expectations and experimental results detailed above, it is safe to assume that T_2 ≈ 10 ns is an achievable decoherence time for cold parahydrogen. In terms of the Bloch vector component r_1, the largest coherence reported in a similar setup was r_1 ≈ 0.07 for parahydrogen at density n ∼ 5 × 10^19 cm^-3 [32]. In the remainder of this article, we will find that advancing coupling sensitivity to dark photons (assuming a roughly 30 cm cylindrical chamber and 1 cm laser beam diameter) requires parahydrogen number densities nearer to n ∼ 10^21 cm^-3. As noted in Figure 3, a higher-power laser than that used in [32] is also required. In Figure 3, we have shown how the coherence of pH2 can be expected to develop in time for n ∼ 10^21 cm^-3, by solving Eqs. (30)-(32), assuming a ∼10 nanosecond decoherence time and intrinsic detuning by experimental effects like Doppler broadening of both δ = 10 and δ = 100 MHz. We will see that in this ∼ten nanosecond timeframe, a dark photon field applied to the cold atoms can greatly enhance the two photon transition rate.
[Figure 3: Coherence development obtained by solving Eqs. (30)-(32), where we take a_gg = 0.90 × 10^-24 cm³, a_ee = 0.87 × 10^-24 cm³, a_eg = 0.0275 × 10^-24 cm³ [36]. The intensity of the pump lasers is indicated. For comparison, we note that a coherence of r_1 ≈ 0.07 has been achieved for parahydrogen at density 5 × 10^19 cm^-3, using lasers less powerful than those assumed here [32]. However, the nanosecond-pulse gigawatt-power lasers required are commercially available [52]. (Indeed, even continuous gigawatt lasers as powerful as we require have been demonstrated in recent years [53].) The left panel assumes experimental detuning δ = 100 MHz, as achieved in recent counter-propagating pulsed laser experiments [33]. The right panel assumes δ = 10 MHz, a linewidth that has been achieved in solid parahydrogen [51].]
In the presence of a dark photon field, the rate for two photon de-excitation can be resonantly enhanced. Suitably applied to coherently excited atoms, we will find that very weakly coupled dark photon fields can trigger two photon transitions during the ∼10 nanosecond window of time that the atoms are coherently excited.
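The time development of the Bloch vector can be explored numerically. The sketch below integrates the textbook two-level optical Bloch equations with relaxation (T_1) and decoherence (T_2); it is a stand-in for Eqs. (30)-(32), not a term-for-term reproduction of them, and the effective two-photon Rabi frequency and detuning used are illustrative assumptions rather than values derived from a specific laser intensity.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard two-level optical Bloch equations for (r1, r2, r3):
# r1, r2 are coherences, r3 is the population difference.  Illustrative stand-in only.
def bloch_rhs(t, r, omega_rabi, delta, T1, T2, r3_eq=-1.0):
    r1, r2, r3 = r
    dr1 = -delta * r2 - r1 / T2
    dr2 = delta * r1 + omega_rabi * r3 - r2 / T2
    dr3 = -omega_rabi * r2 - (r3 - r3_eq) / T1
    return [dr1, dr2, dr3]

T1, T2 = 1e-6, 10e-9            # s: long lifetime, ~10 ns decoherence time
omega_rabi = 2 * np.pi * 50e6   # rad/s, assumed effective two-photon Rabi frequency
delta = 2 * np.pi * 10e6        # rad/s, assumed detuning

sol = solve_ivp(bloch_rhs, (0.0, 50e-9), [0.0, 0.0, -1.0],
                args=(omega_rabi, delta, T1, T2), max_step=1e-10)

r_perp = np.sqrt(sol.y[0] ** 2 + sol.y[1] ** 2)   # transverse coherence magnitude
imax = np.argmax(r_perp)
print(f"peak coherence ~ {r_perp[imax]:.2f} at t ~ {sol.t[imax] * 1e9:.1f} ns")
```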
Two photon transitions with kinetic mixing
We begin with the dark photon. The dark photon field is a new massive U (1) gauge field that kinetically mixes with the Standard Model photon. Its Lagrangian has the general form where A µ and A µ are the four vector potential of the ordinary photon and dark photon field, and F µν and F µν describe their field strength separately. Additionally, the dark photon is characterized by a mass m A and the kinetic mixing is suppressed by a constant χ. Here J µ em =ψγ µ ψ corresponds to the electromagnetic charged current with charged fermions ψ.
There is no direct coupling between the dark photon and charged fermions in Eq. (36). Rather, an effective interaction is introduced through kinetic mixing between the photon and dark photon, so long as m A > 0. Equivalently, one can diagonalize the kinetic mixing term by redefinition of the photon field A µ → A µ + χA µ . To first order in χ we obtain the Lagrangian To find the dark photon absorption and emission amplitude in atomic transitions, it will be convenient to work with the effective Hamiltonian for electrons in the non-relativistic limit in the presence of the dark photon field. Substituting the interaction terms in Eq. (37) into the Dirac Lagrangian, we arrive at the Dirac equation for the electron where m e is the electron mass. It will be convenient to work in the Dirac basis and divide the spinor into a dominant component ψ d and a subdominant component ψ s , i.e. ψ = (ψ d , ψ s ) T . Separating out the time derivative from the Dirac equation, we find the Hamiltonian for the system where Here σ are the Pauli spin matrices and the electric potential Φ = A 0 . The non-relativistic Hamiltonian for this system is obtained by subtracting m e from both sides of Eq. (40) and combining the equations of motion for ψ d and ψ s , where this expression is valid in the non-relativistic limit where |H nr | m e and similarly e(A 0 + A 0 ) m e . Eq. (42) gives the effective Hamiltonian for an electron in the presence of electromagnetic and dark photon fields. Subtracting from it the standard QED Hamiltonian we single out the components introduced by the new dark photon field The first line of Eq. (43) reminds us of the standard QED Hamiltonian, with an additional gauge field, whereas the second line can be dropped if we work to first order in e and χ.
With the effective Hamiltonian in hand we are now prepared to compute the transition amplitude from initial atomic state |i to final state |f with the absorption or emission of a dark photon. This transition has the general form We start with the first term in Eq. (43), which describes an E1 (electric-dipole) type transition. Using the relation ∂ µ A µ = 0, which can be readily obtained from the Euler-Lagrange equation (38) for the dark photon, we find where p e is the momentum operator for the electron. Using the relation where H 0 = p 2 e /2m e is the unperturbed atomic Hamiltonian, we obtain where again we note that ω ik ≡ ω i − ω k as the energy difference between the initial and final atomic states. The first term in Eq. (46) is suppressed by a factor ∼ ω/m e and is therefore negligible compared to the second term. Hence we drop this first term for simplicity.
To evaluate the second term, we define the vector component of the dark photon field as A = |A | exp(iωt − ik · r), which will have energy ω = ω if . Because we will be considering dipole moments substantially smaller than the wavelength of the applied laser (or the wavelength of the dark photon), the dipole approximation exp(−ik · r) 1 applies. With this approximation where the d = er is the dipole operator. Following standard electromagnetic conventions we define the dark electric field as where Assuming |A | varies slowly in space and time, we obtain Decomposing the dark electric field into a transverse component E T and longitudinal component E L , Note that our proposed experiment is only sensitive to dark photons with sub-meV masses, since this is a necessary condition for coherent excitation of two stage atomic transitions in the target sample (see Section 4). While the dark photon masses will be m A meV, the transition energy ω ∼ eV , therefore we expect the effect of longitudinal component of the dark electric field to be subdominant since |E L |/|E T | m 2 A /ω 2 . Therefore, we only focus on the transverse component in computing the transition amplitude.
We could proceed to evaluate the transition amplitude induced by the second and third terms of Eq. (43). However, we note that the second term is characterized by M1 (magnetic dipole) type transition which is suppressed by 1/m e compared with M E1 . The third term vanishes in the leading order expansion of exp(−ik · r), and so we also neglect it hereafter.
Dark photon induced superradiance
Now that we have obtained the dark photon dipole transition amplitude, we are ready to study dark photon induced superradiance. We will focus on the transition between the excitation state |e and ground state |g of a pH 2 target. As previously noted, |g and |e will denote the 0th and 1st vibrational state for pH 2 where both of these have J = 0. Since |e and |g share the same parity, an E1 dipole transition is forbidden, but the transition between them can take place via two E1 transitions, by transitioning through an intermediate virtual state |j . Hence we will compute E1 × E1 transitions for which two particles are emitted, as shown in Figures 4a and 4b. As we mentioned in the previous section the interaction between dark photon and electron allows for this E1×E1 transition to occur via the emission of a dark photon and standard model photon |e → |g + γ + γ along with the standard two photon emission process |e → |g + γ + γ, illustrated in Figures 4a and 4b. These two processes will reinforce each other in a coherently excited atomic medium, since the emission of a dark photon can trigger and amplify the two photon emission process, and vice versa. To demonstrate this mutual reinforcement, we shall derive the evolution equations of the dark photon and photon fields during deexcitation.
Maxwell-Bloch equations
First we will reformulate the Maxwell-Bloch equations as they were derived out in Section 2, now including the dark photon's effect on the electric dippole. As before, we denote the spin m J = ±1 states as |j ± . In addition to the two photon fields E 1 and E 2 , we define a dark photon E propagating in the positive z directioñ Because we are only treating the transverse component of the dark photon field, we take = T . We expect that to good approximation ω 1 = ω 2 = ω = ω = ω eg /2, since this is already required for coherence of the excited atomic state. We again write the pH 2 wave function as the superposition of atomic states For the sake of simplicity, we will set δ = 0 in the remainder of this treatment, which amounts to assuming that the atoms, lasers, and dark photon field are in phase over the target volume for timescales shorter than the decoherence time, T 2 ∼ 10 ns. For a full discussion of the physical requirements for ∼ 10 ns decoherence times, and the loss of coherence as δ is increased, see Section 2. The full interaction Hamiltonian is then The Schrödinger equations will now include terms proportional to the dark photon field, where as in Section 2 we absorb spatial dependence into overbarred fields,Ē 1 = E 1 e −iωz , E 2 = E 2 e iωz andĒ = E e ikz . Note also that we have left implicit the sum over all intermediate states j in Eq. (59) and Eq. (60). Integrating Eq. (57) and Eq. (58) over t, using the Markovian approximation, and imposing the initial condition c j ± ,0 = 0, Applying Eq. (61) and Eq. (62) in Eq. (59) and Eq. (60) and using the slowly varying envelope approximation, we obtain the equation for the two-state system in the presence of a dark photon, where the 2 × 2 matrix is the effective Hamiltonian (H ef f ), and its components are where we have defined the dipole couplings a ee , a gg , and a ge as before. In contrast to Section 2, we now also define which quantifies the relative phase between the polarization of the photon field and the dark photon field. As before we introduce the density matrix and add relaxation terms to obtain the Maxwell-Bloch equations where T 1 and T 2 are relaxation and decoherence time, respectively. We can expand Eq. (66) to make manifest oscillations in Ω eg here assuming ω k. From Eq. (69) we also need to decompose ρ ge correspondingly We note that ρ − ge only comes from the atomic excitation due to the absorption of E 2 and E or two dark photons, and the coherence developed in these processes is small. Thus, to leading order we can drop the second term in Eq. (71) and assume no spatial phase in ρ ge .
Field equations
The Bloch equations we have derived in the previous section show the evolution of the population of ground state and the excitation state in the presence of electric and dark electric fields. Now we would like to see how these fields evolve as the population changes. It is straightforward to obtain from Eq. (37) the field equations There is no free electric charge in the target and J em can be identified as the polarization current density determined by the polarization field where n is the number density of pH 2 . We recall the definition of E in Eq. (48) and take the time derivative on both sides of Eq. (72) and Eq. (73) to obtain where i = 1, 2 represent different electric fields. The polarization field arises from the dipole moment in the atomic transition wherẽ Note thatẼ 1 andẼ 2 propagate in opposite directions with opposite spin angular momenta, the microscopic polarization that sources these fields is also different. Accounting for the conservation of angular momentum we have We work in the limit where m A ω so approximately ω k. We can substitute c j± as given in Eq. (61) and Eq. (62) into Eqs. (78) to (80) and keep only the terms containing e ±iωt (to match the left hand side of Eq. (75) and Eq. (76)) to obtain By matching the oscillation phases of the electric fields and the microscopic polarization and using the slowly varying envelope approximation, we arrive at the field equations for The first terms on the right hand sides of these equations, which are proportional to a ee and a gg , do not affect the transition from excited to ground states, but rather describe absorption and reemission of photon or dark photons propagating in the medium. More importantly, the second terms on the right hand sides of the above equations, proportional to a eg , describe the production of electromagnetic fields via excited to ground state transitions of the atoms. Altogether, E 1 can be amplified by seed E 2 and E fields, and correspondingly, E 2 and E are amplified by the E 1 field through transitions. For our purposes, we are most interested in the fact that E will amplify E 1 and E 2 in these equations, which forms the basis for our dark photon detection proposal.
Bloch vector
Defining the Bloch vector as in Section 2, from Eq. (68) and Eq. (69) we obtain where the spatially averaged visible and dark photon fields are together defined as and we assume a eg is real. Note that the expression above has assumed thatĒ 1 andĒ 2 are in phase, which is appropriate for atoms pumped by phase-matched lasers. Due to the smallness of the mixing parameter χ, the dark photon field itself will not drive the evolution of the state population in the system. However, the dark photon can trigger the production of E 1 and E 2 , which in turn trigger additional photon production. Therefore, while it would be safe to drop the dark photon component in Eqs. (86) to (88), we retain it in numeric computations for the sake of rigor. We must retain the dark photon component in the field equations. Using Eqs. (83) to (85) we obtain In the experimental setup we soon describe, after the atoms are pumped into their excited states, the laser fields will be shut off so that |Ẽ 1 | = |Ẽ 2 | ≈ 0. It is clear from Eq. (89) that in this circumstance, a non-zero dark electric field E will be essential to develop the E 1 field, which will in turn trigger additional two photon emission.
Experimental setup
Our proposed experimental setup is schematically illustrated in Figure 5. A continuous laser beam is injected into a resonant cavity, which enhances the laser's probability to oscillate into dark photons. After hitting the wall, photons are stopped and only dark photons are allowed through. The target (pH2 for example) is pumped into a coherently excited state as detailed in Section 2. As it propagates through the target, the dark photon field triggers atomic deexcitation via the emission of a dark photon and a standard model photon. The electric field generated from the first deexcitation subsequently triggers two photon emission, producing back-to-back photons with the same frequency. These photons trigger further deexcitations, detected at both ends of the target vessel.
[Figure 5: Schematic view of the proposed experiment. First, the pH2 sample is coherently excited to energy ω_eg by back-to-back pump lasers (pump lasers not shown). The excited atoms' E1 dipole transitions are parity forbidden, meaning the atoms are metastable over the ∼ten nanosecond integration time of the experiment. On the other hand, the emission of two ω_eg/2 energy photons in an E1 × E1 transition is allowed. As in light-shining-through-wall experiments, a laser is fed into a resonant cavity to increase the dark photon conversion probability. In this case, the laser will operate at energy ω_eg/2, so that after passing through the wall, dark photons act as a trigger field for the emission of back-to-back photons which are then observed by detectors labeled D1 and D2.]
There are two primary advantages to conducting the experiment in the manner described above. First, the pH2 sample's response to a dark photon field can be precisely determined by passing a very weak laser field through the sample, where low-power lasers can directly test the response to weakly coupled dark photons. Then, in discovery mode, where visible photons are prevented from passing through the wall, the two photon emission process would presumably only occur if triggered by a dark photon over the ∼10 nanosecond coherence time, because the spontaneous deexcitation process is negligibly slow, as detailed in Section 4.4. Photons produced by dark photon induced transitions would be emitted back-to-back and at the frequency ω = ω_eg/2. Altogether this provides a powerful background rejection method, since the rate for spontaneous two photon emission is very small. This can be contrasted with more ambitious experiments utilizing two photon emission processes [36,[41][42][43]]. In those experiments, the signal (one photon and either two neutrinos or an axion) will need to be distinguished from a sizable two photon emission background, since both processes are triggered. Therefore it is plausible that the experiment we have outlined is an intermediate step that could be reached while working towards the proposals laid out in [36,[41][42][43]].
Dark photon induced transition rate
To begin with, let us quote the estimated rate for the emission of γ_1 and γ_2 in our proposed experiment. First, we note that without both the coherent enhancement and the exponential amplification of photon fields by the atomic medium that will be discussed shortly, the |e⟩ → |g⟩ + γ + γ′ transition rate depicted in Figure 4a is rather slow. To satisfy the coherent amplification condition, we require (ω − k)L ≪ 1, where L is the length of the target, which is the longest dimension of the target volume. Under these conditions (see Appendix B for a full derivation), the naive rate for dark photon-induced two stage transitions is given by Eq. (93), where ω_1 is the cavity laser frequency, equal to the dark photon frequency ω′, N_pass is the number of cavity reflections, P_L is the cavity laser power, l is the cavity length, A is the area of the excited atomic target (limited by the pump lasers' beam width), and n is the target number density. Using this naive estimate results in an unobservably small rate, because it does not account for the development of electromagnetic fields in the atomic medium (cf. the field equations given in Eqs. (89) through (91)). The predicted rate for our benchmark experimental and model parameters, using a dark photon mixing χ = 10^-9, m_A′ = 10^-4 eV, ω′ = ω_1 = 0.26 eV, N_pass = 2 × 10^4, l = 50 cm, η = 1, pH2 number density n = 10^21 cm^-3, target area A = 1 cm², target length L = 30 cm, and parahydrogen dipole coupling a_eg = 0.0275 × 10^-24 cm³, is Γ ≈ 10^-5 s^-1. This emission rate is unobservably low considering that each experimental run is expected to last about 10 ns.
However, even a small production rate for E_1 can be exponentially enhanced in coherently prepared atoms. As detailed in Appendix A, the transition rate for producing two photon pairs is exponentially enhanced as the electromagnetic field strength grows. The dependence on E_1²E_2² in Eq. (94) shows that the growth of the signal fields will be exponential after the dark photon establishes a small E_1 seed field. A similar amplification has been observed to be as large as 10^18 compared with spontaneous emission [32]. We expect an even larger amplification factor to be achieved in our benchmark experimental setup.
Numerically simulating field development
When simulating the development of electric fields in coherently prepared atoms, it will be convenient to rescale the spacetime coordinates and the electric fields to be dimensionless. We define rescaled variables in which β represents the typical length and time scale for the evolution of the system and ω_eg n is the energy stored in the excited atoms; the Bloch equations and field equations can then be written in terms of these new variables. As mentioned before, the dipole couplings of parahydrogen have been measured to be a_gg = 0.90 × 10^-24 cm³, a_ee = 0.87 × 10^-24 cm³, a_eg = 0.0275 × 10^-24 cm³ [37]. For the relaxation and decoherence times, we take T_1 = 10³ ns and T_2 = 10 ns, respectively; for an extended discussion of coherence in preparations of pH2, see Section 2. The photon and dark photon energies are ω = ω_eg/2 ≈ 0.26 eV. Altogether this gives β = 0.092 (10^21 cm^-3/n) ns = 2.8 (10^21 cm^-3/n) cm and ω_eg n = 2.5 × 10^10 (n/10^21 cm^-3) W/mm².
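The energy scale quoted above can be checked quickly: the energy density stored in fully excited molecules is ω_eg × n, and expressing it as an intensity by multiplying by c reproduces the quoted 2.5 × 10^10 W/mm². The factor of c is our assumption about the intended units of the quoted figure.

```python
e_charge = 1.602176634e-19   # J per eV
c = 2.99792458e10            # cm/s

omega_eg = 0.52              # eV, pH2 first vibrational transition (two 0.26 eV photons)
n = 1e21                     # cm^-3

energy_density = omega_eg * e_charge * n   # J/cm^3 if every molecule is excited
intensity = energy_density * c             # W/cm^2, assuming the quoted figure includes a factor of c

print(f"stored energy density ~ {energy_density:.1f} J/cm^3")
print(f"corresponding intensity scale ~ {intensity / 1e2:.1e} W/mm^2")
```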
A typical target vessel is 10 to 100 cm long. Here we assume a vessel that is 30 cm long, which is smaller than the expected length scale over which the pH2 is coherent. If we assume all atoms are initially prepared in the coherent state, then r_1 = 1 across the target. We will also consider smaller values of r_1 = 0.1, 0.5, 0.9, which correspond to fewer atoms in the coherent state. With the aid of a resonant cavity, the transmission probability for a dark photon to shine through the wall is proportional to 2(N_pass + 1) [54,55], where l is the size of the cavity and N_pass is the number of reflections the laser undergoes in the dark-photon-generating cavity. We assume l = 50 cm and N_pass = 2 × 10^4 in our benchmark setup. These values are in line with what is currently attainable at the ALPS II experiment [17]. The initial dark photon field power in the target volume is estimated from the cavity laser power; for our benchmark setup we assume P_L = 1 W mm^-2. As mentioned in Section 3.2.1, η is determined by the relative phase between the polarization of the photon and dark photon fields; without loss of generality we set it to unity here. We show in Figures 6 through 8 the time evolution of the system. In these figures we assume all the pH2 atoms are initially prepared in the coherent state, i.e. r_1 = 1, r_2 = 0, and r_3 = 0, across the target. We also assume a dark photon mass m_A′ = 0.1 meV.
As shown in Figure 6, r_1 and r_3 decay exponentially when no laser is present. In this case no initial dark photon field is pumped through the wall, and so spontaneous deexcitation dominates the evolution of the system. We note that Figure 7, which shows no substantial E_1, E_2, or E′ field developing when P_L = 0, has not included the effect of spontaneous two photon deexcitations, which are expected to be negligibly small; see Section 4.4.
This scenario changes dramatically in the presence of dark photons produced by a laser. Assuming laser power P_L = 1 W mm^-2 and mixing χ = 10^-3, a sudden drop takes place in r_1 and r_3 around 10 ns. This drop corresponds to decay and release of the target's energy through production of E_1 and E_2, as well as a minor enhancement of the dark photon field E′. The dynamics can be explained as follows. The initial dark photon field induces a deexcitation via E_1 and E′ (Figure 4a illustrates this process); this E_1 field then triggers additional two photon deexcitation, producing E_1 and E_2 symmetrically (see Figure 4b). The growing E_1 and E_2, when large enough, cause abrupt decoherence and deexcitation, which in turn gives rise to additional energy release in the form of E_1 and E_2. As can be identified from Eq. (85), E′ will also be generated by E_1-induced transitions, at a rate suppressed by χ.
[Figure: As in Figure 6, no laser is present for the dotted lines and a 1 W mm^-2 laser with χ = 10^-3 is assumed for the dashed lines. We also assume the same initial Bloch vectors as in Figure 6. |E_1|² (red) is taken at the left end of the target, z = 0 cm, while |E_2|² (blue) and |E′|² (black) are taken at the right end of the target, z = 30 cm.]
The transitions are less explosive when χ = 10^-9, as illustrated in Figure 6 and Figure 8. The deviations of r_1 and r_3 from spontaneous decay are barely observable, and the peak intensities of E_1 and E_2 are relatively low compared to the χ = 10^-3 case. In this case, the dark photon has induced the generation of an observable but small quantity of E_1 and E_2 photons. The dark photon field remains essentially constant, since the E′ regenerated from E_1 is too small to be observed.
Spontaneous two photon emission background
We now consider a possible background from spontaneous deexcitation and emission of photons from the cold atoms over the runtime of our experiment (around 10 ns). We will find that this background is negligible. Since the transition from the excited state |e⟩ to the ground state |g⟩ is E1 forbidden, single photon deexcitation is only viable through higher order transitions. Note that we are only looking for signal photons with energy around ω = ω_eg/2, because our signal photons are expected at this frequency. The background from spontaneous two photon emission has a rate given in Appendix A, where z = ω_1/ω_eg is the fraction of the energy carried by one of the two photons in the transition. We assume an uncertainty Δν = 100 MHz in the frequency measurement, which translates to Δz = 8.0 × 10^-7. For a sample target with length L = 30 cm and cross-sectional area A = 1 cm², the uncertainty in the emission solid angle is ΔΩ/4π = A/(4π(L/2)²) = 3.5 × 10^-4. The two photons from the spontaneous decay process can be emitted in any direction; since we only detect photons at the ends of the atomic sample, the fraction of background photons which reach the detectors is 2ΔΩ/4π. Given the target number density n = 10^21 cm^-3 and complete coherence (ρ_eg = 0.5), the expected number of spontaneous two photon events in the signal window is negligible: over the course of any reasonable number of experimental repetitions, we should not expect a single background event from spontaneous two photon deexcitation processes.
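The two small suppression factors quoted above (Δz and the detector solid-angle fraction) follow directly from the stated measurement uncertainty and target geometry. A minimal numerical check, using only quantities given in the text:

```python
import math

omega_eg_eV = 0.52            # transition energy (two 0.26 eV photons)
delta_nu_Hz = 100e6           # assumed frequency measurement uncertainty
L_cm, A_cm2 = 30.0, 1.0       # target length and cross-sectional area

h_eV_s = 4.135667696e-15      # Planck constant in eV*s
nu_eg = omega_eg_eV / h_eV_s  # transition frequency in Hz

delta_z = delta_nu_Hz / nu_eg                           # fractional energy bin of one emitted photon
solid_angle_frac = A_cm2 / (4 * math.pi * (L_cm / 2) ** 2)

print(f"delta_z ~ {delta_z:.1e}  (text: 8.0e-7)")
print(f"dOmega/4pi ~ {solid_angle_frac:.1e}  (text: 3.5e-4)")
```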
Results and sensitivity
The signature of the proposed dark photon search is the symmetric emission of photons with frequency ω = ω_eg/2 at both ends of the target.
[Figure 10: Sensitivity of our proposed experiment assuming the benchmark setup with light-shining-through-wall laser power P_L = 1 W mm^-2, target area A = 1 cm², number of cavity reflections N_pass = 2 × 10^4, cavity length l = 50 cm, target length L = 30 cm, pH2 target number density as indicated, and where we assume N_rep = 10³ exposures, each lasting ∼ten nanoseconds. The constraints from other dark photon experiments, astrophysics, and cosmology are shown for comparison, e.g. Coulomb [56,57], CMB [58,59], CROWS [60], GammeV [61], ALPS [16,17], and stellar constraints [10,11,15,17]; see [62] for a summary of these bounds. The black lines show the sensitivity of our proposed experiment for the pH2 number densities indicated and coherence factors as indicated to the right of each sensitivity curve (see Section 2 for discussion of coherence in parahydrogen).]
The number of signal photons emitted during one experimental trial run (of ∼10 ns) is
where A is the area of the target and t is the time duration of the experiment. The experiment can be repeated many times to accumulate signal photons. The Bloch equations and field equations derived in Section 3.2 are highly nonlinear, but we infer from Eq. (89) that E_1 ∝ χE′, and therefore the number of photons emitted scales as χ⁴ multiplied by N_rep, the number of repetitions of the experiment. To see in what regime this scaling holds, we show in Figure 9 the number of photons produced as a function of the mixing χ, assuming different laser powers. There is an upper bound on the number of signal photons, which is saturated if all of the excited atoms are deexcited. It is clear from the figure that before saturation N_s is proportional to P_L and to χ⁴. As χ becomes large enough, a significant amount of the energy stored in the target is released, and one gains very little by increasing the mixing or the laser power. We note also that N_rep, N_pass + 1, and sin²(m_A′² l/ω) will have the same scaling as P_L in determining the number of signal photons emitted.
To estimate the sensitivity of our proposed experiment, we require the emission of at least ten photon pairs after a certain number of excitation/deexcitation repetitions. As a benchmark we take the laser power P_L = 1 W mm^-2, the target area A = 1 cm², the number of cavity reflections N_pass = 2 × 10^4, the cavity size l = 50 cm, and the number of repetitions N_rep = 10³. In the regime where a fraction of the pH2 deexcites, the number of emitted photons can be estimated from an expression normalized assuming n = 10^21 cm^-3.
We show the sensitivity of our proposed experiment in Figure 10. Also shown in the figure are the light-shining-through-wall experiments and the cosmological and astrophysical bounds reviewed in [62]. The coherent amplification condition we have assumed throughout, (ω − k)L ≪ 1, requires that the dark photon mass not be too large; this restricts m_A′ ≲ 0.6 meV. As a consequence we have truncated the mass sensitivity at one meV. As seen from the figure, over the mass range 10^-5 to 10^-3 eV our proposed experiment appears rather sensitive to dark photon kinetic mixing.
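The ∼0.6 meV ceiling quoted above follows from the coherence requirement with ω − k ≈ m_A′²/(2ω) for a relativistic dark photon; setting (m_A′²/2ω)L ∼ 1 with ω = 0.26 eV and L = 30 cm reproduces the number. A quick check under that approximation:

```python
import math

hbar_c_eV_cm = 1.973269804e-5   # eV*cm

omega_eV = 0.26   # photon / dark photon energy
L_cm = 30.0       # target length

L_inv_eV = L_cm / hbar_c_eV_cm                    # target length in eV^-1
m_max_eV = math.sqrt(2.0 * omega_eV / L_inv_eV)   # from (m^2 / 2*omega) * L ~ 1

print(f"coherence condition holds for m_A' below ~ {m_max_eV * 1e3:.2f} meV")
```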
Conclusions
We have studied a new method to detect dark photon fields using resonant two photon de-excitation of coherently excited atoms. Our proposed experiment combines dark photon production techniques demonstrated by light-shining-through-wall experiments with a new detection method: dark photons triggering two photon transitions in a gas of parahydrogen coherently excited into its first vibrational state. The potential coupling sensitivity to dark photons we project in our benchmark setup is orders of magnitudes beyond present limits for µeV − meV mass dark photon fields.
A major technical hurdle to realizing our proposal will be the preparation of suitably coherent samples of cold parahydrogen using counter-propagating laser beams. As we examined in Section 2, the coherence times and pH2 densities necessary have already been achieved in laboratory conditions. It remains to suitably increase the fraction of coherently excited pH2 by using more powerful lasers and colder parahydrogen, as we explored in Section 2. However, even if complete parahydrogen sample coherence is not attained, it would still be possible to realize our proposal by increasing the density of parahydrogen, as explored in Section 4. Indeed, although we have not shown it in Figure 10, the setup we propose with an increased pH2 number density (2 × 10^21 cm^-3), assuming completely coherent atoms (r_1 = 1), can probe kinetic mixings down to χ ∼ 10^-15. It may also be possible to realize a similar proposal to the one laid out here using two photon nuclear transitions and free electron lasers; this might permit detecting dark photons at masses greater than an eV.
Our setup relies on the nonlinear development of electromagnetic fields in coherent atoms, and so our sensitivity estimates have relied on numerical simulations of dark photon and photon cascades in pH2. However, as explained in Section 4, the proposed experiment will allow dark photon detection to be directly calibrated using a low power trigger laser as an equivalent stand-in for the dark photon field itself. While for this reason we have focused on the detection of dark photons in this article, very similar methods could be used to detect axions and other light, electromagnetically coupled particles. We leave this and other uses of multi stage atomic transitions to future work.
A Coherence and nonlinearity in two photon emission
Let us estimate the transition rate of the two-photon emission process, as illustrated in Figure 4b. The transition matrix for the |e⟩ → |g⟩ transition involves the time-ordering operator T, and we write the electric fields in their general form, where ω_m and k_m are the energy and momentum of the emitted photons.
Integrating over t yields an expression in which, as before, we have defined ω_ik = ω_i − ω_k and d_ik = ⟨i| −d · ε* |k⟩. Here k^a_ej is the change in the momentum of a specific pH2 molecule after the transition and r_a is the spatial position of that pH2. We can perform the second time integral and obtain an amplitude with
$$M_a = \frac{d_{gj}\, d_{je}}{\omega_{je} + \omega_1}\,\frac{E_1 E_2}{4}\, e^{-i(k_1 + k_2 - k^a_{eg})\cdot(r - r_a)} = \frac{a_{eg}}{4}\, E_1 E_2\, e^{-i(k_1 + k_2 - k^a_{eg})\cdot(r - r_a)} .$$
First we consider the case in which the pH2 is not emitting coherently, which we will call spontaneous two-photon deexcitation. In the case of spontaneous two-photon deexcitation, each pH2 emits two photons with frequencies that are not necessarily ∼ ω_eg/2, in contrast with two-photon emission induced by a trigger laser (where the trigger laser frequency used in earlier sections of this document matched the pump laser frequencies, all of these being ω_eg/2). In the spontaneous emission case, we sum up the contribution from all pH2, which gives the emission rate, where we have explicitly replaced E_m by 2ω_m/V for m = 1, 2, and N_e is the number of spontaneous emitters. Since the exponential phase is random for each molecule, the product of the phases from different molecules will sum up to zero in the expansion of the square in Eq. (115). Carrying out the integral, we find
$$\frac{d\Gamma_{\rm sp}}{d\omega_1} = \frac{1}{(2\pi)^3}\, N_e\, |a_{eg}|^2\, \omega_1^3\, \omega_2^3 .$$
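As a quick numerical illustration of the spectral shape implied by this rate, the short Python sketch below evaluates the ω_1^3 ω_2^3 dependence with ω_2 = ω_eg − ω_1 fixed by energy conservation; the overall normalization is left out, so only the shape is meaningful.

```python
import numpy as np

# Spontaneous two-photon spectrum shape: dGamma/d(omega_1) ~ omega_1**3 * omega_2**3,
# with omega_2 = omega_eg - omega_1 fixed by energy conservation.
omega_eg = 1.0                      # work in units of the |e> -> |g> splitting
omega_1 = np.linspace(0.0, omega_eg, 101)
shape = omega_1**3 * (omega_eg - omega_1)**3

# The spectrum is symmetric about omega_eg / 2, where it peaks.
print(omega_1[np.argmax(shape)])    # -> 0.5 (in units of omega_eg)
```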
If we define z ≡ ω_1/ω_eg, Eq. (117) can be rewritten in terms of z; we have used this form to estimate the two-photon spontaneous emission background in Sec. 4.4. Second, we estimate the rate of two-photon emission for pH2 pumped and triggered in a manner which allows for macroscopic superradiance. In the presence of appropriately applied background fields, the pH2 molecules will tend to emit photons collectively with the same momenta. If the phase k^a_eg is random for every molecule, the product of the phases would still cancel as we have derived before and the rate would be proportional to N_e; however, if the molecules are pumped into the excited state coherently (by counter-propagating lasers, in the setup we have considered), we can drop the superscript a in k^a_eg and turn the sum into a spatial integral, i.e.
where n is the number density of the target and ρ_ge is the fraction of molecules in the coherent state. In the special case where we use two counter-propagating lasers with the same frequency to pump the molecules, k_eg ≈ 0, although of course this can be spoiled by the lasers' linewidth and other experimental factors discussed in Section 2. For a dense and large enough target, the spatial integral in r_a turns into a delta function, giving an expression in which N is the total number of pH2 in the target. In the case k_eg ≈ 0, the delta function forces k_1 + k_2 = 0, meaning that the two photons emitted superradiantly have to be back-to-back and have equal frequency. Since the delta function is squared, we replace one factor by the target volume V. Evaluating the integrals yields Eq. (121), which shows that the transition rate for two-photon superradiance is proportional to N^2 if the coherence conditions are met. This can be compared with (out-of-phase) spontaneous two-photon emission described in Eq. (118), where the rate is proportional to N instead. We also see that the rate grows nonlinearly with E_1 and E_2, the strength of the background fields. At the onset of superradiance, the emission rate is determined by the power of the trigger laser fields. As the photons from the deexcitation increase the strength of the electric fields, the deexcitation rate becomes larger and larger. This exponential growth is clearly seen in Figure 7.
B Estimate of dark photon triggered two stage transitions
Let us now move on to estimate the emission rate of γ_1 and γ_2 in our proposed experiment, as depicted in Figure 5. Consider the process illustrated in Figure 4a. First, we write the transition matrix for the deexcitation from |e⟩ to |g⟩ via the emission of a dark photon and a photon in the dark photon background. For χ = 10^-9, m_{A'} = 10^-4 eV, ω = ω_1 = 0.26 eV, N_pass = 2 × 10^4, l = 0.5 m, η = 1, pH2 number density n = 10^21 cm^-3, target area A = 1 cm^2, length L = 30 cm, laser power P_L = 1 W/mm^2, and a_eg = 0.0275 × 10^-24 cm^3, the resulting emission rate is relatively low considering the experimental trial time (determined by the decoherence time) of about 10 ns. But the signal is enhanced by stimulating nonlinearly growing two-photon superradiant transitions. This is discussed in Sec. 4.2. | 13,381.2 | 2019-09-17T00:00:00.000 | [
"Physics"
] |
Network efficiency of spatial systems with fractal morphology: a geometric graphs approach
The functional features of spatial networks depend upon a non-trivial relationship between the topological and physical structure. Here, we explore that relationship for spatial networks with radial symmetry and disordered fractal morphology. Under a geometric graphs approach, we quantify the effectiveness of the exchange of information in the system from center to perimeter and over the entire network structure. We mainly consider two paradigmatic models of disordered fractal formation, the Ballistic Aggregation and Diffusion-Limited Aggregation models, and, complementarily, the Vicsek and Hexaflake fractals and the Kagome and Hexagonal lattices. First, we show that complex tree morphologies provide important advantages over regular configurations, such as an invariant structural cost for different fractal dimensions. Furthermore, although these systems are known to be scale-free in space, they have bounded degree distributions for different values of a euclidean connectivity parameter and, therefore, do not represent ordinary scale-free networks. Finally, compared to regular structures, fractal trees are fragile and overall inefficient, as expected; however, we show that this efficiency can become similar to that of a robust hexagonal lattice, at a similar cost, by considering only a very short euclidean connectivity beyond first neighbors.
Introduction
The functional features of complex spatial systems depend upon a non-trivial relationship between the space-dependent structure (physical morphology) and the space-independent counterpart (network topology) [1]. Among the diverse physical structures found in natural and social systems, disordered fractals represent important systems of study due to the universality of their growth processes and morphological characteristics. For example, the fractal branching observed in living systems, such as neurons, bacterial colonies or hyphal networks, has also been observed in out-of-equilibrium physical phenomena, such as dielectric breakdown, viscous fingering, mineral deposition and colloidal aggregation [2,3,4,5]. In particular, the growth dynamics of these physical phenomena fall into the Laplacian growth model, also known to belong to the Diffusion Limited Aggregation (DLA) universality [5,6]. The DLA fractal has a well-defined fractal dimension according to the embedding space [7,6] and, due to the versatility of its pattern-formation mechanism, it has also been used for modelling complex spatial systems, from neurons [8] to cities [9].
In this article we explore the non-trivial relationship between topology and morphology of spatial networks with radial symmetry and different degrees of disordered fractal morphology. Under a geometric graphs approach in which parts of the spatial system are connected via a euclidean connectivity parameter [10,11,1], we quantify the effectiveness of the exchange of information, from center to perimeter and over the entire network structure. Our analysis considers only two-dimensional systems of identical particles assembled by stochastic processes. The main models are the DLA and Ballistic Aggregation (BA) disordered fractals. For comparison purposes we also considered two deterministic structures, the Vicsek and Hexaflake fractals, and the non-fractal ordered systems, the Hexagonal and Kagome lattices, constructed with similar radial characteristics. The analysis is divided into two parts: first, in a nearest-neighbors euclidean approximation, we developed a structural characterization of the system's capabilities to explore the plane in terms of the range (the maximum linear extension with respect to the origin), the coverage (the ability to cover the plane in all directions), the structural cost (assembly connections), and configurational complexity (local connectivity); second, we quantified the effectiveness of the exchange of information in terms of a center-perimeter communication ratio and the network efficiency (overall network communication) for the nearest-neighbor approximation, and then we show how the efficiency and other properties of the system can be improved at a very low cost by leveraging the geometric graphs approach beyond first neighbors. Finally, we present a discussion of our main findings and their potential applications.
In the following, the term morphology refers to the physical or space-dependent structure (how things are distributed in space), while the terms topology or network refer to the space-independent structure (how things are connected with each other). In this way, the structure of spatial networks has both physical and topological properties [12,13].
Results
A visualization of the spatial systems here considered is presented in Fig. 1. The physical structure of the systems is composed of identical particles forming connected structures via direct contact interactions. Each particle has a diameter equal to one unit, leading to sets of non-overlapping disks with centers at one unit of distance or, equivalently, point distributions where the shortest distance between two neighboring points is one unit. For the network representations, undirected and unweighted networks are created under a geometric graphs approach: links are established pair-wise for all N = 1.5 × 10^4 particles if the euclidean distance between two particles (nodes), d_ij, is equal to or less than a given distance r, that is, d_ij ≤ r, where r is the euclidean connectivity parameter. For details of the systems' generation process and the mathematical definitions of the spatial quantities and network metrics used, please see Methods.
[Figure 1 panel label recovered from image residue: Hexagonal lattice]
Range, coverage, cost and configurational complexity (r = 1)
Under the previous considerations, we first considered the case r = 1, which provides the exact network representation for direct contact interactions. The first important result is that for r = 1, the physical branching morphology of BA and DLA corresponds to that of a tree network (no loops at the micro-level over all the structure [14]). This result might seem self-evident or could be taken for granted due to a bias related to the cluster's physical macro shape, but in fact it is not trivial. To formally prove this, we quantified the average clustering coefficient, ⟨C⟩, and found it to be technically zero for BA, DLA and even for the Vicsek fractal, as shown in Table 1. One can observe in Table 1 that ⟨C⟩ is not exactly zero for BA and DLA; this is because for every 10^4 aggregated particles a couple of them can be found forming triangles with their neighbors, due to the stochastic nature of the models. Furthermore, as is well known, an important disadvantage of a tree network structure is that the network becomes disconnected very easily. Nevertheless, this fragility can be dealt with in many ways, as will be shown further on. The characterization of the morphological features of the network and its capacity to explore the plane was done in terms of the range, coverage, structural cost and configurational complexity, given the same number of particles (N = 1.5 × 10^4). The range is defined as the maximum euclidean distance with respect to the origin (linear extension), and the coverage is regarded as the ability to cover the plane in all directions (a space-filling property). The range is quantified by the characteristic radius, R, defined as the distance of the furthest particle in the cluster with respect to the origin, averaged over the ensemble. For the coverage, we considered the fractal dimension, D, where D → 1 would be associated with anisotropic linear structures, and D → 2 with isotropic plane-filling structures. This dimension is calculated from the radius of gyration, R_g (see Methods).
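A quick way to check the tree property and the near-zero clustering reported here is sketched below in Python with NetworkX (the library named in the Methods); the graph G is assumed to already hold the contact network for r = 1, and the toy graph at the end is only illustrative.

```python
import networkx as nx

def tree_and_clustering_report(G):
    """Report whether a contact network is a tree and its average clustering.

    For an ideal branching cluster we expect L = N - 1 edges, no cycles,
    and an average clustering coefficient of (essentially) zero.
    """
    n, m = G.number_of_nodes(), G.number_of_edges()
    return {
        "is_tree": nx.is_tree(G),              # connected and exactly N - 1 edges
        "edges_minus_expected": m - (n - 1),   # 0 for a tree
        "avg_clustering": nx.average_clustering(G),
    }

# Example with a toy branching graph (a path with one bifurcation).
G = nx.Graph([(0, 1), (1, 2), (2, 3), (2, 4)])
print(tree_and_clustering_report(G))
```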
As expected, we found that the linearity induced by the branching morphology of tree-like systems (BA, DLA, and Vicsek) provides them with a range greater than that of the lattices (Hexagonal, Kagome, and Hexaflake), which, on the other hand, have the advantage of filling the plane more homogeneously due to their tight packing. Notably, the Hexagonal, Kagome and BA systems have the same dimension, D = 2, despite having very different morphology. The Hexaflake and DLA fractals cover the plane with a similar scaling, D = 1.75 and D = 1.73, respectively, while Vicsek with D = 1.47 reflects its more linear structure. These results are put together in Fig. 2a, where we compare R/R_HEX to the corresponding fractal dimension, D. Here, one can observe that, although lattice systems have a high coverage, they have a short range. On the other hand, tree-like structures are more versatile: they have a longer range and span different dimensions, including the one of the embedding space.
The structural cost is quantified in a first approximation by the number of contacts among particles or edges in the network, L. In Fig. 2b, we show L as a function of N. All systems have a linear dependence, with L ≈ N for BA, DLA, and Vicsek; L ≈ 2N for Kagome and Hexaflake; and L ≈ 3N for the Hexagonal lattice. Notably, since L = N − 1 for any tree-like system (such as BA, DLA, and Vicsek), the number of connections required to build any tree-like system is independent of its fractal dimension.
As measures of configurational complexity we considered the degree distribution, p_k, average degree, ⟨k⟩, average sixfold bond-orientational parameter, ⟨Ψ_6⟩, and average clustering coefficient, ⟨C⟩. We found that the average degrees of tree-like systems are statistically equivalent (⟨k⟩ = 2), despite their very different global morphology. The same holds for the Kagome lattice and the Hexaflake fractal (⟨k⟩ = 4), despite their very different fractal/non-fractal nature. Variations in local complexity were also quantified with the average sixfold bond-orientational parameter, ⟨Ψ_6⟩, which equals 1 for ordered hexagonal systems and tends to 0 for non-hexagonal or disordered systems [15,16,17]. We found that for the Hexagonal, Kagome and Hexaflake systems ⟨Ψ_6⟩ = 1, whereas for the BA, DLA, and Vicsek systems ⟨Ψ_6⟩ = 0, regardless of their fractal dimension. These results are put together in Fig. 2c. A more detailed description of local configurations is given by the degree distribution (see Fig. 2d). For any particle in the plane, the degree, k, goes from one (k_min = 1) to six (k_max = 6). In this range, particles exhibit different local connectivity configurations, such as tips (k = 1), branches (k = 2) and bifurcations (k = 3) for trees, and regular connections for the lattices.
Center-perimeter communication and efficiency (r ≥ 1)
We characterized the effectiveness of information exchange from two perspectives: from center to perimeter and across the whole network. Considering first r = 1, we quantified the network center-perimeter ratio, η_max, the average shortest-path length, ⟨l⟩, and the average communication efficiency, ⟨E⟩. The network center-perimeter communication ratio is η_max = l^max_0 / R, where l^max_0 is defined as the length of the path that connects the particle at the origin (center or source) of the system with the particle whose distance corresponds to R (the perimeter). From Fig. 2e, we found that η_max > 1 for the BA and DLA fractals, due to their stochastic branched morphology (center-perimeter paths deviate strongly from a straight line), whereas for the deterministic fractals and the lattices η_max = 1, given their ordered morphology (center-perimeter paths form straight lines). For the average shortest-path length, ⟨l⟩ (see Fig. 2f), we found that tree-like systems have long shortest-path lengths due to their branches and centralized structure, that is, for two particles in different branches to communicate, the path must go through the origin, whereas for lattice systems the average shortest-path lengths are smaller due to their decentralized structure. This is consistent with the value of the average communication efficiency, ⟨E⟩, which depends on the inverse of all the pair-wise distances and reflects the fact that the farther apart two points in the network are, the less efficiently they communicate [18]. Results for ⟨E⟩ as a function of the fractal dimension, D, are shown in Fig. 2g. The efficiency of tree networks can be improved by adding connections among nodes beyond contact interactions or euclidean first neighbors, which in turn affects the fragility of the tree structures, as previously explained. Following our geometric graphs approach, we introduced more links by allowing the value of the connectivity parameter to be greater than one unit, that is, r ≥ 1. This criterion is uniformly applied pair-wise to all N = 1.5 × 10^4 particles in the system. As shown in Fig. 3a, adding more links beyond first neighbors has a direct effect on quantities such as the clustering coefficient, which increases, indicating more resilient or robust networks. Remarkably, although the BA and DLA fractals are known to be scale-free in space, they possess bounded degree distributions for different values of the euclidean connectivity parameter and, therefore, do not represent ordinary scale-free networks (see Fig. 3b). As r increases, the degree distribution broadens and spreads over a bounded degree range; however, as r → R the distribution localizes again, being exactly p_k = 1 for r = 2R.
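The communication metrics used above can be computed directly with NetworkX; the sketch below assumes a graph G whose nodes carry their (x, y) coordinates in a "pos" attribute and that the node labels of the central and perimeter particles are known. The function and attribute names are illustrative assumptions, not the paper's actual code.

```python
import math
import networkx as nx

def communication_metrics(G, center, perimeter):
    """Center-perimeter ratio eta_max, average shortest-path length and efficiency.

    Assumes G is connected and every node has a "pos" attribute with (x, y).
    """
    # Euclidean distance of the perimeter particle from the origin (the range R).
    R = math.hypot(*G.nodes[perimeter]["pos"])
    l_max_0 = nx.shortest_path_length(G, center, perimeter)   # hops center -> perimeter
    return {
        "eta_max": l_max_0 / R,
        "avg_shortest_path": nx.average_shortest_path_length(G),
        "avg_efficiency": nx.global_efficiency(G),            # mean of 1 / l_ij
    }
```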
Adding more links implies an additional structural cost. As such, we are interested in the cut-off value of the connectivity parameter, r_c, for which the efficiencies of the BA and DLA fractals become equal to or greater than the efficiency of the ordinary Hexagonal lattice at r = 1, that is, ⟨E⟩/⟨E⟩_HEX = 1, and in the structural cost that this entails. In Fig. 3c we show the efficiency, ⟨E⟩, of the BA and DLA fractals relative to the efficiency of the Hexagonal lattice. As expected, the efficiency improves for r > 1 due to the reduction of all path lengths in the network, including the one related to the center-perimeter measure (see inset in Fig. 3c). Notably, the cut-off values occur at very small values of the connectivity parameter: BA at r_c ≈ 2 and DLA at r_c ≈ 4. At these cut-off values, the center-perimeter ratio, η_max, and the clustering coefficient, ⟨C⟩ (see inset in Fig. 3d), are almost equal to or even larger than the corresponding values of the Hexagonal lattice.
Regarding the structural cost, in Fig. 3d we show the number of edges, L, relative to the number of edges of the Hexagonal lattice, L_HEX, as a function of r. Considering the cut-off values, r_c, for BA and DLA, we found that building networks that use BA or DLA as the underlying spatial system, with the same or better efficiency than the Hexagonal lattice, requires roughly the same number of edges as a Hexagonal grid for BA (L/L_HEX ≈ 1) and at least twice that number for DLA (L/L_HEX ≈ 2.5). This result can be understood by considering that L increases exponentially as a function of the connectivity parameter, r; thus, small changes in r are sufficient to produce important changes in L.
Discussion
In the first part of the analysis, we characterized the systems' capacity to explore the plane in terms of the range (the maximum distance a system advances with respect to the origin), coverage (the ability to cover the plane in all directions), structural cost (assembly connections), and configurational complexity (local connectivity), given the same amount of resources (particles) under a geometric graphs approach for euclidean first neighbors (r = 1). We found that:
• The local linearity induced by the branching morphology of tree-like systems (BA, DLA, and Vicsek) provides them with a range greater than that of the lattices (Hexagonal, Kagome, and Hexaflake).
• Tree structures are more versatile: they have a long range and span different dimensions, including those close to the dimension of the embedding space. Assuming that the amount of energy required to create any assembly connection is the same, the structural cost to create any tree structure is the same regardless of its fractal dimension. However, they are quite fragile (zero clustering).
• Small variations in the micro configurations lead to the emergence of very different macro structures; morphological macro-properties (tree-likeness and fractality) cannot be predicted from the micro-properties of the particles alone.
These results suggest that tree structures have the best balance between range, coverage and cost; they would be the best for the task of exploring space. However, the structural cost of building such networks would not only be associated with the amount of particles and connections (matter and energy), but also with the functionality under the uncertainty of the environment (lack of information). As such, information flow among any pair of points in the network, as well as from the center (or source) to the growing perimeter or any point of the structure, is an important factor that weighs in on the cost. This is addressed in the second part of the analysis in terms of the center-perimeter communication ratio and the network efficiency (overall network communication) under a geometric graphs approach beyond euclidean first neighbors (r ≥ 1). We found that:
• Lattice systems have the best efficiency due to their decentralized structure (shorter path lengths), whereas tree-like structures, despite their low structural cost, are inefficient due to their branched and centralized morphology (longer path lengths). This efficiency can be improved by adding links beyond contact neighbors, at a cost that can be similar to that of the Hexagonal lattice. This provides comparable or even better topological properties, such as the average shortest-path length, center-perimeter communication ratio and clustering (increase in robustness).
• Tree networks improved by adding a small (local) redundancy have the best balance between range, coverage and cost; thus, they would be the best for exploring space without compromising the exchange of information over the entire network structure, while also improving communication from the center to the growing perimeter.
• Notably, although the BA and DLA fractals are scale-free in space, they have bounded degree distributions for different values of the euclidean connectivity parameter and, therefore, do not represent ordinary scale-free networks.
Biological systems such as vascular networks [19], hyphal networks [20,21], neurons [19,22], slime molds [23], and bacterial colonies [24] display complex structures rich in branched or tree-like spatial features. The morphology of these systems seems to solve an adaptive exploration problem related to the maximization of the space that a connected structure can cover in order to retain or gain conditions for survival, given limited amounts of matter, energy and information, and according to the demands of the environment [25,26,27,28]. One characteristic of these systems is their physical fractality, often quantified by the fractal dimension [19,29,30,31]. However, although the fractal dimension is a good global measure of morphological complexity, it does not provide a comprehensive account of the micro-structural features that could make branched morphologies relevant at the macro level for the system's biological function [22,32,33,34]. The results obtained from our structural analysis based on fractal and network theory provide a detailed quantitative description and further insights into the morphological and topological features that make spatial systems with tree-like structure better at exploring space under limited amounts of resources and information (random growth in all directions under a fixed number of particles), and into the non-trivial interplay between the physical and topological properties of such complex systems. In addition, although much of the work has been devoted to the theory of random geometric graphs, our results contribute to the limited research on the properties of spatial networks with, or generated from, fractal spatial distributions.
An unambiguous characterization of complex spatial systems is still an open research problem. A good understanding of the fundamental aspects behind the development of such systems could provide valuable insights into medicine [19,30,35], engineering [27,36,37], biomimetic materials and biomorphs [38], and sustainable cities [23,39,40].
Models
Spatial networks are created from two-dimensional systems of identical particles forming connected structures with radial symmetry. In all the simulations, particles have a diameter equal to one unit, leading to sets of non-overlapping disks with centers at one unit of distance or, equivalently, point distributions where the shortest distance between two neighboring points is one unit. All systems are centered at the origin and contain 1.5 × 10^4 particles in order to have precise measurements of spatial quantities, such as the fractal dimension, as well as robust statistics for the network analysis (see Table 1).
• The DLA fractal emerges from the aggregation of particles moving under random trajectories upon contact with a static cluster [7] (a simplified aggregation sketch is given after this list). We followed a standard procedure for particle-cluster aggregation in which particles are launched from a circle of radius L = r_max + δ. Here, r_max is the distance of the farthest particle in the cluster with respect to a seed particle at the origin and δ = 100 is used to avoid screening effects [41,42]. In order to speed up the aggregation process, we also used a standard scheme that modifies the mean free path (set to one particle diameter) as the particles wander at distances greater than L or in-between branches, and set a killing radius at 2L. The fractal dimension of DLA in the plane has been estimated at D ≈ 1.71 [41,42].
• The BA fractal emerges from the aggregation of particles moving in ballistic or straight-line trajectories upon contact with a static cluster [43]. We followed a standard procedure for particle-cluster aggregation in which particles are launched at random from the circumference of a circle of radius L = r_max + δ. Here, r_max is the distance of the farthest particle in the cluster with respect to a seed particle at the origin and δ = 1000 is used to avoid screening effects [41,42]. The fractal dimension of BA in the plane has been estimated at D ≈ 2 [41,42].
• The Hexaflake fractal is constructed iteratively by exchanging hexagons (scaled by a factor of 1/3) at the positions of the vertices and centers of the previous hexagons. These hexagons can also be replaced by points (one for each vertex, including the center) in such a way that the fractal can be generated using an iterated function system [44] with θ_k = kπ/3 and k = {1, 2, 3, 4, 5, 6} (a point-generation sketch for this and the Vicsek fractal is given after this list). As initial seed we considered (x_0, y_0) = (1, 1). There are 7^(n−1) points in the n-th iteration, each at a distance smaller by a factor of 1/3 than in the previous iteration. In order to have a spatial distribution centered at the origin where the shortest distance between two neighboring points is one unit, all points are re-scaled using (x*, y*) = 3^n (x, y) − (1, 1), where (x, y) are the points at the n-th iteration. Its fractal dimension is given by D = log 7/log 3 ≈ 1.771.
• The Vicsek fractal is constructed iteratively by exchanging squares (scaled by a factor of 1/3) at the positions of the vertices and centers of the previous squares. These squares can also be replaced by points (one for each vertex, including the center) in such a way that the fractal can be generated using the same iterated function system as the Hexaflake, with θ_k = kπ/2 and k = {1, 2, 3, 4}. As initial seed we considered (x_0, y_0) = (1, 1). There are 5^(n−1) points in the n-th iteration, each at a distance smaller by a factor of 1/3 than in the previous iteration. In order to have a spatial distribution centered at the origin where the shortest distance between two neighboring points is one unit, all points are re-scaled using (x*, y*) = 3^n (x, y) − (1, 1), where (x, y) are the points at the n-th iteration. Its fractal dimension is given by D = log 5/log 3 ≈ 1.465.
• The Kagome lattice or trihexagonal tiling is composed of particles distributed on the plane forming equilateral triangles and hexagonal voids, with each particle in contact with 4 neighbors. For finite systems, particles at the border have fewer than 4 neighbors.
• The Hexagonal lattice is composed of particles distributed on the plane forming a hexagonal grid with each particle in contact with 6 neighbors. For finite systems, particles at the border have fewer than 6 neighbors.
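The particle-cluster aggregation procedure described for DLA above can be prototyped compactly. The sketch below is a deliberately simplified off-lattice version (one walker at a time, fixed step length, crude launch and killing radii) rather than the optimized scheme with variable mean free path used in this work, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dla_cluster(n_particles=100, delta=10.0, step=1.0):
    """Grow a simplified off-lattice DLA cluster of unit-diameter particles."""
    cluster = [np.zeros(2)]                     # seed particle at the origin
    r_max = 0.0
    while len(cluster) < n_particles:
        # Launch a walker on a circle just outside the current cluster.
        L = r_max + delta
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p = L * np.array([np.cos(theta), np.sin(theta)])
        while True:
            p += step * rng.standard_normal(2) / np.sqrt(2.0)   # random step
            d = np.linalg.norm(p)
            if d > 2.0 * L:                     # killing radius: discard this walker
                break
            # Stick on contact (center-to-center distance <= one particle diameter).
            dists = np.linalg.norm(np.asarray(cluster) - p, axis=1)
            if dists.min() <= 1.0:
                cluster.append(p.copy())
                r_max = max(r_max, d)
                break
    return np.asarray(cluster)

points = dla_cluster(100)
print(points.shape)       # (100, 2); much larger clusters need the optimized scheme
```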
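For the deterministic fractals, the point sets can be generated by iterating offsets at successively larger scales. The following sketch is one possible implementation consistent with the description above (a center plus copies in the directions θ_k, with the factor-of-3 rescaling keeping unit nearest-neighbor spacing); it is not necessarily the exact construction of reference [44].

```python
import numpy as np

def radial_ifs_points(n_iter, angles):
    """Points of a hexaflake- or Vicsek-like fractal after n_iter iterations.

    angles: directions theta_k of the copies placed around each center.
    At level i the copies are displaced by 3**i times the unit vectors, so the
    shortest distance between neighboring points stays at one unit.
    """
    offsets = np.vstack([[0.0, 0.0],
                         np.column_stack([np.cos(angles), np.sin(angles)])])
    pts = np.zeros((1, 2))
    for i in range(n_iter):
        shifts = (3.0 ** i) * offsets
        pts = (pts[None, :, :] + shifts[:, None, :]).reshape(-1, 2)
    return pts

hexaflake = radial_ifs_points(3, np.arange(6) * np.pi / 3)   # 7**3 points
vicsek = radial_ifs_points(3, np.arange(4) * np.pi / 2)      # 5**3 points
print(len(hexaflake), len(vicsek))
```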
In the following, due to the stochastic nature of the BA and DLA fractals, we considered an ensemble of 10 clusters, which provides a good estimation of spatial quantities and very robust network metrics (see Table 1). The data for each model is available online (see Data Availability). All computations were done using custom Python code and the NetworkX library [45].
Spatial Analysis
• The radius, R, is defined as the maximum (farthest) distance to the origin over the N particles in the cluster, R = max{r_i}, with r_i^2 = x_i^2 + y_i^2, where (x_i, y_i) are the coordinates of the i-th particle.
• The radius of gyration, R_g, is defined as the root-mean-square distance of all the particles in the cluster with respect to the center of mass [3,2]. Clusters are centered at the origin and have radial symmetry; therefore, r⃗_cm → 0. Also, for large enough N (> 10^3) [41], the size scales with the number of particles as the power law R_g(N) ∝ N^β, and the clusters can be considered self-similar fractals with fractal dimension D = 1/β [3,2]. This is computed from a linear fit in log-log scale (see the fitting sketch after this list).
• The average sixfold bond-orientational order parameter, ⟨Ψ_6⟩, is defined as in [15,16,17], where k_i is the number of first (contact) neighbors of the i-th particle, and θ_ij is the angle of the vector connecting the i-th particle with its j-th neighbor, measured with respect to an arbitrary axis (the x-axis in our analysis). For ordered hexagonal systems, ⟨Ψ_6⟩ = 1, whereas for disordered systems, ⟨Ψ_6⟩ → 0.
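The power-law fit for the fractal dimension can be reproduced with a few lines of NumPy. The sketch below assumes arrays of particle positions recorded at several cluster sizes N; the helper names and the toy data at the end are illustrative assumptions.

```python
import numpy as np

def radius_of_gyration(points):
    """Root-mean-square distance of particles from their center of mass."""
    com = points.mean(axis=0)
    return np.sqrt(np.mean(np.sum((points - com) ** 2, axis=1)))

def fractal_dimension(sizes, gyration_radii):
    """Fit R_g(N) ~ N**beta in log-log scale and return D = 1 / beta."""
    beta, _ = np.polyfit(np.log(sizes), np.log(gyration_radii), 1)
    return 1.0 / beta

# Toy usage: a two-dimensional cloud whose radius grows as sqrt(N) gives D close to 2.
sizes = np.array([1_000, 2_000, 4_000, 8_000])
rg = [radius_of_gyration(np.random.default_rng(1).normal(size=(n, 2)) * np.sqrt(n))
      for n in sizes]
print(fractal_dimension(sizes, rg))
```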
Network Analysis
We considered undirected, unweighted networks [46,47] created under a geometric graphs approach in which links are established pair-wise if the euclidean distance between two particles (nodes) is equal to or less than the connectivity distance r; that is, nodes i and j get connected if d_ij ≤ r, where $d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$.
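This construction can be implemented efficiently with a k-d tree. The sketch below (SciPy plus NetworkX, the latter being the library named above) builds the geometric graph for a given connectivity parameter r from an array of particle positions; the function and variable names are illustrative, not the original code.

```python
import networkx as nx
import numpy as np
from scipy.spatial import cKDTree

def geometric_graph(positions, r):
    """Link every pair of particles whose euclidean distance is <= r."""
    tree = cKDTree(positions)
    G = nx.Graph()
    G.add_nodes_from((i, {"pos": tuple(p)}) for i, p in enumerate(positions))
    G.add_edges_from(tree.query_pairs(r))       # all pairs with d_ij <= r
    return G

# Toy usage: 100 random points in a 10 x 10 box, contact distance r = 1.
pts = np.random.default_rng(0).uniform(0, 10, size=(100, 2))
G = geometric_graph(pts, r=1.0)
print(G.number_of_nodes(), G.number_of_edges())
```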
• The degree, k_i, of a node i is defined as its number of links or neighbors, and the average degree, ⟨k⟩, is the arithmetic mean over all nodes. In addition, given the density, d = L/L_max, where L_max = N(N − 1)/2 is the maximum number of possible links, and the total number of links, L, satisfying 2L = Σ_{i=1}^{N} k_i, the average degree can be rewritten as ⟨k⟩ = 2L/N = d(N − 1). Furthermore, for a network of N nodes, the degree distribution, p_k, gives the probability of randomly finding a node with degree k, p_k = N_k/N, where N_k is the number of degree-k nodes.
• The clustering coefficient, C_i, measures the degree to which the neighbors of node i (k_i > 1) are also neighbors of one another (forming triangles or triads). It is defined as C_i = τ_i/τ_{i,max}, where τ_i is the number of triangles involving node i and τ_{i,max} is the maximum number of such triangles, given by the number of pairs that can be formed among its k_i neighbors. The average clustering coefficient, ⟨C⟩, is the arithmetic mean, with nodes of degree k < 2 excluded from the mean.
• The distance between two nodes is defined as the minimum number of links traversed along a simple (not self-intersecting) path connecting them. Such a path is known as a shortest path and its length as the shortest-path length. Accordingly, the average shortest-path length, ⟨l⟩, is the mean of the shortest-path lengths l_ij between pairs of nodes in the network.
• The efficiency measures the effectiveness of the exchange of information through the network. The efficiency, E_ij, between nodes i and j is defined as inversely proportional to the distance, E_ij = 1/l_ij. If there is no path connecting the nodes, l_ij = ∞ and E_ij = 0; if they are connected, l_ij ≥ 1 and the efficiency is bounded between 0 and 1. The average efficiency, ⟨E⟩, is defined as the mean of E_ij over all pairs of nodes [18].
[Figure 1 panel labels recovered from image residue: Hexagonal lattice, Kagome lattice, Vicsek fractal, Hexaflake fractal, BA fractal, DLA fractal, each with N = 1.5 × 10^4 and its measured R, R_g, D and ⟨Ψ_6⟩ values annotated.]
Figure 2: Structural Analysis (r = 1). (a) R/R_HEX as a function of D. (b) The number of connections, L, as a function of N. (c) The average degree, ⟨k⟩, as a function of the sixfold bond-orientational parameter, ⟨Ψ_6⟩. (d) Degree distribution, p_k, for each system. Network metrics as a function of the fractal dimension, D: (e) the center-perimeter communication ratio, η_max, (f) average shortest-path length, ⟨l⟩, and (g) network efficiency, ⟨E⟩.
Figure 3: Efficiency analysis (r ≥ 1). (a) Network visualizations of the BA and DLA networks for r = {1, 2, 3} as indicated. Nodes are colored according to the clustering coefficient. (b) Corresponding degree distribution, p_k. Metrics as a function of the connectivity parameter, r: (c) network efficiency, ⟨E⟩/⟨E⟩_HEX, with the center-perimeter communication ratio, η_max, in the inset; (d) structural cost, L/L_HEX, with the clustering coefficient, ⟨C⟩, in the inset. In (c) and (d), the dotted horizontal line indicates the corresponding value of the metric for the Hexagonal lattice.
Table 1:
Numerical results of the spatial and network analysis for r = 1. | 13,172.6 | 2022-05-14T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Individualized and Innovation-Centered General Education in a Chinese STEM University
: The concept and practice of general education have been widely discussed and debated in the Euro-American world, but its adaptation in China needs further discussion and understanding. Over the past decade, its impact on Chinese higher education is increasingly salient, with a large number of Chinese first-tier universities claiming to initiate general education reforms to their previously narrowly focused undergraduate programs. This paper explores the development, implementation, and support of general education in a new type of research university in China from an organizational perspective. Through a case study of the Southern University of Science and Technology (SUSTech) prior to the COVID-19 pandemic, this paper examines SUSTech’s individualized and innovation-based general education system, highlighting its institution-wide approach and innovation-centered perspective. The findings underscore the importance of integrating general education principles throughout the university to foster self-directed thinkers and cultivate students’ self-awareness, interests, and passions. This study also reveals how general education is used as an organizational solution to address a variety of historical and complicated issues that challenge Chinese universities. This research serves as a catalyst for reform and innovation in Chinese higher education, inspiring transformative practices that meet the evolving needs of students and society.
Introduction
China, renowned as the world's oldest continuous civilization, has deep philosophical traditions that emphasize character development and the acquisition of knowledge, aligning closely with the holistic principles of general education.Over the past two decades, the number of higher education programs focusing on general education has increased significantly in China.These programs advocate a holistic educational philosophy and provide lifelong learners with a solid foundation of integrated knowledge and social responsibility, challenging the traditional system of specialized training for specific professions.
The roots of general education in China can be traced back to the Republican Era, when Western ideas and practices of liberal arts and general education influenced the country's modern universities. However, these ideas took a backseat in the 1950s when China adopted the Soviet model of specialization [1]. It was not until the 1980s, amidst criticism of the limitations imposed by narrow specialization, that general education regained prominence. Since the 1980s, there have been three successive forms of general education in China, each aimed at addressing specific problems [2]. The first was versatile education (tongcai education) in the 1980s, which sought to broaden knowledge scope and emphasize knowledge structure. The second was culture quality education (wenhua suzhi education) in the 1990s, which focused on humanities and moral education to counterbalance the dominance of hard sciences and engineering. Finally, in the 2000s, general education (tongshi education) emerged, emphasizing the intrinsic value of education and combating the sense of self-loss amid a prevailing utilitarian ethos. It has been pointed out that Chinese educators' commitment to general education in the 21st century has shifted from theoretical debates to practical implementation, with varying degrees of success [2].
An exemplary case of general education exploration in China is the Yuanpei Program at Peking University. Its five-year review (2000-2005) of the program highlighted the recognition of general education as a concept and model for talent development in higher education and emphasized the importance of tailoring the idea of general education to Chinese cultural characteristics [3]. The case study conducted at Peking University examined the significance, feasibility, and systemic challenges involved in implementing general education in comprehensive universities across China. These challenges encompassed institutional environment, conventions, stakeholder conflicts, limited understanding, faculty competence issues, and resource constraints. The study also provided suggestions for foundational modules of general education, encompassing rationales, goals, program arrangements, curriculum design, faculty resources, pedagogy, evaluation, and support systems. Furthermore, the study highlighted the link between general education and liberal education, both aspiring to cultivate well-rounded individuals. This 2008 research report on the Yuanpei Program played a pivotal role in informing and inspiring subsequent education reformers in China, shedding light on critical success factors, obstacles, and difficulties associated with the implementation of general education, particularly when integrated with specialized education within the same university [3].
The two decades before the COVID-19 pandemic have witnessed more prestigious universities in China undertaking educational reforms and pedagogical innovations driven by the aspiration to achieve excellence and leadership in higher education in an increasingly globalized world.Among the forefront scenarios in the reform of Chinese higher education, notable developments include the remarkable expansion of research capacities and the implementation of general education in arts and sciences [4].These reforms aimed to enhance the quality and global competitiveness of Chinese universities while adapting to the evolving demands of the modern educational landscape.
In addition to internal considerations within universities, scholars have also pointed out that Chinese universities' engagement with general education is motivated by the student recruitment market [5].The term "general education" gained attention as comprehensive universities sought to establish elite degree programs focused on broader knowledge content and educational objectives.The exploration of general education by top-tier Chinese universities has served as a marketing strategy to attract prospective students, and in turn, has shaped models within the higher education sector.While China has introduced general education with the aim of fostering creativity and innovation and supporting national development goals [6], specific goals and implementation strategies at the institutional and program levels remain unclear [7].Chinese universities face the challenges of adapting the models of Western universities to their own contexts [8], often drawing on their own practical approaches to general education [7].In the last decade, newly established research universities in China have embraced the ambition of cultivating innovative and well-rounded talents by using general education as a foundation [7].
This study closely examines the establishment and implementation of general education at a new research university in China that places particular emphasis on fostering students' creativity, innovation, and holistic development. The main objective is to gain insights into the organizational aspects of developing, implementing, and supporting general education programs in alignment with the university's overarching goals. This study also examines how general education has been used as an action strategy to address ambiguous systemic problems, and thus goes beyond the scope of the discussion in the scholarly field outside of China. By addressing these research questions, this study aims to contribute to the understanding and advancement of general education practices in Chinese higher education and shed light on the broader landscape of educational reform in China.
Literature Review
The terms "liberal arts education" or "general education" are widely discussed throughout the world.The earliest writing that attempted to define "general education" can be traced back to the Reports on the Course of Instruction in Yale College in 1828, in which education is believed to provide an individual with a general foundation in areas that are common for all professions, and not just one specialized profession.The Harvard Committee Report in 1945 defines general education as a part of a student's whole education which looks first of all to his life as a responsible human being and citizen.A review of the current literature in China shows a lack of a coherent and articulated theoretical framework for general education reform [8][9][10].This section will discuss the theoretical foundations for this study, which include the functions, approaches, and models of general education, as well as the specific theoretical framework employed in this research.
Functions of General Education
The surge in general education has been attributed to various factors by scholars, policymakers, and pundits.These factors include the increasing demand for well-rounded workers in the current and future economy, the need to educate individuals who can tackle complex global issues beyond their specific areas of expertise, the imperative for higher education to address ethical, individual, and social responsibilities alongside imparting knowledge and skills, and the importance of granting students the freedom to choose their career paths instead of pressuring them into potentially unsuitable professions at a young age.General education serves multiple functions that include student learning, communal well-being, and institutional purposes.These functions underscore the significance of general education in shaping individuals, society, and educational institutions.
One of the primary functions of general education is to facilitate student learning by promoting a broad range of student learning outcomes.These include developing intellectual proficiencies, fostering ethical and meaningful engagement, and providing a holistic education [11][12][13].Through general education, students acquire essential concepts, methodologies, and knowledge in various disciplines [14].Additionally, general education emphasizes the development of intellectual skills that enable students to make sense of information and their own lives and to apply knowledge for ethical purposes.It aims to produce well-rounded individuals equipped with the intellectual capacities necessary for employment in today's context [15].
Beyond individual learning, general education also plays a role in fostering communal well-being.It contributes to the formation of an educated citizenry and cultivates a sense of public responsibility [16].General education is viewed as a means of preparing students who will actively contribute to building a more equitable society and a global community.It strives to achieve democratic outcomes and global learning and aims to create inclusive and just societies.By fostering knowledge, awareness, and actionable consciousness, general education seeks to empower students to become active agents for the betterment of their communities.
Moreover, general education serves institutional purposes by providing integration and imprinting a mission and identity on the educational program.In the complex landscape of college students' lives, general education offers a unique context for integrative learning [13].It allows students to make connections and meanings across diverse academic disciplines and experiences.By facilitating integrative learning, general education helps students navigate the fragmented nature of their education and develop a comprehensive understanding of knowledge.Additionally, general education influences an institution's educational program and reflects its mission and identity.It contributes to framing and fulfilling the overall educational philosophy of a college or university [17,18].The general education curriculum, which is mandatory for all students, becomes a reflection of an institution's values, goals, and educational mission [19].Thus, general education plays a critical role in shaping the institutional identity and ensuring that the educational program aligns with the institution's overarching mission.
Approaches to General Education
Walker and Soltis (1997) summarize three approaches to general education that reflect institutional academic values and intended learning outcomes: the first is a subject-centered approach that focuses on transmitting knowledge to the next generation, in which general education is delivered by teaching basic skills, critical thinking, and mastery of important facts and information; the second is a society-centered approach that focuses on creating and ensuring a prosperous and healthy society, so the aims of education focus on civic responsibility, vocational training, ethical values, development of democratic attitudes, and the preparation of individuals for an industrialized society and for economic competence; and the third is an individual-centered approach, which emphasizes the importance of individual freedoms, talent, and happiness, developing the student's potential, and preparing them for community life [20]. Aldegether (2015) points out that there are three perspectives on general education requirements, namely the traditional or conservative perspective, the multicultural perspective, and the radical perspective [21]. Each of these perspectives holds a different view of academic values and hence of the direction of education. The traditional perspective emphasizes the importance of the classical curriculum, which deals with how to live right, and suggests teaching the courses for that purpose in their original texts. The multicultural perspective emphasizes that general education should include multiple perspectives rather than a single-knowledge perspective to help students search for reliable knowledge about the world by teaching them to use their own judgments on what they read or learn about and what is happening around them. The radical perspective emphasizes the importance of critical pedagogy, through which educators and students can think critically about how knowledge is produced and transformed in relation to the construction of social experiences, and which helps students change their current social practices. In brief, Aldegether's summary draws distinctions along knowledge-based, society-based, and individual-agency-based lines, and resonates with Walker and Soltis' categorization of subject-centered, society-centered, and individual-centered approaches.
Models of General Education
Models of general education play a crucial role in structuring the core curriculum for undergraduate students. Several models have been identified and elaborated upon in the literature, each with its own advantages and challenges. This section summarizes different models of general education and their key features.
The liberal arts model emphasizes a well-rounded education in the humanities, social sciences, and natural sciences [22]. It originated from the classical curriculum of colonial colleges and focuses on subjects such as literature, history, philosophy, and foreign languages. However, it does not include distribution requirements in natural or social sciences. While this model develops critical thinking skills, it has been criticized for prioritizing subjects distant from the practical skills valued by employers [23].
The core model of general education assumes the existence of a discrete body of knowledge that every educated person should know [24–26]. It requires all students to complete a series of prescribed interdisciplinary courses outside their academic department. The core model promotes interconnections across different disciplines, diverse methodologies, and various ways of viewing the world. However, designing and sustaining these courses can be expensive, and students may struggle to see the benefits, particularly if they are more focused on their majors [26].
The distribution model requires students to take a certain number of courses in different subject areas, such as humanities, social sciences, and natural sciences [27]. This model aims to provide breadth and exposure to a wide range of ideas. It introduces students to various disciplines and their bodies of knowledge and methodologies. However, one challenge is that students may prioritize ease or schedule convenience over actual learning [26,28]. Students may also perceive these requirements as arbitrary hoops to jump through without clear value or connection to their personal or professional goals [29].
In the thematic model, courses are organized around a central theme or set of themes to provide students with a coherent and integrated education that helps them understand the connections between different subject areas. By structuring courses around a theme, it offers students the opportunity to explore a specific theme or set of themes in depth, while also gaining a broad understanding of various disciplines and perspectives [30].
The competency-framed model focuses on individual abilities and skills for learning and personal growth [24]. It emphasizes the development of specific competencies rather than the acquisition of specific content knowledge. This model allows for overlap with the requirements of the major and focuses on transferable skills. However, it presents challenges in determining the distinctiveness and necessity of general education courses outside the major, as well as in coordination and communication between faculty and administrators [31].
In practice, many institutions employ a hybrid model that combines elements from different models to create a unique program that meets their specific needs. Hybrid models can include thematic strands, core-distribution approaches, or combinations of core, distribution, and competency elements [26,32]. These hybrid models aim to integrate different perspectives and requirements and provide students with a more comprehensive and personalized educational experience.
Overall, the selection of a general education model depends on the goals and values of an institution, as well as the desired outcomes for undergraduate students. Each model has its own strengths and weaknesses, and institutions often strike a balance by adopting a combination of models that best suits their educational philosophy and student needs.
Theoretical Framework
To gain insights into the organizational aspects of developing, implementing, and supporting general education programs in alignment with the university's overarching goals, this study employs Bolman and Deal's (1991) four frames of organizational thought, namely the structural, human resources, political, and symbolic frames, as its theoretical framework [33]. These frames offer distinct perspectives that shed light on the functioning of organizations and can be effectively applied to comprehend the nature and operation of general education. By using these frames, this study aims to gain a comprehensive understanding of how general education operates within an organizational context.
Structural frame: The structural frame emphasizes the importance of formal roles, responsibilities, and organizational structure. It views organizations as systems that adapt to their environment and allocate resources and responsibilities accordingly. In the context of general education, this frame suggests that colleges and universities have established goals and objectives, and the curriculum is structured to achieve those goals. General education courses provide a foundational knowledge base and ensure coordination and integration across different academic disciplines.
Human resources frame: The human resources frame focuses on the interdependence between individuals and organizations. It recognizes that organizations are composed of people with diverse needs, skills, and values. In the context of general education, this frame emphasizes the personal and professional growth of students. It seeks to align educational experiences with students' needs and values, allowing them to develop critical thinking, analytical skills, and informed value judgments. The human resources frame values relationships beyond formal organizational structures, encouraging students to engage in holistic learning experiences.
Political frame: The political frame views organizations as arenas where different interest groups compete for power and resources. It acknowledges the presence of conflicts and the diverse perspectives and needs among individuals and groups within an organization. In the context of general education, this frame recognizes the existence of power dynamics and the distribution of resources within educational institutions. It suggests that decision-making processes, resource allocation, and curriculum design can be influenced by various stakeholders, including institutional leaders, faculty, administrators, students, and external forces.
Symbolic frame: The symbolic frame emphasizes the social and cultural aspects of organizations. It recognizes that organizations are driven by symbols, rituals, ceremonies, stories, and myths. In the context of general education, this frame highlights the importance of the educational institution's culture, values, and history. General education serves as a manifestation of an institution's educational philosophy and reflects its distinctive characteristics. It may also be adopted to signal legitimacy or to conform to norms set by benchmark institutions in the field. Symbolic elements, such as institutional traditions, educational experiences, and shared values, shape students' perceptions and contribute to their overall educational journey.
By employing Bolman and Deal's (1991) four frames, the analysis of general education can encompass the structural aspects of curriculum design and organizational goals, the interpersonal and developmental aspects of student growth, the power dynamics and resource allocation processes, and the cultural and symbolic elements that shape the educational experience. This multidimensional approach provides a comprehensive theoretical framework for examining general education and understanding its role within the larger educational landscape.
Research Methodology
The case study method is considered the most appropriate approach for this study, as it allows a detailed investigation of a specific social phenomenon in its real context [34]. For this study, a single case study design was chosen to comprehensively examine the development and support of institution-wide, individualized, and innovation-centered general education at a specific university where the authors are action researchers and can access the actual process of decision making and implementation. This approach aimed to gain a comprehensive understanding of the complex social phenomena involved in cultivating innovative talents at the case university.
Case Selection
The selection of Southern University of Science and Technology (SUSTech) as the case university for this study was based on its unique characteristics and its pioneering role in developing a comprehensive general education model. As a university entrusted by the Ministry of Education to explore the establishment of a modern university system and an innovative talent cultivation model, SUSTech differs from other Chinese universities in its emphasis on integrating general education throughout the institution to promote students' self-directed thinking. Located in Shenzhen, a city with a limited number of higher education institutions despite its large and young population and thriving economy, SUSTech was established to address the demand for fundamental research, high-level talent, and sustainable development. With the opportunity to start anew, SUSTech strives to become a world-class university by drawing from the best practices of excellent universities worldwide and attracting faculty members with extensive international backgrounds. SUSTech's success is evident in its rankings and reputation, attracting students with ever-improving academic preparation. While still in its nascent stage, SUSTech awaits the test of time to fulfill its mission of cultivating innovative talents who will grow into leading scientists and engineers.
Data Collection and Analysis
To build a comprehensive dataset for examining the organizational aspects of developing, implementing, and supporting general education programs in alignment with the case university's overarching goals, multiple data collection methods were employed.
(1) Various university documents were collected, including policies, strategic plans, regulations, and minutes of general education-related meetings. These documents offered insights into the formal roles, responsibilities, and organizational design of the general education program. They provided a foundational understanding of how the goals and objectives of general education were structured, as well as how resources and responsibilities were allocated within the university environment. (2) External reports from reputable sources, such as university rankings and external quality evaluations, were gathered to display the external perceptions and recognition of the case university's general education initiatives. (3) The researchers employed a participant observation approach in which they directly observed the implementation of general education programs, interactions among institutional leaders and stakeholders, and the general environment and culture surrounding general education within the case university. (4) Focus groups and discussions involving faculty, administrators, and students were conducted in the university's natural setting to gain different perspectives and insights related to general education and to triangulate the findings the researchers had obtained from other data. By engaging these stakeholders, the interdependence between individuals and the organization was explored, allowing for a comprehensive understanding of personal and professional growth opportunities for students.
Data collected through various methods were rigorously analyzed and carefully integrated to gain a comprehensive understanding of the establishment and implementation of general education within the new research university. (1) University documents and external reports were subjected to a thorough content analysis. Recurring themes, goals, and strategies contained in these documents were identified. The structural and political frames were used to examine how the university formally outlined its approach and resources to general education. The insights gained in this part shed light on the goal and structural dimensions of general education at the case university. (2) Participant observation data in the form of researcher notes, narrative descriptions, audio and video documents, and visual documents and comments were analyzed through iterative coding and thematic analysis. This qualitative approach allowed the researchers to gain a comprehensive understanding of the implementation of the general education program in the university context. Observations were viewed through the lens of both the human resources and symbolic frames. The human resources frame shed light on how individuals' interactions and behaviors contributed to the program's effectiveness in fostering holistic student growth. The symbolic frame, on the other hand, provided insights into the cultural nuances and institutional values that manifested in the observed practices. The findings viewed through this lens are presented in the implementation, structural, pedagogical, and integrative dimensions of general education at the case university. (3) Data collected in the focus groups and discussions were subjected to thematic coding and qualitative analysis to identify recurring themes and underlying patterns in participants' narratives. Findings from the focus groups were viewed through the structural, human resources, and symbolic frames to gain insights related to general education and to triangulate the findings that the researchers had obtained from other data.
Integrating data from these different sources was a meticulous process that involved triangulation to ensure credibility and validity. Findings from each method were cross-referenced to provide a nuanced and comprehensive understanding of the multiple dimensions of the general education program. Findings from university documents and external reports provided context for the observed practices and discussions. Similarly, participant observation and focus group data enriched each other by offering different perspectives on the same phenomenon. The integration of these data sources facilitated a holistic analysis that culminated in a coherent interpretation of the complex organizational dynamics that shape general education at the research university.
Research Findings
SUSTech stands as a unique example of a transformative approach to general education. Its methodology defies easy categorization because it goes far beyond the boundaries of conventional curricular discussions. Instead, SUSTech's general education embodies a multifaceted approach that is interwoven with the university's core missions and unique historical path and embedded in its own structure. Serving multiple functions, SUSTech's general education program not only nurtures student learning but also fosters communal well-being while integrating the institution's overarching mission into its educational endeavors. The result is a distinctive model that combines elements from multiple educational paradigms, as discussed in the literature review.
To provide insight into the organizational intricacies associated with designing, implementing, and sustaining general education initiatives that align with the overall goals of the university, a detailed description of SUSTech's general education is offered to give readers a clear picture. To present the development, implementation, and support of SUSTech's general education paradigm comprehensively from an organizational perspective, the findings are structured into five dimensions that together comprise the description of SUSTech's general education:
(1) Goal dimension: innovation and excellence as drivers of institutional advancement; (2) Implementation dimension: continuous exploration and adaptation; (3) Structural dimension: a whole-institution approach; (4) Pedagogical dimension: a student-centered approach; and (5) Integrative dimension: enriching the educational experience through a holistic, immersive approach. This description of the different dimensions illustrates the complex interplay that makes up SUSTech's innovative approach to general education.
Goal Dimension: Innovation and Excellence as Drivers of Institutional Advancement
In 2009, Professor Qingshi Zhu embarked on his journey as the inaugural president of SUSTech following an extensive global search. An academician of the Chinese Academy of Sciences and a renowned higher education leader widely known for his reform mindset, President Zhu shaped his vision for the new university around the famous question posed by the prominent scientist Academician Xuesen Qian (1911-2009): "Why have Chinese schools rarely produced truly outstanding talents?" The so-called Qian's Question was raised in Qian's meeting with the then Premier Jiabao Wen and has since become the classic educational conundrum that has bedeviled Chinese educators. The question amplified existing criticism of Chinese universities and sparked heated debates nationwide. In response, President Zhu declared that the mission of SUSTech is to answer Qian's Question by developing into one of the best universities in China, one that fosters real capabilities in students and trains them to become the talent society needs upon their graduation.
In December 2010, the Ministry of Education approved preparations for the establishment of SUSTech and set a preparation period of three years. In April 2012, after concerted efforts by visionary SUSTech people, higher education leaders, and government leaders, the Ministry of Education approved the official establishment of SUSTech ahead of schedule and entrusted to SUSTech the two-pronged mission of "exploring the establishment of a modern university system" and "developing an education model for the cultivation of innovative talents". In parallel with the formal process of the Ministry of Education, SUSTech undertook action to form a legitimate mission statement for itself by deriving the key messages, firstly, from a meeting in December 2009 between President Zhu and Mr. Guiren Yuan, the then-minister of the Ministry of Education, and, secondly, from the national education reform policy, Outline of China's National Plan for Medium and Long-term Education Reform and Development (2010-2020), released in July 2010 [35]. In early 2012, the formal description of the University ran as follows: South University of Science and Technology of China (SUSTC) (in English, the University was named by President Zhu as South University of Science and Technology of China; in 2016, the English name was officially changed to Southern University of Science and Technology during the term of the second president, Professor Shiyi Chen) is a higher education institution built with new thinking and mechanisms in the backdrop of Chinese higher education reform and development and by the Shenzhen Municipal People's Government to implement the directives of "The Plan outline of the national mid-term and long term education reform and development" and "The plan outline of the Pearl River Delta reform and development" (2008-2020). SUSTC is an experiment in comprehensive reform for Chinese higher education and carries the significant mission to explore for an education model in China that cultivates innovative talent... SUSTC shall borrow from the education models of the world-class universities, innovate the system and mechanism for its operation... with goals and self-positioning to become an international high-level research university, and to become a key base for major scientific and technological research and the cultivation of excellent and innovative talents (SUSTC, 2012). Whether it is "truly outstanding talent" in Academician Qian's terms or "real masters" and "excellent and innovative talent" in President Zhu's terms, excellence and innovation have become the two key words that direct pathways for the education reform efforts at SUSTech, with the educational goal of raising leading scientists and engineers for the future. To realize this mission, the University's decision-makers chose general education, intended to be both broad-based and individualized, as an important mechanism for coordinating curriculum, pedagogy, and administration. The characteristic of being broad-based was widely accepted at the time, thanks to China's general education experiments in the preceding decades. The concept of individualized education arises from the belief that innovative talents should be able to think independently and 'out of the box', a quality not traditionally fostered by basic education in China, where exams dictate what students learn and why, and students learn by drills and memorization of knowledge.
In response, SUSTech educators needed to address, first, how the education process could help students find their real interests, increase their motivation to learn, and grow individually, and, second, how to support students' individual learning needs. In practice, they have found that the key is to guide students to find their own path based on knowing themselves and discovering their true passion. Personal commitment leads to engaged learning and thus to excellence. By devoting the first year or two of college to general education before deciding on a major, SUSTech students can make an informed choice rather than committing to a major, often on the basis of poor information, before entering college. To a certain extent, the experiment at SUSTech embodies the development of individual subjectivity and the cultivation of personhood.
The guiding principles of innovation and excellence permeate not only the pursuit of educational excellence but also the institutional growth of SUSTech. Since its birth, SUSTech has aimed to develop into a world-class university in a remarkably short period of time by breaking free from the constraints of established Chinese universities and by drawing inspiration from the best practices of excellent universities around the world. The opportunity to establish the university from scratch in the reform-minded and prosperous City of Shenzhen proved to be an advantage for SUSTech to create a high-level system that successfully supports its education ideas [36]. When President Zhu finished his term in September 2014, SUSTech had 107 faculty members in place, about 1000 undergraduate students, and 16 undergraduate degree programs.
During the term of President Shiyi Chen (2015-2020), SUSTech advanced upon the foundation laid by President Zhu. By the end of 2020, when President Chen finished his service, SUSTech had about 1000 faculty members, most of whom have extensive international backgrounds (50% tenure-line, close to 50% research track, and about 100 teaching track faculty members); 4374 undergraduate students and 3186 graduate students; 34 undergraduate degree programs covering the sciences, engineering, business, life science, and medicine; 8 master's degree programs; 4 doctoral degree programs; and a revenue about ten times that of 2014. These numbers demonstrate the university's rapid progress, achieved without neglecting quality. All tenure-line faculty members are PhD holders, with more than 90% of them having overseas education and work experience, more than 60% of them coming from the world's top 100 universities, and about 28% holding a foreign passport. English is the instructional language on campus. The faculty body is capable of teaching in English and conducting world-class research aided by international exchanges and collaboration, and it is comfortable with student advising.
The success of young SUSTech is attested by rankings. According to the 2021 World University Ranking by Times Higher Education, SUSTech was ranked No. 8 among mainland Chinese universities with the highest publication quality in China and ranked 250-300 worldwide. In the QS 2021 World University Ranking, SUSTech was No. 14 among mainland Chinese universities and No. 1 in the student-faculty ratio. In the Shanghai Ranking 2020 for mainland Chinese universities, SUSTech was No. 8 for high-level academic hires. In Nature Index 2020, SUSTech ranked 15th in China and 61st in the world.
The academic excellence of the faculty body and the elevated institutional reputation through world rankings reinforced the legitimacy of SUSTech's educational innovation in the marketplace. Over the years, SUSTech has attracted students with better academic preparation and greater understanding of what SUSTech offers. By 2020, SUSTech students came from 22 provinces/directly administered cities all over China. They are selected through a combined score consisting of the National College Entrance Examination score (60% of the total), a SUSTech-administered examination score (25% computer-based, multiple-choice examination and 5% interview), and the high school performance record (10%), as summarized in the composite formula below. According to their standardized National College Entrance Examination scores, the students admitted by SUSTech are in the top 10% of high school graduates, and students enrolled from 10 out of the 22 provinces/directly administered cities are in the top 1%. When they graduate, 1/3 of them go to overseas graduate programs, 1/3 to domestic graduate programs, and 1/3 to work in companies. Since the graduation of the first cohort, the University has adopted the practice of publishing reports or interviews of excellent graduates that review the trajectory of their college years, highlighting the connection of their personal growth and accomplishments with individual exploration enabled by university opportunities and resources. By giving special publicity to these high-achieving students, SUSTech sets examples for the student body to learn from and emulate. The students' stories also attract prospective students who are drawn to the freedom and independence of a SUSTech education and, along with their parents, promote SUSTech's market recognition, which in turn reinforces the innovative education efforts at SUSTech. By the summer of 2020, SUSTech had only six graduating classes. It awaits the test of time to see whether its goal of cultivating innovative talent to become leading scientists and engineers is to be fulfilled.
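For clarity, the admission weighting described above can be restated as a single composite score; this is simply a restatement of the stated percentages, not an official university formula: Composite score = 0.60 × (standardized National College Entrance Examination score) + 0.25 × (SUSTech computer-based, multiple-choice examination) + 0.05 × (interview) + 0.10 × (high school performance record), with the four weights summing to 1.00 (60% + 25% + 5% + 10% = 100%).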
In retrospect, many factors have contributed to the remarkable success of SUSTech, which include, but are not limited to, the generous financial support from the municipal government, the continuous commitment of university leadership, the national ambition to develop world-class universities, and the overall ethos of the public in favor of aligning Chinese universities with the world's top universities. SUSTech is the first university in mainland China to establish a collective Board of Regents for the presidential search, to reverse the brain-drain trend by hiring more than 90% of its faculty members from around the world, and to adopt a college admission assessment mechanism that does not rely solely on the National College Entrance Examination. Many factors and institutional mechanisms support SUSTech in successfully implementing the talent cultivation goals articulated by older generations of educators [37]. The opportunity to start a university from scratch, with no historical burden, enables SUSTech to apply the most up-to-date knowledge about how to build a university for the future. General education functions as a linchpin in SUSTech's reform system and is linked to a variety of education mechanisms that may develop separately at other institutions. To a certain extent, this study may argue that general education at SUSTech is both significant and tactical, like a stored solution that finds its problem [38].
Implementation Dimension: Continuous Exploration and Adaptation
SUSTech, a STEM (Science, Technology, Engineering, and Mathematics) university, is viewed as a pioneer in cultivating future leading scientists and engineers and a testbed for higher education reform in China. General education plays a vital role in SUSTech's reform efforts, and defining its scope and content has been a critical issue from the beginning. SUSTech's curriculum designers studied the experience of general education in American and European universities, as well as the nature of STEM learning, to inform their decisions.
In developing the curriculum, SUSTech combined the traditions of liberal education in Europe with the models of general education in the United States. This entailed an amalgamation of classical literature, philosophical discourse, historical knowledge, language proficiency, skill cultivation, and interdisciplinary cognition. As a result, a set of attributes that SUSTech aspires to cultivate within its undergraduates crystallized: extensive knowledge about science and the world, an in-depth understanding of humanity, society, and history, and an ethical consciousness coupled with a sense of social responsibility.
The inceptive STEM-centric phase: The first phase of general education curriculum development at SUSTech involved a proactive five-year period of exploration in which the university embraced its identity as a preeminent STEM institution, inspired in part by the California Institute of Technology model. General education during this phase prioritized a comprehensive understanding of the scientific domain embodied in courses in calculus, linear algebra, physics, chemistry, biology, and computer science.
While STEM knowledge was emphasized, SUSTech also aimed to foster students' overall development by expanding their knowledge of the world and enhancing their intrapersonal intelligence. Because SUSTech initially had a limited number of faculty members in the humanities and social sciences, the university had to explore innovative approaches to teaching general education. This involved carefully selecting MOOCs (Massive Open Online Courses) and offering interdisciplinary courses delivered by guest faculty, which formed the core of the humanities, arts, and social sciences (HASS) curriculum during this period.
The evolutionary elaboration phase: The subsequent phase of general education development at SUSTech was marked by the blossoming of HASS offerings, the introduction of English-medium instruction, level-appropriate STEM courses, and the integration of co-curricular education at the residential colleges for whole-person development. With an enlarged cadre of faculty, expanded course offerings, and increasing interdisciplinarity, the contours of general education were rapidly expanding. At this stage, SUSTech's aspirations resonated with the Stanford University model. Formative milestones encompassed the establishment of pivotal centers, including the Center for the Humanities, the Center for Social Sciences, the Center for Higher Education Research, the Center for Language Education, and the Arts Center, which eventually merged to form the College of Humanities and Social Sciences. These centers were instrumental in adding depth, diversity, and structural coherence to SUSTech's general education landscape. In addition, tiered STEM courses were introduced to accommodate the varying entry levels of students and the requirements of different degree programs. An example of this is the division of the Calculus course into Mathematical Analysis, Calculus A, and Calculus B to accommodate different academic backgrounds. Offering bilingual and English-medium classes encouraged individual academic challenges and strengthened students' future competitiveness. This phase also witnessed innovation in the prescribed political and moral education modules, culminating in the merging of co-curricular undertakings within SUSTech's residential colleges and the participation of esteemed scholars on the theme of "China and Modern Science and Technology". General education in this phase emphasized a comprehensive and individualized STEM foundation, interdisciplinary HASS engagement, innovative pedagogical approaches, and integrated ethical education, promoting students' autonomy in pacing their learning journey and fostering interdisciplinary intersections.
Structural Dimension: A Whole-Institution Approach
SUSTech takes an innovative and systematic approach to undergraduate education by carefully integrating general education courses with subject-specific content from different academic departments. This harmonious integration not only provides students with a range of intellectually stimulating learning experiences but also creates a deep sense of coherence throughout their academic journey. Characterized by a comprehensive, institution-wide structure, SUSTech's approach to undergraduate education underscores its commitment to nurturing well-rounded and capable graduates. This institutional philosophy manifests itself in the design of a four-year general education framework, with special emphasis on the pivotal first year of study, and in the incorporation of residential college structures that prove to be powerful catalysts for students' holistic development. In addition, SUSTech employs a mixed-class course system that caters to both domestic and international students, which is a marked departure from the prevalent separate international college model adopted by other prominent Chinese universities.
Central to SUSTech's general education scheme is the pivotal first year, during which students begin their academic journey by enrolling in general education courses. This foundational phase not only stimulates intellectual curiosity but also gives students the privilege of choosing a major at the end of their first year. A cornerstone of the general education curriculum is the STEM module, which takes on special importance during this initial stage of study. This module consists of a constellation of courses that include calculus, linear algebra, physics, chemistry, biology, and computer science, each of which is carefully tailored to meet the requirements of the various majors and serves as a foundation from which students can explore their academic interests and identify possible directions to help them make informed decisions about which major to pursue. The curriculum also includes a selection of general education courses offered by each degree program. These courses have been intentionally designed to extend beyond the boundaries of the chosen major and serve as a solid foundation for future interdisciplinary exploration, strengthening students' capacity for interdisciplinary innovation. After deciding on a major, students have the option of taking the science elective at their own pace.
In consonance with SUSTech's holistic vision, students must complete credit hours in the humanities module, the social sciences module, and the music and arts module. The humanities courses promote an understanding of a variety of classical Chinese and Western literary works and encourage students to critically interpret and analyze elements such as genres, thematic nuances, and historical contexts. By situating the humanities disciplines in their historical and cultural contexts, students are able to use this knowledge for creative thinking and effective problem solving. Courses in the social sciences module aim to provide an understanding of social and cultural diversity, social science theories, research methods, and the art of social research. This curriculum fosters critical thinking skills by training students to observe and analyze social phenomena with a discerning eye. The music and arts module emphasizes an appreciation of artistic expression and provides students with opportunities to interpret both traditional and contemporary artistic expressions. The module not only teaches interpretive skills but also encourages student engagement with art forms such as music, drama, dance, and fine arts. Through this multi-faceted curricular approach, SUSTech students receive a comprehensive education that combines STEM fundamentals, language skills, humanistic insights, social science perspectives, and a cultivated appreciation for the arts. These interdisciplinary encounters foster the development of creativity, critical thinking, and problem-solving skills and prepare students for the diverse challenges of their future careers.
The essence of SUSTech education lies in the seamless interplay of general education and discipline-specific learning, resulting in a comprehensive and cohesive learning experience. This harmonization enables students to apply their acquired knowledge and skills both within and outside their chosen disciplines and to develop solutions to real-world problems in a variety of contexts. This integration, in turn, leads to holistic personal, professional, ethical, and intellectual development. The institutionalized system of general education supports students' development in their chosen fields of study and beyond by providing them with required and elective courses that promote the acquisition of comprehensive knowledge and skills, advance the development of a growth-oriented mindset, and facilitate holistic personal maturation.
Complementing this educational paradigm are the residential colleges that serve as focal points for students' holistic development. These residential colleges go beyond mere housing and become the core of students' personal and communal development. In this supportive environment, students participate in interactive, extracurricular learning activities and mature cognitively, emotionally, and socially. Each SUSTech student is assigned to a residential college and is matched with a faculty advisor through a mutual selection process. These college life advisors, who are distinguished faculty members themselves, provide advice drawn from their academic backgrounds and life experiences and often serve as exemplary role models for their advisees. To facilitate this interactive type of college education, SUSTech maintains a favorable faculty-student ratio of 1 to 10 to ensure that each student receives the attention and support they need.
The mission of SUSTech's residential colleges extends beyond the functions of housing and personal counseling to educating students to become proactive agents who can contribute positively to society. The colleges are crucibles that foster students' social development by having all students participate in various social practice projects. These initiatives are an integral part of the moral education module within the general education curriculum and are worth five credits. In collaboration with academic units, the residential colleges design and implement hands-on learning experiences that provide students with a conducive environment to explore their interests, enhance their self-awareness, promote social responsibility, and acquire lifelong learning skills through collaborative learning, extracurricular engagement, special interest groups, and joint endeavors. The residential colleges are also venues for aesthetic education, which manifests itself in the various student clubs and societies in the areas of chorus, theater, dance, folk music, symphony, and fine arts. Each residential college maintains its own constellation of clubs and encourages students to cultivate their sense of beauty in various artistic dimensions.
The structural dimension of SUSTech's general education centers on an all-encompassing, institution-wide framework. This structural innovation underpins a commitment to comprehensive student development and is reinforced by the residential college paradigm that promotes holistic student growth. SUSTech's overarching ethos is complemented by a course system that accommodates both domestic and international students. SUSTech's general education model is characterized by its flexibility, offering students the freedom to choose courses and classes taught in both bilingual and English formats. In a departure from the traditional approach, SUSTech does not mandate the completion of the entire general education package within the first year. Instead, students are free to take the general education courses in any semester, with the exception of the GE science module, English and Chinese writing courses, and courses required by the Ministry of Education. This academic flexibility is underpinned by the principle that "all courses are open to all students in all programs". This approach gives students the opportunity to design their own knowledge framework and shape their learning path according to their individual needs.
Pedagogical Dimension: A Student-Centered Approach
In discussions of general education, less attention is paid to the pedagogical aspect compared to curriculum design. However, pedagogy plays a critical role in establishing an emotional and intellectual connection between students and their educational experiences. At SUSTech, general education is underpinned by a student-centered pedagogical approach in which various components harmonize to create an integrated learning environment.
At the core of SUSTech's educational framework are carefully structured courses within the science module. These courses are strategically designed to provide step-by-step challenges that are aligned with the mathematical prerequisites of the various majors. This curriculum serves a dual purpose: it encourages students to explore their potential and prepares them for specialized courses of study. Courses in physics, chemistry, biology, and computer science are carefully tailored to different levels of complexity and address the specific pedagogical goals and content intricacies of each area. Advanced courses are closely aligned with their respective disciplines, while subjects less related to upcoming academic paths are designed to foster a broader understanding of and engagement with scientific and technological fields.
In the humanities, social sciences, arts, and foreign languages, SUSTech employs innovative and student-centered teaching methods. These approaches foster versatile problem-solving skills, effective communication, and the transfer of knowledge to new contexts. Small class sizes are maintained in language courses to meet individual learning needs and promote meaningful interactions with instructors and fellow students. Interdisciplinary general education courses such as "Language and Science", "Interdisciplinary Solutions to Engineering and Social Problems", "Art Design from Theory to Practice", "Science in Science Fiction", and "Innovative Space Design" are designed to challenge students' understanding of real-world challenges, foster hidden talents and interests, increase self-confidence, and stimulate introspective thinking.
These pedagogical strategies transform the various components of general education at SUSTech into a coherent body of knowledge that promotes advanced cognitive skills. Students' academic journeys are linked to critical issues that are supported by wise advising and careful mentoring. Academic planning, career development, course selection, choice of a major, research initiatives, social practices, internships, and senior theses are enclosed in a framework of thoughtful advising. The declaration of a major takes on special significance, allowing students to refine their choices and solidify aspirations. Comprehensive support mechanisms facilitate exploration during the freshman year, with STEM courses and introductory offerings providing the foundation for an informed choice. SUSTech's overarching philosophy accommodates missteps and redirections, with courses for graduation and major declaration spanning both semesters and flexibility of up to six years to graduate.
At the heart of SUSTech's general education structure is the dual-advisor system, which provides personalized and comprehensive support. Each student is served by two advisors: a college advisor during general education and an academic advisor when deciding on a major. Residential college advisors guide students towards discerning decisions during major declaration. College advisors assist with course selection prior to deciding on a major and continue to provide academic advising after a student has decided on a major. Faculty mentoring in the residential colleges may take a variety of forms, but it is accompanied by a minimum advising load of 50 h per year and a clearly defined advising protocol. Each of SUSTech's six residential colleges consists of faculty members from different departments and organizes a range of activities to facilitate the transition of new advisors into their roles and to foster communities of practice to promote effective student advising. A "College Advisor of the Year" Award is presented each year to recognize outstanding advising performance.
At SUSTech, there is an additional layer of mentoring provided by academic advisors who offer guidance and supervision specifically related to degree programs. In some cases, college advisors also serve as academic advisors for students in their respective departments. Hence, the term "dual" applies to both students and advisors: students have dual advisors, while faculty members undertake the dual responsibility of general college life advising and discipline-specific academic advising. Both college advisors and academic advisors act as mentors with a comprehensive understanding of student development. They work closely with residential colleges, academic departments, and administrative offices to provide students with individualized and effective advising.
Integrative Dimension: Enriching the Educational Experience through a Holistic, Immersive Approach
The integrative dimension of SUSTech's general education reflects the institution's commitment to creating a diverse and connected learning environment. This dimension includes the integration of student research, study-abroad programs, and a new engineering education system that pushes the boundaries of general education.
A. Student research
Involving students in research projects is an important component of SUSTech's general education curriculum, even though it is not a required course. Undergraduate research, which includes independent investigations or research that contributes original findings to specific areas, is an integral part of SUSTech's pedagogical approach. SUSTech offers a wide range of opportunities for student involvement in research and encourages engagement through a variety of channels, forms, and methods. Students are encouraged to explore their research interests in depth and engage in dialogue with their advisors. Following required approvals, advisors facilitate early participation in laboratory observations to foster a hands-on, experiential understanding of critical thinking and problem-solving methods.
Many students subsequently embark on independent explorations by formulating their own research questions. These endeavors are supported by their advisors and can be pursued in a variety of ways, including undergraduate research projects offered by their respective departments, College Students Innovation and Entrepreneurship Projects, and other independent student research initiatives. It is common for SUSTech undergraduates to participate in their advisors' research teams, allowing them to engage in research under the guidance and supervision of experienced researchers. Some students even discover their interests and potential career paths through these research experiences, with initial results often being incorporated into their senior theses.
Following the declaration of their majors, conducting research projects under the guidance of advisors becomes a mandatory part of the degree program to foster a scholarly inquiry process that involves investigating, evaluating, creating, and disseminating knowledge or work aligned with the practices of the respective disciplines. Collaboration among students from different disciplines in undergraduate research is actively encouraged at SUSTech. The College Students Innovation and Entrepreneurship Projects program plays an important role in selecting, funding, and supporting approximately 120 undergraduate research projects each year, involving approximately 400 students. These projects emphasize originality, innovation, and, in some cases, entrepreneurship, leading to the publication of research results in prestigious academic journals or the transformation of results into entrepreneurial products.
From an educational standpoint, engaging in research offers unique opportunities for students to learn and apply scientific principles. Through active participation in research, students develop scholarly thinking, hone their exploration and communication skills, and gain experience in project management, problem solving, research budgeting, proposal writing, and the complete research process. Additionally, research experiences have the potential to reshape students' perceptions of science and significantly influence their future careers. Ultimately, undergraduate research at SUSTech serves not only to deepen students' academic understanding in their chosen disciplines but also to enable them to acquire a comprehensive skill set and broaden their horizons, preparing them for a successful future in their respective fields.
B. Study-abroad programs
Study-abroad programs are an important component of SUSTech's general education. One of the key objectives of undergraduate education at SUSTech is to equip students with the essential qualities they need to become global-minded and pioneering scientists and engineers in the future. Studying abroad serves as a crucial pillar and mechanism to foster students' global outlook. The study-abroad programs offered at SUSTech encompass a wide range of options, including summer/winter programs, research camps, semester/year-long exchange programs, and dual degree programs. Since their inception in 2015, more than 2000 students have participated in and benefited from these programs.
These study-abroad programs vary widely in their objectives, content focus, duration, formats, partner institutions, and disciplinary areas. At the university level, comprehensive support and encouragement are provided to undergraduate students throughout their study-abroad journey. This support includes assistance with scholarship applications, selection of partner universities, project design, development and confirmation of study plans, credit transfer processes, and opportunities for sharing experiences upon return. The benefits of participating in study-abroad programs go beyond enhancing global awareness and academic learning. They also encompass the development of leadership skills, personal growth, and the acquisition of greater cultural competence.
Analysis of students' self-reports following their completion of study-abroad programs reveals significant progress in several areas. These developments include, but are not limited to, improved academic skills, language acquisition, a deeper understanding of scientific concepts, enhanced academic planning skills, self-discovery, social identity formation, critical thinking, and emotional growth. These findings underscore the transformative impact of study-abroad experiences on SUSTech undergraduate students.
C. New engineering education system
The integration of a new engineering education system at SUSTech plays a crucial role in expanding the scope of general education beyond traditional boundaries. As a STEM-focused university, SUSTech attracts a significant number of students to its engineering programs. The adoption of the new engineering education mode serves as a driving force for the advancement and innovation of undergraduate general education at the institution.
Under this educational approach, students are given more autonomy to shape their individual knowledge and skill structure. They have access to abundant resources and interdisciplinary learning opportunities that allow them to define their own learning content and pace. The new engineering education model goes beyond traditional approaches by placing a strong emphasis on addressing complex societal needs. It recognizes the importance of understanding the needs of people and society, which requires a comprehensive and integrative approach that goes beyond discipline-specific education.
The general education modules, particularly in the humanities, social sciences, arts, ethics, integrative residential college education, and student research project schemes, collectively contribute to preparing engineering students to benefit fully from the new engineering education at SUSTech. These modules promote a holistic and coherent learning experience that enables students to engage with diverse perspectives and develop the skills needed to meet the challenges of the 21st century.
By integrating general education with the new engineering education system, SUSTech creates a student-centered and future-oriented approach that effectively addresses societal, environmental, and technological challenges. This integration creates a novel paradigm for college learning that promotes curricular coherence, interdisciplinary connections, and a focus on imagination, creativity, and innovation.
Through over a decade of rigorous experimentation and development, SUSTech has built an innovative and comprehensive general education system that incorporates multiple components. This system seamlessly integrates a well-rounded curriculum, individualized curricula, and an integrated residential college education. Its primary goal is to foster students' exploration of their inherent capabilities, cultivate their ability to think beyond their specialized areas, and equip them with essential skills for lifelong learning and critical thinking. Noteworthy characteristics of SUSTech's general education include its institution-wide approach, seamless integration with students' major declaration and learning processes, student-centered pedagogy, a versatile residential college co-curriculum system, and an extensive support system.
Discussion
In China, the last decade has witnessed a significant rise in higher education programs focusing on general/liberal education, which adopt a holistic educational philosophy and aim to equip lifelong learners with integrated knowledge and a sense of social responsibility. This shift represents a departure from the traditional utilitarian Chinese curricula that prioritized specialized professional training [39]. It also displays some of the reform efforts by education leaders to overhaul China's higher education institutions. The general education curriculum plays a pivotal role in instilling educational values and aspirations, incorporating perspectives from various stakeholders, and encompassing social, cultural, economic, and governmental factors [27].
In response to these evolving global trends and the aspiration of becoming a world-class university, the case university of this study has embarked on a comprehensive and innovative journey to establish a robust general education system. Over the course of a decade, SUSTech has undertaken rigorous experimentation and development to create a holistic educational experience for its undergraduate students. This system integrates various components to provide students with a well-rounded education that extends beyond their specialized areas of study. The primary focus of SUSTech's general education approach is to foster students' exploration of their inherent capabilities and nurture their critical thinking skills. This is accomplished through the implementation of a well-rounded curriculum, personalized learning, and an integrated co-curricular education.
SUSTech's general education system stands out due to its institution-wide approach, which ensures that the principles and goals of general education permeate every facet of the university. The system seamlessly aligns with students' major declaration and learning processes, creating a cohesive educational pathway that connects different disciplines and areas of study. By adopting a student-centered pedagogy, SUSTech empowers students to actively shape their learning experiences and align their education with their individual interests and aspirations.
Complementing the academic aspects of general education, SUSTech enhances the overall educational experience through its residential college co-curricular system. This system provides students with a versatile platform for experiential learning, leadership development, and cultural immersion. Through a wide range of co-curricular activities, students broaden their perspectives, expand their networks, and develop essential life skills that complement their academic endeavors.
Students are supported throughout their general education journey by a comprehensive support system provided by SUSTech. This includes a faculty-to-student ratio of 1:10 to 1:11, faculty time protected by on-campus housing, and mentoring, guidance, and resources from advisors who help students navigate their academic and personal development. This ensures that students receive the support they need to maximize their general education experience and reach their full potential.
In examining the implementation of general education at SUSTech, the four frames of organizational thought proposed by Bolman and Deal (1991), namely the structural, human resources, political, and symbolic frames, provide a valuable lens [33].
From a structural frame perspective, SUSTech's general education system takes an institution-wide approach that ensures that the principles and goals of general education permeate every facet of the university.This structural design allows for a cohesive and interconnected educational journey that is integrated with students' major declaration and learning processes.By embracing a student-centered pedagogy, SUSTech empowers students to actively shape their learning experiences and align their education with their individual interests and aspirations.
From a human resources frame perspective, SUSTech's general education system emphasizes personalized study plans that enable students to make choices and explore interdisciplinary perspectives.This approach recognizes the interdependence between individuals and organizations and provides students with abundant resources and opportunities to engage in interdisciplinary learning and to determine their own learning content and pace.The system promotes the development of critical thinking, analytical skills, and informed value judgments, and encourages personal and professional growth supported by formal organizational structures.
From a political frame perspective, SUSTech's general education system has been consistently maintained by university leadership and recognizes the diverse perspectives and needs of stakeholders.The system encourages collaboration among different academic departments, facilitates interdisciplinary learning, and promotes a holistic educational experience.The residential college co-curricular system, serving as a symbol of the political frame, provides students with a platform to engage in experiential learning, leadership development, and cultural immersion.This system promotes social and intellectual growth by integrating domestic and international students into a unified educational environment.By embracing this inclusive approach, SUSTech enhances students' understanding of different perspectives and cultivates a collaborative spirit, laying the foundation for a well-rounded education.
From a symbolic frame perspective, SUSTech's general education system acknowledges the social and cultural dimensions of education. The system incorporates various mechanisms identified as successful practices in the world's top colleges and universities, including residential colleges, dual-advisor mentorship, freedom in course selection and major declaration, the integration of general and major education, and student research support. These elements play a significant role in shaping students' perceptions and contributing to their overall educational experience. By incorporating rituals and shared experiences, SUSTech creates an inclusive and supportive environment that goes beyond mere rules and policies. This fosters a strong sense of belonging and identity among students and enhances their engagement and personal growth.

While SUSTech's general education system represents an innovative and comprehensive approach, there are areas that require attention and improvement. Striking a balance between general education and major-related studies is crucial, as faculty concerns about the heavy load of general education courses and their potential impact on specialized learning should be addressed. Clear communication of learning outcomes is essential to enhance student engagement and motivation. Additionally, adopting a comprehensive and systematic assessment framework that encompasses the entire general education system will provide valuable insights into its effectiveness and facilitate continuous improvement.
SUSTech's implementation of general education serves as an innovative and pioneering experiment in the Chinese higher education landscape. By bridging the gap between general and specialized education, SUSTech prepares its students to become self-directed thinkers capable of making informed decisions based on broad knowledge and reasoned ideas. The university's commitment to general education aligns with the national demand for innovative talents and its dedication to educating a new generation of leaders for scientific and technological advancements.
Conclusions
SUSTech's journey in developing and implementing a robust general education system underscores the importance of taking an institution-wide approach and adopting an innovation-centered perspective. By ensuring the integration of general education principles and goals across all aspects of the university, SUSTech creates a cohesive and interconnected educational pathway for students. The innovation-centered approach at SUSTech imparts a liberal sense to general education by empowering students to become self-directed thinkers with inquiring minds and the intellectual tools to think independently. By focusing on awakening students' self-awareness, interests, passions, and visions for the future, SUSTech cultivates students who are able to make personal decisions based on broad knowledge, well-reasoned ideas, and values.
This case study of SUSTech also demonstrates the importance of educational ideas of institutional leadership, organizational support, systematic design, concerted effort, and financial support in creating a successful general education system.While the specific pathway of SUSTech may not be easily duplicated in other universities, it can serve as a reference point, an inspiration, and a catalyst for fundamental and systemic reform and innovation in the broader higher education landscape.By sharing the implementation of general education at SUSTech, this case study makes an original contribution to the practice of general education, both among Chinese universities and globally.It serves as a stimulus for open discussions among researchers and practitioners in higher education and promotes the exploration of innovative approaches to general education that meet the evolving needs of students and society.
In conclusion, this case study of SUSTech's general education system demonstrates the importance of an institution-wide approach, an innovation-centered perspective, and the integration of broad-based and individualized learning. Lessons learned from the SUSTech experience contribute to the broader discourse on general education and can inspire universities to transform their educational practices and prepare students to become innovative, self-directed thinkers capable of meeting the challenges of the future.
| 14,688.8 | 2023-08-19T00:00:00.000 | ["Education", "Engineering"] |
Medium Transparent MAC access schemes for seamless packetized fronthaul in mm-wave 5G picocellular networks
Telecom operators are racing towards upgrading their facilities and broadband services in order to meet the highly challenging 5G operational framework in dense urban landscapes. The oversubscribed sub-6 GHz wireless band is lacking the necessary bandwidth to support the envisioned 5G data rates, suggesting the transition to mm-wave bands as the only viable scenario. In conjunction with the cell densification that is required to achieve the desired frequency reuse factor, it becomes obvious that the current CPRI-based fronthaul cannot cope with massive multi-Gbps traffic streams and a paradigm shift in resource allocation and network intelligence is necessary. To this end we propose the Medium Transparent MAC protocols as the solution towards forming and managing a converged mm-wave FiWi fronthaul infrastructure. Our approach allows for directly negotiating wavelength, frequency and time resources between the centralized unit and the wireless terminals, while offering fast on-demand link formation following closely the demand fluctuation at the picocell level. In this paper we investigate the functional and physical consolidation as well as the respective performance of MT-MAC-enabled fronthaul and report on its application and suitability for mm-wave 5G access networks.
INTRODUCTION
Next generation wireless access networks are gradually adopting two reform roadmaps. The first is the deployment of small cells, which aims to enhance spectral efficiency [1]-[2], whereas the other is the adoption of mm-wave bands as the primary radio solution, since they provide enormous bandwidth [3]-[4]. However, both inevitably lead to the installation of large amounts of active equipment (Base Stations (BSs)/Access Points (APs)), which reduces the network's efficiency. RoF has been proposed as an ideal solution to this problem, since it offers several advantages such as low-cost, functionally simple and energy-efficient Remote Antenna Units (RAUs), transparency regarding modulation, and central control, thus providing optimal resource management that spans the entire network [5]. Even though Layer-1 functionality has been researched extensively, Layer-2 MAC RoF protocols remain an unresolved research topic. Until recently, resource management for converged Fiber-Wireless schemes was based on two approaches [6]: i) the use of different and distinct wireless and wired protocols that interface at the router (Radio-and-Fiber, R&F [7]), or ii) the implementation of pre-existing pure wireless MAC protocols (such as 802.11) directly on top of RoF architectures [8]. R&F architectures, however, split the control algorithms in two, create two separate and mutually hidden network portions, and ultimately go against the intertwined RoF structure, disrupting the centralized control and demanding a series of active APs. Regarding the second solution, wireless MAC protocols do not consider and are unaware of the underlying optical components, and therefore depend on the existence of a persistent active optical connection that simply transfers bits of information whenever and however they arrive from the wireless clients to the RAU and forwards them to the CO. Considering the extremely high propagation losses that dominate the mm-wave bands, it becomes obvious that a large array of interconnected antennas is necessary to provide service to an area even the size of an apartment. Under these circumstances, it is clear that neither of the above solutions is applicable. This paper provides a summary of the notion of transparency and functional split in the converged RoF Layer 2 and explains how the MT-MAC protocols can form extended-reach 60 GHz Wireless Local Area Networks (WLANs) that can interconnect wireless terminals even under non-Line-of-Sight (LOS) circumstances.
MEDIUM-TRANSPARENT MAC PROTOCOLS
This section provides a short summary of the main features of the MT-MAC protocol. The main characteristic of the MT-MAC protocols is that the wireless nodes and the optical CO communicate over a continuous stream of information that utilizes both optical and wireless media. In the MT-MAC architecture, the CO is connected through an optical fiber to a series of RAU modules that are in turn connected through mm-wave radio to the wireless terminals located within the range of the former, as shown in Fig. 1a. All optical wavelengths used for uplink and downlink transmission are generated by the CO and are injected into the optical fiber in pairs: one for downlink (CO to RAU) and the other for uplink (RAU to CO). A special wavelength pair is set aside for the control signalling link (hence termed the Control Channel). Amongst other operations, the control channel's main task is to carry the necessary information to the RAUs regarding which data wavelength the latter should tune to in order to carry out the data transmission.
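To make this division of roles concrete, the short Python sketch below models the kind of control-channel instruction the CO might issue after the FCP. It is purely illustrative: all class and field names, and the wavelength values, are hypothetical rather than part of the MT-MAC specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class WavelengthPair:
    """One CO-generated pair: downlink (CO -> RAU) and uplink (RAU -> CO)."""
    downlink_nm: float
    uplink_nm: float

@dataclass
class TuneCommand:
    """Control-channel instruction telling a RAU which data pair to tune to."""
    rau_id: int
    data_pair: Optional[WavelengthPair]   # None means: no grant in this superframe

# A dedicated pair is reserved as the Control Channel; the rest carry data.
control_channel = WavelengthPair(downlink_nm=1550.12, uplink_nm=1550.92)
data_pairs = [WavelengthPair(1551.72 + i * 0.8, 1552.52 + i * 0.8) for i in range(3)]

# After the FCP, the CO grants the available data pairs to RAUs with pending traffic.
pending_raus = [2, 5, 7, 9]   # hypothetical FCP outcome
grants = [TuneCommand(rau, data_pairs[i]) if i < len(data_pairs) else TuneCommand(rau, None)
          for i, rau in enumerate(pending_raus)]
for g in grants:
    print(g)
```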
Bandwidth between the wireless clients and the CO is negotiated directly between the involved entities without any Layer-2 involvement on behalf of the RAUs. To this end, the MT-MAC merges the optical and wireless networks into an amalgamated network and manages to interconnect wireless terminals that are located in different RAUs but served by the same CO, thus overcoming the LOS and even the range restrictions that are always present in traditional wireless networks. To achieve this kind of operation, the MT-MAC employs two contention periods that run over both media in parallel: the First Contention Process (FCP) is executed in both the optical and wireless networks and its purpose is to inform the CO on whether there are wireless terminals with outstanding capacity claims in any of the RAUs and, if so, to which RAUs these terminals are connected. The Second Contention Process (SCP) is employed in order to allocate optical and wireless bandwidth slots to the wireless terminals. Following the FCP, the CO instructs every RAU with pending traffic to tune its photodiodes to specific wavelengths. For the case where more RAUs request traffic than there are available wavelengths, the CO employs the Round Robin algorithm. All data is broadcast in chunks entitled Superframes (SFs). Each SF is comprised of Resource Requesting Frames (RRFs), whose main purpose is to transport the SCP information, and Data Frames (DFs) that carry the actual data payload (Fig. 1b). In the pure MT-MAC described here, the number of DFs is always the same (fixed service). The RRF packet's purpose is to scan and determine the number of wireless terminals that are requesting bandwidth. This ensures that only active nodes will take part in the subsequent data exchange, whereas other nodes will not be allocated resources. After all nodes have been identified, the SCP is considered finalized and the CO creates the polling sequence and subsequently initiates the data exchange. All data are transmitted within DFs that carry the payload according to the instructions laid out by the CO. The contention resolution process that the SCP is based on employs a random number choice scheme; if and when all the nodes choose a unique number, then all terminals will be successfully identified and the SCP will conclude. As depicted in Fig. 1b, the RRFs are divided into slots, with each slot being long enough to support the transmission of a series of POLL, ID and ACK packets. At the beginning of the RRF, each wireless terminal randomly chooses a number between 1 and a predefined maximum. This number signifies the number of POLL packets that the wireless terminal must receive prior to transmitting its ID packet. In the case that the CO receives and decodes the ID packet successfully it will reply with an ACK, notifying the wireless terminal that it has been correctly identified. However, in case two or more terminals choose the same number, they will transmit at the same time and the transmission will be unsuccessful. In turn the CO will be incapable of correctly decoding the ID packets and will realize that there has been a collision. The CO will keep on transmitting RRFs until no collisions are detected. In this case the CO will know that every node has been correctly identified and will therefore proceed with the transmission of the data packets based on the newly formed polling sequence.
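The SCP's collision-resolution loop is easy to simulate. The Python sketch below repeats RRFs in which every still-unidentified terminal draws a random slot; a terminal that draws a unique slot is ACKed, and the loop ends when no terminal remains unidentified. The slot count and terminal population are arbitrary assumptions, not values taken from the protocol.

```python
import random

def run_scp(num_terminals: int, slots_per_rrf: int, seed: int = 0) -> int:
    """Simulate the Second Contention Process; return the number of RRFs needed."""
    rng = random.Random(seed)
    unidentified = set(range(num_terminals))
    rrf_count = 0
    while unidentified:
        rrf_count += 1
        # Each remaining terminal picks how many POLL packets to wait for (its slot).
        choices = {t: rng.randint(1, slots_per_rrf) for t in unidentified}
        for slot in range(1, slots_per_rrf + 1):
            responders = [t for t, c in choices.items() if c == slot]
            if len(responders) == 1:          # unique ID packet -> CO replies with ACK
                unidentified.discard(responders[0])
            # len(responders) > 1 -> collision: CO cannot decode, terminals retry next RRF
    return rrf_count

if __name__ == "__main__":
    print("RRFs needed:", run_scp(num_terminals=8, slots_per_rrf=10))
```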
CLIENT-WEIGHTED MT-MAC
The Client-Weighted MT-MAC [10] (CW-MT-MAC) is an alternate version of the original MT-MAC protocol that was presented in the previous section. Contrary to the MT-MAC, the CW-MT-MAC assigns wavelengths not following the fixed service but a variation of the gated service. Specifically, the CW-MT-MAC assigns the wavelengths for a time that is proportionate to the number of clients that have been identified by the SCP. The CW-MT-MAC's functionality is based on maintaining a record matrix with q rows, which correspond to the RAUs in the whole network, and two columns. For each RAU, the first column stores the number of clients that have been identified in that RAU, whereas the second column stores its Utilization Counter (UC), a counter that reflects the number of times that the RAU has been granted service opportunities. The algorithm grants the first available wavelength time slot to the RAU with the lowest UC. For the RAUs that have been granted access in the latest SF, the respective UC value is incremented by the ratio of the number of wireless terminals that took part in the latest SF to the total number of wireless terminals identified cumulatively throughout the whole network operation time. In case two or more RAUs have the same UC value, the RAU with the higher client count is given priority in the selection process. If the number of remaining clients is 0, then the UC is increased by 1 (meaning that all users were served) and the RAU's priority therefore drops significantly. In case a RAU has no terminals, it is removed from the matrix. In the opposite scenario, where a RAU requests bandwidth and is not in the matrix, it is inserted and given the lowest UC. The purpose of this is to give newly inserted RAUs the maximum priority and enable the CO to quickly discover the number of users that reside within this RAU.
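A compact way to read this bookkeeping is the Python sketch below. It follows our reading of the description above (the elided symbols are interpreted as the per-RAU client count, the Utilization Counter and the remaining backlog), so it is an illustrative approximation of [10] rather than a verbatim reproduction of it.

```python
from dataclasses import dataclass

@dataclass
class RauRecord:
    clients: int        # terminals identified in this RAU by the SCP
    uc: float = 0.0     # Utilization Counter

def grant_next_slot(table: dict[int, RauRecord]) -> int:
    """Grant the next wavelength slot to the RAU with the lowest UC (ties: more clients wins)."""
    return min(table, key=lambda r: (table[r].uc, -table[r].clients))

def update_after_superframe(table: dict[int, RauRecord], rau: int,
                            served: int, backlog_left: int) -> None:
    """Increment the served RAU's UC by its share of the network-wide client population."""
    total = sum(rec.clients for rec in table.values())
    table[rau].uc += served / total if total else 0.0
    if backlog_left == 0:
        table[rau].uc += 1.0   # every user in this RAU was served: its priority drops sharply

def insert_rau(table: dict[int, RauRecord], rau: int, clients: int) -> None:
    """A newly appearing RAU enters the table with the lowest UC, i.e. maximum priority."""
    lowest = min((rec.uc for rec in table.values()), default=0.0)
    table[rau] = RauRecord(clients=clients, uc=lowest)

# Hypothetical state: RAU id -> record
table = {1: RauRecord(2), 2: RauRecord(9), 3: RauRecord(4)}
for _ in range(4):
    rau = grant_next_slot(table)
    update_after_superframe(table, rau, served=min(3, table[rau].clients), backlog_left=1)
    print(rau, {k: round(v.uc, 2) for k, v in table.items()})
```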
PERFORMANCE EVALUATION
The following section presents the performance evaluation of the MT-MAC and CW-MT-MAC protocols while operating over a RoF network like the one displayed in Fig. 1. The network comprises 10 RAUs with a 50 m fiber interval between every two RAUs. The RAU that is physically located first in the bus is 500 m from the CO (total network length 950 m) and each RAU has a range of 3 m. The performance results were derived using an event-based Java simulator. For the simulation purposes, 50 terminals have been distributed to the RAUs using an alternative to the normal distribution that provides integer "bell-shaped" numbers with mean value μ = 5 users/RAU. Every RAU is considered to have at least one wireless terminal within its range, thus wavelength management spans the whole network of 10 RAUs. The wavelengths are fewer than the number of existing RAUs in the network and the wavelength-to-RAU ratio is denoted as w/R. The employed packet generator is based on a bursty traffic model that exhibits long-tail properties (i.e. high deviation from the distribution's mean value). Specifically, we have considered a mean burst size of 1.5 kB, with standard deviation equal to 1.42 kB. The transmission opportunity window that is granted to each user was chosen to be 30 frames long (~4 kB), to ensure that the majority of the packet birth bursts would fit in a single SF. Figure 2 presents the protocols' performance versus the optical availability ratio (w/R) (Fig. 2a and 2b), ranging from 0.1 up to 0.9, as well as versus traffic load (Fig. 2c and 2d), ranging from 10% up to 100% of the maximum theoretical network capacity. Both protocols were tested for an extreme user distribution standard deviation σ = 4.5 (mean μ = 5). As is evident in Fig. 2a and 2b, the results are logically classified into subgroups. The subgroup formation is based on the normalized load that the terminals are producing. In the same manner, the results depicted in Fig. 2c and 2d are grouped on the basis of the w/R ratio. As can be noted in Fig. 2a, throughput increases linearly and reaches its saturation point at the moment the w/R ratio surpasses the produced load, denoting that all produced packets are served effectively due to the abundance of optical resources, and as such throughput does not increase any further. The curves denoting the 80% load scenario follow the same curvature, with the difference that the linear behaviour continues over a greater range of values. When comparing the MT-MAC and CW-MT-MAC protocols we notice a very narrow superiority of the latter. Specifically, the client-weighted version of the protocol exhibits a small throughput increment in the case of 30% load. This gain increases even further and reaches the value of 4% in the respective 80% load case. The performance gain is derived from the enhanced fairness of the CW-MT-MAC algorithm, which assigns the wavelengths not in a fixed manner but rather proportionally to the capacity requests. Compared to the fixed service regime, wavelength capacity is moved away from sparsely populated RAUs and instead given to heavily crowded RAUs. This behaviour is also shown in Fig. 2b, which displays the average packet delay. Delay values begin at very high values in the area where the load surpasses the w/R ratio. This occurs naturally since in this part of the graph the produced load is greater than the maximum theoretical capacity offered by the available w/R ratio.
As the latter increases, though, delay values drop significantly since the increased capacity can now serve the offered load. By assessing the results, it becomes evident that in cases where the population of wireless terminals is unevenly distributed amongst the RAUs, the CW-MT-MAC performs better. Figures 2c and 2d depict the MT-MACs' performance for varying load conditions, namely from 10% up to 100% of the maximum theoretical capacity. The displayed results correspond to two different w/R ratios, namely 0.3 and 0.8. In Fig. 2c we can view the total system throughput versus load. The results indicate that there is a linear relationship between throughput and load for both MT-MACs in the graph area where the generated traffic is less than the w/R ratio. However, as load continues to increase, throughput stops increasing and instead stabilizes around a saturation value. This is attributed to the fact that when load exceeds the w/R value, traffic is in excess of the wavelength capacity and therefore gets dropped. Correspondingly, the delay results depicted in Fig. 2d start off at very low values while load is below the w/R ratio and increase when the former approaches or surpasses the latter. When the generated traffic moves beyond the w/R point, the delay values continue to increase until they also reach their saturation point, since the packets that get dropped are not counted in the delay metrics. We again notice a small performance gain in favour of the CW-MT-MAC stemming from the more balanced and fair distribution of wavelengths.
To investigate the CW-MT-MAC's performance more deeply at the more granular user level, we present Figs. 2e and 2f. The derived results show the protocols' performance in terms of per-user throughput and packet delay versus the user distribution's standard deviation σ. The displayed results correspond to w/R = 0.5 and 50% load. Figure 2e illustrates the per-user throughput for both protocols. The symbols on the curves correspond to the mean values, whereas the protruding lines correspond to the standard deviation of these mean values. The results show that the CW-MT-MAC achieves higher throughput values compared to the MT-MAC while also achieving a very low standard deviation. Whereas the MT-MAC's standard deviation is close to 0 only for the uniform user distribution, the CW-MT-MAC delivers zero deviation for the first four user distributions. In this way, the CW-MT-MAC clearly outperforms the MT-MAC, offering only minimal standard deviation and thus a fairer and more consistent performance to the participating users. The above conclusions also hold in Fig. 2f, where we can see that the CW-MT-MAC again offers not only lower delays but also diminishes the deviation of the mean packet delays experienced by the users.
CONCLUSIONS
This paper summarizes the notion of medium transparency and shows its performance under a Radio-over-Fiber architecture employing mm-wave radio. To date, two versions of the MT-MAC protocols have been presented: the first MT-MAC assigns wavelengths based on round-robin functionality and thus for equal time intervals, while the second version (Client-Weighted MT-MAC) assigns wavelengths for a time proportional to the number of users of each RAU. This variation targets use cases where capacity fairness amongst the users of every RAU is a necessity. Results show that both protocols can successfully form end-to-end converged hybrid RoF networks that provide extended-range mm-wave LAN connectivity to wireless terminals even when the latter are out of LOS. Furthermore, the results indicate that the Client-Weighted version of the MT-MAC diminishes the standard deviation in throughput and packet delay between the users. To this end, the MT-MACs demonstrate their capacity to form a complete access framework supporting the upcoming 5G networks.
| 4,009.6 | 2017-07-02T00:00:00.000 | ["Computer Science", "Engineering"] |
Simulation and discussion of typical radio frequency filters
The demand for filters is increasing as the frequency bands for 5G expand. RF filters have become the segment with the largest market share among RF front-end chips. More crowded frequency bands, as well as thinner and lighter communication equipment, necessitate higher filter performance. In this paper, several technologies for the implementation of Radio Frequency (RF) filters are presented. Essentially three types are considered: (i) Lumped Element (LC) filters and Integrated Passive Device (IPD) technology; (ii) acoustic wave filters; and (iii) a composite filter design that combines acoustic and IPD technologies. The benefits and drawbacks of these diverse configurations in terms of their technical performance and current applications are examined, and several models are presented and simulated using ADS software.
Introduction
New wireless applications such as LTE, LTE-Advanced and the fifth generation (5G) utilize several radio frequency bands to instantly assign the bandwidth required to enhance data rates. In particular, as the successor to the 4th generation, 5G networks are anticipated to have roughly a hundred times lower delay and nearly a hundred times greater communication speeds than existing 4th generation mobile communication technology, in order to achieve their data efficiency and speed objectives. The rigorous low-loss requirements in millimetre-wave frequency bands, together with precise impedance configurations at reduced thickness and size, pose significant design challenges for radio frequency front-end modules [1]. As filters are among the most critical constituents of 5G radio frequency front-end modules (FEMs), the demand for higher-frequency filters such as BAW, IPD and LTCC devices grows.
RF filters are used to reject or allow transmissions in certain sections of the radio spectrum. Although filter construction differs according to the application, with size, cost, and performance being the most important factors, filters used for RF transmissions are typically bandpass filters designed with coupled resonator techniques. Discrete inductor-capacitor (LC) filters or lumped element filters are perhaps the most common of all electromagnetic filter types, with low-cost structures of moderate performance and size. The LC elements are sometimes implemented as printed structures on substrates, which is referred to as an integrated passive device (IPD) [2]. Acoustic filters, namely bulk acoustic wave (BAW) and surface acoustic wave (SAW), are another popular filter configuration for mobile devices. Some of the parameters used to determine whether filters fit the different requirements include the steepness of the transition to the final roll-off, the in-band ripple, and the out-of-band rejection. SAW and BAW filters possess a high Q-value and low insertion losses, as well as outstanding band selectivity. Therefore, by combining LC and acoustic technologies, it is possible to achieve high frequency, wide bandwidth, low insertion loss, and steep roll-off all at the same time.
Lumped Element (LC) Filter
For 5G networks, there are a total of 29 frequency bands divided into two primary spectrum ranges, with a total of 26 bands below 6 GHz (collectively known as Sub-6GHz) and 3 mm-Wave bands. Sub-6GHz is the primary range now in use, and it contains seven bands, which are n1, n3, n28, n41, n77, n78, and n79 [3]. To meet the requirements of wireless communications systems, passive modules in the 2.4 GHz band are commonly used in the RF front-end of electrical devices. LC filters are applicable from less than 100 kHz to a little over 3 GHz. They have low insertion loss, while the transition from passband to rejection is abrupt. Such benefits have kept them popular for years.
Schematic Design of RF Bandpass LC Filter
The modern network synthesis method for constructing microwave filters is based on a low-pass prototype filter with lumped components. As shown in Figure 1, this prototype is used to derive the transmission properties of various bandpass, lowpass, highpass, and band-reject filters [4]. To demonstrate the design (Figure 2), Agilent Advanced Design System (ADS) 2020 software is used to create a third-order filter with a centre frequency of 2.45 GHz and a bandwidth of 1 GHz. After calculation and simulation, the results below satisfy the criteria. S-parameter simulation can represent reflection and transmission properties (amplitude and phase) in the frequency domain: reflection coefficients S11 or S22 represent return loss, admittance, and impedance, while transmission parameters S12 or S21 show insertion loss, phase, and group delay. According to Figure 3 and Table 1, the simulation results reveal that the bandpass filter exhibits return losses of less than 15 dB across the required band of 2.35-2.55 GHz and insertion losses of about 0.1 dB across the pass-band, which illustrates that the proposed filter fulfils the expectation.
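For a rough cross-check of the element values such a design implies, the sketch below applies the standard low-pass-prototype-to-bandpass transformation for a third-order filter centred at 2.45 GHz with 1 GHz bandwidth in a 50 Ω system. It assumes a Butterworth prototype (g = 1, 2, 1) and the usual series-first ladder; the actual ADS design may use a different prototype, so the numbers are indicative only.

```python
import math

f0, bw, z0 = 2.45e9, 1.0e9, 50.0          # centre frequency, bandwidth, system impedance
w0, delta = 2 * math.pi * f0, bw / f0      # angular centre frequency, fractional bandwidth
g = [1.0, 2.0, 1.0]                        # 3rd-order Butterworth low-pass prototype values

elements = []
for k, gk in enumerate(g, start=1):
    if k % 2 == 1:   # odd elements: series branch -> series L-C resonator
        L = gk * z0 / (w0 * delta)
        C = delta / (gk * z0 * w0)
        elements.append(("series", L, C))
    else:            # even elements: shunt branch -> parallel L-C resonator
        C = gk / (w0 * delta * z0)
        L = delta * z0 / (w0 * gk)
        elements.append(("shunt", L, C))

for kind, L, C in elements:
    print(f"{kind:6s}  L = {L*1e9:6.2f} nH   C = {C*1e12:6.3f} pF")
```

Each resulting L-C pair resonates at the 2.45 GHz centre frequency, which is a quick sanity check on the transformation.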
Integrated Passive Device (IPD) Technology in RF Filter
As mobile phones become more powerful, more passive components are needed to support their operation, which results in increased product volume. Therefore, the development of multifunctional and compact microwave circuits with reduced power loss is becoming an unavoidable trend. To satisfy rising demand, reduce size and cost, and boost functionality, integrated passive device (IPD) technology has emerged as a feasible option for RF front-end design.
IPDs (integrated passive devices), sometimes known as EPCs (embedded passive components) or IPCs (integrated passive components), are electronic components that incorporate resistors (R), capacitors (C), inductors (L), baluns, impedance matching elements, microstrip lines, or any combination of these components in the same package or on the same substrate [5]. IPD technology, the high-density integration of multiple passive components achieved by etching diverse patterns on silicon, glass, or ceramic substrates using a photolithographic wafer fabrication technique, can replace bulky discrete passive components. It is classified as thick-film or thin-film depending on the passive device fabrication technique [6]. LTCC (Low Temperature Co-fired Ceramics) is a branch of the thick-film technologies, which are a widespread and relatively low-cost fabrication method for passive components. In literature [7], a miniature dual-mode bandpass filter with 7th-order harmonics suppression for 5G N77-band applications is proposed. As shown in Figure 4, it is manufactured in a two-layer LTCC substrate, with a centre frequency of 3.75 GHz and a bandwidth of 0.9 GHz. The total size of the chip is only 7*7*0.3 mm^3. The use of the meandering SIVFL (stepped-impedance-variable feeding lines) and oblong resonator results in over 25 dB suppression up to the 7th-order harmonics as well as a reduction in size. As the technology races ahead, millimetre-scale thick-film technology is being miniaturized into micron-scale thin-film technology, which can reduce the size of passive systems by a factor of 1,000 while also drastically lowering the cost. Nickel-chromium sputtering on an alumina substrate is one method of producing thin-film chips. Additional strategies are based on the use of chromium silicide and tantalum nitride on silicon [8]. For IPD-based filters, one of the factors influencing the electrical performance is the choice of substrate material. The substrate for IPDs can be ceramic (alumina), glass, or a semiconductor such as silicon or GaAs (Gallium Arsenide).
Silicon IPD Technology
In literature [9], a highly selective and compact bandpass filter (BPF) in the 5G communication band derived from high-resistivity silicon (HRS) passive integrated device technology is proposed. For existing silicon integrated circuit technology, the Q-factor of passive components is primarily restricted by metal line resistive attenuation and substrate loss [10,11]. To attain great performance, HRS IPD technology combines a high-resistivity silicon substrate, with a nominal resistivity of 3000 Ω⋅cm or even higher, with high-conductivity metallic copper. As a result, it has the following advantages: low loss, high quality factor, low cost, easy integration, small volume, and compatibility with integrated circuit fabrication processes. The structure is demonstrated in Figure 5. With a size of 2*1.25*0.1 mm^3 and a maximum insertion loss lower than 1.76 dB, both miniaturization and high selectivity have been realized in the filter.
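A first-order way to see why metal resistance and substrate loss cap the attainable Q is to model a spiral inductor as an inductance in series with a loss resistance, giving Q ≈ ωL/R. The values in the sketch below are illustrative placeholders, not parameters taken from [9].

```python
import math

def inductor_q(freq_hz: float, l_henry: float, r_series_ohm: float) -> float:
    """First-order Q of an inductor modelled as L in series with a loss resistance."""
    return 2 * math.pi * freq_hz * l_henry / r_series_ohm

f = 3.5e9                 # roughly the 5G n77/n78 region
L = 2e-9                  # 2 nH spiral inductor (hypothetical)
for label, r in [("high-conductivity Cu on HRS", 1.5), ("lossier metal/substrate", 6.0)]:
    print(f"{label:28s} Q ~ {inductor_q(f, L, r):5.1f}")
```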
GaAs IPD Technology
GaAs can be used as a semi-insulating, high-resistance material for integrated circuit substrates. Gallium arsenide (GaAs) semiconductor devices possess the benefits of high frequency, good low-temperature performance, low noise, and strong radiation resistance. Si-based chips are produced by a physical etching process, whereas GaAs chips usually go through a multi-layer chemical stacking procedure. A significant strength of GaAs IPD technology over silicon-based technology is the greater unloaded Q-factor of its lumped elements. Because of the semi-insulating properties, the inductor's parasitic capacitance to ground is greatly reduced, boosting the quality factor and resonance frequency at high frequencies, while a filter based on silicon IPD technology has a larger insertion loss due to the higher substrate loss. Besides, the thickness of the GaAs substrate is another critical factor in optimising inductor performance and size [12]. One design of a GaAs IPD substrate is shown below (Figure 7).
Glass IPD Technology
Glass substrate is an ideal choice for integrated passive devices due to its material properties. As an insulator, glass is generally chosen in preference to other materials for advanced packaging owing to its low electrical loss and ultra-high resistivity, especially at high frequency [14]. To further utilize the properties of glass for tiny substrates, precision vias or Through Glass Vias (TGVs) are required. TGVs are typically produced in two sequential process stages. First, a slew of micron-sized holes is drilled through the glass. The previously drilled micro holes are then filled with a metal of choice in the second stage. According to [15], a metal-insulator-metal (MIM) structure was used to obtain high-Q capacitance, thereby enabling improved diplexers and broadband filters. Based on the results of [16], Through Glass Vias are estimated to considerably increase three-dimensional integrated circuit (3D-IC) performance since they have lower parasitics than Through Silicon Vias (TSVs).
In literature [17], silicon, glass and LTCC are used as the substrates of the same circuits, and the three types of materials are then compared in terms of electrical performance and cost. According to the findings, silicon- and glass-based IPDs are smaller and less expensive, with electrical performance equivalent to LTCC technology.
Acoustic Wave Filter
Acoustic waves are classified into three frequency bands: audio, infrasonic, and ultrasonic. For RF front-end components, ultrasonic waves can be used to transmit signal information. Acoustic filter technologies are evolving in response to the global transition to 5G networks. Surface acoustic wave (SAW) and bulk acoustic wave (BAW) filters are two common types for mobile phone applications.
Surface acoustic waves (SAW)
Surface acoustic waves (SAW) are elastic waves that are formed and propagate transversely on the surface of a piezoelectric substrate material, and their amplitude drops rapidly as penetration into the substrate material increases. As an illustration, Figure 8 below shows the structure of a basic SAW filter, which is made up of a piezoelectric substrate and two Interdigital Transducers (IDTs). The IDT on the input side converts the electrical signal into an acoustic signal, while the output-side IDT transforms the acoustic wave back into an electrical signal. The pitch of the IDT electrodes determines the SAW filter's operating frequency. The essence is to combine piezoelectric materials such as quartz crystal and piezoelectric ceramics to create a specific filtering mechanism that takes advantage of the piezoelectric effect and the physical properties of surface acoustic wave propagation [18]. SAWs and temperature-compensated SAWs (TC-SAW) have long been popular because the technology is mature and widespread enough to keep component costs low. The present success of SAW filters is also related to their compact size and high stopband rejection. They are widely used in receiver front ends, duplexers, and receive filters in 2G, 3G and 4G networks. The most widely used substrate materials for modern RF SAW filters are lithium niobate (LiNbO3) and lithium tantalate (LiTaO3), but the usage of devices based on these materials is limited due to a decline in performance above 2.7 GHz. In addition to the substrate material, the thickness of the substrate also affects the performance of the filter. In literature [19], the results reveal that a thinner substrate is beneficial to enhancing the driving sensitivity of the IDT; when the electrode width to substrate thickness ratio is equal to 0.33, the piezoelectric driving performance is the best. Furthermore, above 1 GHz, the selectivity of SAW filters declines. SAW also has poor thermal stability: the response and centre frequency of a SAW filter will decrease sharply as the temperature rises. Therefore, TC-SAW is designed to improve the conventional SAW filter by covering its IDT with a temperature compensation layer. As temperature compensation, a thin film of silicon dioxide (SiO2) helps to reduce temperature-dependent filter frequency drift.
Bulk acoustic wave (BAW)
In a BAW filter, the acoustic wave, which is excited by metal patches on the top and bottom sides of the quartz, propagates longitudinally and bounces up and down to generate a standing acoustic wave. To design a bulk acoustic wave (BAW) filter, multiple resonators are coupled together in a certain topology to form a passband, usually in a ladder configuration, which is built up using series and parallel resonators alternately.
Compared with SAW filters, BAW filters perform better at high frequencies. According to the frequency equation of the SAW filter, f = v/λ, where λ refers to the spacing between the IDT electrodes, and since λ cannot be made arbitrarily small, the SAW filter is not suitable for higher frequency bands, while the BAW filter can operate at high frequencies up to 20 GHz. Because the resonant frequency is inversely proportional to the thickness of the film, the size of a BAW filter decreases with increasing frequency. Furthermore, a high quality factor (Q values around 2000), very low loss and very steep filter skirts are typical advantages of BAW filters. However, as BAW filters are more elaborate and more expensive, both BAW and SAW will co-exist in the long term, playing to their respective strengths at high and low frequencies respectively.
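The relation f = v/λ also makes the SAW scaling limit easy to quantify: for a surface-wave velocity of a few kilometres per second, the acoustic wavelength, and hence the IDT electrode width (roughly a quarter of the wavelength for a conventional single-electrode IDT), shrinks toward sub-micron lithography well before 5 GHz. The velocity used below is a representative assumed value, not one quoted in this paper.

```python
v_saw = 4000.0   # m/s, representative SAW velocity for common piezoelectric substrates (assumed)

for f_ghz in (1.0, 2.5, 3.5, 5.0):
    wavelength_um = v_saw / (f_ghz * 1e9) * 1e6
    finger_um = wavelength_um / 4          # approximate electrode width for a single-electrode IDT
    print(f"{f_ghz:4.1f} GHz: lambda = {wavelength_um:5.2f} um, electrode width ~ {finger_um:4.2f} um")
```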
BAW filters include FBAR (film bulk acoustic resonator) and BAW-SMR (solidly-mounted resonator BAW) devices. A BAW-SMR filter forms a Bragg reflector by stacking thin layers of different stiffness and density to keep the wave oscillating inside the piezoelectric film, while an FBAR uses an air gap or membrane to achieve the same purpose. In literature [20], the authors analyse several sources of Q loss, including energy leakage from the perimeter, thermo-elastic damping, and Ohmic losses.
Hybrid Filter with lumped-LC and acoustic technology
The Q-factors of lumped LC filters and transmission-line filter designs are usually insufficient to replace acoustic filters, so they are still employed in combination with SAW/BAW filters in practical applications. In order to make the best use of the strengths of both filter types, a hybrid filter with a high Q-factor and wide bandwidth is proposed in literature [21]. There are three major models for the simulation of BAW filters: the physics-based one-dimensional (1-D) Mason model, the 2-D finite element method (FEM) model, and the equivalent-circuit-based Modified Butterworth-Van Dyke (MBVD) model [22]. To verify the feasibility of the hybrid filter, ADS software is used to simulate a series circuit of acoustic and LC components. The schematic design is shown in Figure 9; the parts inside the red circles are the equivalent circuit of the acoustic section. As shown in Figure 10, the simulation results reveal that the out-of-band rejection is around 40 dB over the entire frequency range from 1.0 to 10.0 GHz. The bandpass filter exhibits insertion losses of about 0.1 dB across the 860 MHz passband, which illustrates that the proposed filter fulfils the expectation.
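Of the three BAW models mentioned, the MBVD equivalent circuit is the simplest to evaluate numerically: a motional Rm-Lm-Cm branch in parallel with the plate capacitance C0 (with its loss R0), behind a series electrode resistance Rs. The sketch below computes that impedance over frequency and locates the series and parallel resonances; the element values are illustrative placeholders, not fitted to any device discussed here.

```python
import numpy as np

def mbvd_impedance(f, rs=0.5, r0=0.3, rm=1.0, lm=80e-9, cm=80e-15, c0=2e-12):
    """Input impedance of the Modified Butterworth-Van Dyke resonator model."""
    w = 2 * np.pi * f
    z_motional = rm + 1j * w * lm + 1 / (1j * w * cm)     # R_m - L_m - C_m branch
    z_static = r0 + 1 / (1j * w * c0)                     # C_0 plate branch with loss R_0
    return rs + (z_motional * z_static) / (z_motional + z_static)

f = np.linspace(1.5e9, 2.5e9, 2001)
z = mbvd_impedance(f)
fs = f[np.argmin(np.abs(z))]      # series (minimum-impedance) resonance
fp = f[np.argmax(np.abs(z))]      # parallel (maximum-impedance) anti-resonance
print(f"series resonance ~ {fs/1e9:.3f} GHz, anti-resonance ~ {fp/1e9:.3f} GHz")
```

Cascading several such resonators alternately as series and shunt elements reproduces the ladder passband described above.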
However, owing to the coexistence of several assorted components, it is not easy to combine them into an individual component unit within the same process. In literature [23], the innovative SiCer compound substrate technology is introduced. As depicted in Figure 11, Through-Silicon Vias (TSVs) are employed in the silicon layer and can be stacked with vias in the LTCC, which enables the creation of a small RF system-in-package (RF-SiP) with RF-MEMS and embedded passive or active components.
Conclusion
For 5G New Radio (NR) wireless communications, there has been a general and unavoidable tendency toward using higher frequencies and greater bandwidths of the electromagnetic spectrum. The size of the filter will continue to decrease. However, the choice of filter is based on the final purpose, making a trade-off between cost and performance requirements. This study presents several techniques for implementing RF filters and compares and analyses their advantages and limitations.
Nonetheless, due to equipment and time constraints, the filter models presented in this paper are ideal circuit layouts; the materials and packaging methods have not been simulated, nor have the designs been physically fabricated, tested and optimised. Future research can be directed along the following lines based on the overview of the preceding work.
Regarding the value of filters used in the RF front end: since each band requires a distinct filter to support it, both the number of filters used and their share of the RF front-end's value increase as the number of frequency bands grows. Filters and other RF devices will see a trend toward miniaturisation, improved device form factors, and composite designs in the future.
Advanced packaging solutions for RF devices will continue to innovate, with tighter component layouts, conformal and compartment shielding, double-sided PCBs, and high-precision, high-speed surface mount technology (SMT).
Figure 2. Schematic design of the RF bandpass filter.
Figure 3. S-parameter graph of the proposed model.
Figure 4. 3D structure of the LTCC-based BPF [7].
Figure 8. Basic structure of a SAW filter [18].
Figure 9. Schematic design of the hybrid filter.
Figure 10. Simulation result of the designed model.
Table 1. Electrical specifications of the LC BPF.
| 4,319 | 2023-11-01T00:00:00.000 | ["Engineering", "Physics"] |
A Defective Interfering Influenza RNA Inhibits Infectious Influenza Virus Replication in Human Respiratory Tract Cells: A Potential New Human Antiviral
Defective interfering (DI) viruses arise during the replication of influenza A virus and contain a non-infective version of the genome that is able to interfere with the production of infectious virus. In this study we hypothesise that a cloned DI influenza A virus RNA may prevent infection of human respiratory epithelial cells by influenza A. The DI RNA (244/PR8) was derived by a natural deletion process from segment 1 of influenza A/PR/8/34 (H1N1); it comprises 395 nucleotides and is packaged in the DI virion in place of a full-length genome segment 1. Given intranasally, 244/PR8 DI virus protects mice and ferrets from clinical influenza caused by a number of different influenza A subtypes and interferes with production of infectious influenza A virus in cells in culture. However, evidence that DI influenza viruses are active in cells of the human respiratory tract is lacking. Here we show that 244/PR8 DI RNA is replicated by an influenza A challenge virus in human lung diploid fibroblasts, bronchial epithelial cells, and primary nasal basal cells, and that the yield of challenge virus is significantly reduced in a dose-dependent manner, indicating that DI influenza virus has potential as a human antiviral.
Introduction
Influenza virus causes annual epidemics and occasional but devastating pandemics that are associated with considerable morbidity and mortality, particularly in the elderly and in young children [1,2]. Over recent years the threat of influenza viruses, such as H5N1, crossing from their natural avian reservoir to humans and evolving to transmit person-to-person has become a major concern. This threat is compounded by a dearth of new strategies in development. Current anti-influenza measures include vaccines and antivirals (such as Tamiflu® and Relenza®), but the former are highly strain-specific and have to be specifically tailored to those viruses which are thought likely to be circulating in the following months, and viruses are gaining resistance to the antivirals. In recent years concerns have been raised about the year-to-year differences in efficacy of influenza vaccines due to antigenic mismatch with circulating strains, and this is compounded by the variation in vaccine efficacy between age groups [3][4][5][6].
Defective interfering (DI) viruses arise during the replication of almost all viruses studied to date and are the result of errors during genome replication [7,8].The DI virus contains a truncated (hence non-infectious or defective) version of the infectious genome that is able to interfere with the production of infectious virus [9].The position and extent of the genome deletion in the DI virus is highly variable, and there can be a central or a terminal deletion depending on the species of virus involved [7,8].However, the only defective genomes that qualify as DI virus are those that retain the signals that permit and control genome replication and packaging.DI viruses have a number of interesting biological properties not present in the infectious parent virus: chief among these is the inability to replicate autonomously (i.e., they are defective) and their dependence on an infectious genome in the same cell to provide the functions that have been deleted and are needed for replication.In addition, DI viruses have the ability to become numerically dominant over infectious virus, and lastly they depress the production and yield of infectious virus (i.e., they are interfering) [9][10][11].
The evolutionary significance of DI virus is not known, though they have been detected in patients infected with a range of different viruses [12][13][14][15].It has been suggested that they play a role in controlling infection and hence in the survival of the host species and ultimately the virus itself, that they result from an inefficient replication process and represent an evolutionarily economic alternative to having more genome to better monitor and correct errors of replication, or that they modulate host immunity in ways that are beneficial to the host [8].
A specific DI influenza A virus, 244/PR8, which is derived from genome segment 1 of influenza A/PR/8/34 (H1N1), has been shown to protect against influenza A virus disease in mice and ferrets following intranasal administration [16][17][18].In addition, 244/PR8 DI virus stimulates interferon type I and protects in vivo against clinical disease caused by non-influenza A viruses such as influenza B virus and a mouse version of respiratory syncytial virus [19,20].These data suggest that 244/PR8 DI virus may be useful clinically in preventing disease caused by influenza A and B viruses and other respiratory viruses.To date there have been no studies of the effect of 244/PR8 DI virus in human cells.We describe here the activity of DI influenza A virus in various cells originating from the human respiratory tract, and demonstrate that 244/PR8 is able to significantly reduce the production of infectious influenza A virus in a dose-dependent manner.
Defective Interfering Virus
The influenza virion genome comprises eight segments of single-stranded negative sense RNA.All influenza DI RNAs have a central deletion and retain the termini found in the full-length virion RNA segment [21,22].We have used reverse genetics to clone a 395-nucleotide DI RNA, called 244/PR8 that is derived from segment 1 of influenza A/PR/8/34 (H1N1) virus [23].244/PR8 DI virus was grown in embryonated chicken's eggs in the presence of infectious "helper" virus to provide the functions missing in the deleted genome segment of the DI virus.The resulting DI virus comprises over 99.9% of the virus present so that it can readily be quantitated using a standard haemagglutinin assay with chicken red blood cells and then converted to micrograms of virus protein (4000 haemagglutination units (HAU) equates to 12 µg).Helper virus infectivity is inactivated by ultraviolet (UV) irradiation at 253.7 nm (0.64 mV/cm 2 ) for 40 s, as described previously [23] with the result that the DI virus preparation is non-infectious [16][17][18][19][20]24]. UV irradiation for 8 min completely abrogates the protective ability of the 244/PR8 DI RNA and this provides a material which acts as a control for any effect of virus protein load [23].
Cells
MRC-5 cells are a permanent fibroblast cell line derived from normal human foetal lung tissue.These cells are diploid, have a limited life span of 42-46 cell doublings, and are approved for the preparation of human virus vaccines.Human bronchial epithelial cells (HBEC; TCS Cellworks Ltd., Buckingham, UK) are primary cells isolated from bronchi.The HBEC were grown on a poly-L-lysine coated plastic substrate in the recommended medium (ZHM-1905, TCS Cellworks Ltd., Buckingham, UK) according to the supplier's instructions, and passaged once before infection.
Primary nasal basal cells were obtained from the respiratory tract of healthy adult human volunteers by brushing the inferior nasal turbinate.None of the subjects were taking medications or had a symptomatic upper respiratory tract infection in the preceding six weeks.All individuals gave their informed consent to be included in the study and all samples were obtained with the individual's permission.The study was conducted in accordance with the Declaration of Helsinki, and with ethical approval from the University of Leicester Committee for Research Ethics.Cells were washed from the brush with 20 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES)-buffered medium 199 (pH 7.4) (Gibco, Life Technologies, Paisley, UK), and kept at 4 • C overnight.For this study cells were obtained from two different individuals.Basal cells were propagated as previously described [25], and grown to >90% confluence in a T80-collagen-coated flask (Nunclon, Fisher Scientific, Loughborough, UK) containing BEGM™ bronchial epithelial cell growth medium (Lonza, Basel, Switzerland).When confluent, cells were detached with trypsin/ethylenediaminetetraacetic acid (EDTA) solution (Sigma-Aldrich, Gillingham, UK), collected by centrifugation, and seeded at 10 4 cells/well into a collagen-coated 96-well plate (Corning, Fisher Scientific, Loughborough, UK).These were grown in BEGM™ medium (Lonza) until >80% confluent (4 × 10 4 cells/well).
Reverse Transcription PCR (RT-PCR) Detection of 244/PR8 DI RNA
Approximately 2 × 10^5 MRC-5 or HBE cells/well were infected with 5 plaque forming units (pfu)/cell of influenza A/WSN/33 (H1N1) or mock infected with 199 medium for 40 min at room temperature prior to inoculation with serial 10-fold dilutions of 244/PR8 DI virus (9 pg to 9 µg as indicated) for the same time. DI virus was removed, maintenance medium (199 medium containing 2% v/v fetal calf serum) added and incubation continued for 24 h at 33 °C. Medium was removed and cells harvested and lysed in Trizol (Sigma-Aldrich, Gillingham, UK). RNA was extracted and analysed by RT-PCR. 244/PR8 RNA was detected by reverse transcription PCR using the 244/join primer set that specifically anneals to the unique junction region formed as a result of deletion of a central part of the segment 1 sequence and hence detects only 244/PR8 DI RNA, generating a 252-nucleotide fragment, as described previously [18]. A 1 kb DNA size ladder (Bioline, London, UK) was used to determine the size of the RT-PCR product.
Inhibition of Infectious Virus Multiplication in Primary Nasal Basal Cells by 244/PR8 DI Virus
Nearly confluent basal cells were rinsed with medium and inoculated with 12 pg to 1.2 µg of 244/PR8 DI virus/well or with the same amount of DI virus that had been inactivated by prolonged UV irradiation [23]. Following incubation for 24 h at 37 °C, cells were washed with fresh BEGM™ medium, and infected with influenza A/WSN/33 (10^3 pfu/well). After incubation for a further 72 h at 37 °C, culture supernatants were collected and assayed for infectivity. Briefly, Madin-Darby canine kidney (MDCK) cells were inoculated with serially diluted virus and incubated for 8 h. Cells were then fixed with 4% paraformaldehyde (Sigma-Aldrich, Gillingham, UK), blocked with 5% milk powder in phosphate-buffered saline (PBS), and incubated with a WSN haemagglutinin (HA)-specific mouse monoclonal antibody (a gift from R. G. Webster, St Jude Children's Research Hospital, Memphis, TN, USA) in PBS containing 0.1% Tween 20 (PBS-Tween). After removal of the primary antibody, cells were washed with PBS and incubated with a goat anti-mouse IgG antibody (Sigma, Gillingham, UK) coupled to alkaline phosphatase in PBS-Tween. After a further wash with Tris-buffered saline (Sigma, Gillingham, UK) to remove unbound material, the cells were incubated with the enzyme substrate (4-nitrophenyl phosphate, Sigma) in diethanolamine buffer solution (Sigma, Gillingham, UK) according to the manufacturer's instructions, and the absorbance read at 405 nm. The significance of the inhibitory effect of DI virus was determined using a Student's t-test.
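As a minimal sketch of that significance test, the snippet below applies an unpaired Student's t-test to hypothetical log-transformed virus yields; the numbers are invented for illustration, and the log transformation is a common convention for titres rather than necessarily the exact procedure used here.

```python
import numpy as np
from scipy import stats

# Hypothetical infectious-virus yields (arbitrary infectivity units) from replicate wells
control = np.array([2.1e5, 1.8e5, 2.5e5, 2.0e5])        # virus alone
di_treated = np.array([3.5e4, 2.8e4, 4.1e4, 3.0e4])      # virus + 244/PR8 DI virus

t_stat, p_value = stats.ttest_ind(np.log10(control), np.log10(di_treated))
print(f"t = {t_stat:.2f}, p = {p_value:.4f}  ({'significant' if p_value <= 0.05 else 'ns'} at p <= 0.05)")
```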
Selected basal cell cultures were prepared for immunofluorescence microscopy by fixing overnight with 4% paraformaldehyde (Sigma, Gillingham, UK) in PBS.After blocking with PBS containing 1% bovine serum albumin (BSA, Sigma, Gillingham, UK), cells were stained for WSN HA antigen using a HA-specific mouse monoclonal antibody as above.Unbound antibody was removed and bound antibody was detected using a secondary rabbit anti-mouse immunoglobulin (Ig)G conjugated to Alexa Fluor ® 594 (Invitrogen, Life Technologies, Paisley, UK).Nuclei were stained with Hoechst 33258 (4 ,6-diamidino-2-phenylindole, DAPI, Invitrogen).Cells were mounted in 50% glycerol in PBS, containing 0.01% N-propyl gallate.Low magnification images (20×) were obtained with a Nikon TU1000 fluorescence inverted microscope (Kingston, UK) equipped with a Hamamatsu digital camera (Shizuoka, Japan).
To determine the 50% inhibition level (EC50) values, confluent cell cultures in 96-well plates were treated with a 10-fold dilution series of 244/PR8 DI virus or UV-inactivated DI virus and incubated for 24 h at 37 °C. The cells were washed with fresh BEGM™ medium, and infected with influenza A/WSN/33 (10³ pfu/well). After incubation for a further 72 h at 37 °C, culture supernatants were collected and assayed for infectivity as described above. The data were plotted using GraphPad Prism 6 (GraphPad Software, Inc., La Jolla, CA, USA) using a non-linear fit and normalised to calculate percentage virus replication.
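For readers reproducing this step outside Prism, the sketch below shows one common way to estimate an EC50 from such a dilution series by fitting a four-parameter logistic curve to normalised replication values. This is only an illustrative re-implementation under assumptions: the dose and response numbers are placeholders rather than data from this study, and the original fits were performed in GraphPad Prism 6.

```python
# Illustrative EC50 estimation by four-parameter logistic (Hill) fitting.
# All dose/response values below are placeholders, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, bottom, top, log_ec50, hill):
    """Inhibition curve: response near 'top' at low dose, falling to 'bottom' at high dose."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_dose - log_ec50) * hill))

# Placeholder 10-fold dilution series (arbitrary dose units) and normalised replication (%).
log_dose = np.log10([1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0])
replication_pct = np.array([98.0, 92.0, 60.0, 25.0, 8.0, 3.0])

params, _ = curve_fit(four_pl, log_dose, replication_pct, p0=[0.0, 100.0, -3.0, 1.0])
bottom, top, log_ec50, hill = params
print(f"Estimated EC50 = 10^{log_ec50:.2f}")  # same form as the 10^-3.0x values reported below
```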
Replication of 244/PR8 DI RNA in MRC-5 and Human Bronchial Epithelial (HBE) Cells
It has not previously been shown whether influenza 244/PR8 DI RNA can be replicated by infectious virus and exert its interfering activity in human cells derived from the respiratory tract. After treatment of MRC-5 cells with 9 pg, 90 pg, 0.9 µg or 9 µg 244/PR8 DI virus (and no infectious virus), DI RNA was detected at 24 h after administration in cultures that received 0.9 µg or 9 µg DI virus alone, but not in cells inoculated with lower amounts (Figure 1). This suggested that the inoculum DI RNA was retained in cells for at least 24 h, which is consistent with in vivo data from mice showing that treatment with 244/PR8 DI virus prior to infection provides protection from disease [23]. In contrast, in cells inoculated with both DI and infectious WSN virus, DI RNA was evident at all dilutions of DI virus, demonstrating that the DI RNA had been replicated by the infectious helper virus (Figure 1). No DI RNA was detected in cells inoculated with infectious virus alone or in mock-inoculated cells. Similar data were obtained from primary HBE cells, although, in contrast to the situation with the MRC-5 cells, no signal was detected when the cells were treated with 0.9 µg of DI virus alone. The intensity of the DI RNA-derived PCR product detected in virus-infected HBE cells treated with 9 pg of DI virus was also less than that seen in the corresponding MRC-5 cells (Figure 1). These data demonstrate that both 244/PR8 DI and infectious influenza A viruses are able to enter human respiratory tract cells and that, after entry, the 244/PR8 DI RNA is accessible for replication by an infectious influenza A helper virus.
Interference in Primary Human Nasal Basal Cells
Having demonstrated that 244/PR8 DI RNA could be replicated by infectious virus in the MRC-5 cell line and the primary HBE cells, we then examined the ability of 244/PR8 DI virus to interfere with the production of infectious WSN virus in differentiated human basal cells that closely resemble cells of the intact nasal epithelium [25]. Cultures that received no treatment, 1.2 µg DI virus alone, or 1.2 µg UV-inactivated DI virus alone yielded no infectious virus, as expected (Figure 2). When 12 pg, 120 pg or 1.2 µg of DI virus was inoculated alongside infectious virus, there was a statistically significant reduction in the yield of infectious virus (p ≤ 0.05) with all three doses of DI virus compared with the control culture infected with virus alone. Co-infection with UV-inactivated DI virus and infectious virus had no significant effect on the yield of infectious progeny virus (Figure 2). These data show that primary human basal cells support the replication of influenza A virus and that treatment with 244/PR8 DI virus significantly reduces the replication of infectious virus in a dose-dependent manner.
To confirm the effect of the 244/PR8 DI virus on the replication of infectious influenza virus, human basal cells inoculated as above with 12 pg or 1.2 µg of 244/PR8 DI virus and infectious influenza A virus were fixed after removal of the medium at 24 h post infection for the infectivity assay described above. The presence of influenza A/WSN/33 antigen in the fixed cells was detected using a monoclonal antibody specific for the WSN HA protein and a fluorescently labelled secondary antibody (Alexa Fluor® 594; Figure 3). The positive HA fluorescence in cells infected with WSN alone clearly shows the replication of virus in the basal cells. In WSN-infected cells treated with 1.2 µg of 244/PR8 DI virus, the level of WSN HA protein detected by fluorescence was significantly reduced (Figure 3), consistent with the significant reduction in the yield of infectious virus seen with the same amount of 244/PR8 DI virus in Figure 2.
The 244/PR8 DI virus is replicated by complementation with influenza A/WSN/33, so that any progeny DI virus is 244/WSN and carries all the WSN antigens. With 12 pg of 244/PR8, infectivity was also significantly reduced (Figure 2) although expression of the HA antigen was unaffected (Figure 3). This suggests that DI virus was being synthesized in place of infectious virus. At the higher concentration of 244/PR8 (1.2 µg), little HA antigen can be seen (Figure 3), suggesting that the yield of both DI and infectious virus had been greatly reduced. The data presented here demonstrate that cloned 244/PR8 DI influenza A virus is capable of delivering DI RNA into three different types of cells derived from the human respiratory tract. This DI RNA was replicated by infectious challenge influenza A virus, as has been seen in vivo in mice and ferrets [16][17][18][23]. In primary nasal basal cells the DI RNA was stable for at least 24 h at 33 °C before cells were infected. The yield of challenge virus and its antigen production were reduced in proportion to the dose of applied DI virus, and the calculated EC50 values in two separate cultures of primary nasal basal cells of 10^−3.04 and 10^−3.09 were comparable with the MDCK cell value of 10^−3.27 (Figure S1). Further data show that primary human basal cells can support the replication of influenza A virus and that treatment with 244/PR8 progressively reduces the replication of infectious virus, HA antigen production, and DI virus itself in a dose-dependent manner. These results represent a significant step forward in validating the use of DI virus as an antiviral in humans.
Figure 1.
Figure 1. Replication of 244/PR8 defective interfering (DI) RNA in MRC-5 and human bronchial epithelial (HBE) cells in culture. RNA was prepared from cells inoculated with 9 pg, 90 pg, 0.9 µg or 9 µg 244/PR8 DI virus alone or with 244/PR8 DI and infectious A/WSN/33 influenza viruses together, as indicated. 244/PR8 RNA was detected by reverse transcription PCR (RT-PCR) amplification and the 252-nucleotide (nt) product was identified by agarose gel electrophoresis. RNA used for RT-PCR amplification in lane m was extracted from mock-infected cells and RNA in lane v was extracted from virus-infected cells. Positive and negative PCR controls were generated using purified 244/PR8 DI RNA (+) or by omitting RNA from the RT-PCR reaction (−). A DNA size marker ladder is in the left-hand lane. The size of fragments (nt) is indicated.
Figure 2.
Figure 2. The 244/PR8 DI virus interferes with the multiplication of infectious influenza A/WSN/33 in primary human nasal basal cells. Basal cells were inoculated with 12 pg, 120 pg or 1.2 µg DI virus or 1.2 µg inactivated DI virus (iDI) before infection with influenza A/WSN/33 virus. Culture supernatants were collected and assayed for virus infectivity after 24 h. Infectivity was significantly reduced at all concentrations of DI virus employed (** p ≤ 0.05).
Figure 3.
Figure 3. Immunofluorescent detection shows that 244/PR8 DI virus interferes with haemagglutinin (HA) antigen production by influenza A/WSN/33 (WSN) in primary human nasal basal cells. Basal cells were inoculated with BEGM™ bronchial epithelial cell growth medium (control), or 12 pg or 1.2 µg DI virus, as indicated, before infection with influenza A/WSN/33 virus, or infected with WSN alone. Cells were fixed after 24 h and the WSN haemagglutinin protein detected by immunofluorescence. Left panels show cell nuclei stained with 4′,6-diamidino-2-phenylindole (DAPI), middle panels show cells stained with a WSN HA-specific monoclonal antibody and a secondary antibody conjugated with Alexa® 594 (both shown in black and white), and right panels show the merged images in colour. DAPI is blue and Alexa® 594 is red. The images show considerable reduction of the level of influenza A/WSN/33 HA protein in the presence of the higher dose of 244/PR8 DI virus. The scale bar indicates 20 µm. | 5,322.8 | 2016-08-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Online Health Monitoring using Facebook Advertisement Audience Estimates in the United States: Evaluation Study
Facebook Use in Health Domain
Nearly one third of the world population is using social media and the Internet for entertainment, study, work, and socializing. Currently, Facebook is the most popular social network, with over 1.7 billion monthly active users (as of the end of 2016). Due to this popularity, many health organizations, including hospitals, governments, and patient associations, use Facebook as a channel for health communication [1]. For example, a study by Griffis et al found that over 90% of the US Medicaid/Medicare hospitals had Facebook accounts [2].
Since as early as 2008, there has been interest in the health domain concerning the use of Facebook. For example, at the time Parslow highlighted that among the 60 million users, there were many medical students using the social network as a channel for medical education [3]. On the other hand, Ybarra et al found that teenagers shared unhealthy risk behaviors such as unwanted sexual solicitation on Facebook [4].
Since these early studies, the interest in Facebook within the health domain has continued to grow, not only due to the increase in Facebook's reach but also due to new features of the platform, which include the development of social games [5,6] and apps [7,8]. Over the last decade, Facebook has been used for medical education [9], patient education [10], peer-to-peer support, organ donation promotion [7], hospital quality estimation [11], and health policy making [12]. Overall, the 2 most popular use cases of Facebook in the health domain, as explained below, are for recruitment and health communication, and public health monitoring. Increasingly, both of these practices rely on the use of Facebook Advertising platform, as we also explain below.
Facebook for Recruitment and Health Communication
One of the main advantages of Facebook's popularity is the possibility of using it for the recruitment of people affected by not-so-common conditions such as auditory hallucinations [13]. It can also be used for targeted recruitment of people with particular demographic profiles [14][15][16] or health behavior (eg, long-term smoking [17]). This can be done by interacting with different Facebook groups [15] or via targeted advertisement. Furthermore, many health care organizations are using Facebook for communication with health consumers. For example, hospitals use Facebook to increase awareness about health-related topics and also to communicate with their patients [2]. Public health administrations also use Facebook to raise awareness about important topics, such as smoking cessation [18], organ donation [7], newborn screening [19], and health education [20]. Furthermore, this communication from public health authorities can be used as mechanisms for health policy making [12] and notifying people at risk of infectious diseases [21].
Social Media as a Health Tracking Tool
The study of new social media data sources to understand health interests and behaviors is a crucial part of infodemiology [22]. Indeed, social media has been widely used by researchers to study health trends, such as those in health care facility usage [23], abortion information seeking [24], outbreak detection [25], vaccine hesitancy [26], and others. Studies have also found that using social media for seasonal flu tracking outperforms the use of Google search logs for this purpose [27,28], as social media provides more context about why a term is used (or searched for), thus reducing false-positive rates. Moreover, mobile advertisement tools provide fine-grained demographics of mobile app users. One of the most popular is Flurry Analytics, owned by Yahoo Inc., which has been used to study the demographics of health apps [29,30]. However, the boundary between mobile analytics and Web analytics is becoming increasingly blurry as the usage of online websites becomes increasingly mobile and social media companies such as Facebook acquire mobile apps such as Instagram or WhatsApp.
Facebook Advertisements
As an advertising platform, Facebook allows advertisers to selectively show their ads to Facebook users matching certain criteria, specified by the advertiser. Even before launching-and paying for-the ad, Facebook provides estimates of the expected audience size. As an example, one can ask Facebook for the number of users residing in Alabama who are male, aged 25 to 34 years, and who have shown an interest in Diabetes mellitus awareness to receive an estimate of 11,000 users. These tools are available for free in the Facebook Adverts Manager [31]. Facebook documentation explains that the interests are determined from "things people share on their Timelines, apps they use, ads they click, Pages they like and other activities on and off of Facebook and Instagram. Interests may also factor in demographics such as age, gender, and location" [32].
A few recent studies have attempted to link what people like on Facebook to behavioral aspects related to health conditions [33,34]. Gittelman et al converted over 30 Facebook like categories to 9 factors to use in the modeling of mortality [35]. Although they showed an improvement in the statistical models, their approach avoided determining relationships between each individual category and the real-world data, limiting the insight into the usefulness of each Facebook interest. On the other hand, Chunara et al explored the relationship between 2 factors, namely, interest in television and outdoor activities, and the obesity rates in metros across the United States and neighborhoods within New York City [36]. Although showing promising correlations, the latter study failed to account for baseline user activity, potentially reporting relationships indistinguishable from general Facebook use. In this paper, we address the shortcomings of both these studies.
Study Goals
Previous studies have attempted to demonstrate the value of using Facebook ad audience estimates for modeling regional variations of the prevalence of certain health conditions [35,36]. However, these studies fail to compare the strength of the relationships between Facebook interests and real-world health statistics to baseline relationships, potentially reporting spurious results due to the black box nature of the tool. In this study, we propose 2 methods for gauging the strength of such relationships: first by introducing placebo interests which to a varying extent represent baseline Facebook user behavior, and second by examining alternative normalization populations.
Thus, we contribute to the methodological literature addressing the different variables that can affect the use of Facebook interest data for public health monitoring, in an attempt to lessen the barriers for comparison and reproducibility of studies employing such data.
Facebook Advertisement Audience Data Collection
All data used for the following analysis are provided by the Facebook's Marketing application programming interface (API) [37]. Equivalent data could have been obtained through the Web interface of the Adverts Manager, but using the API makes programmatic access easier and gives more precise audience estimates, down to +/-20 users as opposed to +/-1000 users. The numbers we used are the so-called Reach Estimates: "Potential reach is the number of monthly active people on Facebook that match the audience you defined through your audience targeting selections" [38]. Figure 1 shows a screenshot of Facebook's Adverts Manager [31], illustrating the capabilities. As previously defined, Facebook provides an aggregate mapping between users and interests, hiding the source of the data (whether it comes from likes, posts, or other Facebook properties which include Instagram), providing a simplified interface, while also hiding potentially useful information.
For our study, we obtained Facebook data that are potentially related to the prevalence of 4 diverse health conditions: (1) diabetes (type II), (2) obesity, (3) food sensitivities, and (4) alcoholism. As largely behavior-related conditions, these are prominent causes of serious illness and death across the United States. Moreover, they range in the extent of potential social stigma, and their impact on the personal and social life of an individual.
For each of these 4 conditions, we defined a number of marker interests. A marker interest is an interest of a Facebook user that could plausibly be used to measure the prevalence of a certain condition due to a potential causal link between the condition and the interest. We used an iterative process to obtain these marker interests-employing domain knowledge, we used the Facebook Adverts Manager interface to exhaustively enumerate interests related to the selected illnesses, selecting all those passing the threshold of US-wide audience in hundreds of thousands. For example, both of the interests Alcohol and Alcoholics Anonymous are marker interests for alcoholism.
Similarly, we defined a set of placebo interests. A placebo interest is an interest of a Facebook user that should not have an obvious causal link with a given condition, but that might still turn out to be correlated due to latent factors such as common user demographics.
Placebo interests are helpful to understand how much of any predictive power of marker interests is due to spurious correlations or due to unknown latent factors. Intuitively, these interests are meant as a placebo wherein no topic-specific treatment is performed, and any effect observed is due to the random or causal factors outside the topic. For this, we used the popular generic interests (ie, Facebook, Reading, Entertainment, Music, and Technology) that, a priori, should not have any strong link to the 4 conditions studied. Each of these interests is shared by hundreds of millions of Facebook users worldwide, and serves as approximations of the level of involvement of users with the platform in general.
Finally, we also defined a health-related baseline interest. A baseline interest is a broad health-related interest on Facebook that could plausibly be used to measure general health awareness.
In this study, we used the interest Fitness and wellness as a baseline interest. This baseline interest helps to clarify if any predictive power of a marker interest is really due to a condition-specific link to the interest, or if we are only picking up the general health awareness level.
Using these interests, we then queried the Facebook Graph API [39] for the estimation of audience size for each combination of interest and US state, as well as gender (including both), age group (18-24, 25-44, 45-64, 65+ years, and all combined), and ethnic affinity (African American, Asian American, Hispanic, none of the above, and all combined). This allows us to look at both correlations across the 50 US states, as well as at correlations across different demographic groups.
On its own, a single audience estimate is of little value. It is only when seen in context that one can judge if a number is high or low. Thus, to normalize the raw audience estimate counts, we defined 3 reference populations: (1) number of Facebook users (widest selection), (2) number of users interested in Facebook (thus who are more likely to be active on the site), and (3) number of users interested in Fitness and Wellness (thus who are more likely to be interested in health-related topics). We then divided the marker and placebo interests by the reference populations, producing 3 variants of proportionate interest measurement. Finally, the Facebook API was queried for the audience estimates in September 2016.
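As a minimal sketch of this normalisation step (under the assumption that the per-state audience estimates have already been retrieved from the Marketing API), the snippet below divides a marker-interest audience by a chosen reference population; all state names and counts are placeholders, not figures from this study.

```python
# Normalising raw audience estimates by a reference population, per state.
# The numbers below are placeholders; real values would come from the Marketing API.
from typing import Dict

def normalise(marker: Dict[str, float], reference: Dict[str, float]) -> Dict[str, float]:
    """Marker audience expressed as a fraction of the chosen reference audience."""
    return {state: marker[state] / reference[state] for state in marker}

diabetes_awareness = {"Alabama": 110_000, "Vermont": 9_000}        # marker interest audience
fb_population      = {"Alabama": 2_900_000, "Vermont": 390_000}    # FB_pop reference (all users)
fitness_wellness   = {"Alabama": 1_100_000, "Vermont": 150_000}    # FW_int reference (health-aware users)

print(normalise(diabetes_awareness, fb_population))     # proportion of all Facebook users
print(normalise(diabetes_awareness, fitness_wellness))  # proportion of health-aware users
```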
Public Health Data Collection
The US state-level public health data were obtained via the America's Health Rankings Annual Report [40], which combines data from well-recognized sources including Centers for Disease Control and Prevention, American Medical Association, Federal Bureau of Investigation, Dartmouth Atlas Project, US Department of Education, and Census Bureau. For our study, we used the most recent available data for 2015 [41]. Data for the District of Columbia were not used, as they had several missing values.
Comparing Public Health Data and Facebook Advertisements Data
As described above, for each of the 50 states, we have (1) a set of indices derived from Facebook's ads audience estimates, for example, the fraction of monthly active Facebook users with an interest in the topic Diabetic Diet, and (2) a set of public health indices, such as the fraction of the adult population that has diabetes. Each Facebook index f consists of a marker, placebo or baseline interest (see definitions above), and a choice of reference population (the set of all Facebook users by default). To see if an index f could be used to approximate a particular public health index h, we computed the Pearson correlation coefficient r_fh across the 50 states. Thus, we hypothesized that Facebook indices (independent variables) are related to public health indices (dependent variables). We deliberately chose Pearson r for its simplicity and did not experiment with any model fitting, such as multivariate linear regression, or with non-linear measures of correlation, such as the Spearman rank correlation coefficient, in order to show clearly the relationship of each interest and to compare marker interests with the placebos and baselines.
To avoid reporting spurious correlations, we applied a significance threshold of P=.05/k. Here k, the number of experiments performed, is a Bonferroni correction factor to avoid false positives when testing multiple hypotheses. In our setting, each pair of indices f and h constitutes one hypothesis that is being tested.
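The following sketch illustrates the correlation and significance test described above for a single pair of indices; the arrays are random placeholders rather than the study's data, and k stands for the total number of index pairs tested.

```python
# Pearson correlation across 50 states with a Bonferroni-corrected threshold.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
facebook_index = rng.random(50)   # e.g. fraction of FB users interested in "Diabetic diet", per state
health_index = rng.random(50)     # e.g. fraction of adults with diabetes, per state

r, p = pearsonr(facebook_index, health_index)
k = 200                           # placeholder: number of (f, h) hypotheses tested
alpha = 0.05 / k                  # Bonferroni-corrected significance threshold
print(f"r = {r:.2f}, p = {p:.3g}, significant: {p < alpha}")
```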
Analyzing Potential Comorbidity
To explore the feasibility of using Facebook data to discover comorbidity, where suffering from one condition increases the probability of suffering from another, we choose Fatigue as a target condition. Concretely, we explored these relationships by computing the lift statistic between fatigue-related marker interests and others which may be associated with them. Lift is often used in association rule mining as a measure for strength of the association between 2 occurrences, normalized by the likelihood of them occurring by random chance, and has the following formula:
lift(A, B) = P(A and B) / (P(A) × P(B)). It can intuitively be understood as P(A|B)/P(A) = P(B|A)/P(B),
that is, the lift in probability of event A (or B) occurring over its baseline probability, given that event B (or A) has occurred. A value greater than 1.0 indicates an increase in conditional probability, whereas a value smaller than 1.0 indicates a decrease. Table 1 shows the US-wide audience estimates for the selected marker, placebo, and baseline interests. At the bottom, we also show the Facebook audience of US residents who are aged 18 years or older. Recall that to constrain the number of considered interests, we selected only those with a US-wide audience of at least hundreds of thousands. Indeed, some interests, such as Alcoholic beverages (at 74 million), span a great deal of US Facebook users (totaling 194 million users, as listed at the bottom of the table). A bootstrapping approach was taken to their selection, whereby we began with a keyword relevant to the topic (such as alcohol for alcoholism) and added other related interests, which the Facebook Advertisement Marketing interface provides. Thus, the selection of the interests was seeded by domain expertise, and expanded via internal Facebook usage statistics.
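A minimal sketch of the lift computation is shown below, assuming the four audience counts (both interests together, each interest alone, and the total audience) have been obtained from the advertising platform; the numbers are placeholders, not estimates from Table 1.

```python
# Lift between two interests, estimated from audience counts.
def lift(n_both: float, n_a: float, n_b: float, n_total: float) -> float:
    """lift(A, B) = P(A and B) / (P(A) * P(B)), with probabilities estimated as audience shares."""
    p_both = n_both / n_total
    p_a = n_a / n_total
    p_b = n_b / n_total
    return p_both / (p_a * p_b)

# Placeholder US-wide audience estimates (values > 1 indicate co-occurrence above chance).
print(lift(n_both=400_000, n_a=2_000_000, n_b=5_000_000, n_total=194_000_000))
```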
Relation to Public Health Data
We began with a question: how much do the populations having particular interests in health-related topics, as determined by Facebook, correlate with ground truth statistics gathered by the Centers for Disease Control and Prevention (CDC)? For visual examination, we plotted the intensities of diabetes prevalence and the percentage with the interest Diabetes mellitus awareness (normalized by the number of Facebook users, FB pop) in Figure 2. The intense colors in both plots are concentrated in the south, as well as West Virginia, and less so in the mountain states as well as Vermont and New Hampshire.
Next, we quantified the relationship between Facebook advertisement audience figures and the ground truth statistics. First, we examined the placebo interests (normalized by FB pop), as shown in Table 2, along with the accompanying 2-tail significance levels. The health statistics are proportions of the population, including engaging in excessive drinking (results for binge and chronic drinking were similar; hence, we omitted them here), as well as obesity and diabetes rates. Note the strength of the association between some variables, especially obesity and diabetes, with Pearson correlation r=.68 between the placebo interest Technology and both diabetes and obesity prevalence. Regardless of the forces at play, these figures caution us against considering high r values as indicative of causal relationships in the following experiments. Table 3 shows the correlations of each health-related interest with the appropriate health statistic (eg, between Alcoholism interests and statistics on excessive drinking). The 2-tailed significance tests for these correlations have been adjusted using Bonferroni correction to address the problem of multiple comparisons and guard against false positives. We observe a complex relationship between alcohol-related variables. Although Alcohol and Bars have little correlation with excessive drinking, Alcohol abuse and Alcoholism awareness are positively related to it. Interventions, on the other hand, including Alcoholics Anonymous and 12-step program, are negatively associated with drinking. Note, however, that most values of r achieved for Alcoholism are barely larger than the values for the placebo interests Reading and Technology of around r=−.35.
Considering obesity and diabetes, most marker interests are positively correlated with their real-world corresponding statistics, although some correlations vary drastically with the choice of reference population. The strongest and most consistent correlations are between Plus-size clothing (r=.74) and obesity, as well as Diabetes mellitus awareness (r=.78) and Diabetic diet (r=.75) and diabetes.
The variation between correlations across the 3 different reference populations shows that the reference point used for the raw audience counts has strong effects on the results. Facebook interest (FB int ) normalization, for instance, removes the effect of users who are in general likely to be active and have interests, some of which by chance may include health-related topics. Similarly, the Fitness and Wellness interest (FW int ) removes the effect of general interest in health. As we can see in Table 3, these normalizations affect each interest in a different manner.
Furthermore, we assessed the combined power of these interests in modeling the real-life phenomena by building linear regression models to predict the real-world statistics. As there were only 50 data points in the dataset, we applied feature selection using backward feature elimination optimizing Akaike Information Criterion (AIC) scores, in which the least-contributing features were removed until optimal performance was achieved. The resulting linear models achieved an adjusted R² of .533 for modeling Alcoholism, .712 for Obesity, and .790 for Diabetes. Next, we included the following additional control variables: (1) demographics, including age, gender, and race distributions; (2) financial statistics, including median annual household income and unemployment rate; (3) health care-related statistics, including health spending per capita and rate of uninsured persons; (4) internet access rate; and (5) health-related variables, including life expectancy and poor mental health days reported. When applied to this much larger set of variables, the Facebook marker variables were still selected, and the resulting models had an improved performance of .698 (Alcoholism), .827 (Obesity), and .894 (Diabetes). Interestingly, only in the case of Obesity were placebo interests (Entertainment and Technology) selected for the final model. We discuss the interpretation of these further in the Discussion section.
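A rough sketch of this model-building step is given below: ordinary least squares with greedy backward elimination guided by AIC. The feature matrix and target are random placeholders, the stopping rule is a simple greedy heuristic, and the exact procedure used in the study may differ in detail.

```python
# Backward feature elimination for OLS, guided by the Akaike Information Criterion.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_aic(X: pd.DataFrame, y: pd.Series):
    features = list(X.columns)
    best = sm.OLS(y, sm.add_constant(X[features])).fit()
    improved = True
    while improved and len(features) > 1:
        improved = False
        for f in list(features):
            remaining = [c for c in features if c != f]
            trial = sm.OLS(y, sm.add_constant(X[remaining])).fit()
            if trial.aic < best.aic:          # dropping f lowers AIC, so keep the smaller model
                best, features, improved = trial, remaining, True
                break
    return best, features

# Placeholder data: 50 states, 6 candidate Facebook indices.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.random((50, 6)), columns=[f"interest_{i}" for i in range(6)])
y = pd.Series(rng.random(50), name="obesity_rate")
model, kept = backward_aic(X, y)
print(kept, round(model.rsquared_adj, 3))
```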
Comorbidities and Related Behaviors
In the previous analysis, we have only considered audience estimates for 1 Facebook interest at a time. However, Facebook's advertising platform supports the definition of more complex target groups, which express not only those interests that are directly related to the illnesses but also those that indicate behaviors or conditions which may be linked to it. Alcoholism, for example, is associated with depression and anxiety [41], whereas obesity has been linked to poor dietary choices and sedentary lifestyle. As described in the Methods section, we use the notion of lift to measure the relationship between 2 interests. It can intuitively be understood as the lift in probability of event A (or B) occurring over its baseline probability, given that event B (or A) has occurred. A value greater than 1.0 indicates an increase in conditional probability, whereas a value smaller than 1.0 indicates a decrease.
We selected a variety of interests that may be related to obesity, diabetes, alcoholism, and food sensitivities. Specifically, for the first 2, the interests include physical activities (like hiking and yoga), nutrition interests (healthy diet, desserts), specific restaurants (McDonald's, Subway), and spectator sports (NASCAR). For alcoholism, we included places associated with drinking (nightclubs), as well as mental health interests (mental health). As the task is exploratory, we did not include all possible related interests, but instead used a selection of 45 having the best Facebook ads audience coverage. Table 4 shows the 20 marker interests and related interests with greatest lift (that is, which appear more often together than would be predicted by chance), and with smallest lift (which appear less often together than one would observe by chance). Some relationships make sense, such as that between Alcoholics Anonymous and Anxiety Awareness, as alcoholism is associated with mental health issues. Another example may be Bariatrics and Panera Bread (a restaurant chain promoted as healthy). However, we caution the reader against imposing meaning on these relationships, as they may arise by other means. For example, the interest Nightlife may be highly expressed in urbanized states. Thus, a positive lift might be due to a latent factor, such as urbanization, giving rise to both interests. In future studies it might be worth exploring such alternative explanations by limiting the analysis to urban centers.
Demographic Exploration
Another potentially powerful feature of Facebook Advertising Manager is the demographic information of its users, including age, gender, and ethnic affinity [42]. We related these to the illness interests in Table 5, similarly listing relationships that are more likely (above) and less likely (below) than one would expect by chance. The most powerful relationship is between Plus size and African American demographic. This relationship is corroborated by the literature on obesity. For instance, according to the US Department of Health and Human Services, "In 2014, African Americans were 1.5 times as likely to be obese as Non-Hispanic Whites" [43]. The association between diabetes and elderly is also supported by CDC, with an estimated 25.9% of the US population aged ≥65 years having diabetes in 2012 [44]. Similarly, the association between diabetes and Hispanic demographic is justified by research, with Hispanic adults being 1.7 times more likely than non-Hispanic white adults to have been diagnosed with diabetes by a physician [45].
Some inverse relationships in the right-hand side columns of Table 5 can also be justified by prior literature. Food sensitivities (such as Lactose intolerance) are less likely in adult men than women [46]. Similarly, we find a lift of 1.61 between women and Gluten-free diet, and women are diagnosed with Celiac disease (hypersensitivity to gluten) 2 to 3 times more often than men [47]. However, these numbers may also show the interests of certain demographics. For instance, it may be that Facebook users over 65 years of age are not interested in Obesity awareness or Diabetes mellitus type 1 awareness (as the latter is often discovered in children), each having lifts of 0.02. However, not all interpretations are straightforward. Although men are more likely to have diabetes (13.6% males vs 11.2% females have diabetes), they are very unlikely to have an interest in Insulin index.
Methodological Contributions to Using Facebook Advertisement Audience Estimates
To use Facebook advertisement audience estimates for public health is not trivial, as there are many aspects that can affect the interpretability of the data from Facebook. At first, our results seem to confirm previous findings that variations in interests on Facebook across different geographic locations can be used for modeling lifestyle disease prevalence. We were able to find clear correlations of Facebook advertisement audience estimates with available public health data. This is consistent with some of the previous studies published in the literature [14,15,17,35,36]. However, unlike Gittelman et al [35], we examined the contribution of each marker interest, and consequently found a variety of behaviors. For example, the performance for Weight loss (r=.76 for FB pop ) and Dieting (r=.08 for FB pop ) for modeling obesity rates were vastly different. This means that, as of now, there is a certain amount of trial-and-error involved in finding marker interests that are informative.
Crucially, this work introduces the use of placebo interests, which provide a baseline performance estimate with which the above marker interests may be compared. In this study, we show that common topics such as Reading and Technology can display a nontrivial correlation with ground-truth statistics, making them an important step in verifying the significance of health-specific results. The fact that interests we did not expect to have a strong relationship with the illnesses showed substantial correlations may be due to the following: (1) Facebook usage, which may predispose users to certain conditions; (2) a direct relationship between the variables (interest in reading may be associated with a sedentary lifestyle, which is in turn related to diabetes [48]); or (3) some causal relationship via latent factors influencing both variables. Regardless, the strength of the correlations found with these placebos stands as a cautionary observation for future social media researchers that marker variables need to be interpreted in the light of possible confounding factors.
As a causal explanation may still be at play in an indirect way, the choice of interests that have no relationship with health-related statistics becomes an interesting challenge, as any behavior may have a tangential connection with the lifestyles involved. For instance, during the feature selection process we found Technology and Entertainment being selected to model Obesity (although not Alcoholism or Diabetes). However, if interests that have no theoretical grounding for being correlated with the disease are found, the extent of their observed relationship with it, as discovered in the data, may provide a glimpse into a placebo effect inherent in the data. It is precisely this effect that should determine whether marker correlations are strong enough to be considered interesting.
Similar conclusions can be drawn about the exploration of normalization factors. The generic population estimate of Facebook users (FB pop), general interest in Facebook (FB int), and domain-specific interest in Fitness and Wellness (FW int) each provide a different interpretation of the raw audience estimates, and must be selected according to the aims of the study. Even interests which we found to have a lift of 1 could be used as topic-specific placebos. In future work, we plan to design and validate methods to normalize interests across different cohorts. For example, if we know the most common interests among teenagers, we will have a better baseline for gauging teenagers' interest in certain topics, such as ultra-caffeinated drinks.
Challenges of Using Facebook Advertisements as a Black Box
Going forward, the biggest question that one has to address when using Facebook's advertisement audience estimates for certain interests is the following: what does it mean when a certain user has a certain interest as detected by Facebook?
Finding the answer involves understanding 2 different aspects: (1) Facebook's algorithmic black box and (2) Facebook users. On the algorithmic side, Facebook employs a number of classifiers to detect if, for instance, a given user is interested in the topic Obesity awareness. The features that go into this classification are largely derived from Facebook pages the user has liked, but also from general Web browsing history (through tracking cookies on pages with Facebook like or share buttons), as well as other information [49]. Understanding this can have important implications for the applicability of this data source for observing stigmatized health conditions where it is less likely that a Facebook user publicly likes a page on, for example, genital herpes. However, apart from understanding the importance of various features, there is also the issue of understanding the class labels. What exactly does Obesity awareness refer to? And what is the difference between an interest in Dieting versus Weight loss? Unfortunately, Facebook does not provide the option to show pages labeled with a given interest, or any other way to obtain a better understanding.
But even if one were to perfectly understand the inner workings of Facebook's classification setup, there is still the fundamental challenge of understanding the user's inner workings. What does it mean if a user likes a page about lung cancer? Has the user been diagnosed with lung cancer? Or someone in their family? Are they just generally concerned about the topic? Having a better understanding of the user's motivations can lead to a better selection of marker interests. As an example, we observe that an interest in plus-size clothing has good predictive power in modeling regional variation in obesity rates. Arguably, this is because having and expressing this interest is closely related to being overweight. However, the same cannot be said for an interest in Alcohol and its use to model prevalence of alcoholism. A potential solution to these questions would be to employ the advertisement platform to recruit participants for a survey designed to assess the above questions, and thus evaluate the efficacy of Facebook's interest inference algorithm. Although research on smaller regions such as ZIP codes has been performed [50,51], Facebook Advertisement Manager allows for queries focused on even smaller geographical regions: the interface allows for areas as small as 2 km across.
Finally, interests in Facebook can vary longitudinally, both as Facebook's user base expands and contracts, and as yearly seasonal variations occur. The first change in estimates would explain the general upward trend in the figures reported by this study, as compared with the previous ones [35,36]. The second will require longitudinal tracking and normalization if Facebook advertisement audience estimates are used for monitoring interests over long periods of time. Similarly, such dated information can then be synched with ground truth such as CDC reports for a more precise overlap of the time frame.
Consequences for Public Health
As explained above, there are limitations in the use of Facebook advertising for public health. We also need to be aware of potential negative consequences of using it. The focus on online sources can exacerbate health disparities due to the heterogeneous levels of digital health literacy [52,53]. If public health stakeholders are relying exclusively on social media data, they might unintentionally leave behind large segments of the population. For example, people with visual impairment might less frequently use social media due to accessibility problems [54].
Furthermore, advanced advertising allows tailoring by interests that are not necessarily health related and can be controversial. For example, it is possible to target people with interest in both cars and alcohol for a campaign to reduce driving under the influence of alcohol. This can be seen by many as a potential violation of privacy. Although users have agreed to the terms of use of social media and mobile apps, often they are not aware of the privacy implications [55]. We strongly advise the development of ethical guidelines and training for conducting health-related studies and interventions in social media. Some of those guidelines already exist, but they require continued updates as the technology evolves [56,57]. This paper, like any health-social media paper, can also be used intentionally as a source of information to do harm [58]. We need to be aware that our research can be used by communities that engage on Facebook to do harm (intentionally or unintentionally), such as promoting anorexia as a lifestyle [59], hampering vaccination efforts [60], or even promoting smoking [61]. This potential challenge should not pose a barrier for research in this area; on the contrary, more research can help identify ways to tackle the misleading use of social media in the health domain.
Privacy
As this research did not involve human subjects, it did not require approval by an institutional review board. All the information we used was collected via an open API provided by Facebook in the public domain. The data provided are always aggregated and cannot be linked to the individual. The provider of the data (Facebook) is not a collaborator in the study described here. Furthermore, the Terms of Use of the platform allow the collection of data so that Facebook can provide services to third-party organizations with those data, given that they are deidentified. Finally, as discussed in the Methods section, the Facebook API rounds the aggregate numbers to the nearest 20, thus allowing for k-anonymity [62] for individuals within an audience for a query of any specificity. We note here that, indirectly, these data may reveal to what extent users feel comfortable revealing personal information to social media providers (ostensibly to enrich their interaction with the platform), without researchers having direct access to the said information.
Limitations
One of the most important problems we faced with our study was the temporal mismatch between validated public health data and Facebook advertising data. We compared the current Facebook advertising data with public health data collected nearly a year before. This is an important shortcoming as interests can change rapidly due to many external factors that are nearly impossible to control. As we mentioned earlier, waiting to obtain the ground truth data may be a solution. Furthermore, we do not have data on interests within Facebook from years ago. This is, however, something available in other tools such as Google Trends or Insights.
Besides the black-box limitations discussed above, more domain knowledge is required to select additional marker interests potentially important in tracking illnesses, and our preliminary study by no means exhausts the potential interests that could be used for this purpose. In fact, we purposefully limited the selection of interests to avoid the multiple hypotheses problem, and to focus just on the major ones. However, a fuller list of interests may be provided by experts when studying a particular phenomenon. We found the Facebook Advertising Manager to be a useful tool in this respect, as it provides suggestions of interests related to ones already selected. We must also note that taxonomies and categories of online health data, including Facebook's, do not always correspond with the taxonomies of health authorities. This is a strong limitation for the integration of social media and public health data.
One more potential limitation of this study is that some users do avoid using Facebook due to privacy concerns [63]. A danger of relying on social media platforms such as Facebook for public health monitoring is that we might be excluding parts of the population that avoid such platforms due to ethical and privacy concerns. On the other hand, the high adoption of those platforms also calls for the utilization of such platforms in public health, but always considering the overall context of the health care system. Furthermore, there might be some topics of high importance in public health that are not present in Facebook due to privacy issues and socio-cultural factors (eg, family planning, sexual health, mental health). For these, studies using hybrid methodologies, which encompass resources other than social media, are necessary.
Conclusions
In this study, we explored whether Facebook advertising audience estimates can be used to track real-world health statistics. We proposed methodological baselines, aka placebos, for the evaluation of these estimates, and illustrated their performance on a selection of use cases. The health-related interests can be useful for the design of health-risk surveillance and of recruitment for health interventions, among many other applications. This study describes experimentally driven approaches to tackle the closed (aka black-box) nature of Facebook advertising, as of any social media tool, for use in public health monitoring. | 8,128.4 | 2018-03-28T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Agri-Food Trade Competitiveness: A Review of the Literature
Being competitive in the international agri-food trade is an important aim of every country. It should be noted that this term has neither a commonly accepted definition nor a synthesized index to quantify it. The most commonly used indices in the international literature are the Balassa index and its modified versions (revealed trade advantage, revealed competitiveness, normalized revealed comparative advantage, and revealed symmetric comparative advantage) and different export and/or import-related indices (e.g., the Grubel–Lloyd index or the trade balance index). Based on a systematic review of the literature, these measurements were identified along with the major factors suggested for higher agri-food trade competitiveness. It seems that supportive legislation and/or (trade) policy is the most crucial factor, followed by higher value-added/more sophisticated goods, and high, efficient, and profitable production. Although the EU and its member states were overrepresented in the analyzed literature, the candidate countries, as well as other important trading partners of the EU, e.g., Canada, China, or the ASEAN countries, were also analyzed. Thus, some of these findings may be generalized.
Introduction
Competitiveness is a key element of the market economy regardless of the sector concerned. The term itself has undergone significant changes, but there is still no commonly accepted definition or synthesized index. Adam Smith's absolute advantage (cheaper production) was one of the first abstractions, followed by Ricardo's comparative advantage (price and cost differences), and the term was then fine-tuned by Heckscher and Ohlin based on efficiency [1]. Related to them, Balassa provided the theoretical background of comparative advantages, as well as the empirical verification of it [2]. The revealed comparative advantage (RCA), or simply Balassa, index is regularly used by researchers all over the world (the Abbreviations section contains the main abbreviations used in the text). As Cho and Moon [3] described in their book, Smith set up the trade theory where wealth was based on endowments, while Porter set up the competitiveness theory where wealth is created by choices. According to them, both theories involved the same steps, extension, and debate, which finally led to the national competitiveness report as a measurement of competitiveness. From this aspect, the Balassa index helps to combine international trade theories with competitiveness by using trade data for calculating revealed comparative advantages [4]. This index compares the national export share of a given product to the international export share of the same product in the reference group. When the RCA is above 1, a revealed comparative advantage is noted, while a value below 1 suggests a comparative disadvantage. Basically, the Balassa index transforms trade performance into competitiveness [5].
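To make the computation concrete, a minimal sketch of the Balassa index is shown below; the export figures are invented placeholders, with the index defined as the country's export share of product j divided by the reference group's export share of the same product.

```python
# Balassa (revealed comparative advantage) index; values above 1 indicate an advantage.
def balassa_rca(country_exports_j: float, country_exports_total: float,
                reference_exports_j: float, reference_exports_total: float) -> float:
    country_share = country_exports_j / country_exports_total
    reference_share = reference_exports_j / reference_exports_total
    return country_share / reference_share

# Placeholder figures: the country exports product j at twice the reference-group share.
print(balassa_rca(2_000, 50_000, 40_000, 2_000_000))  # -> 2.0, a revealed comparative advantage
```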
As competitiveness is one of the most frequently used words in international economics, a proper understanding of its meaning still represents an important research challenge. A distinction can be made between macro-level (regional or country) and micro-level (firm) competitiveness. At the macro level, effectiveness and/or positive trade performance are generally emphasized, and several findings recur in the literature:
• competitive countries are generally productive ones as well, implying that these terms are positively related [10][11][12];
• products with higher value added (e.g., semi-processed or processed goods) are generally more competitive and have higher revealed comparative advantages [10,13,14];
• the more export-oriented a country is, the more likely it is to preserve its competitive position [15][16][17].
Since 2010, the global agri-food trade has fluctuated between USD 14.2 (2010) and 18.0 (2018) billion per year, and its share reached almost 10% of global trade in 2020 [18]. The size of this market highlights the importance of being competitive and, therefore, of earning higher export revenues. The distribution of export and import is somewhat similar, as the first three regions in terms of value are the same: Europe-Central Asia, East Asia-Pacific, and North America (Figure 1). This regional classification follows the system of the World Bank's World Integrated Trade Solution (WITS). Another remarkable piece of information from the figure is the trade balance of these regions. Latin America is the most significant net exporter of agri-food products (USD +123 billion), while East Asia-Pacific is the most significant net importer among the regions, with a USD 115 billion trade deficit. Besides Latin America, Europe-Central Asia, South Asia, and Sub-Saharan Africa have a trade surplus; all the other regions were net importers of agri-food products in 2020.
The major aim of this article is to identify the most important factors of competitive agri-food trade. Based on the systematically selected international literature, it gives an overview of the applied methodologies, the major results, and the identified success factors in international agri-food trade. The article also aims to cover a wide range of agri-food related issues, especially the differences in measurements at the regional and/or country and product levels.
The paper is structured as follows. The introduction is followed by a description of materials and methods. The third part gives a structured overview of the measurements of agri-food competitiveness divided into two categories: country or regional level analyses and product level analyses. The final part contains the summary and conclusions.
Materials and Methods
For the selection procedure, the Scopus and Web of Science (WoS) databases were searched. The keywords of the selection were agri-food, trade, and competitiveness with the Boolean operator AND between them. In order to obtain high quality articles, the selection was restricted to scientific and review articles. Besides the English language, no other restrictions were applied for the selection, e.g., there was no restriction on the date of publication. For the selection procedure, the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) method was used [19]. A total of 79 articles were identified in the Web of Science, while the Scopus database provided 56 potential articles. At the screening stage, 35 duplications and 36 non-relevant records were sorted out. The common characteristic of the non-relevant records was the lack of a proper analysis of one of the keywords. Most of them concentrated on different policy issues and sustainability. Although the language was restricted to English, 6 articles had only their abstracts in English. This resulted in 64 articles for the eligibility stage. By assessing their full texts, 13 more articles were excluded due to the following reasons:
• Eight contained only a trade analysis;
• Two contained a theoretical analysis;
• One had a pure agricultural policy analysis;
• One had a self-sufficiency analysis;
• One presented only the barriers in agri-food exports.
Figure 2 gives an overview of the article selection process. The most frequent year was 2019, when 7 articles were published, followed by 6 articles in 2018. The most frequent journal was Agricultural Economics, published in the Czech Republic, with 8 articles. This was followed by the Bulgarian Journal of Agricultural Science (4) and Agris On-line Papers in Economics and Informatics (3). A total of 2 articles each were published in the British Food Journal, Ekonomika Poljoprivreda-Economics of Agriculture, Post-Communist Economies, Studies in Agricultural Economics, Scientific Papers-Series Management Economic Engineering in Agriculture and Rural Development, and Sustainability. The remaining 24 articles were published in 24 different journals.
Measurements of Agri-Food Competitiveness
Basically, there were two types of articles that dealt with the measurements of agri-food competitiveness. Although both were based on product groups, some of them focused on a region or a country, while the others gave an overview of the international market of one agri-food product. The next two subsections are organized by this characteristic.
Regional and Country-Level Analyses
A frequently analyzed issue in the literature was the impact of EU accession. Antimiani et al. [20] applied the PRODY index, which combines the RCA with the per capita GDP of the given country. They analyzed one period before the accession (1996-1997) and another one after it (2006-2007). They pointed out that a higher share of sophisticated (higher value-added) agri-food products resulted in higher trade competitiveness in the case of Poland and the Czech Republic, while Bulgaria, Hungary, and Romania performed worse. Bojnec and Fertő [21] analyzed 23 countries accounting for 60% of the global agri-food trade. Surprisingly, they found that economic development, a (higher) share of agricultural employment, and differentiated agri-food products had no significant positive impact on comparative advantages. The same held for the global economic crisis, which was measured by a dummy variable. On the contrary, agricultural land abundance, agricultural support, and export diversification reduced the likelihood of failure and of losing comparative advantages between 2000 and 2011. Bojnec and Fertő also carried out a similar analysis on the EU member states, where the Netherlands, France, and Spain were the most successful nations in terms of comparative advantages [11]. According to their results, the accession has not increased the EU's overall agri-food export competitiveness. The authors' two main explanations were the relatively short time period after the accession, i.e., the new member states need more time to catch up, and the export structure of the EU, i.e., the most competitive products do not represent a significant share of its exports. However, when Bojnec and Fertő analyzed the duration of comparative advantage of the EU's agri-food exports, they found that not only population and GDP per capita but also the enlargement of the EU positively influenced this duration, while the contribution of the new member states to the EU's export competitiveness was larger than that of the old member states [15]. This process was supported by the higher economies of scale of the specialized products. Not surprisingly, higher trade costs negatively impacted the duration of international competitiveness [15]. By using the export market share (EMS) and different RCA indices, Carraresi and Banterle [22] found that Germany and the Netherlands were the largest beneficiaries of the enlargement, while France experienced decreasing competitiveness. They also noted that competitive performance and export specialization are interlinked, although a high EMS does not necessarily mean high RCA values. Similar to Bojnec and Fertő [15], they also pointed out that the new member states profited from the accession too, especially their agriculture sectors.
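The Balassa-type RCA and the PRODY index mentioned above are straightforward to compute from a country-by-product export matrix. The sketch below uses the standard definitions (Balassa RCA; PRODY as an RCA-weighted average of exporters' GDP per capita) with made-up numbers; it is illustrative and not a reproduction of any cited study's calculation.

```python
import numpy as np

# Balassa revealed comparative advantage: RCA_cj = (x_cj / X_c) / (w_j / W),
# where x_cj is country c's export of product j, X_c its total exports,
# w_j world exports of product j and W total world exports.
def rca(exports):
    """exports: 2-D array, rows = countries, columns = product groups."""
    exports = np.asarray(exports, dtype=float)
    country_share = exports / exports.sum(axis=1, keepdims=True)
    world_share = exports.sum(axis=0) / exports.sum()
    return country_share / world_share

# PRODY: the income level "revealed" by a product, i.e. an RCA-weighted
# average of the exporters' GDP per capita.
def prody(exports, gdp_per_capita):
    r = rca(exports)
    weights = r / r.sum(axis=0, keepdims=True)   # normalise weights over countries
    return (weights * np.asarray(gdp_per_capita)[:, None]).sum(axis=0)

# Hypothetical mini-example: 3 countries, 2 product groups (values in million USD).
X = [[40.0, 10.0],
     [15.0, 25.0],
     [5.0,  55.0]]
gdp = [45_000.0, 15_000.0, 8_000.0]
print(rca(X).round(2))
print(prody(X, gdp).round(0))
```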
In addition to the EU accession, the global financial crisis in 2008 also significantly impacted the EU's agri-food trade. Based on the normalized revealed comparative advantage index (NRCA), Bojnec and Fertő [14] pointed out that the economic crisis negatively influenced the duration of the comparative advantage of the EU-28's agri-food trade. However, this negative impact was significantly smaller for the differentiated products. By applying the revealed competitiveness index, Crescimanno et al. [23] analyzed the competitiveness of France, Italy, Spain, and Turkey and paid special attention to the global financial crisis. The agri-food sector itself turned out to be crisis resistant. Turkey had the highest competitiveness among the analyzed countries which only slightly decreased after the crisis. This is partly explained by the country's lower structural dependence on foreign markets. In general, competitive sub-sectors performed well, while uncompetitive sub-sectors faced large disadvantages. The key factors of competitiveness are agri-food production, prices, incomes, and food consumption.
By calculating price and quality competition, Bojnec and Fertő [24] concluded that the majority of the EU's agri-food trade shows successful competition, and that quality matters even more than price. This implies the export of higher value-added products, particularly in the case of the Netherlands (old member state) and Poland (new member state). It should also be noted that the duration of competitive performance is generally longer for the old member states than for the new member states [25].
Not only the EU's but also its member states' agri-food trade was frequently analyzed. Huan-Niemi et al. [26] applied the Global Trade Analysis Project (GTAP), a general equilibrium model, and the Delphi method to evaluate the impacts of policy changes on the competitiveness of the Finnish agri-food sector. In the case of reduced or no CAP subsidies, Finnish agriculture would suffer a huge decline in its competitiveness, especially in the beef sector. The major production problems they identified were high land prices and the high support dependency of the farms. They also noted that lower subsidies would result in lower land prices, which would make entering this market easier for new entrants. By using the RCA and the export similarity indices, Majkovic et al. [27] analyzed Slovenian agri-food exports and noted that Slovenia has experienced decreasing competitiveness compared to the other nine new member states that joined the EU in 2004. Similar results were found when they applied an intra-industry trade (IIT) analysis [28]. Based on those results, the best option to increase the competitiveness of the Slovenian agri-food sector in the competitive single market would be higher product quality. This can be based on human, physical, and technological capital. Majkovic et al. [29] also found decreasing quality and price competitiveness of Slovenian agri-food trade on the Croatian market. By using the revealed symmetric comparative advantage (RSCA) and the trade balance index (TBI), Smutka et al. [30] found that the Czech agri-food sector has higher competitiveness on the European markets (EU-28, the other European countries, and the Commonwealth of Independent States (CIS)) than on other countries' markets. The major problem they identified was the low value added of its products. Stankaitytė [31] used the international competitiveness index (LIIC) and found that Lithuanian dairy products are competitive on the EU markets and have shown an increasing trend. When this was applied to third-country markets, however, the picture differed from country to country; for example, competitiveness dropped remarkably on the Russian market due to the import ban. Szczepaniak [32] applied the Grubel-Lloyd (GL) index to analyze the competitiveness of the Polish food sector. According to her findings, the Polish agri-food sector performed competitively during the analyzed period (2001-2011), especially after the EU accession. The major sources of the country's competitiveness are the intense intra-industry trade driven by the high demand of the common market, the economies of scale resulting from specialization, and the increasing purchasing power of Polish consumers. The same results were obtained for the period 2004-2017 when the RCA and the TBI were used [33]. Their increasing values indicated the increasing competitiveness of Polish agri-food products on international markets. This process is led by the appropriate transformation of the sector. By applying the RCA index, Ubrežiová et al. [34] found worsening competitiveness in the Slovak agri-food sector. They suggested modernization, investments into the production factors, and a reform of the national legislation to lower administrative burdens and to improve market access. As Mirela [35] pointed out, the underdeveloped processing industry is one of the major bottlenecks of the Romanian agri-food sector too, which results in a low unit value index (UVI).
This means the export of raw materials and the import of processed goods with high value added. Measuring the competitiveness of the Hungarian agri-food sector in the 1990s with Balassa indices, Fertő and Hubbard [36] found revealed comparative advantages for 22 product groups, although they discovered a slight decrease over the analyzed years. Government interventions play an important role in this process; however, they are inversely related to competitiveness, as it is not always the most competitive sectors that are the major beneficiaries of the different supports. By applying constant market share analysis, Juhász and Wagner [37] examined the competitiveness of Hungarian agri-food exports. According to their results, cereals, oilseeds, and poultry are the most important and competitive export products. The major problem they identified was the Hungarian transport infrastructure, which is much weaker than that of its competitors.
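The Grubel-Lloyd (GL) and trade balance (TBI) indices cited in these country-level studies reduce to one-line formulas once product-level export and import values are available. The snippet below is a minimal illustration with hypothetical flows; the product names and values are invented for the example.

```python
def grubel_lloyd(exports, imports):
    """Grubel-Lloyd intra-industry trade index for one product group:
    1 means purely intra-industry (two-way) trade, 0 purely one-way trade."""
    return 1.0 - abs(exports - imports) / (exports + imports)

def trade_balance_index(exports, imports):
    """TBI in [-1, 1]: negative = net importer, positive = net exporter."""
    return (exports - imports) / (exports + imports)

# Hypothetical product-group flows in million USD.
flows = {"meat": (120.0, 80.0), "cereals": (300.0, 30.0), "dairy": (60.0, 65.0)}
for product, (x, m) in flows.items():
    print(f"{product:8s} GL={grubel_lloyd(x, m):.2f}  TBI={trade_balance_index(x, m):+.2f}")
```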
Country-level comparisons between the EU or one of its member states and one or some of its trading partners were also of importance. Qineti et al. [38] analyzed the Slovak and the EU agri-food trade with Russia and Ukraine. They found that the EU has lost its competitiveness on these markets since the enlargement, while Slovakia performed the same on the Russian market but slightly increased its position on the Ukrainian market. Verter et al. [39] examined the agri-food trade between the EU-28 and Nigeria. Most of the Nigerian products had either no comparative advantage or they showed a decreasing trend. Their proposals were the promotion of agri-food value chains and a more protective trade policy to increase self-sufficiency. Zdráhal et al. [40] used a product mapping tool to assess South Africa's agri-food trade with the EU. South Africa turned out to be more competitive on the African markets compared to the EU-28 market. The African export trade is more diversified and provides opportunities for those players that are not competitive on the EU markets. They also identified the improved production and export of higher value-added products as key factors of competitiveness. Measuring competitiveness with the Constant Market Share (CMS) model, Guo et al. [41] distinguished short and long-term impacts. Both the CAP reform in Germany (1999) and the WTO accession in China (2001) caused negative short-term impacts in the Germany-China relationship; however, there were positive long-term impacts on their agri-food trade. They emphasized the influence of exchange rate changes on competitiveness.
The EU also acts as a pull market for other countries' agri-food trade. Coretchi and Gribincea [42] linked competitiveness to low productivity and economic growth. In the case of Moldova, they emphasized organic production and higher product quality as solutions. Based on different Balassa indices and the GL index for intra-industry trade analysis, the Moldavian agri-food sector had competitive products, although two-thirds of them were raw materials. The sector is hindered by low state support and the lack of long-term financial resources [43]. Therefore, Cimpoies [43] highlighted the need for a complex reform that could provide a stable political environment. As export competitiveness is related to production, different support programs, investments, and high-quality products would be essential. Senyshyn et al. [44] suggested a range of measures to increase the competitiveness of the Ukrainian agri-food sector, for example, a broader export commodity structure, strict quality and food security management, international marketing, higher compliance with international standards, information sharing, and cooperation.
Markovic et al. [45] analyzed the Serbian agri-food export to the EU with the net trade (NTI) and the GL indices. In general, they found that the Serbian agri-food sector is competitive; however, its values are low and decreasing. Instead of concentrating on quantity, they proposed export restructuring and product differentiation. Based on their results, they suggested further exports of fruits and vegetables as the best example of quality competitiveness. Besides product quality, processing would also be important. Matkovski et al. [46] also pointed out the revealed comparative advantages of the Serbian agri-food products, although the majority of these products are raw materials or minimally processed goods, e.g., Serbia's major export product is cereals. Trade agreements with the EU and the CEFTA (Central European Free Trade Agreement) countries significantly helped a further expansion of the export trade. They highlighted that the sector's performance depends on its ability to respond to the demand of these foreign markets, especially in terms of product security and quality. Based on Matkovski et al.'s study [47], the export of cereals is particularly important in the Vojvodina region. They also added that there is a lower competitive pressure, i.e., quality and quantity, on the CEFTA markets that also makes them important. Having a higher competitiveness requires significant actions, such as innovations, knowledge transfer, and investments into the processing phase of agricultural products. Matkovski et al. [48] identified the competitiveness of South-East Europe as a policy challenge, which is an even larger problem for those countries that are advanced in the integration process with the EU. Remarkable steps should be taken in the fields of cooperation and infrastructure (transport, storage, finance, institutions, etc.).
Ignjatijevic et al. [49] analyzed a "mixed" region where EU member states and candidate countries were also involved. With the Balassa, the Lafay (LFI), and the GL indices, they analyzed the trade competitiveness of the Danube region. They noted that the RCA values increased for most of those countries that had comparative advantages at the beginning of the analyzed period. They highlighted the importance of cooperation (cost-efficient production, quality standards, common transportation channels, etc.). This requires not only legal but also monetary measures.
Other parts of the world were also analyzed with similar methodological tools. The case of the Association of Southeast Asian Nations (ASEAN) countries showed that productivity is only one potential source of competitiveness, but not an exclusive one [4]. A lack of adequate processing capacities seems to be a common barrier to higher competitiveness, as not only the ASEAN but also the CIS suffer from this [4,13]. However, these country groups show a common characteristic: significant agri-food producers have higher comparative advantages and a better trade performance [13,50].
Chen et al. [51] used the CMS to investigate China's export competitiveness and found that trade policy reforms may lead to decreasing agri-food export competitiveness. However, it is not always easy to separate the impacts of other changes, e.g., world and regional demand. Xue and Revell [52] analyzed China's vegetable sector with the export specialization index (XSP). Most of the Chinese foreign markets are price sensitive; therefore, efficient and cost-effective logistics are extremely important. This seems to be the basis of competitiveness, especially in the absence of economies of scale.
In addition to the RCA and the NRCA, Sarker and Ratnasena [53] applied the Heckscher-Ohlin-Vanek (H-O-V) model to analyze the competitiveness of the Canadian wheat, beef, and pork sectors. Their results were mixed: while wheat was internationally competitive during the whole analyzed period, pork was not. The North American Free Trade Agreement (NAFTA) rapidly improved the performance of the beef sector, but this was reversed by the bovine spongiform encephalopathy (BSE) crisis, from which the sector has not been able to fully recover. Regarding the major factors influencing trade competitiveness, the authors identified the following: • Seed cost and the Western Grain Stabilization Act had a significant negative impact on the competitiveness of the wheat sector, while the Western Grain Transportation Act had a positive impact. This implies that different policy measures may have completely different impacts; • Meat processing costs negatively impacted both the pork and the beef sectors; • The decoupled safety net program and the National Tri-Partite Stabilization Program favored the beef sector but had a negligible impact on the pork sector; • Unlike the beef sector, the Canada-US exchange rate negatively impacted the wheat and pork sectors.
Fertő [10] carried out a global agri-food trade competitiveness analysis. He distinguished between gross and value-added exports and measured competitiveness with the NRCA index. One of his results was the difference between the two NRCA rankings. In the case of agriculture, China had the highest value for gross exports but the lowest one for value-added exports. Regarding value-added exports, Brazil performed the best. The NRCA values for the food sector were generally higher with Brazil (gross export) and Thailand (value-added export) at the top.
Product-Level Analyses
Benus et al. [54] noted that, due to the accession, Slovak producers faced increased rivalry on the common European market, and their competitiveness declined significantly in the case of the spirits industry. Based on their Balassa-type indices, only liqueurs and cordials were able to increase their competitiveness, while vodka products kept some of theirs between 2004 and 2018. Benus [55] also used Balassa-type indices to analyze the Czech meat industry. Fresh, chilled, or frozen poultry meat performed the best, although every index indicated a decreasing trend, especially after the accession. In contrast, the meat of bovine animals (fresh or chilled) had no relative export advantage at the beginning of the analyzed period, but all of its indices showed an increasing trend. The author also highlighted that the competitiveness of the meat industry is closely related to the profitability of livestock farms.
Balogh and Jámbor [16] analyzed the European cheese market and found that factor endowments measured by GDP per capita, the protected designation of origin (PDO), and the EU accession positively influenced trade competitiveness by leading to higher RCA values. However, the impact of foreign direct investment turned out to be negative because most of the cheese producer companies are in national hands. At the country level, the Netherlands, Denmark, Cyprus, and Luxemburg are the most competitive cheese producers in the EU [56]. Based on different Balassa indices and the unit value difference (UVD), Török and Jámbor [57] analyzed the competitiveness of fruit spirits labelled with geographical indications in six Central and Eastern European countries. Most of these spirits had a comparative advantage, as well as being competitive. However, they had experienced a decreasing trend of competitiveness and worsening market positions since the accession. Török and Jámbor also analyzed the European ham market [17]. Their results, based on the RSCA, showed that four member states were competitive (Portugal, Spain, Italy, and Slovenia); however, only Portugal was able to increase its competitiveness during the analyzed period. Regarding the major elements of competitiveness, and in line with the literature, factor endowments, geographical indication, and the EU accession had a positive impact, while the FDI negatively influenced it. In the case of the Italian wine sector, specialization and product quality are important to answer the diversified consumer demand [58]. The first is more important on the new markets, while the second is essential for keeping traditional export partners. By using the export and import market share, and the relative trade advantage (RTA), Crescimanno and Galati [58] identified the increased competitiveness of the Italian wine sector between 2000 and 2011. El Chami et al. [59] used the RTA and the revealed competitiveness (RC) to analyze the Lebanese wine industry and revealed a comparative advantage. As it was more competitive, they highlighted the importance of a long-term and stable sectoral policy.
In relation to spices, it seems that countries with the highest comparative advantages (Guatemala, Sri Lanka, and India) concentrate on the most competitive products (cardamoms, cloves, dried pepper, and cumin seeds) and the positive determinants of their trade competitiveness are land and labor productivity [12]. In the peanuts market, Nicaragua and Senegal show the highest, as well as stable, competitive potential; however, the survival test indicated intense competition [60].
de Oliveira et al. [61] found decreasing competitiveness in the Portuguese tomato industry. As the international tomato market is highly competitive, product differentiation is crucial to ensure a lower dependence on prices. Besides product quality, they also noted that the use of good agricultural and environmental practices can be an important factor on their traditional markets. As climate change is a serious threat, collaborative research and development actions are essential at both the production and processing levels.
Used Methodologies and Recommendations
Although the selected articles were divided into two groups, they had many common characteristics in terms of methodology and the identified factors of higher agri-food trade competitiveness. Table 1 gives a summary of them.
Table 1. Summary of the methodology and the recommendations for higher agri-food trade competitiveness *.
IIT: cooperation [28]; innovation and investment [28]
Trade equation model: factor endowments, product diversification, and organic products [42]
GTAP, Delphi method: profitable production [26]
* Articles were found in two rows when the Balassa-type and export and/or import-related indices were simultaneously applied in the same article.
According to the analyzed literature, the most commonly suggested factors of higher agri-food trade competitiveness were supportive legislation and/or (trade) policy (12 articles), higher value-added/more sophisticated goods (11), and high, efficient, and profitable production (9). These are the fundamentals of competitive agri-food trade. As the included articles covered not only the EU but also other parts of the world, these findings may be generalized. However, it should be noted that these factors are interrelated, as supportive policies encourage all stakeholders to invest in their production and to become more efficient and/or to process their products further.
Summary and Conclusions
Measuring competitiveness, and agri-food competitiveness in particular, is still a challenge. There are many ways to do so, each with its advantages and disadvantages. The choice among the available methodological tools depends on the available datasets, as well as on the preference of the researcher. From a purely methodological point of view, there are basically export-related and export-import-related indices to measure and analyze agri-food competitiveness. Table 2 gives an overview of the most commonly used indices in the analyzed articles. As can be seen from the table, the vast majority of the indices used in the selected articles were based on trade data. Simpler indices use only export data, while more sophisticated indices also take import data into consideration. The only exception is the PRODY index, which uses the RCA index and weights its values with GDP per capita.
The selected articles were grouped into two categories. Most of them dealt with regional or country level issues, while a smaller number of articles analyzed the competitiveness of one agri-food product or product group. The country and regional level analyses can be divided into two parts. First, most of the included articles analyzed the competitiveness of either the EU or one of its member states. They evaluated the three significant agri-food trade-related events of the last 15 years: the EU accession, the global financial crisis, and the Russian embargo. At the country level, the results suggest that the old member states generally have a more favorable trade structure with a higher share of processed goods. This seems to be one of the most important elements of agri-food trade competitiveness. The high share of the EU-related articles can be explained by the composition of the authors, most of whom were from a European country.
Second, a similar methodology (Balassa-type indices and export and/or import-related indices) was applied to other countries too. Some of them are closely related to the EU, e.g., the candidate countries, while others are important trading partners of the EU, e.g., Canada, China, or the ASEAN countries.
As the basis of any trade activities is agricultural production, strengthening this sector seems to be an advisable strategy for most of the countries. This can be realized by more efficient or profitable production, horizontal and vertical cooperation, research and development (R&D), innovation and investment, developed/efficient infrastructure including logistics, and supportive legislation and/or (trade) policy. This partly overlaps with the list of the most important identified factors: supportive legislation and/or (trade) policy; higher value added; and high, efficient, and profitable production. This list was the same in the case of the Balassa-type indices and the export and/or import-related indices as well.
The impact of the EU accession on agri-food trade at the product level was also a frequently analyzed topic in the international literature. At the product level, the success factors of agri-food trade competitiveness are basically the same as those found at the country/regional level. It should be noted that it is easier and faster to increase agri-food competitiveness at the product level with targeted supports and policies than at the country level. The most important and efficient product-level measures were product differentiation, the use of different geographical indications, and organic production.
In relation to future research paths, there are many potential options. Adding endowments to the current analysis may lead to different conclusions. The use of different keywords would result in a different sample, and therefore, different results. Another option could be a focused analysis of each of the major indices, as they can be used to analyze other sectors of the economy too.
Data Availability Statement: A publicly available dataset was analyzed in this study which can be found here: https://wits.worldbank.org/ (accessed on 3 October 2021).
Conflicts of Interest:
The author declares no conflict of interest.
"Economics"
] |
Androgen Receptor Coregulator Long Non Coding RNA CTBP1-AS Is Associated With Polycystic Ovary Syndrome in Kashmiri Women
Objective: Polycystic ovary syndrome (PCOS) is one of the most common reproductive, endocrine and metabolic disorders in premenopausal women. Even though the pathophysiology of PCOS is complex and obscure, the disorder is prominently considered as a syndrome of hyperandrogenism. C-terminal binding protein 1 antisense (CTBP1-AS) acts as a novel androgen receptor-regulating long noncoding RNA (lncRNA). Therefore, the present study aimed to establish the possible association of the androgen receptor-regulating long noncoding RNA CTBP1-AS with PCOS. Methods: A total of 178 subjects, including 105 PCOS cases and 73 age-matched healthy controls, were recruited for the study. The anthropometric, hormonal and biochemical parameters of all subjects were analysed. Total RNA was isolated from peripheral venous blood and expression analysis was done by quantitative real-time PCR (qRT-PCR). Correlation analysis was performed to evaluate the association between various clinical parameters and lncRNA CTBP1-AS expression. Results and conclusion: The mean expression level of CTBP1-AS was found to be significantly higher in the PCOS women than in the healthy controls (-lnCTBP1-AS, 4.23 ± 1.68 versus 1.24 ± 0.29, p < 0.001). Further, subjects with a higher expression level of CTBP1-AS had a significantly higher risk of PCOS compared to subjects with low levels of CTBP1-AS expression (OR = 11.36, 95% CI = 5.59-23.08, P < 0.001). The area under the ROC curve (AUC) was 0.987 (SE 0.006 and 95% CI 0.976-0.99). However, lncRNA CTBP1-AS was found to have no association with different clinical characteristics in PCOS. In conclusion, the androgen receptor coregulating lncRNA CTBP1-AS is associated with PCOS, and high expression of CTBP1-AS is a risk factor for PCOS in Kashmiri women.
Introduction
Polycystic ovary syndrome (PCOS) is considered one of the most common reproductive, endocrine and metabolic disorders, affecting approximately 10% of premenopausal women worldwide 1 . It is a heterogeneous disorder and comes with a wide spectrum of clinical manifestations that include endocrine, metabolic, reproductive and sometimes psychological anomalies. Even though the pathophysiology of PCOS is complex and obscure, the disorder is prominently considered as a syndrome of hyperandrogenism 2 . Hyperandrogenism can be attributed either to excessive serum testosterone levels or to hyperactivity of the androgen receptor. The transcriptional activity of the AR is regulated by a number of coregulators 3 . Consequently, any abnormal activity of these androgen receptor modulators, either due to altered expression or to mutation, may lead to impaired activity of the androgen receptor and thereby play a role in the pathogenesis of hyperandrogenism and PCOS.
The recent advancements in genome sequencing technologies revealed that a major portion of the genome is transcriptionally active and encodes noncoding RNAs 4 . Long non-coding RNAs (lncRNAs) are noncoding RNA transcripts of more than 200 nucleotides in length, and they constitute a significant proportion of noncoding RNAs 5 . A number of studies have demonstrated their role in a wide variety of biological processes 6,7 . However, their mechanism of action is still poorly understood. Primarily they act as epigenetic modulators, regulating gene expression at the transcriptional as well as the post-transcriptional level. Besides, dysregulation of lncRNAs has been implicated in various diseases 8 . The long non-coding RNA C-terminal binding protein 1 antisense (CTBP1-AS) is associated with the androgen receptor signalling pathway, acting as an AR modulator. CTBP1-AS is located in the antisense region of C-terminal binding protein 1 (CTBP1) and acts by regulating the expression of the AR corepressor C-terminal binding protein 1. LncRNA CTBP1-AS acts by repressing the transcription of CTBP1. CTBP1-AS first associates with the transcriptional repressor polypyrimidine tract binding protein-associated splicing factor (PSF) and then recruits histone deacetylase-paired amphipathic helix protein Sin3A complexes onto the promoter region of the gene. Thus, CTBP1-AS downregulates transcription of CTBP1, thereby promoting the transcriptional activity of the AR. In addition, CTBP1-AS itself mimics androgen receptor signalling by inducing upregulation of androgen receptor-regulated genes 9 .
The aim of this study was to evaluate the association of the androgen receptor coregulator lncRNA CTBP1-AS with the risk of PCOS in ethnic Kashmiri women. We also attempted to find the correlation of lncRNA CTBP1-AS expression with various clinical manifestations of PCOS.
Recruitment of Subjects
In this case-control study, a total of 178 subjects were recruited of which 105 were drug naïve PCOS women and 73 were age-matched healthy controls. PCOS cases were recruited from the patients
Anthropometric and clinical evaluation
All the subjects underwent general anthropometric measurements that included height, weight, waist, hip, waist-hip ratio (WHR), body mass index (BMI) and FG score. A detailed history of clinical symptoms like menstrual cycles, hirsutism, acne, alopecia, and acanthosis nigricans was also taken from all study subjects. Height was measured using a height measuring scale in standing position without shoes, and weight was measured using a weighing scale with light clothing. For measurement of waist circumference, the minimum value between the iliac crest and the lateral costal margin was determined in a standing position, and hip circumference was measured as the maximum value over the buttocks. WHR was calculated by dividing waist circumference by hip circumference. BMI was calculated as weight (kg)/height (m²). Hirsutism was measured by the Ferriman-Gallwey scoring system, and a score of greater than 8 was taken as significant. All the PCOS women underwent transabdominal ultrasonography (USG) to measure the number of small peripheral cysts and/or increased ovarian volume.
Hormonal and Biochemical assessment
For hormonal and biochemical assessment, blood samples were collected in clot activator vials from all the participants after an overnight fast on the 2nd-3rd day of their menstrual cycle. The hormonal profile included testosterone, luteinizing hormone and follicle stimulating hormone. Thyroid stimulating hormone (to exclude thyroid disorders) and prolactin (to exclude hyperprolactinemia) were estimated by radioimmunoassay (RIA) using RIA kits (Immunotech S.R.O, Prague, Czech Republic) on a Beckman Coulter UniCel DxI 800 (Access Immunoassay system). Enzyme-linked immunosorbent assays were used to measure sex hormone-binding globulin (SHBG), androstenedione, dehydroepiandrosterone sulfate (DHEAS), and fasting insulin using Calbiotech (CA, USA) and DGR Instruments GmbH (Marburg) ELISA kits with SkanIt RE 4.0 software on a Thermo Scientific Multiskan FC ELISA reader. The biochemical parameters measured included the oral glucose tolerance test, cholesterol, triglycerides, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, urea, creatinine, SGOT and SGPT. All the biochemical parameters were measured on a semi-automatic analyzer (ERBA Chemtouch 7, Biochemistry Analyzer, Wiesbaden, Germany) using ERBA bioassay diagnostic kits.
The free androgen index (FAI) was derived using the formula FAI = [total testosterone (nmol/L)/SHBG (nmol/L)] × 100. Insulin resistance was calculated by the homeostatic model assessment (HOMA-IR), and insulin sensitivity by the quantitative insulin sensitivity check index (QUICKI), calculated by the formula QUICKI = 1/[log(fasting insulin) + log(fasting glucose)].
Peripheral blood mononuclear cell isolation
For isolation of peripheral blood mononuclear cells, a 2 ml blood sample in a Na-EDTA vial was obtained from all the subjects. Blood samples were diluted by the addition of an equal volume of phosphate buffered saline (PBS), followed by density gradient centrifugation using Ficoll-Paque Plus (GE Healthcare Bio-Sciences, Sweden). After density gradient centrifugation, peripheral blood mononuclear cells (PBMNCs) were isolated and washed twice with PBS to remove any contamination of Ficoll and plasma. The cells thus obtained were stored at -80°C for further processing.
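A minimal sketch of how these derived indices are typically computed is given below. Since the exact formulas were lost from the extracted text, the standard published definitions are used here, and the unit conventions (nmol/L for hormones, µU/mL and mg/dL for insulin and glucose) are assumptions rather than details taken from the study; the numerical values are illustrative only.

```python
import math

# Standard definitions used as a stand-in for the formulas lost from the text.
def free_androgen_index(total_testosterone_nmol_l, shbg_nmol_l):
    """FAI = total testosterone / SHBG * 100 (both in nmol/L)."""
    return total_testosterone_nmol_l / shbg_nmol_l * 100.0

def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mg_dl):
    """HOMA-IR with glucose in mg/dL (use a 22.5 denominator when glucose is in mmol/L)."""
    return fasting_insulin_uU_ml * fasting_glucose_mg_dl / 405.0

def quicki(fasting_insulin_uU_ml, fasting_glucose_mg_dl):
    """QUICKI = 1 / (log10(fasting insulin) + log10(fasting glucose))."""
    return 1.0 / (math.log10(fasting_insulin_uU_ml) + math.log10(fasting_glucose_mg_dl))

# Illustrative values only.
print(free_androgen_index(2.1, 35.0))   # ~6.0
print(homa_ir(12.0, 95.0))              # ~2.8
print(quicki(12.0, 95.0))               # ~0.33
```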
RNA isolation and cDNA Synthesis
Total RNA was isolated from blood leukocytes by the TRIzol method 11 . All of the RNA samples were subjected to DNase treatment using a Sigma Aldrich DNase treatment kit according to the manufacturer's protocol in order to eliminate any traces of genomic DNA. The RNA thus obtained was checked for integrity on a 2% agarose gel. Qualitative and quantitative analysis of all the RNA samples was done on a NanoDrop (Thermo Scientific). All RNA samples with an absorbance A260/280 ratio of 1.9-2.0 were subsequently used for cDNA synthesis by RT-PCR. 1.5 mg of total RNA was reverse transcribed into cDNA using the Thermo Scientific RevertAid First Strand cDNA Synthesis Kit as per the manufacturer's protocol in an Applied Biosystems thermal cycler.
Quantitative real-time polymerase chain reaction
Expression levels of lncRNA CTBP1-AS were measured by quantitative real-time polymerase chain reaction (qRT-PCR) using 10 µl KAPA SYBR® FAST SYBR green, 0.3 µl of forward and 0.3 µl of reverse primers, and less than 100 ng of cDNA (1 µl) in a 20 µl reaction, following the manufacturer's protocol. qRT-PCR was performed in a Roche LightCycler® 480 Instrument II 96-well plate with the following reaction protocol: pre-incubation at 95°C for 5 min, amplification for 40 cycles at 95°C for 20 s, 62°C for 15 s and 72°C for 15 s. Amplification was followed by melting curve analysis with the following conditions: 95°C for 5 s, 60°C for 1 min and 95°C continuous. The amplified product was subsequently run on a 2% agarose gel to confirm the product size. The relative expression of CTBP1-AS was estimated by the Livak method, with beta-actin used as the reference gene 12 . Each qRT-PCR reaction was performed in triplicate.
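The Livak (2^-ΔΔCt) calculation referenced above reduces to a few arithmetic steps. The sketch below shows the computation with invented Ct triplicates for CTBP1-AS (target) and beta-actin (reference); the numbers are purely illustrative and are not the study's measurements.

```python
import numpy as np

# Relative expression by the Livak (2^-ΔΔCt) method.
def livak_fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_case = np.mean(ct_target_case) - np.mean(ct_ref_case)
    delta_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    delta_delta_ct = delta_ct_case - delta_ct_ctrl
    return 2.0 ** (-delta_delta_ct)

# Triplicate Ct values: CTBP1-AS (target) and beta-actin (reference), made up for illustration.
fold = livak_fold_change(
    ct_target_case=[27.1, 27.3, 27.2], ct_ref_case=[18.0, 18.1, 17.9],
    ct_target_ctrl=[29.6, 29.5, 29.7], ct_ref_ctrl=[18.2, 18.0, 18.1],
)
print(f"fold change in cases vs. controls: {fold:.2f}")
```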
Statistical Analysis
All baseline quantitative variables were expressed as mean ± SD. The anthropometric, hormonal and biochemical parametric variables between PCOS cases and controls were compared by the unpaired Student's t-test. The chi-square test was used to study the association of CTBP1-AS expression with various non-parametric variables. The relation between different metabolic and endocrine markers and CTBP1-AS was evaluated by the Pearson or Spearman rank correlation coefficient. The expression of CTBP1-AS was divided into binary groups in both PCOS cases and healthy controls based on limits derived from the healthy control group, as follows: <1.81 for low expression and >1.81 for high expression. Receiver operator characteristic (ROC) curves and the area under the curve (AUC) were calculated using Sigma Plot 10. The statistical analysis was done using the statistical computing tool VassarStats (http://vassarstats.net/). A P-value of <0.05 was considered statistically significant.
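For the binary high/low comparison described above, the odds ratio and its 95% confidence interval follow directly from the 2x2 table of expression group versus case/control status. The sketch below uses the Woolf (log) interval; the cell counts are illustrative stand-ins consistent with the group sizes, not the study's actual data.

```python
import math

# Odds ratio with a Woolf (log) 95% confidence interval from a 2x2 table.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = cases with high expression, b = cases with low expression,
    c = controls with high expression, d = controls with low expression."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Illustrative counts only (105 cases and 73 controls in total).
print(odds_ratio_ci(a=85, b=20, c=20, d=53))
```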
Clinical Characteristics of subjects
The clinical, hormonal and metabolic characteristics of the PCOS patients (n = 105) and controls (n = 73) are summarized in Table 1.
Expression of lncRNA CTBP1-AS in PCOS women and controls
As shown in Fig. 1, the mean expression level of CTBP1-AS in peripheral blood leukocytes was significantly higher in the PCOS women than in the healthy controls (-lnCTBP1-AS, 4.23 ± 1.68 versus 1.24 ± 0.29, p < 0.001).
Association Between Expression of lncRNA CTBP1-AS and Presence of PCOS
Further, we sought to evaluate the relationship between different expression levels of CTBP1-AS and the risk of PCOS. The subjects were divided into binary groups of high and low expression based on limits set by the control group. As shown in Table 2, subjects with a higher expression level of CTBP1-AS had a significantly higher risk of PCOS compared to subjects with low levels of CTBP1-AS expression.
Receiver operator characteristic of lncRNA CTBP1-AS Expression between PCOS cases and healthy controls
Receiver operator characteristic (ROC) curve analysis was performed with the healthy controls as a reference; the area under the curve (AUC) was 0.987, with a standard error of 0.006 and a 95% confidence interval of 0.976-0.99 (Fig. 2).
Correlation Between lncRNA CTBP1-AS Level and Clinical Parameters in PCOS
Correlation between the CTBP1-AS expression level and various clinical parameters in PCOS cases and in controls was assessed by calculating the Pearson or Spearman rank correlation. The expression of lncRNA CTBP1-AS was used as the dependent variable, and the clinical parameters in both PCOS cases and controls as independent variables. As shown in Table 3, no association was found between the expression of lncRNA CTBP1-AS and any of the biochemical characteristics.
Association between lncRNA CTBP1-AS expression and Hirsutism
We sought to evaluate the association between expression levels of CTBP1-AS and the clinical hyperandrogenic trait hirsutism in PCOS women. The PCOS cases were divided into binary groups based on the presence or absence of hirsutism, and expression levels were compared by t-test. As shown in Table 4, no significant association was found between the expression of CTBP1-AS and hirsutism (P = 0.64).
Association of CTBP1-AS Expression with total testosterone
To explore the association of lncRNA CTBP1-AS expression with total testosterone (TT), the PCOS subjects were grouped into high TT (>50 ng/dl) and low TT (<50 ng/dl) groups, and the t-test was used to compare the expression levels. No significant association was found between the expression of CTBP1-AS and total testosterone (P = 0.93) (Table 5).
Association between lncRNA CTBP1-AS expression and obesity
The association between the expression of lncRNA CTBP1-AS and obesity was evaluated by t-test. The subjects were categorised into two groups: those having BMI < 25 kg/m² and those having BMI > 25 kg/m². As shown in Table 6, no significant difference in the expression of CTBP1-AS was found between the two groups in either cases (P = 0.897) or controls (P = 0.154).
Discussion
This study is the first of its kind to investigate the association of lncRNA CTBP1-AS with the risk of PCOS and its clinical manifestations in North Indian PCOS women. The androgen receptor (AR)-regulating lncRNA CTBP1-AS was significantly overexpressed in PCOS women compared with healthy controls. We also found that high expression of CTBP1-AS is a risk factor for PCOS but does not play a significant role in driving the various clinical manifestations associated with PCOS.
PCOS is a multisystemic disorder associated with varied immediate complications like hirsutism, acne, alopecia and menstrual irregularities, and a range of long-term metabolic, cardiovascular, and psychological complications 13,14 . Previous studies have shown that PCOS women are predisposed to obesity [15][16][17][18] . Our study showed similar results: BMI (P < .001) and WHR (P < .001) were significantly higher in women with PCOS than in healthy women. Studies have also shown that PCOS women have abnormal metabolic and hormonal parameters [15][16][17][18][19][20] . Consistently, in our study various metabolic parameters were found to be deranged in PCOS women. The insulin resistance marker HOMA-IR (P < .001) was significantly higher in PCOS women compared to healthy females, and insulin sensitivity was also found to be significantly lower in PCOS than in controls; QUICKI (P < .001). Women with PCOS are predisposed to a high risk of liver disease and have elevated levels of liver enzymes 21,22 . Similar results were demonstrated in our study: the liver enzymes AST (P < .001) and ALT (P < .001) were significantly higher in PCOS women than in controls.
The etiology of PCOS is still obscure; nevertheless, hyperandrogenism is the cardinal feature of PCOS and plays a central role in the development of PCOS 23 . The primary complaints of PCOS, like hirsutism, alopecia, acne, polycystic ovaries and menstrual irregularities, are outcomes of hyperandrogenism [23][24][25] . The hyperandrogenic effect is due either to high levels of serum testosterone or to hyperactivity of the androgen receptor. The activity of the androgen receptor is regulated by different coactivators and corepressors 3,26 .
Studies have shown that a hyperactive AR at ovarian and extraovarian sites, including the hypothalamus, skeletal muscle, and adipocytes, plays a crucial role in PCOS pathophysiology. The amplified androgenic response via hyperactivity of the androgen receptor, especially at the neuroendocrine site, promotes abnormal GnRH pulsation, early follicular development, subsequent follicular arrest, ovulatory dysfunction and other hyperandrogenic manifestations associated with PCOS 23,27,28 . Thus, the androgen receptor, its downstream signalling molecules and its coregulators are critical elements for understanding the mechanism of progression of PCOS, its clinical manifestations and the development of novel AR-based therapeutic treatments.
LncRNAs have a tremendous role in normal cellular processes and development [29][30][31] . LncRNAs act as regulators of gene expression; post-transcriptionally, they can act by regulating splicing, translation, the stability of proteins, mRNA decay, etc. 32 . Consequently, they assume an important role in disease pathogenesis and progression. Studies have shown that dysregulation of lncRNAs is involved in various disorders like cancer, metabolic diseases and other endocrine-related disorders such as diabetes mellitus. Studies have also revealed that lncRNAs are involved in the development and regulation of the female reproductive system; moreover, they play a crucial role in the pathogenesis of various gynaecological diseases like endometriosis and various gynaecological cancers such as cervical, endometrial, and ovarian cancers 33 .
C-terminal binding protein 1 (CTBP1) is one of the androgen receptor coregulators that plays an important role in androgen signalling by acting as a corepressor of the AR, thereby decreasing the androgenic effect. The CTBP1 corepressor is in turn regulated by its antisense lncRNA, CTBP1-AS. This lncRNA has been extensively studied in the AR signalling cascade. The role of CTBP1-AS as a novel positive regulator of AR transcriptional activity was confirmed by Takayama and co-workers. They showed that CTBP1-AS acts by promoting the repression of the CTBP1 protein, and it also releases repressors of AR-regulated genes from their regulatory regions. Thus, the CTBP1-AS lncRNA promotes a hyperactive androgen response and subsequent transcriptional activation of genes regulated by androgen 9 . In the present study, we showed that CTBP1-AS is significantly overexpressed (P < .001) in PCOS women compared to controls. Further, we also found that women with higher expression of CTBP1-AS are at greater risk of developing PCOS than women with lower CTBP1-AS expression. Similar results were reported by Liu et al., who investigated 23 PCOS women and 17 controls of Chinese ethnicity and found significantly higher CTBP1-AS expression in PCOS women 34 . Taken together, these studies suggest CTBP1-AS as an important factor for hyperactivity of the AR and the development of androgenic disorders like PCOS.
The clinical manifestations of PCOS are varied and complex. Primary clinical manifestations like hirsutism, acne, alopecia and menstrual irregularity are outcomes of multiple biological interactions which may require the simultaneous presence of various genetic, epigenetic and environmental factors [35][36][37][38][39][40] . Although in the present study we showed that CTBP1-AS is associated with PCOS, we did not find any correlation between CTBP1-AS expression and the various clinical manifestations of PCOS. Hirsutism (P = 0.74207), acne (P = 0.74207) and alopecia (P = 0.689985) were not significantly associated with the expression of CTBP1-AS. Our study is consistent with the previous study carried out by Liu et al. on Chinese women with PCOS 34 . This may be attributed to the fact that in this complex syndrome, multiple factors come into play and act together so as to bring about a phenotypic change or a specific clinical manifestation. We assume that CTBP1-AS may be one of these contributing factors that, in association with other factors, is responsible for the development of various clinical manifestations in PCOS. However, further mechanistic insights are needed so as to understand how and to what extent CTBP1-AS influences the different phenotypic features in PCOS.
We further evaluated the correlation of CTBP1-AS expression with different hormonal and biochemical parameters. We did not find any significant correlation between CTBP1-AS expression and the various hormonal or biochemical parameters: testosterone (P = 0.99), insulin (P = 0.53), SHBG (P = 0.67), LH (P = 0.51), FSH (P = 0.72) and androstenedione (P = 0.93) did not show a significant correlation with CTBP1-AS expression. However, contrary to our results, Liu and co-workers found a positive correlation between the expression of CTBP1-AS and serum total testosterone (P = 0.027) in a small case-control study on Chinese women (cases/controls 23/17) 34 . The possible reason for such a correlation may be the small sample size of their study. Furthermore, it is believed that CTBP1-AS affects downstream signalling of androgen by enhancing the transcriptional activity of the AR and may not have a direct effect on the biosynthesis of androgens. However, more elaborate studies need to be carried out so as to understand the effect of CTBP1-AS on the biosynthesis of testosterone.
Conclusion
In conclusion, our study showed that the AR coregulating lncRNA CTBP1-AS is associated with PCOS, and high expression of CTBP1-AS is a risk factor for PCOS in Kashmiri women. Higher expression levels of CTBP1-AS may act as an important factor for hyperactivity of the AR and may contribute towards the pathophysiology of PCOS. However, further investigations are needed to understand the mechanism of action of CTBP1-AS and to identify its signalling pathway. This will help to further understand the role played by lncRNA CTBP1-AS in the pathophysiology of PCOS and its clinical manifestations, and to identify therapeutic targets.
Conflict of interest
On behalf of all the authors, the corresponding author declares that there is no con ict of interest.
Availability of data and materials
The data and materials will be provided by authors on reasonable request.
Consent of participation
All participants were recruited after written informed consent was obtained from them.
Consent for publication
All authors have approved the manuscript for submission.
Table 5. Association between expression of lncRNA CTBP1-AS and TT.
Figure 1. The expression profile of CTBP1-AS in peripheral blood leukocytes in PCOS cases and healthy controls measured by quantitative real-time PCR. Data are expressed as mean ± SD; *P < 0.05. CTBP1-AS indicates C-terminal binding protein 1 antisense; PCOS, polycystic ovary syndrome.
Figure 2. Receiver operator characteristic curve of lncRNA CTBP1-AS expression between PCOS cases and healthy controls.
"Biology",
"Medicine"
] |
Ultrahigh brightness electron beams by plasma-based injectors for driving all-optical free-electron lasers
We studied the generation of low emittance high current monoenergetic beams from plasma waves driven by ultrashort laser pulses, in view of achieving beam brightness of interest for free-electron laser (FEL) applications. The aim is to show the feasibility of generating nC charged beams carrying peak currents much higher than those attainable with photoinjectors, together with comparable emittances and energy spread, compatibly with typical FEL requirements. We identified two regimes: the first is based on a laser wakefield acceleration plasma driving scheme on a gas jet modulated in areas of different densities with sharp density gradients. The second regime is the so-called bubble regime, leaving a full electron-free zone behind the driving laser pulse: with this technique peak currents in excess of 100 kA are achievable. We have focused on the first regime, because it seems more promising in terms of beam emittance. Simulations carried out using VORPAL show, in fact, that in the first regime, using a properly density modulated gas jet, it is possible to generate beams at energies of about 30 MeV with peak currents of 20 kA, slice transverse emittances as low as 0.3 mm mrad, and energy spread around 0.4%. These beams break the barrier of 10^18 A/(mm mrad)^2 in brightness, a value definitely above the ultimate performances of photoinjectors, therefore opening a new range of opportunities for FEL applications. A few examples of FELs driven by such kind of beams injected into laser undulators are finally shown. The system constituted by the electron beam under the effect of the electromagnetic undulator has been named AOFEL (for all optical free-electron laser).
I. INTRODUCTION
Recently a few authors proposed to use plasma injectors as an electron beam driver for the self-amplified spontaneous emission (SASE) x-ray free-electron laser (FEL): the aim is to design and build compact FELs, taking advantage from the capability of plasma accelerators to produce GeV beams on mm-scale lengths, to be compared to km based rf linacs. Gruner et al. [1] proposed the use of a plasma injector operated in the bubble regime to generate an electron beam with unprecedented high brightness at an energy of 1 GeV, carrying a beam peak current in excess of 100 kA with rather good normalized emittance (1 mm rad) and low energy spread (0.1%). This ultrahigh brightness beam is an ideal candidate to drive FELs with radiation wavelength down in the angstrom range: in Ref. [1] the transport and matching of this beam into a magnetostatic undulator is analyzed by means of numerical simulations showing the strong blowup that the beam undergoes along the transport due to its very intense space charge field. The possibility to preserve the beam quality throughout the undulator, as required to maintain the FEL exponential gain regime, is somewhat controversial, mainly because the space charge effects in the longitudinal and in the transverse planes are not negligible on the scale of the FEL gain length.
In this paper we present a different approach, which has in common the goal to use a plasma injector to drive a compact SASE FEL, but differs in the type of regime used in the plasma channel to generate the electron beam and in the energy of the beam itself, which is in the range of a few tens of MeV instead of a few GeV. To this purpose the use of an electromagnetic undulator, i.e., a counterpropagating laser pulse of proper wavelength, is foreseen, as recently proposed [2,3]. The technique of controlling the breaking of the plasma wave by tapering the plasma density in the gas jet is applied, as proposed in [4-7]: we believe this technique is suitable to produce higher quality beams than the bubble regime, though with lower beam currents (tens of kA instead of higher than 100 kA). In addition, the use of an electromagnetic (e.m.) undulator allows one to conceive an ultracompact device, a few mm of total length, compared to meters for the scheme using GeV beams. If the e.m. laser pulse is of the same wavelength as the laser pulse driving the plasma wave (or close to it), the electron beam does not even need to be extracted from the plasma channel, and the FEL can be driven in the absence of space charge effects at all. Indeed the e.m. laser pulse can be injected into the plasma channel, so that the FEL interaction can take place inside the plasma. If the e.m. undulator is made out of a CO2 laser pulse, while the plasma density is larger than 10^19 cm^-3, it cannot propagate through the plasma because the plasma is overdense (above the critical density) for the CO2 wavelength: in this case the typical layout is presented in Fig. 1.
II. E-BEAM GENERATION VIA LWFA
Laser wakefield acceleration (LWFA) of e-beams [6-8] has been proved able to produce highly energetic (up to the GeV scale) quasimonochromatic electron beams [9,10] by using ultrashort (tens of femtoseconds long) laser pulses and a plasma as the accelerating medium. In LWFA the longitudinal ponderomotive force of the laser pulse excites a plasma wave whose longitudinal (i.e., accelerating) electric field can exceed that of rf guns by a factor of a thousand. Both the experimental and the modeling/numerical sides are under active investigation, with emphasis on beam quality, i.e., monochromaticity and low emittance.
A critical issue for achieving high-quality e-beams is good control of particle injection into the plasma wave, whose phase speed equals the group velocity of the laser pulse. Several schemes have been proposed and investigated so far, including injection of externally preaccelerated e-beams, injection of short bursts of newborn electrons produced via tunneling ionization, transverse wave breaking in a density downramp [11], transverse injection in the bubble regime [12], and longitudinal wave breaking in a density downramp [4-7].
In this paper we will show results of numerical simulations in the nonlinear LWFA regime with longitudinal injection after a density downramp. In this scheme the electrons at the crest of the waves are longitudinally injected into the fast-running wakefield by a partial breaking of the wave induced by a sudden change in its phase speed at the transition [4]. If the laser pulse waist w_0 is much larger than the wave wavelength λ_p ≈ √(1.1×10^21/n_e) μm, n_e being the plasma electron density in cm^-3, the transverse dynamics is negligible at the transition and an electron bunch with an extremely low transverse emittance is created and injected into the acceleration region of the wave [5,6].
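As a quick numerical check of the condition w_0 ≫ λ_p, the sketch below evaluates λ_p for a few densities; the density values are illustrative and are not those of Table I.

```python
import math

# Plasma wavelength lambda_p [um] ~ sqrt(1.1e21 / n_e[cm^-3]), as used in the text.
def plasma_wavelength_um(n_e_cm3: float) -> float:
    return math.sqrt(1.1e21 / n_e_cm3)

if __name__ == "__main__":
    for n_e in (1e18, 5e18, 1e19, 5e19):   # illustrative densities
        print(f"n_e = {n_e:.1e} cm^-3  ->  lambda_p ~ {plasma_wavelength_um(n_e):5.1f} um")
    # e.g. n_e = 1e19 cm^-3 gives lambda_p ~ 10.5 um, so a laser waist w_0 of several
    # tens of microns would satisfy w_0 >> lambda_p and keep the injection quasi-1D.
```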
An analogous phenomenon, but in the plasma wakefield regime, has been described in Ref. [7].
We searched for parameters giving rise to electron bunches suitable for driving the laser-based FEL source. This means that the e-bunches must have a high current but do not need to be characterized by an overall low emittance and energy spread; they should instead contain slices with very low slice emittance and very low slice energy spread. We expect space charge effects to occur over a beam-plasma wavelength scale, which is given by λ_bp ≈ 2πR√(2γ³I_0/I), R being the beam radius, I its current, γ the mean Lorentz factor of the electrons, and I_0 ≈ 17 kA the Alfvén current. For our typical beam this length is of the order of a cm, much longer than the expected saturation length in the e.m. undulator (which is of the order of 1 mm or a fraction of it). On the other hand, this length is typically of the order of km for an x-ray FEL driven by a GeV rf electron linac (again much longer than the saturation length in a magnetostatic undulator). This implies that we should not expect serious emittance degradation due to transverse space charge along the e.m. undulator, as can be seen from Fig. 2, which presents a simulation made with ASTRA [13] for a typical all-optical free-electron laser (AOFEL) beam initially focused to a 1 μm rms spot size and then freely propagating in a 1.5 mm long drift. The rms beam envelope σ_x (left scale) and the rms normalized emittance ε_n (right scale) are shown over the range of interest along the coordinate z, both with [curves (a)] and without [curves (b)] the effects of the energy spread, for I = 20 kA, ε_n = 0.3 mm mrad, and γ = 60. The predicted gain length is about 50 μm, so that the beam evolution is followed for about 30 gain lengths. In 1.5 mm of drift the emittance does not change significantly, while the envelope evolves as if space charge were absent. Clearly, there may be effects of energy spread increase due to longitudinal and transverse space charge fields, including retarded effects at the plasma-vacuum interface, which should be modeled by a self-consistent simulation taking the beam from the plasma to the vacuum region and all the way through the e.m. undulator. Simulations were performed with the fully self-consistent particle-in-cell (PIC) code VORPAL in the 2.5D (3D in the fields, 2D in the coordinates) configuration. The parameters used in the calculations are presented in Table I. VORPAL [14] is a multipurpose code and in the present work it was run in the self-consistent PIC configuration with the dynamics followed in a window moving at the speed of light.
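As a cross-check of the statement that the envelope evolves essentially ballistically, the sketch below propagates a space-charge-free (emittance-dominated) rms envelope through the 1.5 mm drift using the standard drift law σ(z) = σ_0√(1 + (z/β*)²) with β* = γσ_0²/ε_n. The parameters (σ_0 = 1 μm, ε_n = 0.3 mm mrad, γ = 60) are the ones quoted above; the drift law itself is a textbook formula and not something stated in the paper.

```python
import math

# Space-charge-free (emittance-dominated) drift of an rms beam envelope:
# sigma(z) = sigma0 * sqrt(1 + (z / beta*)^2),  beta* = sigma0^2 / eps_geo,
# with eps_geo = eps_n / gamma.  Note: 0.3 mm mrad == 0.3 um rad.
def rms_envelope_um(z_um: float, sigma0_um: float = 1.0,
                    eps_n_um_rad: float = 0.3, gamma: float = 60.0) -> float:
    eps_geo = eps_n_um_rad / gamma              # geometric emittance, um*rad
    beta_star = sigma0_um**2 / eps_geo          # waist beta function, um
    return sigma0_um * math.sqrt(1.0 + (z_um / beta_star) ** 2)

if __name__ == "__main__":
    for z in (0, 500, 1000, 1500):              # positions along the 1.5 mm drift, in um
        print(f"z = {z:5d} um   sigma_x ~ {rms_envelope_um(z):5.2f} um")
    # The expansion over 1.5 mm (~30 gain lengths of ~50 um each) computed here is
    # purely emittance-driven; the ASTRA result quoted in the text is consistent with
    # this, i.e. space charge adds little on this length scale.
```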
The initial plasma density profile (see Fig. 3) is composed of a smooth rising edge (laser coming from the left) and a first plateau of density n_01, with an appropriate phase of the plasma wave excited in the second plateau (accelerating region). Simulations of the injection-acceleration process were performed in a moving window of longitudinal and transverse size of 50 and 60 μm, respectively, sampled on an 800 × 120 grid with 20 macroparticles/cell, corresponding to a longitudinal and transverse resolution of λ_0/14 and λ_0/2, respectively. In Fig. 4 some snapshots of the electron density are reported. In Fig. 4(a) the laser pulse (coming from the left-hand side) is crossing the transition and the plasma wave in the first plateau is experiencing nonlinear steepening. The final parameters of the initial plasma density have been obtained by an optimization procedure on the parameters of the double-plateau profile, i.e., the second plateau density, the transition scale length, and the amplitude of the transition. The first parameter controls the plasma wavelength and maximum accelerating gradient in the acceleration region and must be tuned according to the e-beam length and charge. The transition scale length controls the charge and length of the beam and must be as small as possible in order to ensure the trapping of the largest number of particles [4], while a tuning of the amplitude of the transition is necessary for selecting the appropriate phase of the particles in the Langmuir wave.
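The sketch below reproduces the quoted grid resolution from the window and cell counts; the drive-laser wavelength λ_0 is not restated in this excerpt, so the value 0.8 μm used here is an assumption made only for illustration.

```python
# Moving-window resolution check for the quoted VORPAL setup.
window_long_um, window_tr_um = 50.0, 60.0      # window size (longitudinal, transverse)
cells_long, cells_tr = 800, 120                # grid cells
lambda0_um = 0.8                               # assumed drive-laser wavelength (not given here)

dz = window_long_um / cells_long               # longitudinal cell size
dx = window_tr_um / cells_tr                   # transverse cell size
print(f"dz = {dz:.4f} um  (~ lambda0/{lambda0_um/dz:.0f})")
print(f"dx = {dx:.4f} um  (~ lambda0/{lambda0_um/dx:.1f})")
# With lambda0 ~ 0.8 um this gives roughly lambda0/13 and lambda0/1.6 per cell,
# of the same order as the lambda0/14 and lambda0/2 resolutions quoted in the text.
```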
III. FEL SIMULATIONS
The electron bunch produced by VORPAL has been analyzed for finding the slices characterized by the highest brightness for producing FEL radiation [15].
In Fig. 5 the beam current along the beam coordinate s is presented; the selected radiating slice is short enough to produce a superradiant single-spike pulse. In our case, L_b/(2L_c) = 0.566, and the condition L_b/(2L_c) < 1 for clean single-spike production turns out to be satisfied. We want to point out that the dynamics of the beam in the transition between the plasma and the interaction region with the laser has not so far been simulated consistently, so that the sequence of our numerical calculations cannot be considered a complete start-to-end simulation. The effective parameters of the beam at the entrance of the e.m. undulator could be worse than those used. However, estimates of the emittance and of the energy spread made by importing the plasma fields given by VORPAL into RETAR [13] are in progress [16].
The laser-beam interaction has been simulated with the code GENESIS 1.3 [17], which tracks the particles in a static wiggler. The code GENESIS 1.3 is widely used for FEL predictions and integrates the FEL equations based on the slowly varying envelope approximation [15]. By exploiting the equivalence between static and electromagnetic undulators [3], we have inserted the mean parameters of the radiating slice of the bunch into GENESIS 1.3, obtaining the results presented in Fig. 7. Window (a) shows the average power P vs the coordinate along the undulator z, while window (b) presents the peak power vs z. In Fig. 7(c) there is the shape of the power vs s, and in (d) the spectrum of the x-ray pulse at z = 2.5 mm. The peak power value is 0.75 GW and the structure of the power on the beam is that typical of superradiance.
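The static/electromagnetic undulator equivalence used here maps a counterpropagating laser of wavelength λ_L onto an undulator of period λ_L/2, giving the usual resonance λ_R ≈ λ_L(1 + a_u²/2)/(4γ²). The sketch below is a consistency check with illustrative numbers; the normalized undulator amplitude a_u is an assumption, since its value is not given in this excerpt.

```python
# Resonance condition for a counterpropagating (electromagnetic) undulator:
# lambda_R ~ lambda_L * (1 + a_u^2 / 2) / (4 * gamma^2)
def resonant_wavelength_nm(lambda_L_um: float, gamma: float, a_u: float) -> float:
    return 1e3 * lambda_L_um * (1.0 + 0.5 * a_u**2) / (4.0 * gamma**2)

if __name__ == "__main__":
    gamma = 60.0                                          # ~30 MeV beam, as in the text
    print(resonant_wavelength_nm(10.6, gamma, a_u=1.1))   # CO2 undulator -> ~1.2 nm
    print(resonant_wavelength_nm(1.0, gamma, a_u=0.8))    # 1 um laser undulator -> ~0.09 nm (~1 A)
```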
In fact, even if the initial radiation field and bunching present a sequence of several random spikes, during the propagation in the wiggler they clean up, assuming a smooth shape along the bunch [see (c)], with a clean, narrow spectrum. Saturation occurs at 2.5 mm, the subsequent decrease of average and peak power being due to the slippage of the radiation outside the simulated slices.
The effect of the γ, Δγ/γ, and I profiles has then been analyzed. In Fig. 8 the γ and Δγ/γ profiles deduced from the effective particle packet are presented, while Fig. 9 shows the results obtained taking the profiles into account.
The graphs show a situation substantially similar to the preceding one, with a slight reduction of the saturation power and an increase of about 15% in the saturation length, which turns out to be slightly less than 3 mm.
A further step has been the use of the original particle files from VORPAL, the results being presented in Fig. 10. The superradiant behavior is maintained, and there is the occurrence of a first large peak of radiation, high in power and temporally very narrow, with a FWHM width of about 0.1 μm, corresponding to about 300 attoseconds, and with a maximum value of 200 MW. The transverse coherence is only partial, with the development of a dozen peaks, while the temporal coherence is strong, as can be seen from the pulse in the frequency domain. The same bunch, but without the defocusing phase, has been simulated in the field of a laser with wavelength λ_L = 1 μm. We observe the production of a clean superradiant peak of width less than 0.05 μm (about 100 attoseconds), of tens of MW, and with a clean spectrum.
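For reference, the conversion between the quoted pulse lengths (given in μm along the bunch) and durations is just division by c; the short sketch below checks the attosecond figures, with the caveat that the quoted values in the text are rounded.

```python
C_UM_PER_FS = 0.2998          # speed of light, microns per femtosecond

def duration_as(length_um: float) -> float:
    """Duration in attoseconds of a pulse of the given length (in microns)."""
    return 1e3 * length_um / C_UM_PER_FS

print(f"{duration_as(0.10):.0f} as")   # ~330 as, i.e. the ~300 as FWHM quoted for the first peak
print(f"{duration_as(0.05):.0f} as")   # ~170 as, of the order of the ~100 as quoted at 1 um
```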
The results of the FEL simulations have been summarized in Table II, where the peak power P_max, the energy extracted E, the pulse length L_R, the saturation length L_sat, the radiation wavelength λ_R, and the spectral width Δλ_R/λ_R at saturation have been reported. Case 1 refers to the calculation presented in Fig. 7, where the average beam parameters have been assigned. Case 2, shown in Fig. 9, refers to the calculation made with an ideal beam with the γ, Δγ/γ, and I profiles, while in case 3 (Fig. 10) the beam produced by VORPAL has been imported into GENESIS 1.3.
IV. CONCLUSIONS
In this paper we propose the possibility of using an LWFA plasma driving scheme on a gas jet modulated in areas of different densities with sharp density gradients to generate a monoenergetic, high-current (>20 kA) electron beam.
This bunch of electrons, characterized by the presence of low-emittance (<0.5 mm mrad), low-energy-spread slices, when interacting with a counterpropagating CO2 laser pulse, radiates a single-spike, highly coherent, ultrashort (<1 fs) x-ray pulse via excitation of a SASE superradiant FEL instability. The peak power achieved at 1.2 nm is of the order of 200 MW.
Taking an electromagnetic undulator at 1 μm wavelength, the electron beam can drive a 1 Å FEL with similar performance.
Although the presented analysis is based on three-dimensional simulations, effects due to alignment errors, spatial and temporal jitters, as well as realistic laser pulse profiles for the e.m. undulator and a consistent exit from the plasma, which have not yet been considered, must be thoroughly investigated in order to assess the experimental feasibility of such a new FEL scheme. Nevertheless, we believe that our analysis, although very preliminary and missing a full self-consistent start-to-end simulation, shows a potential for the future development of an ultracompact source of coherent x rays at brilliance levels comparable to those typical of fourth-generation light sources. Further work is in progress with more consistent simulations able to describe the exit of the beam from the plasma and the transport through the e.m. undulator, in order to address the problem of performing a real start-to-end simulation from the plasma to the FEL radiation production. | 3,883.6 | 2008-07-29T00:00:00.000 | [
"Physics"
] |
Polyamine transporter in Streptococcus pneumoniae is essential for evading early innate immune responses in pneumococcal pneumonia
Streptococcus pneumoniae is the most common bacterial etiology of pneumococcal pneumonia in adults worldwide. Genomic plasticity, antibiotic resistance and extreme capsular antigenic variation complicate the design of effective therapeutic strategies. Polyamines are ubiquitous small cationic molecules necessary for full expression of pneumococcal virulence. The polyamine transport system is an attractive therapeutic target as it is highly conserved across pneumococcal serotypes. In this study, we compared an isogenic deletion strain of S. pneumoniae TIGR4 in the polyamine transport operon (ΔpotABCD) with the wild type in a mouse model of pneumococcal pneumonia. Our results show that the wild type persists in mouse lung 24 h post infection while the mutant strain is cleared by host defense mechanisms. We show that intact potABCD is required for survival in the host by providing resistance to neutrophil killing. Comparative proteomics analysis of murine lungs infected with wild type and ΔpotABCD pneumococci identified expression of proteins that could confer protection to the wild type strain and help establish infection. We identified the ERM complex, PGLYRP1, PTPRC/CD45 and POSTN as new players in the pathogenesis of pneumococcal pneumonia. Additionally, we found that deficiency of polyamine transport leads to upregulation of the polyamine synthesis genes speE and cad in vitro.
Polyamines are ubiquitous small cationic molecules that are important for growth and virulence of the pneumococcus 14,15. The most common intracellular polyamines, such as putrescine, spermidine and cadaverine, carry a net positive charge at physiological pH and form electrostatic bonds with negatively charged macromolecules, particularly nucleic acids, to maintain a stable conformation and regulate transcription, among other processes 16. In bacteria, polyamines are implicated in scavenging iron and free radicals, conferring acid resistance, promoting biofilm formation, helping escape from phagolysosomes, and interactions with various components of cell envelopes 15,17. Although the effect of polyamines on bacterial virulence has been studied in Neisseria gonorrhoeae, Escherichia coli, Vibrio cholerae, Helicobacter pylori, and Shigella flexneri 15, the role of polyamines in pneumococcal pneumonia remains largely unexplored. Polyamine transport and synthesis genes are highly conserved among pneumococcal serotypes 18 and are potentially attractive therapeutic targets. In the pneumococcus, the polyamine transport operon potABCD consists of a membrane-associated cytosolic ATPase (PotA), transmembrane channel-forming proteins (PotB and PotC) and the extracellular polyamine recognition domain (PotD) 15. Deletion of potD in a type 3 pneumococcus resulted in severe attenuation of virulence in systemic and pulmonary models of pneumococcal disease 19. In mice, PotD vaccination confers protection against colonization, pneumococcal pneumonia and sepsis 20. PotD is a potential next-generation vaccine candidate and its efficacy increases in combination with several other pneumococcal protein candidates such as sortase A and glutamyl tRNA synthetase 13,19-21. Recent studies showed delayed autolysis in a serotype 2 strain in which potD was deleted 22. Although polyamines are implicated in a wide array of cellular processes, specific mechanistic roles for polyamines are yet to be assigned 17.
It is known that pneumococci can invade lungs as early as a minute following intranasal challenge 23 . Our earlier findings showed that isogenic deletion of potABCD in TIGR4 (ΔpotABCD), a type 4 strain, led to attenuation in a mouse model of pneumococcal pneumonia 48 h following infection compared to the wild type (WT) strain, and active immunization with recombinant PotD affords protection against systemic infection 18,24,25 . The inability of polyamine transport mutants to survive in the host could be due to altered expression of pneumococcal genes regulated by polyamines, including the expression of virulence factors such as pneumolysin and capsular polysaccharides 18 . Differences in early host innate immune responses against the WT and polyamine transport deficient mutants could also explain the observed attenuation, but remain unknown.
In this study, we compared the early host immune responses to the WT and the polyamine transport deficient strain in an intranasal challenge model of pneumococcal pneumonia in mice. Impaired polyamine transport in S. pneumoniae TIGR4 ∆potABCD resulted in its clearance from the lung and blood by 24 h. Concentrations of several cytokines/chemokines were higher at 4 h and 12 h in mice infected with ∆potABCD. Consistent with this observation, we found a significant increase in the infiltration of neutrophils in the lung. Comparative expression proteomics analysis of mouse lung tissue using 1D LC ESI MS/MS identified differential regulation of proteins involved in neutrophil killing and bacterial clearance such as PTPRC/CD45, PGLYRP1 and Ezrin-Radixin-Moesin, to name a few. In pneumococci, synthesis and transport mechanisms regulate intracellular polyamine concentrations 18,26. We show upregulation of polyamine biosynthetic genes in vitro in the transport deficient ∆potABCD. We conclude that polyamine transport in the pneumococcus is essential to evade early innate immune responses. A thorough understanding of dedicated polyamine transport in the pneumococcus is an attractive avenue for developing small-molecule based intervention strategies that do not depend on host immune status.
Clearance of a polyamine transporter mutant in a mouse model of pneumococcal pneumonia.
S. pneumoniae is a commensal of the nasopharynx, and invasive disease requires transition to sterile sites such as the lungs and blood. To test if polyamine transport modulates pneumococcal transition and adaptation to the lung environment, we infected mice intranasally with TIGR4 or the polyamine transporter mutant to produce pneumonia. Lung and blood were aseptically collected in PBS at 4 h, 12 h and 24 h post infection (p.i.). We found significant differences in the bacterial burden at all time points in the lungs of TIGR4 and ∆potABCD infected mice. Enumeration of bacteria from lungs showed a significantly higher number of ∆potABCD compared to TIGR4 at 4 h p.i. (Fig. 1A). In contrast, TIGR4 ultimately persists better in the lower respiratory tract, as there were significantly more WT bacteria at 12 h (Fig. 1B) and 24 h p.i. (Fig. 1C) compared to ∆potABCD. The mutant strain was cleared by the host immune system by 24 h, as demonstrated by the lack of bacteria recovered from lungs. Blood was sterile in all animals infected with WT and ∆potABCD at 4 h, 12 h and 24 h p.i., suggesting that neither WT nor ∆potABCD could invade the blood stream within 24 h of infection. Together, these results demonstrate that deletion of the polyamine transporter in a capsular type 4 strain renders it more invasive, yet more susceptible to bacterial clearance mechanisms, than the parent strain at early stages of infection.
Polyamine transport in S. pneumoniae affects cytokine/chemokine expression in mice.
Pneumococcal invasion of the lungs results in complex early immune responses that can be characterized by cytokines and chemokines that induce pro- or anti-inflammatory responses necessary for bacterial clearance. The observed differences in bacteria in the lungs at 4 h and 12 h p.i. could result from differential cytokine/chemokine levels in the lung. To evaluate this, lung homogenates from animals challenged with TIGR4 and ∆potABCD were evaluated for 32 unique cytokines and chemokines (see Methods). Consistent with the observed increase in the bacterial burden, the concentrations of G-CSF, LIF, IP-10, KC, GM-CSF, IL-5, IL-17 and MCP-1 were significantly higher in lungs from mice infected with ∆potABCD at 4 h p.i. (Table 1). Assessment of cytokine/chemokine levels at 12 h p.i. showed elevated levels of IL-4, IFN-γ, MIG, GM-CSF, IL-17 and IL-7 relative to WT (Table 2). The majority of these cytokines and chemokines are associated with the neutrophil recruitment response. Increased IL-17 levels correlate with the recruitment and activation of neutrophils and pneumococcal clearance in murine models of pneumococcal colonization 27,28. Together, the 4 h and 12 h cytokine/chemokine data suggest a model in which ∆potABCD has decreased resistance to the host immune response, potentially through decreased expression of virulence factors that facilitate evasion of early host cytokine responses and their effector functions.
Differences in immune cell infiltration in mice infected with TIGR4 WT and ∆potABCD strains.
We hypothesized that the comparatively higher bacterial burden at 4 h with ∆potABCD and the observed increase in chemokines and cytokines could lead to significant differences in the recruitment of neutrophils and macrophages to infected lungs. We tested this hypothesis using flow cytometry and determined changes in the distribution of neutrophils and macrophages in infected mice compared to control mice. Infiltrating non-alveolar macrophages in lung homogenates were quantified by counting cells expressing F4/80 and CD11b (Fig. 2), and neutrophils were quantified by Gr-1 and CD11b expression (Fig. 3). We identified significantly higher numbers of neutrophils at 4 h p.i. with the ∆potABCD mutant when compared to TIGR4 (Fig. 3A). No differences were observed at 12 h (Fig. 3B). Unlike neutrophil infiltration, there were no significant differences in the recruitment of infiltrating macrophages to the lung tissue with WT or ∆potABCD infection at 4 h or 12 h p.i. (Fig. 2A,B). Although we observed a significant increase in the levels of MCP-1 in lung homogenates at 4 h (Table 1), a potential indication of macrophage and monocyte recruitment, previous studies have shown that this may not correlate with increased infiltration of macrophages 29.
Uptake of ∆potABCD by neutrophils does not require antibody opsonization. Increased influx of neutrophils and clearance of ∆potABCD suggest a role for functioning polyamine transport in the WT in evading the host neutrophil response. Increased uptake of ∆potABCD by neutrophils when compared to the WT could result in enhanced clearance of the polyamine transport deficient strain (Fig. 4). Our results show that there is an ~50% increase in the uptake of ∆potABCD by murine neutrophils compared to WT. This uptake is dependent on phagocytosis and complement activation, as addition of cytochalasin D and heat inactivation of serum had an inhibitory effect on bacterial uptake (see supplementary Fig. S1). However, we found that the uptake of ∆potABCD by neutrophils was independent of antibody opsonization, as serotype 4 specific antibody was not required for phagocytosis (see supplementary Fig. S2B). Although WT bacterial numbers were reduced in the presence of neutrophils, opsonization with serotype 4 specific antibody was required for its efficient uptake (see supplementary Figs S1 and S2A). Hence, we conclude that polyamine transport is critical for protecting S. pneumoniae from opsonophagocytosis by the host.
Table 2. Comparison of significantly altered cytokine/chemokine concentrations in lungs of C57BL/6 mice infected with TIGR4 WT and ∆potABCD strains 12 h p.i. Data shown represent the geometric means (SD) of concentration in pg/ml and represent values from 5 animals in each group, with the significance level set at 0.05. One-way ANOVA and Tukey's multiple testing comparison were used to calculate the statistical significance (***p-value = 0.0001; ****p-value < 0.0001).
Polyamine transport is required for invasion of phagocytic and non-phagocytic cells. Our flow cytometry analysis of phagocytic cell infiltration was specific for non-alveolar macrophages. Unlike interstitial macrophages, alveolar macrophages express lower levels of CD11b, rendering them CD11b-low or negative for flow cytometric analysis 30. Clearance of pneumococci from the lower respiratory tract also requires functional tissue-resident alveolar macrophages 31. Based on the observed differential uptake of WT and mutant by neutrophils, we asked if a similar difference could be observed in alveolar macrophages. Results from our invasion assays of mouse alveolar macrophages (AMJ2.C8 cells) at an MOI of 1:10 (cell:bacteria) show that ∆potABCD is taken up more efficiently by macrophages (Fig. 5A) compared to WT. Our in vivo analysis showed differences in the transition of WT and ∆potABCD from the nasopharynx to the lungs. This could be the result of altered invasiveness of the mutant strain in epithelial cells. We tested this in BEAS.2B lung epithelial cells and found that ∆potABCD could invade lung epithelial cells better than WT (Fig. 5B). However, we did not see any differences in the ability of TIGR4 or the transport mutant to adhere to either murine alveolar macrophages or lung epithelial cells (Fig. 5C,D, respectively). These results suggest the requirement for a functional polyamine transporter for uptake of TIGR4 by phagocytic alveolar macrophages and invasion of the lung epithelial cell barrier.
Proteomic analyses of lung from mice infected with TIGR4 and ∆potABCD. The propensity of pneumococci to establish infection in a healthy host relies, in part, on its inherent capacity to evade innate host defense mechanisms. Our results demonstrate that an intact polyamine transporter is critical for evading recruitment and uptake by host neutrophils. To identify specific changes in the lung proteome in response to TIGR4 and ∆potABCD infection, we conducted mass spectrometry-based expression proteomics 4 h and 12 h p.i. (Table S2).
[Fig. 4 legend: the 1 h control represents the reaction mixture of neutrophils with no bacteria, and T4 represents TIGR4; data are represented as mean ± SEM; two-way ANOVA and Sidak's multiple comparison test were used to calculate the statistical significance (**p-value = 0.003; ****p-value < 0.0001).]
[Fig. 5 legend, panels B-D: [B] Deletion of the polyamine transporter enhances the invasion of S. pneumoniae TIGR4 in human BEAS.2B lung epithelial cells. Cells were cultured in appropriate medium to 90% confluency and were infected with TIGR4 or ΔpotABCD at an MOI of 1:10 for 2 h, followed by 1 h incubation with penicillin and gentamicin at 37 °C in 5% CO2. Cells were lysed in 0.0125% Triton X-100 for CFU enumeration. [C,D] Deletion of the polyamine transporter does not affect the ability of S. pneumoniae TIGR4 to adhere to murine AMJ2.C8 alveolar macrophages [C] or human BEAS.2B lung epithelial cells [D]. Cells were cultured as in Fig. 5A, infected with TIGR4 or ΔpotABCD at an MOI of 1:10 for 2 h at 37 °C in 5% CO2, rinsed 3× in sterile PBS, and CFU enumerated. All assays were performed at least three times with two or more technical replicates. One-way ANOVA and Tukey's multiple comparison tests were used to calculate the p-values (****p-value < 0.0001; ns, p-value > 0.05).]
Differentially expressed mouse lung proteins in response to the WT and ∆potABCD strains were analyzed in the context of molecular function, signaling pathways and biological networks using Ingenuity Pathways Analysis (IPA) software. Based on the expression pattern of proteins in a given dataset, IPA causal network analysis 32 predicts either the activation or inhibition of upstream regulators. Causal network analysis with WT and ∆potABCD at 4 h predicted the activation of IL-6, TNF and MyD88 (activation Z-score ≥ 2.0). Predicted activation of interferon gamma was observed at 4 h and 12 h with ∆potABCD and only at 12 h with WT. At 12 h, IL-1β, TNF and MyD88 were predicted to be activated while MAPK12 was inhibited with WT. Based on the observed differences in recovered bacteria, neutrophil infiltration and cytokine concentrations at 4 h and 12 h with WT and ∆potABCD, we expect to see a shift in host innate immune processes; in brief, a delay in early innate immune responses should be observed with WT. The regulator effects algorithm in IPA linked differentially expressed proteins and predicted activators/repressors to a known impact/phenotype. Comparison of the impact of the observed differential expression on host immune responses with the regulator effects algorithm identified that the host response to ∆potABCD at 4 h (Fig. 6) is comparable to the response to WT at 12 h (Fig. 7).
In vitro analysis of speE and cad gene expression in ∆potABCD. Polyamines regulate a wide array of cellular processes; consequently, their intracellular concentrations are tightly regulated 18 by balancing extracellular transport with de novo synthesis of intracellular polyamines. Similar to E. coli, the pneumococcus has SpeE, which converts putrescine to spermidine, and Cad, which catalyzes the conversion of L-lysine to cadaverine. To address whether lack of transport is compensated by synthesis, we evaluated the expression of the speE and cad genes by qRT-PCR and found that, compared to TIGR4, speE and cad levels were upregulated 14.4- and 17.0-fold, respectively, in ∆potABCD (Table 5). Although these changes were found in cells grown in vitro, we posit that cellular synthesis of polyamines by pneumococci growing in a host may wholly or partially compensate for diminished extracellular transport.
Discussion
Despite intensive investigation into the natural history and pathogenesis of pneumococcal pneumonia, the ability of the pneumococcus to interact with and adapt to the host milieu to establish infection remains puzzling. Although much experimental focus has been placed on the capsule, components of the cell wall and toxins, the sensing systems and membrane transporters critical for adaptation to host environments are just starting to gain attention in pneumococcal physiology and virulence 18. Pneumococci can subvert innate immune responses by several mechanisms, such as impeding the recruitment of phagocytic cells, circumventing uptake by phagocytic cells, inducing the expression of host proteins that can block antimicrobial properties, and inhibiting/reducing complement activation and antibody-mediated opsonization 33. Here we show that significantly reducing polyamine transport by deletion of a polyamine membrane transporter dampens several of these resistance mechanisms. In the present study, we used a polyamine transporter-deficient mutant to identify host innate immune effectors that normally serve to protect against pneumococcal pneumonia.
Our previous findings indicated that deletion of polyamine transport led to attenuation at 48 h 18 in an intranasal challenge model of pneumococcal pneumonia in mice. In this study, we studied the kinetics of bacterial clearance in the lung in an intranasal challenge model and identified a significant difference in the bacterial load between the WT and ΔpotABCD as early as 4 h p.i. While the WT can persist in the lungs 24 h p.i., the mutant is cleared by early host innate immune responses. The larger number of mutant bacteria at 4 h reflects either a differential suppression of growth by the host or inherent differences in the ability of the two bacterial strains to replicate in vivo early after entering the lower respiratory tract. Also, pneumococcal-derived factors could produce a net growth advantage for the mutant bacteria early in the course of infection. The observed increase in the expression of specific cytokines/chemokines suggests that host innate immune cells are differentially recruited to the sites of infection by pneumococci lacking potABCD. G-CSF coordinates the maturation and release of neutrophils from the bone marrow 34. Binding of KC to CXCR2 is required for releasing neutrophils into the circulation, and G-CSF is known to facilitate this release by KC 35. The upregulation of these neutrophil attractants suggests increased recruitment of neutrophils by polyamine transporter-deficient pneumococci. Although MCP-1 has been shown to be involved in the recruitment of neutrophils and bacterial clearance in Gram-negative bacterial infections, studies on serotype 3 strains showed that MCP-1 is not required for recruitment of innate immune cells in pneumococcal pneumonia 36. At 4 h and 12 h p.i. with ∆potABCD we observed increased IL-17. Levels of IL-17 correlate with the recruitment and activation of neutrophils and pneumococcal clearance in murine models of pneumococcal colonization 27,28. This finding suggests that intracellular polyamine levels in pneumococci are critical to bacterial defenses against host innate immune responses. Other physiological pathways in the pneumococcus will undoubtedly be shown to have similar roles in bacterial adaptation and response to various host environments. The clearance of bacteria in pneumococcal pneumonia is primarily mediated by neutrophils 37,38. Increased early infiltration of neutrophils into the lungs and clearance of bacteria seen in transporter mutant infected mice suggested neutrophil killing as a potential mechanism of bacterial clearance. S. pneumoniae has evolved mechanisms to escape neutrophil extracellular traps (NETs) by degradation of the DNA scaffold through the endonuclease EndA 39. Although we did not test neutrophil killing, our opsonophagocytosis assay clearly demonstrates that, unlike the WT, transporter-deficient mutants are taken up by neutrophils independently of opsonization. We show that transporter mutants are cleared by phagocytosis, as treating the neutrophils with cytochalasin D effectively inhibited the uptake of both the WT and the transporter mutant. Similarly, heat-inactivated serum reduced uptake of the WT and transporter mutants by neutrophils, suggesting complement-mediated killing mechanisms. Our results demonstrate a role for ABC transporters such as PotABCD in altering bacterial physiology to adapt to different host niches as infection is established. These adaptations can significantly alter the interface of the pathogen and host immune responses.
In lungs infected with WT pneumococci, we show increased expression of SerpinA1 (alpha-1-antitrypsin precursor), a known inhibitor of the neutrophil serine proteases involved in bacterial killing (Table S2). This significant upregulation of a host resistance mechanism is lost with ΔpotABCD, which could in part explain its susceptibility to neutrophil killing. Further, increased expression of leukotriene A4 hydrolase (Table 3) in ΔpotABCD infected animals could lead to an increase in leukotriene-mediated neutrophil ROS. This observation suggests that ΔpotABCD is possibly cleared by ROS-mediated neutrophil killing mechanisms, to which TIGR4 is resistant 37.
Proteome analysis of the lung showed significant upregulation of complement component C3 with both the WT and the mutant at 4 h, with a subsequent reduction at 12 h, which supports our neutrophil uptake data. C3 is critical for clearance of pneumococci 40. Here we have shown that WT and polyamine transporter-deficient pneumococci both require complement for uptake by neutrophils, although the transport mutant succumbed quickly to phagocytosis (Fig. 4). This suggests that, despite the differences in bacterial load, TIGR4 and ∆potABCD elicit similar host responses. However, lack of polyamine transport could alter pneumococcal surface properties, rendering it more susceptible to complement activation. This is further substantiated by the fact that the expression of ELANE, a neutrophil elastase, was significantly elevated in both the WT and transport mutant at 4 h. Neutrophil elastase is a serine protease required for neutrophil killing 37. Similarly, MMP9 is crucial for phagocytosis of S. pneumoniae by murine PMNs and intracellular bacterial killing 41. The expression of MMP9 was comparable in the WT and the mutant at 4 h, despite the differences observed in the early immune responses.
The proteome of mouse lungs challenged with ∆potABCD parallels observations with immunoassays. Our proteomic screen revealed unique proteins that were differentially regulated in ∆potABCD at 4 h. [Fig. 7 legend: IPA's regulator effects algorithm connected the upstream regulators and proteins in our dataset to downstream functions to generate regulator effects hypotheses with a consistency score. The predicted top regulatory network (consistency score 13.8) in response to TIGR4 impacts chemotaxis of granulocytes and phagocytes and activation of neutrophils, consistent with the establishment of infection, a delayed response when compared to ∆potABCD at 4 h (Fig. 6).] We found upregulation of PTPRC/CD45 in ∆potABCD relative to WT at 4 h (Table 3). PTPRC/CD45 participates in the clearance of Staphylococcus aureus in animal models 42. Ptprc−/− mice display impaired neutrophil recruitment, weaker host defense and die prematurely 43. The observed increase in the levels of PTPRC/CD45 may correlate with increased recruitment of neutrophils and clearance of ∆potABCD by 24 h. Similarly, the expression of PGLYRP1 was upregulated in ∆potABCD relative to WT at 4 h. PGLYRP1 is an antimicrobial innate immunity protein commonly implicated in allergic asthma. Interestingly, pglyrp1−/− mice are compromised in the recruitment of neutrophils, lymphocytes, eosinophils and macrophages to the lung 44, thereby substantiating increased recruitment of neutrophils by ∆potABCD at 4 h p.i. Though not detected above baseline in TIGR4 at 4 h, we observed elevated expression of PGLYRP1 at 12 h. The majority of differentially regulated proteins unique to mice challenged with ∆potABCD were downregulated at 4 h (Table 3). For example, periostin (POSTN) is a secreted extracellular matrix protein commonly associated with bronchial hyperresponsiveness and subepithelial fibrosis in asthmatic patients 45,46, and its concentration is thought to correlate with disease severity 45. Postn−/− mice exhibit reduced inflammation of the airways in response to allergens such as house dust mites 45. The observation that POSTN expression is significantly reduced in animals challenged with ∆potABCD suggests its requirement for establishing infection by WT TIGR4 in lungs. We found all three components of the Ezrin-Radixin-Moesin (ERM) complex significantly downregulated in ∆potABCD-challenged animals 4 h p.i. Ezrin connects apical membranes to the cytoskeleton by crosslinking to actin filaments. Listeria monocytogenes effectively uses the ERM complex to link the actin in its tail to the membrane cytoskeleton, enabling cell-to-cell spread 47. Impairing the ERM complex results in inefficient establishment of infection 47; consequently, Listeria monocytogenes requires the ERM complex to evade host immune responses. Ezrin and radixin were significantly downregulated by 10- and 5-fold, respectively (Table 3), and moesin levels were down by 1.7-fold (data not included, as it did not meet the ≥2-fold cutoff). Neisseria meningitidis recruits ERM proteins, which inhibits the formation of the endothelial docking structures critical for leukocyte diapedesis; when these docking sites with endothelial cells were eliminated by a dominant negative approach, leukocyte diapedesis was inhibited 48. Taken together, the downregulation of Ezrin, Radixin and Moesin with ∆potABCD suggests a novel mechanism by which pneumococci can escape host innate immunity and spread infection in the lung by regulating the expression of the ERM complex.
Although the ERM complex seems to be critical for TIGR4 to establish infection, the pathogen-directed differential expression of this and other host factors warrants further investigation.
Previous studies showed that ΔspeE and Δcad were severely attenuated, in a manner similar to ΔpotABCD, in a mouse model of pneumococcal pneumonia 18. Polyamine levels are stringently controlled by intracellular synthesis and transport mechanisms in pneumococci 18,26. We found increased expression of the speE and cad genes in ΔpotABCD in vitro. We previously reported proteomics of ΔpotABCD and showed increased expression of capsular polysaccharide biosynthesis proteins, pneumolysin and pneumococcal surface protein A 18. The contribution of these proteins to the observed differences in the host innate immune response remains open for investigation.
In summary, deficiencies in polyamine transport cause attenuation of virulence in murine models of pneumonia 18, and genes involved in polyamine transport appear to be conserved within the species, providing a potential new class of broad-based vaccine candidates or therapeutic targets. However, to leverage this knowledge for vaccine development, a better understanding of how extracellular polyamine transport promotes invasive infection is necessary. This will require an understanding of polyamine-dependent expression of pneumococcal genes and of host immune mechanisms in pneumonia. This study focused on the host response during infection and identified reduced resistance to neutrophil killing in polyamine transport deficient pneumococci. A comprehensive description of host-pneumococcal interactions responsive to altered polyamine metabolism will be critical for therapies that reduce the global disease burden in the public health domain due to this important human pathogen.
Materials and Methods
Bacterial strains and growth conditions. Streptococcus pneumoniae serotype 4 strain TIGR4 was used in this study 49. All strains were grown in Todd-Hewitt broth supplemented with 0.5% yeast extract (THY) or on 5% sheep blood agar (BA) plates. An isogenic mutant of TIGR4 deficient in the polyamine transport operon potABCD was generated by PCR-ligation mutagenesis as described previously 50. Briefly, PCR primers were designed to amplify chromosomal DNA 600 nt 5′ to the start codon and 600 nt 3′ from the transcription termination site of potABCD. The erythromycin resistance gene (ermB) was amplified from pJY4163 51. Genomic pieces were joined by gene splicing by overlap extension (SOEing) PCR 52 using a forward primer of the upstream flanking region and the reverse primer of the downstream flanking region. The recombinant product was transformed into TIGR4 as described previously 53. Transformants were selected on BA plates with erythromycin (0.5 μg/ml) by overnight incubation at 37 °C with 5% CO2. The identity of mutant bacterial colonies was confirmed by PCR and sequencing. There was no significant difference between WT and ∆potABCD growth in THY, as reported previously 18.
Sequest was set up to match mass spectra and tandem mass spectra (see supplementary information) against a non-redundant mouse protein database appended with S. pneumoniae proteins and common laboratory contaminants. A reversed decoy database was utilized to assess the false discovery rate for peptide identification. Scaffold 4 (Proteome Software, Portland, OR) was used to filter the Discoverer output. Peptide identifications were accepted if they could be established at greater than 92.0% probability to achieve an FDR of less than 0.5% by the Scaffold Local FDR algorithm (see supplementary tables S5B-8B). Protein identifications were accepted if they could be established at greater than 99.0% probability and contained at least 2 identified peptides (see supplementary tables S5A-8A). Significant changes in protein expression between the uninfected control versus TIGR4 and the uninfected control versus ΔpotABCD at 4 h and 12 h were identified by Fisher's exact test at a p-value of ≤ 0.05, and fold change in protein expression was calculated using weighted normalized spectra with a 0.5 imputation value. The PRoteomics IDEntifications (PRIDE) database is a centralized, standards-compliant, public data repository for proteomics data. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium 55 via the PRIDE partner repository with the dataset identifier PXD002300 and 10.6019/PXD002300. Significantly altered lung proteins were analyzed using Ingenuity Pathways Analysis (IPA) as was done earlier 56 (see supplementary methods). Based on the distinct up- and downregulation pattern of protein expression, IPA predicted activation of upstream and downstream regulators and utilized the regulator effects algorithm to connect identified upstream regulators with proteins in a dataset and downstream functions to generate a hypothesis with a consistency score. The predicted regulatory networks helped interpret the impact of the observed protein expression changes on the host response.
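The spectral-count comparison described above uses Fisher's exact test between conditions; the sketch below illustrates that comparison on made-up spectral counts (the counts, totals and imputation handling are hypothetical, not values from Table S2), using scipy's fisher_exact on a 2×2 contingency table.

```python
from scipy.stats import fisher_exact

def spectral_count_test(counts_a: int, total_a: int, counts_b: int, total_b: int):
    """Fisher's exact test on a 2x2 table of (protein spectra, other spectra) per condition."""
    table = [[counts_a, total_a - counts_a],
             [counts_b, total_b - counts_b]]
    odds_ratio, p_value = fisher_exact(table)
    return odds_ratio, p_value

# Hypothetical example: 25 spectra for one protein out of 10,000 total in infected lung
# versus 8 out of 10,000 in uninfected control (illustrative numbers only).
odds, p = spectral_count_test(25, 10_000, 8, 10_000)
fold_change = (25 + 0.5) / (8 + 0.5)          # 0.5 imputation value, as described in the text
print(f"fold change ~ {fold_change:.1f}, Fisher p = {p:.3g}, significant: {p <= 0.05}")
```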
Isolation of murine neutrophils. Murine neutrophils were isolated as previously described 37. In brief, mice were intraperitoneally injected with 1 ml of 10% casein in PBS. A second dose was administered after 24 h and neutrophils were harvested 2 h following the second dose. Murine polymorphonuclear leukocytes (PMNs) were collected by lavage of the peritoneal cavity with Hanks buffer (without Ca2+ and Mg2+) supplemented with 0.1% gelatin. Neutrophils were enriched by Ficoll density gradient, washed with 5 ml of Hanks buffer (without Ca2+ and Mg2+) with 0.1% gelatin and resuspended in Hanks buffer (with Ca2+ and Mg2+) and 0.1% gelatin.
In brief, 10³ bacterial cells (in 10 μl) were preopsonized with type 4 specific (Hyp4M3) monoclonal antibody 57, unless mentioned otherwise (gift from Dr. Moon H. Nahm, The University of Alabama at Birmingham), for 30 min at 37 °C. Neutrophils were then added at 1:10 and 1:100 (bacteria:neutrophils) and incubated for 1 h at 37 °C with rotation. Percent survival was determined relative to a control without neutrophils. Cytochalasin D (20 μM) was used to pretreat neutrophils (30 min at 37 °C), and serum was heated at 56 °C for 30 min to inactivate complement.
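Percent survival in this assay is computed relative to the no-neutrophil control; the short sketch below shows that arithmetic on hypothetical CFU counts (not data from this study).

```python
def percent_survival(cfu_with_neutrophils: float, cfu_control_no_neutrophils: float) -> float:
    """Survival (%) relative to the control reaction without neutrophils."""
    return 100.0 * cfu_with_neutrophils / cfu_control_no_neutrophils

# Hypothetical CFU counts after 1 h of incubation (illustrative numbers only):
print(percent_survival(620, 1000))   # e.g. WT-like sample, ~62% survival
print(percent_survival(300, 1000))   # e.g. mutant-like sample, ~30% survival
```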
Quantitative Real Time PCR. The primers used for quantitative PCR (qRT-PCR) are listed in Table S9.
In brief, total RNA was purified from mid-log phase cultures of TIGR4 and ΔpotABCD grown in THY media using the RNeasy Midi kit and QIAcube (QIAGEN, Valencia, CA). Purified total RNA (7.5 ng/reaction) was transcribed into cDNA and qRT-PCR was performed using the SuperScript™ III Platinum® SYBR® Green One-Step qRT-PCR Kit (Fisher Scientific, Pittsburgh, PA) according to the manufacturer's instructions. Relative quantification of gene expression was obtained using the Stratagene Mx3005P qPCR system (Agilent, Santa Clara, CA). Expression of the target genes speE (SP_0918) and cad (SP_0916) was normalized to the expression of the gdh (glucose-6-phosphate 1-dehydrogenase, SP_1243) housekeeping gene, and fold changes were determined by the comparative Ct method.
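The comparative Ct (2^-ΔΔCt) calculation referred to above can be written out explicitly; the sketch below uses hypothetical Ct values (not the measured ones), with gdh as the normalizer, chosen so the result lands near the reported 14.4-fold change for speE.

```python
def fold_change_ddct(ct_target_test: float, ct_ref_test: float,
                     ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative expression by the comparative Ct method: 2^-(dCt_test - dCt_control)."""
    d_ct_test = ct_target_test - ct_ref_test      # e.g. speE vs gdh in the mutant
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # e.g. speE vs gdh in TIGR4
    return 2.0 ** -(d_ct_test - d_ct_ctrl)

# Hypothetical Ct values (illustration only):
print(round(fold_change_ddct(20.2, 18.0, 24.1, 18.05), 1))   # ~14.4-fold
```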
Statistical Analysis. GraphPad Prism (version 5.02 for Windows, GraphPad Software, USA) was used for all statistical analysis performed unless mentioned otherwise. Statistical significance (p-values) were calculated as described in the figure legends. | 7,287 | 2016-06-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Authenticity Assessment of (E)-Cinnamic Acid, Vanillin, and Benzoic Acid from Sumatra Benzoin Balsam by Gas Chromatography Combustion/Pyrolysis Isotope Ratio Mass Spectrometry
Authenticity assessment of (E)-cinnamic acid, vanillin, and benzoic acid from various origins (n = 26) was performed using gas chromatography-isotope ratio mass spectrometry coupled with combustion and pyrolysis modes (GC-C/P-IRMS). For that reason, the above three compounds (1–3) from synthetic, natural, and Sumatra benzoin balsam (laboratory prepared, adulterated, and commercial) sources were investigated. The δ13C V-PDB and δ2H V-SMOW values for compounds 1–3 from synthetic samples (S1–S5), ranging from −26.9 to −31.1‰ and 42 to 83‰, respectively, were clearly different from those of authentic samples (N1–N4, L1–L9), varying from −29.8 to −41.6‰ and −19 to −156‰. In adulteration verification testing, for compounds 1 and 3, both the δ13C V-PDB and δ2H V-SMOW data of A1 (5.0% added) and A2 (2.5% added), but not A3 (0.5% added), fell into the synthetic region, whereas for compound 2 the δ2H V-SMOW data of the adulterated samples (A1–A3) fell into the synthetic region, even for the lowest adulteration level, A3. On this basis, some commercial Sumatra benzoin balsam samples were identified as adulterated with synthetic 1 (C1, C3, and C5) and synthetic 2 (C3, C4, and C5) but not with synthetic 3. GC-C/P-IRMS allowed clear-cut differentiation of the synthetic and natural origin of 1, 2, and 3 and definite identification of whether a Sumatra benzoin balsam was adulterated or not.
Introduction
Sumatra benzoin is a natural balsamic resin exuded from a small tree, Styrax benzoin Dryander, grown extensively in Sumatra and Malaya and mostly cultivated in Vietnam, Thailand, and China [1]. Sumatra benzoin balsam is obtained by extraction, filtration, and vacuum distillation of the crude benzoin. It has a sweet, balsamic-cinnamic characteristic odor and is used extensively as a fixative in perfumery, food, and tobacco flavoring [2]. Driven by business interests, Sumatra benzoin balsam is often adulterated with the synthetic flavors (E)-cinnamic acid (1), vanillin (2), and benzoic acid (3) (Figure 1) to claim that it is a better grade or "originated from Siam benzoin" [3].
Current research on benzoin resin and balsam mainly focuses on the analysis of the different volatile and nonvolatile components in various species or from different places of origin [4–8], and little information is available about authenticity assessment. Concerning flavor authenticity and traceability, IRMS has been widely used because of the high precision of the method, the requirement for only small samples, and the fact that the same technique can be used for almost any type of food or beverage [9–13]. Fink et al. studied the hydrogen and carbon isotope ratios of natural, synthetic, and semi-synthetic methyl cinnamate and showed that the 2H/1H and 13C/12C ratios of methyl cinnamate from different sources have different distribution ranges [8].
This result shows that isotope analysis can be used to verify the authenticity of flavor compounds. In this study, we undertook an authenticity study of (E)-cinnamic acid (1), vanillin (2), and benzoic acid (3) from synthetic, natural, and Sumatra benzoin balsam sources through the 13C/12C and 2H/1H isotope ratios measured by GC-C/P-IRMS analysis.
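In practice the authenticity call reduces to checking whether a sample's (δ13C, δ2H) pair falls inside the synthetic window. The sketch below encodes the pooled synthetic ranges quoted in the abstract (−26.9 to −31.1‰ for δ13C, 42 to 83‰ for δ2H) as a simple illustration; these ranges and the flagging rule are taken as illustrative assumptions, not as validated decision limits.

```python
# Synthetic window from the abstract (pooled for compounds 1-3); illustrative only.
SYN_D13C = (-31.1, -26.9)   # per mil vs V-PDB
SYN_D2H = (42.0, 83.0)      # per mil vs V-SMOW

def looks_synthetic(d13c: float, d2h: float) -> bool:
    in_c = SYN_D13C[0] <= d13c <= SYN_D13C[1]
    in_h = SYN_D2H[0] <= d2h <= SYN_D2H[1]
    return in_c or in_h        # either isotope falling in the synthetic range flags the sample

# Hypothetical measurements (not values from Table 1):
print(looks_synthetic(-28.5, 60.0))    # True  -> consistent with synthetic origin
print(looks_synthetic(-35.0, -90.0))   # False -> consistent with natural origin
```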
GC-C/P-IRMS Conditions.
A Finnigan Delta V Advantage isotope ratio mass spectrometer coupled to an HP 6890N gas chromatograph via the open split of the combustion and pyrolysis interface was used. The GC was equipped with an HP-INNOWAX fused silica capillary column (30 m × 0.32 mm × 0.25 μm). The following conditions were employed. For GC: 1 μL of solution was injected in splitless mode; the injector temperature was 250°C; the initial oven temperature was 60°C, held for 1 min, then heated to 180°C at a rate of 8°C/min, raised to 240°C at a rate of 6°C/min and held at 240°C for 17 min; the carrier gas was He at a flow rate of 1.5 mL/min. For 13C/12C: the GC effluent was combusted online to CO2 at 960°C in the oxidative reactor (Al2O3, 0.5 mm × 1.5 mm × 320 mm) with Cu, Ni, and Pt (each 240 mm × 0.125 mm); water was separated by a Nafion membrane. For 2H/1H: the effluent from the GC was directed to a high-temperature ceramic tube (Al2O3, 0.5 mm × 320 mm) and pyrolyzed to H2 at 1440°C. In addition, a GC IsoLink elemental analyzer system was coupled to the IRMS for offline control determination of reference samples. Daily system stability checks were performed by measuring reference samples with known 13C/12C and 2H/1H ratios. The reference samples were International Atomic Energy Agency (IAEA, Vienna, Austria) standards; IAEA-601 was used for 13C/12C, and IAEA-601 and VSMOW were used for 2H/1H. The isotope ratios are expressed in per mil (‰) deviation relative to the V-PDB and V-SMOW international standards, and the calculation method is the same as in reference [8]. In general, 6-fold determinations were carried out and standard deviations were calculated; the latter were ±0.2 and ±5‰ for δ13C V-PDB and δ2H V-SMOW determinations, respectively.
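The δ values are computed from the measured isotope ratios relative to the international reference ratios; the sketch below shows the standard calculation, with the nominal V-PDB and V-SMOW reference ratios used as assumed constants (they are not stated in this paper), and hypothetical sample ratios chosen only to land near values reported in the abstract.

```python
# delta (per mil) = (R_sample / R_standard - 1) * 1000
R_VPDB_13C = 0.0111802      # nominal 13C/12C of V-PDB (assumed reference value)
R_VSMOW_2H = 0.00015576     # nominal 2H/1H of V-SMOW (assumed reference value)

def delta_permil(r_sample: float, r_standard: float) -> float:
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical measured ratios (illustration only):
print(round(delta_permil(0.010845, R_VPDB_13C), 1))    # ~ -30.0 per mil vs V-PDB
print(round(delta_permil(0.00014360, R_VSMOW_2H), 0))  # ~ -78 per mil vs V-SMOW
```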
Results and Discussion
To check for potential isotope discrimination in the course of sample preparation, the three synthetic reference samples under study (S1, compounds 1–3) were subjected to the steps used for the laboratory-prepared balsam. The data summarized in Table 1 show that sample preparation did not affect the isotope values.
The data from the treated samples S1a did not differ significantly from those of the untreated reference samples S1.
Conclusion
In conclusion, the δ13C V-PDB and δ2H V-SMOW values for authenticity assessment of (E)-cinnamic acid (1), vanillin (2), and benzoic acid (3) from various origins, including synthetic, natural, and Sumatra benzoin balsam (laboratory prepared, commercial, and adulterated), were demonstrated. Despite the limited number of samples, GC-C/P-IRMS allowed clear-cut analytical differentiation of the synthetic and natural origin of 1, 2, and 3 and definite identification of whether a Sumatra benzoin balsam was adulterated or not. Future work will extend the set of samples of 1, 2, and 3 from natural plant sources, particularly Siam benzoin, which has antioxidative effects, economic value, and flavoring applications [2], in order to eventually build an IRMS database for their authenticity identification.
Data Availability
The reference data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors are affiliated with and funded by China Tobacco Yunnan Industrial Co. Ltd. The authors attest that China Tobacco Yunnan Industrial Co. Ltd. had no influence on the design of this study or its outcomes. | 1,516.2 | 2022-09-07T00:00:00.000 | [
"Chemistry"
] |
Implementation of Personal Health Device Communication Protocol Applying ISO/IEEE 11073-20601
In 2010, IEEE and ISO announced the exchange protocol standard (ISO/IEEE 11073-20601), optimized to secure mutual compatibility between all sorts of PHDs and the gateways that collect bioinformation from the devices and activate related services. This international standard is the first official document that has dealt with communication related to healthcare devices. This paper is about implementing the communication protocol between a weight sensor, a kind of personal health device (PHD) used in homes, and a gateway collecting a variety of biometric information from multiple sensors, applying the international standard ISO/IEEE 11073-20601 for interoperability between medical devices. Moreover, security is enhanced by applying the international symmetric key encryption standard, the advanced encryption standard (AES), for secure data transmission from the weight sensor to the gateway. When the cipher algorithm was applied, we confirmed that the implementation took about 0.078 seconds longer on average than without it.
Introduction
As the core of care has recently moved from treatment through medical practice to prevention and healthcare, owing to the influence of the concepts of wellbeing and wellness, devices that let users undergo examination and diagnosis at home, such as a hemadynamometer, a blood sugar device, and scales, have been continuously released. Frequently, these are called personal health devices (PHDs), in comparison with point-of-care (POC) devices, which refer to medical equipment at the point of care in hospitals.
In 2010, IEEE and ISO announced the exchange protocol standard (ISO/IEEE 11073-20601) [1], optimized to secure mutual compatibility between all sorts of PHDs and the gateways that collect bioinformation from the devices and activate related services. This international standard is the first official document that has dealt with communication related to healthcare devices.
Most of the communication modules for external interfaces are currently developed by individual companies based on need, so it is difficult to secure compatibility with other companies' devices, and because they are developed without a standard system, problems arise when linking them to a hospital's information system. As communication between medical devices that support the present network is becoming important, a standardized medical information protocol for sharing and transmitting information is required.
This study uses ISO/IEEE 11073-20601 to realize a communication protocol between the weight sensor and gateway, and its purpose is to implement a standard technology for mutual interoperability between medical devices and hospital systems. Moreover, the advanced encryption standard (AES), an international standard for symmetric key encryption, has been applied to enhance security. As a result, when the cipher algorithm is applied to data transmission from the PHD to the gateway, it takes approximately 0.078 seconds longer on average compared to transmission without it.
In Section 2 of this paper, the recently revised ISO/IEEE 11073-20601 standard protocol is examined, and Section 3 describes the actual implementation method related to the weight sensor. Section 4 presents the results of the implementation and Section 5 draws conclusions.
ISO/IEEE 11073-20601 Standard [1]
This standard is a document for defining the standard format for information sent between health devices and data managers, for collecting bioinformation measured by the devices, and for the mutual exchange of information.
A sensor (such as a hemadynamometer, scales, or a blood sugar device; hereafter simply referred to as a PHD) collects personal bioinformation and then transmits the information to a gateway (such as a cell phone, a health appliance, or a personal computer) for collection, display, and further transmission. A gateway can transmit data for additional analysis to a healthcare service center for teleassistance and can utilize information from various domains such as disease management, health and fitness, or independent living for the ageing. The communication path between a PHD and the gateway is assumed to be a logical point-to-point connection. Generally, a PHD communicates with a single gateway at a specific point in time when necessary. Gateways can communicate with a plurality of PHDs simultaneously using separate point-to-point connections.
Refer to the standard document [1] for further details of the other protocol elements.
Protocol Implementation Method
In this study, as mentioned previously, a weight sensor was used to apply the ISO/IEEE 11073 standard protocol. ISO/IEEE 11073-20601 (the communication protocol standard between a PHD and a gateway) and ISO/IEEE 11073-10415 (the weight-sensor communication standard) were used for the application.
The ASN.1 encoding regulation (also known as the medical device encoding rules (MDER)) defined in the standard was used for the exchange of information between the weight sensor and the gateway.
According to the definition by the International Telecommunication Union (ITU), ASN.1 is a notation for defining the data exchanged on a network and is a formal language used to exchange abstract messages between different systems. It simply defines the standard, and data described with ASN.1 becomes the standard. When MDER is expressed in the C language, it is declared as a strict type that carries the basic data in a structure called an APDU. There are six APDU message formats: AARQ apdu, AARE apdu, RLRQ apdu, RLRE apdu, ABRT apdu, and PRST apdu. According to the circumstances, communication takes place with one of these six messages (refer to Figure 1). In order for communication between the weight sensor and the gateway to take place, the two devices must be mutually connected, and the connection status can be divided into 10 types as shown below (refer to Figures 2 and 3).
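For illustration, the sketch below shows how an APDU could be framed in the big-endian, fixed-size style of MDER. It is a simplified Python sketch rather than the Visual C++ used in the actual implementation; the choice-tag constants and the weight encoding are illustrative assumptions for this sketch, not values quoted from the standard.

```python
import struct

# Illustrative APDU choice tags (hypothetical values for this sketch;
# the normative constants are defined in ISO/IEEE 11073-20601).
APDU_CHOICE = {"aarq": 0xE200, "aare": 0xE300, "rlrq": 0xE400,
               "rlre": 0xE500, "abrt": 0xE600, "prst": 0xE700}

def frame_apdu(kind: str, payload: bytes) -> bytes:
    """Frame a payload as <choice tag><length><payload> in network byte
    order, mirroring the fixed-size, big-endian style of MDER."""
    return struct.pack(">HH", APDU_CHOICE[kind], len(payload)) + payload

def encode_weight(weight_kg: float) -> bytes:
    """Encode a weight reading as an unsigned 16-bit integer in units of
    0.1 kg (a simplification of the observed-value encoding)."""
    return struct.pack(">H", int(round(weight_kg * 10)))

if __name__ == "__main__":
    prst = frame_apdu("prst", encode_weight(72.4))
    print(prst.hex())  # choice tag, length and payload as one byte string
```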
From the weight sensor's perspective, its configuration information is sent first and the gateway receives this information. The configuration information from the first connection is then saved, and if a connection is attempted again, only the device ID is verified to enable immediate communication. Figure 4 demonstrates the communication process for the initial connection, or when there is no saved configuration information for the weight sensor, while Figure 5 demonstrates the communication process in cases where the gateway already has configuration information for the weight sensor.
Protocol Implementation Result
In this study, the method described in Section 3 is used to implement a mutual communication protocol between the weight sensor and the gateway. The weight sensor was a commercially available InBody R20 model from the Biospace company [2], and the measurements from the weight sensor were received by a laptop (3.0 GHz Pentium PC) acting as the gateway. In order to verify accurate transmission, Visual C++ was used on the laptop to create a viewer interface with features such as saving the weight sensor data, sending the saved data, and displaying the received data. Figure 6 shows the flow of the viewer program, Figure 7 shows the weight sensor, and Figure 8 shows the viewer screen of the gateway. For secure transmission of PRST APDU data from the PHD to the gateway, the Advanced Encryption Standard (AES) [3], an international standard for symmetric-key encryption, was applied [4][5][6] (refer to Figure 9). As a result, over 10,000 attempts, the average transmission time with the cipher algorithm applied was approximately 0.078 seconds longer than without it (refer to Table 1).
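The following sketch shows how the PRST payload could be AES-encrypted before transmission and how the per-message overhead could be timed over repeated attempts. It assumes the pycryptodome package and CBC mode with PKCS#7 padding; the paper does not state the cipher mode or key length used, so those details are assumptions of this sketch.

```python
import time
from Crypto.Cipher import AES            # pycryptodome
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

KEY = get_random_bytes(16)               # 128-bit key shared by PHD and gateway

def encrypt_prst(payload: bytes) -> bytes:
    """Encrypt a PRST APDU payload; the random IV is prepended to the ciphertext."""
    iv = get_random_bytes(16)
    return iv + AES.new(KEY, AES.MODE_CBC, iv).encrypt(pad(payload, AES.block_size))

def decrypt_prst(blob: bytes) -> bytes:
    iv, ct = blob[:16], blob[16:]
    return unpad(AES.new(KEY, AES.MODE_CBC, iv).decrypt(ct), AES.block_size)

if __name__ == "__main__":
    payload = b"\xe7\x00" + b"\x00" * 30  # dummy PRST bytes, for timing only
    start = time.perf_counter()
    for _ in range(10_000):               # mirror the 10,000-attempt measurement
        decrypt_prst(encrypt_prst(payload))
    elapsed = time.perf_counter() - start
    print(f"average encrypt+decrypt time: {elapsed / 10_000:.6f} s")
```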
It was determined that applying encryption for secure transmission did not significantly influence the overall transmission time.
Conclusion
In this paper, we have used the international standard ISO/IEEE 11073-20601 to implement the communication protocol between a weight sensor and a gateway. The purpose of this paper is to realize a standard technology for mutual interoperability between a PHD, a health device used in households, and hospital systems. The AES protocol, an international standard for symmetric-key encryption, was applied to strengthen the security of transmission between the devices, which was not available previously. As a result, when the encryption algorithm was applied, transmission took approximately 0.078 seconds longer on average than without it.
In the future, we intend to expand the range of PHDs to which the protocol is applied, not only to the proposed weight sensor but also to ECG sensors, blood pressure devices, blood glucose devices, and others.
Figure 7: Viewer screen of the weight sensor implementing ISO/IEEE 11073-20601 communication protocol.
Table 1: Comparison of average transfer time from PHD to gateway (unit: seconds). | 1,864.8 | 2014-05-01T00:00:00.000 | [
"Computer Science"
] |
Estimation of Content Spread on NDN
The IP architecture currently plays an important role in daily life, but it still has problems: with a large number of users it suffers from data collisions, security issues in data transfer, and high costs. Named Data Networking (NDN) is therefore an interesting new architecture. It uses names as the center of the network, unlike the IP architecture, which uses hosts as the center. In NDN each packet carries a unique name, and routers forward packets by finding a route to the named content rather than to a destination host [1]. Naming the data rather than its location enables NDN routers to cache each packet and improve network bandwidth utilization. Content distribution consumes much of the bandwidth of an NDN, and NDN supports it with its caching function. This paper examines the spread of content in Named Data Networking by running an implementation of NDN using CCNx. The content distribution tests show that content packet size has the greatest impact on system workload and that poor design decisions in CCNx [2] increase the system workload of the CCNx program. In addition, we found that inefficient applications cause unnecessary decoding computations. Section 1 of this article introduces the content distribution experiment, Section 2 gives the background for content distribution in NDN, Section 3 presents the design and experiments, Section 4 describes the experimental performance, and Sections 5 and 6 present performance improvements and CCNx profiling results. Keywords—Named Data Networking (NDN), Content-Centric Networking (CCNx), Content Spread, Performance Examination.
INTRODUCTION
The internet was created to provide communication between hosts, so content distribution is organized around hosts. Many requests for the same content must be transferred multiple times over the network from the server, which is clearly not efficient [3]. Focusing on the content instead gives NDN two main properties. (1) Packets carry specific names and are forwarded according to lookups on the packet name. There are two types of packets in an NDN network: interest packets and data packets. When a user needs some content, an interest packet is sent to express the request, and data packets are used to respond to interest packets with the appropriate content [4]. (2) NDN routers can cache a number of forwarded data packets. When an interest packet arrives at a router that already holds the requested content, the content is sent back to the host immediately instead of the request being forwarded to the server [5]. This can save a great deal of bandwidth in the NDN network and support better content distribution. Therefore, this test measures the speed of content packet distribution in a running NDN and looks for a suitable solution to the current bandwidth problem.
II. BACKGROUND
This section presents content distribution systems, Named Data Networking, and the testbed.
a) NDN PRIME
Each node in an NDN network runs the same daemon program; there is no difference between a client, a server, and a router. In the network there are two types of packets: (1) interest packets and (2) data packets, whose names are arranged in a hierarchy for flexibility in the operation of servers and routers [4]. NDN's forwarding architecture has three main components: the Content Store (CS), the Pending Interest Table (PIT), and the Forwarding Information Base (FIB) [5]. When an interest arrives at an NDN router, the CS is examined first for matching data. If the CS can satisfy the interest, the data packet is sent back on the receiving face. Otherwise, the interest is added to the PIT; if an entry with the same name already exists, the incoming face is added to its face list so that a copy of the matching data packet can later be sent on all faces from which the interest arrived. Finally, if the interest has no matching PIT entry, it is forwarded to the next hop(s) according to the FIB. If there are multiple next hops in the FIB entry, a module called the forwarding strategy determines how to use the multiple routes to pass the interest on.
b) Content distribution
Content distribution normally reaches many users, and there are several methods. In the client-server method, the server stores the content and clients receive the content directly from the server, with caching used to reduce the server workload [7]. In the second method, a content delivery network (CDN) replicates data and stores it in many places across the network, so that participants are served from the nearest copy of the content. The last method is peer-to-peer (P2P): the content is divided into small pieces and each host downloads pieces from the group [8]. When a host has finished downloading some content, it continues to act as a provider of that content to nearby hosts. P2P remains very popular for distributing content and is the most natural approach for NDN networks.
c) Implementation of prototypes
The reference implementation of NDN, developed at PARC, is the Content-Centric Networking project (CCNx). The core of CCNx is the ccnd daemon [9], a program that forwards and caches packets. NDN generally does not run directly on IP; in a real deployment, IP networks can be used as an overlay beneath the NDN network. Three aspects of the CCNx implementation are relevant here. (1) Data structures: the logical FIB and PIT are implemented with hash tables built around the Name Prefix Hash Table (NPHT), which indexes Propagating Entries (PEs) and Forwarding Info Entries (FIEs); these store pending interests and forwarding information, respectively. A Propagating Hash Table (PHT) tracks interests that have already been propagated, preventing replays in the network. Each content packet in the CS receives a unique number for daemon bookkeeping. The Content Array (CA) is an array that stores the cached content packets, and a Straggler Hash Table (SHT) collects some of the content packets to improve space efficiency. Two further structures index the stored content: the Content Hash Table (CHT), a hash table keyed by the content packet name, and the Content Skip List (CSL), which keeps the content in name order for lookups [6]. (2) Interest packet processing: when the router receives an interest packet, the PHT is queried first to make sure no loop has occurred. The CSL then supports the lookup for matching content; if content is found in the CS, a full matching check is performed, and if the match is satisfactory the content packet is sent to the user who issued the interest. If the match is unsatisfactory or there is no match, the daemon looks up the interest name in the NPHT, and a longest-prefix match is performed to find the appropriate FIE [10]. The interest packet is then forwarded to that host and recorded in the PIT and PHT. (3) Content packet processing: when a content packet arrives, the CHT is queried to ensure that the packet is not already stored in the CS. If the packet is not a duplicate, it is inserted into the CSL and CS. After the CSL lookup, the NPHT is queried with a longest-prefix match to find all interest packets that can be satisfied by this content packet, and the content packet is sent to those hosts.
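To make the forwarding pipeline concrete, the toy sketch below follows the CS, PIT, FIB order described above for interest packets and the duplicate check plus PIT satisfaction for data packets. It is a highly simplified Python model of the logic, not CCNx code; names such as `Router.on_interest` are invented for the sketch.

```python
class Router:
    """Toy NDN forwarding node: Content Store, Pending Interest Table, FIB."""

    def __init__(self, fib):
        self.cs = {}          # name -> data packet (cache)
        self.pit = {}         # name -> set of faces waiting for the data
        self.fib = fib        # name prefix -> next-hop face

    def longest_prefix_match(self, name):
        parts = name.strip("/").split("/")
        for i in range(len(parts), 0, -1):
            prefix = "/" + "/".join(parts[:i])
            if prefix in self.fib:
                return self.fib[prefix]
        return None

    def on_interest(self, name, in_face):
        if name in self.cs:                      # 1) CS hit: answer directly
            return ("data", name, self.cs[name], in_face)
        if name in self.pit:                     # 2) already pending: aggregate faces
            self.pit[name].add(in_face)
            return ("aggregated", name)
        self.pit[name] = {in_face}               # 3) new entry, forward via FIB
        return ("forwarded", name, self.longest_prefix_match(name))

    def on_data(self, name, data):
        if name not in self.cs:                  # duplicate check, then cache
            self.cs[name] = data
        faces = self.pit.pop(name, set())        # satisfy all waiting faces
        return [("data", name, data, f) for f in faces]

if __name__ == "__main__":
    r = Router(fib={"/video": "face2"})
    print(r.on_interest("/video/clip1/seg0", "face1"))   # forwarded via face2
    print(r.on_data("/video/clip1/seg0", b"..."))        # sent back to face1
    print(r.on_interest("/video/clip1/seg0", "face3"))   # now served from the CS
```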
III. DESIGN
To assess content distribution in NDN, we systematically designed our tests and the evaluation of the results obtained: we selected the metrics used to evaluate performance, the parameters whose effects on performance the test results should reveal, and the evaluation technique, based on a 2^k factorial experimental design.
a) System Explanation
The NDN content distribution system consists of three components: NDN clients, NDN servers, and NDN routers. All three components run the same ccnd daemon program on the corresponding host so that packets can be forwarded and cached. The client runs client-end software, while the server runs a server-end application on top of ccnd [11]; no other applications run on the routers, which only forward packets. In this content distribution, the content is stored on the server, and a client downloads it by sending interest packets; the application servers respond to the interest packets by sending content packets, which are then cached in the content stores of the NDN routers along the path.
b) Indicators
To examine the spread of content for NDN in the NDN context, we should choose indicators that reflect the quality of service the system provides, based on the essential function provided by a content distribution system running on NDN, namely distributing content to end users.
The ideal indicators are the Content Download Time (CDoT) of a destination host and the Content Distribution Time (CDiT) of the system [12]. CDoT is the time between a client sending the first interest packet and receiving all the content packets. CDiT is the period between the first interest packet sent by any client and the moment all clients have received the content [13]. For the evaluation, the NDN router forwarding rate is used as the measure: the forwarding rate can be measured using an NP-based router, and when the CPU is saturated, the forwarding rate of the NDN router gives the maximum throughput that the NDN router can sustain.
c) Parameters
Many parameters in a computer network can affect the performance of an NDN-based content distribution system. In this section, the system parameters are described before the workload parameters. Table 1 lists the software parameters of this performance study; the CCNx software can be configured and modified on each node, while the hardware parameters cannot be modified as easily. Since no NDN packet traces are available, simulated workloads were designed and used for this evaluation study; Table 2 lists the workload parameters. The factors of the study, listed in Table 3, were selected from the parameters specified in Tables 1 and 2. NDN routers can be configured with different content store capacities: a larger CS can usually store more content and is more likely to satisfy arriving requests [14], but a larger CS also increases the average search and retrieval time. Each NDN packet is assigned a unique, hierarchical name that also determines its forwarding path; the name consists of several components separated by slash symbols.
The NDN packet name length is measured in name components; longer names increase search time and add to the system workload [15]. We assume that every host requests only one content item; the content variety parameter is the number of different content items requested by the hosts.
The workload parameters are driven by the ccncatchunks2 program, which the clients use to download content and which issues interests for random data. There are 4 servers and 8 clients; these numbers were chosen to ensure that the NDN routers' CPUs can be saturated, so that the maximum workload of the system can be measured. TCP was chosen as the transport protocol for this assessment. Figure 3 shows the network topology: there are 4 NDN servers (Server1 to Server4) connected to 8 NDN hosts (Host1 to Host8) through 4 NP-based NDN routers (R1 to R4), which provide the network connections and measure the amount of data transmitted by the system.
d) Evaluation
The programs used in the experiment were developed to measure the system over the network topology created with CCNx [16], shown in Figure 3, together with scripts to control the hosts participating in the test. The output rate of the NDN routers is measured by the NP-based routers in the network, and the measured data are displayed directly in the terminal.
e) Experimental Design
Since there are 4 factors in the study, a 2^k factorial design was used for the experiment, with k = 4, giving 2^4 = 16 experimental runs. The factors and the levels of each factor are listed in Table 4.
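As a small illustration of this design, the sketch below enumerates the 16 runs of a 2^4 full factorial over the four factors; the factor names and levels are placeholders for this sketch, since the actual levels are those listed in Table 4.

```python
from itertools import product

# Placeholder factor names and low/high levels; the real levels are in Table 4.
factors = {
    "A_content_store_size": ("small", "large"),
    "B_name_length":        ("short", "long"),
    "C_content_size":       ("small", "large"),
    "D_content_variety":    ("low", "high"),
}

# One dict per experimental run, covering every combination of levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))        # 2**4 = 16 experimental runs
for i, run in enumerate(runs, 1):
    print(i, run)
```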
IV. EFFICIENCY
This section describes the performance of the content distribution system on NDN. The results are presented and explained, and data analysis techniques are used to identify the key factors affecting distribution efficiency.
a) Operating Results
Following the experimental design, 16 experiments were run (Table 5 and Figure 4). The amount of traffic entering each NDN router and the amount of outbound traffic from the router were measured using the traffic test tools available on the NP routers; bandwidth usage is shown in the table, and the workload values can be read directly over SSH. The total workload of the 16 measured configurations is given in Table 5. The minimum bandwidth load was 7.2 Mbps and the minimum packet load started at 41.72 Mbps; across the 16 tests, the packet volume increases dramatically from the 12th run while the workload bandwidth increases step by step, so the bandwidth and packet loads remain far apart.
b) Data analysis
To assess the factors that most affect the system, the data in Table 5 (and Figure 5) were analysed, and the top 5 factors in terms of their contribution to the variation are listed in Table 6. From the data in Table 6, factor C, the content size, is the most important factor affecting the workload of the system. Factor B, the length of the content name, has the second priority; the interaction BC of these two factors is the third; then follow factor D, the content variety, and the interaction CD of factors C and D. In addition, factor A, the content store size, is insignificant in terms of its impact on the system workload.
V. IMPROVING PERFORMANCE
The tests in the previous section show the workloads achieved: the lowest rate is 7.2 Mbps and the highest is 31 Mbps, so increasing the volume means it takes longer for the data to be transmitted. Comparing these rates with line speed illustrates that, although NDN is able to provide content distribution functions alongside other methods of content distribution, improving and increasing the system's forwarding performance remains important.
This section explains how to increase the forwarding rate; by changing the source code, the forwarding rate of a node can be roughly doubled. There are two basic approaches to improving forwarding: 1) designing or using data structures that make the search for packet names more effective, and 2) deploying more advanced server hardware to find routes efficiently. Today's server hardware is capable of running an NDN node and of performing the longest-prefix match quickly, and using network processors for NDN forwarding can further improve performance. Profiling CCNx shows where the implementation can be improved for the current application: Table 7 and Figure 6 list the top 10 most time-consuming functions, all from ccn.h and coding.h, together with the CPU time spent on decoding; the functions related to packet decoding are shown in italics. As Table 7 shows, more than half of the time is spent deciphering packet names, with ccn_skeleton_decode alone taking as much as 47.34 percent.
This is an unusually high share of time for a general network program. The cause of the problem is that the content packets stored in the Content Store (CS) are kept in encoded form: each query of the Content Skip List (CSL) touches on the order of log(n) packets, where n is the number of items stored in the CSL, and every packet touched must be decoded on the spot, which takes a very long time. The program was therefore changed so that the decoded name of each content packet is stored in the daemon; in this way, no extra packet decoding is needed each time the CSL is queried.
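The change can be pictured as a small memoisation layer: decode each content packet's name once when it is inserted and reuse the stored name during CSL comparisons. The sketch below is a schematic Python illustration, not the actual C change made in ccnd; `decode_name` stands in for the expensive skeleton decoder.

```python
class ContentStoreIndex:
    """Keep the decoded name next to each cached packet so lookups that walk
    roughly log(n) entries never have to re-run the decoder."""

    def __init__(self, decode_name):
        self.decode_name = decode_name     # expensive decoder (placeholder)
        self.entries = []                  # (decoded_name, raw_packet), sorted by name

    def insert(self, raw_packet):
        name = self.decode_name(raw_packet)   # decoded exactly once, at insertion time
        self.entries.append((name, raw_packet))
        self.entries.sort(key=lambda e: e[0])

    def lookup(self, name):
        # Binary-search style walk over ~log(n) entries using the cached names only.
        lo, hi = 0, len(self.entries) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            cached_name, packet = self.entries[mid]
            if cached_name == name:
                return packet
            if cached_name < name:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

if __name__ == "__main__":
    idx = ContentStoreIndex(decode_name=lambda raw: raw.decode().split("|")[0])
    idx.insert(b"/video/seg0|payload")
    print(idx.lookup("/video/seg0"))
```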
VI. CONCLUSION
In this article, we assessed the effectiveness of a content distribution system for Named Data Networking by examining the workload of NDN. The experiments show that the workload the CCNx implementation can sustain is still far from current demands, that the key factor is the content packet size, which has the greatest impact on performance, and that the number of stored name prefixes also affects forwarding if it is too small in comparison with the current application. | 3,963.4 | 2020-04-07T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
PTPD: predicting therapeutic peptides by deep learning and word2vec
* Background In the search for therapeutic peptides for disease treatments, many efforts have been made to identify various functional peptides from large numbers of peptide sequence databases. In this paper, we propose an effective computational model that uses deep learning and word2vec to predict therapeutic peptides (PTPD). * Results Representation vectors of all k-mers were obtained through word2vec based on k-mer co-existence information. The original peptide sequences were then divided into k-mers using the windowing method. The peptide sequences were mapped to the input layer by the embedding vector obtained by word2vec. Three types of filters in the convolutional layers, as well as dropout and max-pooling operations, were applied to construct feature maps. These feature maps were concatenated into a fully connected dense layer, and rectified linear units (ReLU) and dropout operations were included to avoid over-fitting of PTPD. The classification probabilities were generated by a sigmoid function. PTPD was then validated using two datasets: an independent anticancer peptide dataset and a virulent protein dataset, on which it achieved accuracies of 96% and 94%, respectively. * Conclusions PTPD identified novel therapeutic peptides efficiently, and it is suitable for application as a useful tool in therapeutic peptide design.
Background
Cancer continues to be a burden worldwide and its frequency is expected to double in the coming decades [1]. Available treatment regimens include radiation therapy, targeted therapy, and chemotherapy, all of which are often accompanied by harmful side effects and result in high financial costs for both individuals and society [2,3]. Anticancer peptides (ACPs) provide a new cost-efficient approach to cancer treatment, have minimal side effects, and have been shown to be promising in the treatment of various tumours by targeting mitochondria or membranolytic mechanisms [4]. Although progress has been made in preclinical applications of peptide-based methods against cancer cells, the mechanisms behind the success of ACP treatments are still elusive. It is therefore highly important to be able to efficiently identify ACPs for both cancer research and drug development purposes. Due to the high costs and lengthy process of identifying ACPs experimentally, various computational models have been developed to identify ACPs from peptide sequences. These advances include iACP, developed by g-gap dipeptide component (DPC) optimization [5,6], and SAP, which identifies peptides using 400-dimensional features with g-gap dipeptides pruned by the maximum relevance-maximum distance method [7]. In addition, various types of amino acid compositions (AACs) of peptide sequences have been introduced to develop prediction models, such as Chou's pseudo amino acid composition (PseAAC) [8], combinations of AACs, average chemical shifts (acACS) and reduced AAC (RAAC) [6], pseudo g-gap DPC, amphiphilic PseAAC, and the reduced amino acid alphabet (RAAAC) [9]. Other methods include computational tools developed based on the q-Wiener graph indices for ACP prediction [10]. In addition, machine learning methods have been adopted to improve model efficiency [6,9,11]. Several models have utilized support vector machine (SVM) and random forest (RF) machine learning methods [11,12], combinations of the quantitative outcomes of individual classifiers (RF, K-nearest neighbour, SVM, generalized neural network and probabilistic neural network) [9], or a pool of SVM-based models trained on sequence-based features [13].
Novel computational models based on machine learning have also been applied to identify virulent proteins in infection pathophysiology. Virulent proteins form a diverse set of proteins and are important for host invasion and pathogenesis. Drug resistance in bacterial pathogens has created an urgent need to identify novel virulent proteins that may facilitate drug target and vaccine development. Several computational models have been developed to identify virulent proteins. The first methods were based on similarity searches such as the Basic Local Alignment Search Tool (BLAST) [14] and Position-Specific Iterated BLAST (PSI-BLAST) [15]. Machine learning algorithms for predicting virulent proteins have also been reported, applying SVM-based models built on AAC and DPC [16], an ensemble of SVM-based models trained with features extracted directly from amino acid sequences [17], a bi-layer cascade SVM model [18], and a model based on an SVM with a variant of input-decimated ensembles and their random subspace [19]. Studies have also focused on feature extraction from sequences, such as protein representations using amino acid sequence features and the evolutionary information of a given protein [19]. Moreover, a computational tool based on the q-Wiener graph indices was also proposed to effectively predict virulent proteins [10]. Despite substantial progress, identifying specific peptides from massive protein databases remains challenging.
To date, deep learning applications have been successful in numerous fields other than medicine, including image classification and recognition [20][21][22], object detection [23,24], scene recognition [25], character recognition [26], sentence classification [27], chromatin accessibility prediction [28] and so on. Inspired by these successful deep learning applications, we propose a novel computational model called PTPD, which is based on deep learning, to identify ACPs and virulent proteins from peptide sequences (Fig. 1). To verify the efficiency of our approach, we also performed ACP and virulent protein prediction on publicly available datasets [12,18,29]. Our results show that PTPD is able to identify ACPs and virulent proteins with high efficiency.
Datasets
The ACP datasets were extracted from publicly available resources [12,29]. A total of 225 validated ACPs from the AMPs dataset and the database of Anuran defence peptides (DADP) [30] were used as positive samples, while 2,250 randomly selected proteins from the SwissProt protein database were used as negative samples. This dataset was used to build the model. An alternative dataset and two balanced datasets were employed to evaluate the model. To compare our methods with other existing methods, we also obtained an independent dataset (i.e. Hajisharifi-Chen (HC)) from a previous study [12]. The HC dataset, which contains 138 ACPs and 206 non-ACPs, was also employed to develop prediction models in [31,32].
Fig. 1 Flowchart of PTPD
The virulent protein datasets were obtained from VirulentPred [18] and NTX-pred method [16]. We adopted the SPAAN adhesins dataset, which contains 469 adhesion and 703 non-adhesion proteins, to build the PTPD model for virulent protein prediction. The neurotoxin dataset was applied as an independent dataset to evaluate the model. It contains 50 neurotoxins (positive samples) and 50 non-virulent proteins (negative samples) obtained by the NTX-pred method [16].
Representation of k-mers by word2vec
Each peptide sequence was divided into k-mers by the windowing method, as previously described in [33,34]. To represent the k-mers, we used the publicly available word2vec tool, which creates high-quality word embedding vectors from a large number of k-mers.
The word2vec tool computes vector representations of words and has been widely applied in many natural language processing tasks as well as other research applications [35][36][37][38]. Two learning algorithms are available in word2vec: continuous bag-of-words and continuous skip-gram. These algorithms learn word representations that help to predict other words in the sentence. The skip-gram model in word2vec trains the word vector of each word based on the given corpus. Given a word W(t) in a sentence, skip-gram predicts the probabilities P(W(t + i)|W(t)) of the nearby words W(t + i) (−k ≤ i ≤ k) from the current word W(t). Each word vector thus reflects the positions of the nearby words, as illustrated in Fig. 2. The goal of the skip-gram model is to maximize the following value:

$$\frac{1}{n}\sum_{t=1}^{n}\ \sum_{-k \le i \le k,\; i \ne 0} \log P\big(W(t+i)\,\big|\,W(t)\big),$$

where k denotes the size of the window, W(t + i) (−k ≤ i ≤ k, i ≠ 0) denotes the words near the current word W(t), and n denotes the number of words. Because word2vec can reflect the positional relationships of words in a sequence and preserve structural information, we treated the k-mers as the words. Using word2vec, a word embedding vector with 100 dimensions was obtained for each k-mer.
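As an illustration, the snippet below trains skip-gram embeddings for k-mers with gensim; the value of k, the stride, and the toy corpus are placeholders, and the parameter names assume gensim 4.x.

```python
from gensim.models import Word2Vec

def to_kmers(sequence, k=3, stride=1):
    """Split a peptide sequence into overlapping k-mers using a sliding window."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

# Toy corpus; in practice every peptide sequence in the dataset is one "sentence".
peptides = ["GLFDIVKKVVGALG", "FLPLIGRVLSGIL"]
corpus = [to_kmers(p) for p in peptides]

model = Word2Vec(
    corpus,
    vector_size=100,   # 100-dimensional k-mer embeddings, as in the paper
    window=5,          # context window k of the skip-gram objective
    sg=1,              # use the skip-gram algorithm
    min_count=1,
)
print(model.wv["GLF"].shape)   # (100,)
```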
Input layer
After constructing the word representations of all the k-mers, we mapped each peptide sequence to numeric vectors. First, we used a stride st to divide a peptide sequence S of length L0 into k-mers of length k. The number of k-mers, and hence the number of vectors, varied because the peptide sequences had different original lengths L0. The vector sequences for all peptides were set to the same length L, the length of the longest vector sequence; sequences shorter than L were zero-padded at the end, as in natural language processing. Finally, each peptide sequence was converted into a representation built from the word vectors with dimensions L × 100.
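A minimal sketch of this mapping step, assuming the Word2Vec model from the previous snippet: each sequence is turned into its k-mer embedding matrix and zero-padded to the common length L.

```python
import numpy as np

def sequence_matrix(sequence, model, L, k=3, stride=1):
    """Map one peptide sequence to an (L x 100) matrix of k-mer embeddings,
    zero-padding at the end when the sequence yields fewer than L k-mers."""
    kmers = [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]
    mat = np.zeros((L, model.vector_size), dtype=np.float32)
    for row, kmer in enumerate(kmers[:L]):
        if kmer in model.wv:
            mat[row] = model.wv[kmer]
    return mat

# Example: stack all peptides into one input tensor of shape (n_peptides, L, 100).
# X = np.stack([sequence_matrix(p, model, L=max_len) for p in peptides])
```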
To prevent over-fitting and to improve model generalization, dropout was applied to a fraction of the inputs (i.e., a portion of the inputs was randomly set to zero).
Feature map
To extract features, a set of one-dimensional convolution filters was adopted to process the representation vectors of the peptide sequences. Each convolution kernel had a size of (c × 100), and we used three filter sizes c of three, four, and five. All kernels performed convolutions over the entire representation vector. Using one convolution kernel of size (c × 100), the feature map was constructed as

$$f(m) = \mathrm{ReLU}\Big(\sum_{i=1}^{c}\sum_{j=1}^{100} w(i,j)\, S_m(i,j)\Big),$$

where f(m) denotes the mth element of the feature map, ReLU denotes the rectified linear unit activation function, w(i, j) denotes a weight of the convolution kernel learned during training, c denotes the filter size, and S_m denotes the mth block of the representation vector of the peptide sequence. ReLU [39] was used to set negative results of the convolution calculation to zero and is defined as

$$\mathrm{ReLU}(x) = \max(0, x).$$

Multiple filters were used for each filter type; let nc denote the number of convolution filters applied for each size. To reduce the spatial dimensions of the feature maps, max pooling was adopted after each convolution operation, using a pooling window of size 2 × 1 and a stride of 2, which yields for filter size c

$$P_c = \big[\,\max F_c(:,1), \ldots, \max F_c(:,j), \ldots, \max F_c(:,nc)\,\big],$$

where F_c(:, j) denotes the feature map of the jth filter of size c. The results were finally concatenated as

$$FA_m = [\,P_{c1}, P_{c2}, P_{c3}\,],$$

where c1 = 3, c2 = 4, and c3 = 5 denote the three filter sizes we used. Then FA_m was processed by a fully connected hidden layer to produce FM = ReLU(FA_m W_ft), where ReLU represents a rectified linear activation unit and W_ft is the weight matrix of the fully connected layer.
Classification
The last layer of PTPD adopted a fully connected layer to obtain a single output. A sigmoid activation function was used to generate an output probability between zero and one, defined as $\sigma(x) = \frac{1}{1 + e^{-x}}$.
Loss function and optimizer
A binary cross-entropy loss function was used to train the model, with the RMSprop optimizer. The binary cross-entropy loss between a prediction $\hat{y}_i$ and its target $y_i$ was defined as $\ell_i = -\big[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\big]$, and the total cost over the two classes was the mean of $\ell_i$ over all training samples.
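Putting the pieces together, a minimal Keras sketch of the architecture described above is shown below: three 1-D convolution branches with filter sizes 3, 4 and 5, max pooling, concatenation, a dense ReLU layer with dropout, and a sigmoid output trained with binary cross-entropy and RMSprop. The sequence length, filter count, dropout rates and dense width are placeholders, not the exact hyperparameters of Table 5; only the learning rate of 0.0001 is taken from the paper.

```python
from tensorflow.keras import layers, models, optimizers

L, EMB, N_FILTERS = 200, 100, 64          # placeholder sizes (input is L x 100)

inputs = layers.Input(shape=(L, EMB))
x = layers.Dropout(0.2)(inputs)            # dropout on the input representation

branches = []
for size in (3, 4, 5):                     # the three convolution filter sizes
    b = layers.Conv1D(N_FILTERS, size, activation="relu")(x)
    b = layers.MaxPooling1D(pool_size=2, strides=2)(b)
    b = layers.GlobalMaxPooling1D()(b)     # one value per filter, then concatenated
    branches.append(b)

merged = layers.Concatenate()(branches)
dense = layers.Dense(128, activation="relu")(merged)
dense = layers.Dropout(0.5)(dense)
outputs = layers.Dense(1, activation="sigmoid")(dense)

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```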
Model evaluation
The performance of PTPD was evaluated by various metrics, including the sensitivity (Sn), specificity (Sp), prediction accuracy (Acc), Matthews correlation coefficient (MCC), and the area under the receiver operating characteristic curve (AUC):

$$\mathrm{Sn} = \frac{TP}{TP + FN},\quad \mathrm{Sp} = \frac{TN}{TN + FP},\quad \mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN},$$

$$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$$

where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
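A short sketch of these evaluation metrics, computed from the confusion-matrix counts together with scikit-learn's AUC helper (the thresholding at 0.5 is an assumption of the sketch):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_prob, threshold=0.5):
    """Compute Sn, Sp, Acc, MCC from TP/TN/FP/FN, and AUC from the raw scores."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    y_true = np.asarray(y_true).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"Sn": sn, "Sp": sp, "Acc": acc, "MCC": mcc,
            "AUC": roc_auc_score(y_true, y_prob)}

# Example: evaluate([1, 0, 1, 1, 0], [0.9, 0.2, 0.7, 0.4, 0.1])
```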
Model performance
To verify the proposed method, we executed the proposed model on ACPs and virulent protein datasets. Each dataset was randomly divided into three groups. The first group, which consisted of 75% of the complete dataset, was used to train the model. The second group of data, 15% of the entire dataset, was used to minimize overfitting. The third group, 10% of the entire dataset, was used to evaluate the performance of the trained PTPD model. For ACP identification, the performance of PTPD was first measured using the test data from the main dataset, and then further tested on an alternative dataset. Furthermore, we also evaluated the performance of PTPD on two types of balanced datasets (Table 1). PTPD achieved high performance scores of Sn = 94.2%, Sp = 86.2%, Acc = 90.2%, Mcc = 0.8, and AUC = 0.97, respectively. Moreover, to evaluate the generalizability or robustness of the prediction model, we executed PTPD on the independent HC dataset, as shown in Table 1. The AUCs of the five data sets were all higher than 0.97. Thus, PTPD offers stable performance even on unbalanced data sets (Table 1).
To evaluate the performance of PTPD, we conducted an evaluation on the test data of the SPAAN adhesins dataset. We also tested the performance of PTPD on an independent Neurotoxins dataset ( Table 2). The five performance metrics (Sn, Sp, Acc, MCC, and AUC) achieved by PTPD on the virulent protein dataset are higher than 95.6%, 73.3%, 88.2%, 0.7, and 0.93, respectively, which confirms the good performance of PTPD. Sp on the SPAAN adhesins dataset had a relatively lower value ( Table 2).
Comparison with the state-of-the-art methods
For verification purposes, we compared the proposed method with other state-of-the-art methods on the identification of ACPs and virulent proteins on two independent datasets.
Comparison performed on an independent ACP dataset
To further evaluate the performance of PTPD to predict ACPs, we compared its performance with those of some state-of-the-art methods (i.e., AntiCP [29], MLACP [12], and mACPpred [40]) on an independent HC dataset ( Table 3 and Fig. 3). PTPD performed equally as well as MLACP (RF) on the HC dataset. The proposed PTPD
Comparison performed on an independent virulent protein dataset
We also compared the performance of PTPD with that of q-FP [10], AS and 2Gram [41], VirulentPred [18], and NTX-pred [16] on a bacterial neurotoxins dataset (Table 4 and Fig. 4). Again, the overall performance of PTPD was relatively better than those of other methods. Thus, we can conclude that PTPD is able to predict potential virulent proteins with high accuracy. The model achieved its highest accuracy (98.5%) and the lowest loss (0.03) when the learning rate was set to 0.0001, which was subsequently selected for model training. The detailed parameter settings are shown in Table 5.
Discussion
The model performance presented in this study suggests that PTPD possesses good generalizability and robustness. The comparison between PTPD and other methods showed that PTPD outperformed the other tested stateof-the-art methods for independent data analysis.
The performance of PTPD benefits from several major factors: (1) word2vec was applied to extract representation vectors of k-mers to consider the co-existence information of k-mers in peptide sequences. (2) For the feature
Conclusions
Identifying new ACPs and virulent proteins is an extremely labour-intensive and time-consuming process.
In this paper, we proposed a computational model based on deep learning that predicts therapeutic peptides in a highly efficient manner. This new deep learning-based prediction model achieves better recognition performance than other state-of-the-art methods. We first trained a model to extract feature vectors of all k-mers using word2vec. Next, the peptide sequences were converted into k-mers, and each peptide sequence was represented by the vectors compiled by word2vec. The CNN then automatically extracted features without expert assistance, which decreases the reliance on domain experts for feature construction. The CNN was configured with three types of filters, and dropout and max-pooling operations were applied to avoid over-fitting. After fusing the features, ReLU activation was used to replace any negative values in the output of the CNN layer with zeros. Finally, the sigmoid function was used to classify the peptide. The performance and generalizability of PTPD were verified on two independent datasets. The trained model achieved AUCs of 0.99 and 0.93, respectively, which confirmed that the proposed model can effectively identify ACPs and virulent proteins.
In summary, the PTPD model presented in this paper outperformed the other tested methods. Nevertheless, the approach still suffers from the inability to indicate which features are most important for identifying favourable bioactivity. In future studies on potential architectures and feature selection methods, we may consider other available network architectures such as generative adversarial networks. Some new methods that have been successfully applied in natural language processing might also facilitate further research. Our study confirmed that PTPD is an effective means of identifying and designing novel therapeutic peptides. Our approach might be extensible to other peptide sequence-based predictions, including antihypertensive [42,43], cell-penetrating [44], and proinflammatory peptides [45].
"Computer Science"
] |
Analysis of word-formation processes in the English and Russian thematic groups “insectophones”
. The article is devoted to the consideration of derivative processes in the English and Russian thematic groups “insectophones” / «инсектофоны». Due to the diversity of the compared languages, the features of word formation in the studied groups are revealed and the processes of occurrence of insectophones in the language are described. For the English language, the key methods are derivation methods such as stem composition, suffixation, back derivation and conversion. The most productive is the stem composition. For the Russian language within the framework of the studied thematic group, the leading methods can be called the same as in English with the exception of conversion. In Russian, the most common way of forming insectophones is suffixation. In both languages, insectophones formed using diminutive suffixes, which subsequently lost the meaning of subjective assessment, are identified. In compound words in English and Russian, the place of the repeating element is different (in English - in the second place, in Russian - in the first one).
Introduction
The objectives of our research are to compare the word-formation processes in the Russian and English thematic groups «инсектофоны» / "insectophones", to identify derivational processes, and to clarify and systematize isomorphic and allomorphic word-formation characteristics of these groups of lexical units. To designate the name of an insect (not just any name, but only an onomatopoeic one, that is, a phonetic imitation of insect signals, a lexico-phonetic onomatopoeia, or a lexical means of imitating an insect's acoustic signal), we introduce the term INSECTOPHONE into linguistic scientific use [1]. We regard the collection of insectophones as a THEMATIC GROUP [2].
As part of the thematic groups «инсектофоны» / "insectophones", two subgroups can be distinguished according to their time of appearance. The first group consists of basic vocabulary, ancient in origin and existing in the language for a long period. The number of such lexemes, both in English and in Russian, is relatively large; they represent the core of the «инсектофоны» / "insectophones" thematic groups. The second group is much smaller and includes borrowed insectophones that were assimilated by the language over time. Thus, the lexeme mosquito, borrowed from Spanish into English, retains its original form in English. As is known, the diminutive suffix -ito is typical of Spanish, and the insectophone mosquito means "little fly." In English, this word is used to refer to another insect, the gnat, and it loses its diminutive meaning. In Russian, the borrowed word mosquito loses the ending -o, that is, it changes its form under the influence of the host language. The form of the insectophone scorpion / скорпион, borrowed into English and Russian from French, remained unchanged.
The first group of insectophones becomes the basis for the appearance of "new" names of insects, their further differentiation, subcategorization of the species.
When differentiating insects belonging to the order of beetles, elements reflecting the distinctive features of each insect were added to the beetle component in the newly formed lexemes of both languages: click beetle / жук-щелкун, bombardier beetle / жук-бомбардир, corn ground beetle.
Methodology
Today, the question of sound-letter correlation and of the relationship between form and meaning is very relevant in modern linguistics. This article relates to phonosemantics, which deals with this issue. The issues of phonosemantics attract the attention of many researchers [3,4,5,6,7,8].
The article is a brief overview of deep and long-term research in the field of the thematic group INSECTOPHONES. The procedure the authors followed for extracting insectophones was based on using many different dictionaries (English-English, Russian-Russian, English-Russian, Russian-English, dictionaries of proverbs and sayings, phraseological dictionaries and special etymological dictionaries).
With the help of the selected material, the authors arrive at the concept of the insectophone and the thematic group of insectophones. They then try to find out their common and specific derivational features. This is of high importance not only for the sound-letter correlation but also for word-building issues. Word formation deals with both existing words and newly created words. Many scientists pay attention to this phenomenon [9,10,11,12,13].
The data of the research include the processes of word formation in both English and Russian thematic groups, as well as the lexemes are given as examples of these processes.
First, the processes of word formation in the English thematic group "insectophones" will be discussed. Then, the processes of word formation in the Russian thematic group «инсектофоны» will be discussed. Finally, the processes in the two languages will be compared to find some common and different features.
Word-formation processes in the English thematic group "insectophones"
When the composition of the thematic group "insectophones" is replenished from the internal resources of the English language, the most productive method of word formation is word compounding (about a third of the total composition of the thematic group, 18 units, is formed this way). A compound word is based on two or more roots. As the study showed, the most general model for insectophones is one in which one of these roots is a free lexical morpheme, the root of a word that expresses the lexical meaning and coincides with the stem [14]. The thematic group contains five such units: bee, fly, beetle, hopper, bug: bee - bumble-bee, humble-bee; bug - billbug, bedbug; fly - butterfly, caddisfly, dragonfly, sawfly, firefly, mayfly, green-fly, horse-fly, sandfly, gadfly; beetle - click beetle, corn ground beetle, bombardier beetle; hopper - grasshopper, froghopper.
As the structural analysis of compound insectophones shows, the most frequent model in the studied thematic group is N + N. Very few insectophones are formed by the V + N model (bumble-bee, where bumble is a verb) or the Adj + N model (humble-bee, where humble is an adjective).
Insectophones with the free lexical morpheme hopper represent not just complex words, but complex derived lexemes. In addition to two roots in grasshopper and froghopper insectophones, the presence of a suffix (grass + hop + er, frog + hop + er) of the producer of the action -er, a productive affix of German origin, is also noted [14].
The group of insectophones is replenished by new words formed by analogy -by copying a certain pattern, which is a lexical unit with a certain form. Thus, the grasshopper model is an insectophone froghopper. In both insectophones, the hop root, which transfers the ability of the insect to jump, serves as a sound-reflecting element. At the same time, not only the onomatopoetic root element is repeated, but the construction scheme of the token with affix is also preserved.
Insectophones with a free token beetle can be divided into two groups. The first group is completely phonetic motivated insectophones in which both roots of onomatopoeic origin are click beetle, bombardier beetle. Moreover, these English insectophones completely coincide in structure with their Russian correlates, жук-бомбардир, жук-щелкун. They differ only in the sequence of the repeating element. In English, the beetle element follows the defining words click and bombardier. In Russian, a repeating element takes first place in compound words -жук-бомбардир, жук-щелкун. The second group includes the insectophone -corngroundbeetle, where the free lexical morpheme beetle has the onomatopoeic origin, and the elements of cornground are phonetically unmotivated.
Insectophones with the free lexical morpheme fly are also partially motivated. Sound is only the second element of the word -fly, reflecting the ability to fly.
Compounding in insectophones differentiates an insect by a particular, individual attribute.
The thematic group "insectophones" in the English language is replenished with the help of this type of word formation, such as suffixation, back formation and conversion. In addition to the suffix -er (skeeter, spider, froghopper, grasshopper), diminutive suffixes -ie (mozzie), -et (midget, cricket, hornet) are involved in the formation of insectophones. The method of suffixation differs from the compounding in that during the compounding there is a process of isolating, clarifying one stem with another stem, and the name of the others similar to the class is selected. With a suffix, the called is included in this class. Suffixing is the second most productive way of word formation in the English thematic group 210, 21014 (2020) E3S Web of Conferences ITSE-2020 https://doi.org/10.1051/e3sconf/202021021014 "insectophones" (fifteen percent of the tokens that make up the thematic group are formed by suffixing).
The thematic group is also replenished with the help of back formation. In this way, the thematic group "insectophones" in English gained five units, that is, nine percent. In back formation, a new word is formed from an existing one not as a derivative but as if it were the producing stem. The insectophone grasshopper in modern English has been shortened to hopper; the insectophone cockroach, in order to avoid sexual connotations, was reduced to roach; and bedbug, having lost the first member of the compound, turned into bug, expanding its meaning to "any bug" [15,16]. The insectophone termite, formed from the Latin termites, lost its ending. The insectophone scarab, borrowed into English from the Middle French scarabée, lost its last syllable.
Moreover, the composition of the thematic group is replenished due to conversion (a total of five such units were identified, that is, nine percent of the composition of the thematic group). The analysis shows that the most frequent model of the formation of insectophones in the English language is V → N. Insectophone a breeze "gadfly, horsefly" comes from to breeze, a drone "drone" goes back to drone, a fly "gnat" from to fly, a sting "stinger" from to sting, a tick "mite" from to tick.
It is interesting that, despite the fact that conversion is the most productive way of word formation in English, in the analyzed thematic group it is inferior in productivity to such word formation methods as word compounding and suffixation.
Word-formation processes in the Russian thematic group «инсектофоны»
In the Russian thematic group «инсектофоны», affixation is the most productive way of word formation, and word compounding and back formation are also used.
The insectophones of the Russian language are divided into two groups according to the affixal method of word formation: suffixal and prefixal. The first and most numerous group consists of insectophones formed by suffixation (more than half of the composition of the thematic group). Some suffixes of the Russian language are considered primordially Russian: -щик (-чик), -овщик, -льщик, -лк(а), -овк(а), -к(а) [17]; accordingly, insectophones incorporating such word-forming elements also belong to the native Russian words. As the analysis of the composition of the thematic group shows, the most frequent model is the onomatopoeic / sound-symbolic stem + suffix (or two suffixes): -лк(а) (журчалка, жигалка, трещалка); -к(а) (бабочка, букашка, мошка, немка, трещотка, хрущак, кукушка каменного шмеля); -льщик (пилильщик); -ик, a diminutive suffix in this case (кузнечик, хрущик); -ек/-ок, variants of one diminutive suffix (мотылек, сверчок). However, the suffix -ик is more expressive than the suffixes -ек/-ок, since -ек/-ок simply express the reduced appearance of the object, whereas in -ик "one already feels a joke" and "the reduced, miniature object seems cute" [18]. The suffix -ец is a productive derivational morpheme, which in this case denotes a male insect (звонец), and the corresponding female suffix -ица denotes a female insect (жужелица); obviously, the insectophone жужелица is formed by analogy with such lexemes as тигрица, медведица, волчица, which denote female animals. Only two insectophones with these suffixes were detected. The suffix -ец stands somewhat apart from the affectionate-diminutive suffixes -ик/-ок/-ек: in the suffix -ец, "brighter than in the suffix -ик, shades of tenderness, sympathy, humility, humiliation, contempt, familiar participation" appear [18]. The suffix -ун reflects the name of the insect according to a certain attribute, the sound it produces (скрипун, щелкун, пискунья). The suffix -ень, when added to a verb stem, forms nouns meaning an insect characterized by the action named by the base verb (трутень, шершень). The suffix -ель is unproductive, yet one of the most sonorous and expressive in the Russian language, surviving from the 14th-18th centuries (шмель). The suffix -л(о) is another unproductive suffix found in Russian insectophones (жало, жужжало, сверлило), because "in the modern language, the suffix -л(о) is alive and productive only in a morphologically determined position (in combination with the verbal stems ending in -а and -и: -ало, -ило)" [18]. The suffix -л(о), denoting the name of the insect, forms nouns from verbs denoting the sounds of insects. Despite the fact that this suffix is considered "dead", we noted three insectophones in which it is present; perhaps this is due to "a wide range of use of such formations in professional dialects" [18].
Results
Now let's make the comparative word-formation analysis of the studied thematic groups.
In the Russian thematic group «инсектофоны», four of the twelve suffixes involved in the formation of insectophones are diminutive (three diminutive-affectionate suffixes -ик, -ок, -ек and one affectionate-derogatory suffix); in the English thematic group "insectophones", two of the three suffixes are diminutive (-ie, -et). Lexemes formed with such suffixes are considered "forms of subjective assessment" [18]. Obviously, such a variety of subjective-assessment suffixes in the Russian and English thematic groups «инсектофоны» / "insectophones" is associated with the small size of insects, and in the process of nomination this feature was taken into account along with the sounds made by these creatures. However, insectophones with such suffixes are not perceived as diminutive, affectionate or derogatory by native speakers; they are perceived only as the names of certain insects. Thus, the diminutive meaning of the form has been erased and has disappeared.
The second group includes insectophones formed by the prefixal method using the negative prefix не-. This group included only one insectophone нехрущ, that is, the insect's ability to make sounds is denied in the insectophone itself.
The thematic group «инсектофоны» in the Russian language is replenished not only by affixation but also by stem composition. There are only three such insectophones, about six percent of the total number of insectophones in Russian. The components of a compound word can be: 1) neutral, that is, formed from two stems without a connecting morpheme, by simple juxtaposition of the stems; these are the insectophones жук-бомбардир and жук-щелкун, and, as the analysis of their structure shows, they are formed according to the model noun + noun [N + N]; 2) morphological, that is, the components are joined by connecting elements, connecting service morphemes. We identified one such insectophone, златогузка: the two stems злат- and гузк- are connected by the connecting morpheme -о-, and the structure is adjective + noun [Adj + N], so it is an attributive form.
Furthermore, unlike English, where the repeating component takes second place in compound words (click beetle, bombardier beetle), in Russian it takes first place (жук-щелкун, жук-бомбардир). We dare to suggest that the sequence of components in compound insectophones of both languages depends on the actual division of the sentence, a concept designed to describe the functional components of an utterance: the topic, the starting point of the message, and the comment, the part being communicated (the new information). It is known that in English the comment is most often located at the beginning of an utterance, while in Russian it is at the end. Communicative dynamism grows toward the words that determine the type of insect, in this case a beetle (бомбардир, щелкун, click, bombardier), which form the comment; beetle / жук is the topic in both languages. For the addressee, it is more important to obtain information not about the insect order (the order of beetles), but about the type of insect, which differentiates it from the rest of the representatives of this order.
The composition of the thematic group is also replenished through back formation (four percent of the composition of the thematic group). These are the borrowed insectophones термит and москит, which have lost their endings and been assimilated in Russian.
Conclusion
In English, the analyzed thematic group is replenished with the help of affixless word formation. In Russian, insectophones formed by conversion were not detected. This is understandable, since conversion is not the leading way of word formation in the Russian language.
Thus, in the English language, four methods for the formation of insectophones are revealed: compounding, suffixation, back formation and conversion. The most productive is stem composition. The same word-formation methods, with the exception of conversion, are found in Russian. In Russian, the most common way of forming insectophones is suffixation.
In both languages, insectophones formed using diminutive suffixes, which subsequently lost the meaning of subjective assessment, were identified. In compound words in English and Russian, the place of the repeating element is different (in English - in the second place, in Russian - in the first one). | 3,929.8 | 2020-01-01T00:00:00.000 | [
"Linguistics"
] |
Image Quality Comparison between Digital Breast Tomosynthesis Images and 2D Mammographic Images Using the CDMAM Test Object
Abstract: Purpose To evaluate the image quality (IQ) of synthesized two-dimensional (s2D) and tomographic layer (TL) mammographic images in comparison to the 2D digital mammographic images produced with a new digital breast tomosynthesis (DBT) system. Methods: The CDMAM test object was used for IQ evaluation of actual 2D images, s2D and TL images, acquired using all available acquisition modes. Evaluation was performed automatically using the commercial software that accompanied CDMAM. Results: The IQ scores of the TLs with the in-focus CDMAM were comparable, although usually inferior to those of 2D images acquired with the same acquisition mode, and better than the respective s2D images. The IQ results of TLs satisfied the EUREF limits applicable to 2D images, whereas for s2D images this was not the case. The use of high-dose mode (H-mode), instead of normal-dose mode (N-mode), increased the image quality of both TL and s2D images, especially when the standard mode (ST) was used. Although the high-resolution (HR) mode produced TL images of similar or better image quality compared to ST mode, HR s2D images were clearly inferior to ST s2D images. Conclusions: s2D images present inferior image quality compared to 2D and TL images. The HR mode produces TL images and s2D images with half the pixel size and requires a 25% increase in average glandular dose (AGD). Despite that, IQ evaluation results with CDMAM are in favor of HR resolution mode only for TL images and mainly for smaller-sized details.
Introduction
Digital mammography has many advantages over classic screen-film mammography, due to the wide dynamic range and the processing capabilities of digital mammography systems, especially for the dense/glandular breasts of younger women [1]. The advent of Digital Breast Tomosynthesis (DBT) further enhanced the benefits of digital mammography over classic mammography and storage phosphor plate (CR)-based digital mammography [2-6]. DBT systems acquire a number of 2D projections, while the X-ray tube moves in an arc around 0 degrees (perpendicular to the detector, or Z-axis), sweeping an angle ranging from ±12.5 degrees up to ±25 degrees. The sweep angle and the movement/exposure mode vary between manufacturers (continuous or step-and-shoot). The transmission data from the DBT data set are processed to produce 2D tomographic layer (TL) images, parallel to the detector level (usually 1 mm thick), also referred to as DBT slices or focal planes. In this way, any existing lesions in the various layers can be imaged in focus, while the over- and underlying structures are blurred, thus increasing the detection efficiency. Additionally, a synthesized 2D image (also called synthetic 2D; s2D) is generated by post-processing the original DBT data set, which mimics the actual 2D projections acquired in digital mammography [2]. DBT was initially introduced as an adjunct to 2D mammography, but later it was proposed that DBT can replace one of the 2D views (the mediolateral) or even both [2,7-9]. The original idea behind s2D images was that achieving similar image quality (IQ) to the actual 2D images would dispense with the need to perform 2D mammography in addition to DBT. It should be mentioned that the dose of DBT is comparable to the dose of 2D mammography; thus, abolishing the need for the latter would be very beneficial for the patient.
A variety of image metrics and phantoms have been employed to quantify the IQ achieved in clinical practice with the various available mammography systems. The phantoms for digital mammography systems introduced by the American College of Radiology (ACR) are still used as an IQ benchmark for assessing 2D screening of a mammography system [10,11]. However, for more elaborate IQ evaluations, the use of the CDMAM phantom is considered as the gold standard [12,13]. According to the study of Mackenzie et al. [14], the clinical effectiveness of mammography for the task of detecting calcification clusters was found to be correlated with the IQ assessment using the CDMAM phantom. Therefore, it was concluded that IQ assessment using CDMAM is justified as a surrogate for assessing the cancer detection performance of mammography systems. However, it should be noted that for non-calcification lesion detection, such a correlation was not established [14].
Regarding IQ evaluation in DBT, the Protocol for the Quality Control of the Physical and Technical Aspects of Digital Breast Tomosynthesis Systems [15] (henceforth referred to as the DBT QC protocol) acknowledges that the current phantoms designed for evaluating IQ in 2D mammography cannot be used to assess image reconstruction. Furthermore, they should not be used for performance comparisons between different models, because they do not include mammographic backgrounds and exhibit disadvantages when used on DBT systems. However, it is also acknowledged that, until 3D phantoms have been developed and validated, the current 2D phantoms can be used for stability assessment and quantification of some aspects of IQ in DBT. In the same document, it is specifically noted regarding CDMAM that: (a) although the IQ evaluation results of CDMAM DBT images have not been extensively validated (since the methods and software used to convert automated analysis into predicted human values are validated for 2D images only), such evaluation of DBT images may be a useful interim tool for monitoring the IQ stability of DBT; (b) CDMAM DBT images may require special processing before automated reading; and (c) the EUREF performance limits for 2D systems are not applicable for DBT [15]. Despite these reservations, a number of published reports (e.g., the NHS Breast Screening Programme Equipment Report series) have used CDMAM for the evaluation of DBT images in the same way as in 2D mammography systems.
In the present study, the CDMAM phantom and its accompanying software were used for IQ evaluation of a new DBT mammography system, in all 2D and DBT acquisition modes available for clinical use. The results of IQ evaluation of 2D projections, and s2D and TL images, were compared to the relevant EUREF acceptable and achievable limit values [12].
CDMAM Phantom Description
The CDMAM version 3.4 phantom (Artinis Medical Systems, Elst, The Netherlands), whose radiographic appearance is shown in Figure 1, consists of a 0.5 mm thick aluminum base, on which are attached gold disks of various thicknesses (0.03 to 2 µm) and diameters (0.06 to 2 mm), and is enclosed in a PMMA cover. Starting from the upper left corner of the image, a matrix rotated by −45 degrees contains columns with gold disks of constant thickness and progressively smaller diameters, and rows with constant diameters and progressively increasing thickness. Each one of the 205 square matrix elements contains two disks: one in the center of the square and one in the periphery. The peripheral disks are located in one of the 4 corners of each of the matrix elements, following a random pattern. The phantom is enclosed between Polymethylmethacrylate (PMMA) plates of 1 cm thickness (two above and two below), with dimensions 18 cm × 24 cm. The phantom is considered equivalent to 5 cm of PMMA and 6 cm of compressed breast (50% glandular−50% adipose). More details about the phantom characteristics can be found in the phantom user manual and relevant literature [13,16].
Image Quality Evaluation Using the CDMAM Phantom
The main task of the CDMAM phantom is to correctly detect the peripheral disk in each square matrix element by selecting the correct corner of the square (thus, there is a 25% probability that this may be done correctly by chance). For the automatic scoring of digital phantom images, the phantom manufacturer offers the CDMAM 3.4 Analyser software v2.3 (henceforth called "software"), with a very user-friendly graphical user interface (GUI). After importing the images in DICOM format and adjusting the image rotation and pixel intensity relationship sign (if default values do not work), automated scoring of all images together or one by one is performed. It must be mentioned that image quality scores with the CDMAM phantom may vary depending on the relative position of the phantom's gold disks in respect to the image receptor elements. For this reason, 8 to 16 images should be acquired for each acquisition protocol and CDMAM should be slightly moved between exposures. The automatic scoring of the phantom produces 4 basic outputs, an example of which is shown in Tables 1 and 2 and Figure 2a-c.
The first output consists of two tables. The first table (see Table 1) contains the values of the image quality figure IQFinv and the total detected scores for each individual image, together with the respective average scores (for all images). The IQFinv is defined by the following equation:

IQFinv = 100 / Σ_{i=1}^{16} (d_i · t_thr,i)

where, for each column i (of the 16 columns) of the phantom, with diameter d_i, t_thr,i is the respective threshold gold thickness. For completely visible or invisible columns, the smallest or the largest disk diameter is used, respectively. Smaller threshold thickness values, which denote better IQ, decrease the denominator value, thus increasing the value of IQFinv. The second table presents the average threshold values of gold thickness (automatic, predicted human, and fit-to-predicted human) in relation to the gold disk diameters (Figure 2a). For IQ evaluation, the fit-to-predicted human threshold (last row of the table) is used, especially the values for 0.1, 0.25, 0.5, and 1 mm diameter disks. These values are compared with the acceptable and achievable values given by EUREF as performance limits, which have been set for the above four disk diameters, as presented in Table 3. The smaller the threshold gold thickness, the better the IQ [14].

The second output (shown in Figure 2a) is the contrast detail score diagram, which consists of a gridline representing the matrix of the phantom, with red dots, pink dots, and vacant gridline intersection positions, which denote respectively the correct detection of both the central and peripheral disk, only one of them, and neither of them. The number of red dots expressed as a percentage of the total number of squares (205 plus the 2 missing corners of the phantom, 0.03 µm/2 mm and 2 µm/0.06 mm, which are counted as detected when both their neighbors are detected) is the total detected score (%) shown in Table 1.
The third output (shown in Figure 2b) is a graph with the contrast-detail curves for each individual image (thin colored lines) and the respective average curve (thick blue line) for all images. The fourth (shown in Figure 2c) is the average psychometric detection probability (data points and fitted curves) for all images in relation to gold disk thickness, for disk diameters from 0.1 to 1 mm. More details about the software, the scoring procedure, and the theoretical background of the IQ evaluation with CDMAM can be found in the referenced literature [12,13,16-20].
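As an informal illustration of the two scalar indices described above, the following Python sketch computes IQFinv and a total detected score from hypothetical scoring results (it is not the CDMAM Analyser implementation, and the numerical values are placeholders):

```python
import numpy as np

# Hypothetical per-column results from an automated reading: the 16 disk
# diameters of the phantom (mm) and the corresponding threshold gold
# thicknesses (um) at which the disks are just detected.
diameters_mm = np.array([2.0, 1.6, 1.25, 1.0, 0.8, 0.63, 0.5, 0.4,
                         0.31, 0.25, 0.2, 0.16, 0.13, 0.1, 0.08, 0.06])
t_thr_um = np.array([0.03, 0.03, 0.04, 0.05, 0.06, 0.07, 0.09, 0.11,
                     0.14, 0.18, 0.25, 0.36, 0.55, 0.80, 1.40, 2.00])

# IQFinv = 100 / sum_i(d_i * t_thr,i): smaller thresholds shrink the
# denominator and therefore raise IQFinv, i.e. better image quality.
iqf_inv = 100.0 / np.sum(diameters_mm * t_thr_um)

# Total detected score: fraction of matrix elements for which both the central
# and the peripheral disk were found (boolean map over the valid elements).
detected_both = np.random.rand(205) < 0.7          # placeholder detection map
total_detected_pct = 100.0 * detected_both.mean()

print(f"IQFinv = {iqf_inv:.1f}, total detected = {total_detected_pct:.1f}%")
```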
Mammography System and Acquisition Modes
The mammography system evaluated was a Fujifilm Amulet Innovality (Software version: FDR-3000 AWS V9.1). This specific model was recently installed in a public hospital in Greece and, unlike its predecessor model, it allows DBT acquisitions with both high-resolution (HR) mode and standard (ST) resolution mode, using iterative reconstruction algorithms (ISR) for s2D images and TL image formation. In ST mode, the sweep angle is 15 degrees (−7.5° to 7.5°) and the pixel size of both s2D and TL images is 100 µm, whereas in HR mode, the sweep angle is 40 degrees (−20° to 20°) and the pixel size of both s2D and TL images is 50 µm (the same as in 2D acquisition mode). For both ST and HR DBT acquisition modes and for the 2D acquisition mode, two dose modes are available: the N-mode (normal dose) and the H-mode (high dose).
Sets of eight 2D images were acquired using the N-mode, H-mode, and the four DBT modes (N-mode (ST), H-mode (ST), N-mode (HR), and H-mode (HR)) available for the clinical practice. All images were acquired with the small compression paddle (18 cm × 24 cm). It must be noted that TL and s2D images of the CDMAM phantom from the DBT system were scored in the original format and no additional processing was applied, so as to reflect the IQ using the same processing conditions as those in clinical practice. For all images, the mammography system information, technical parameters, and exposure conditions reported later in the figure legends and the table were derived using free software named DICOM Info Extractor, which facilitates the automatic extraction of the DICOM header information [21].
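The DICOM Info Extractor tool itself is not documented here; as a rough, generic sketch of the kind of header read-out it performs, the widely used pydicom library can pull comparable acquisition parameters (the attribute keywords below are standard DICOM keywords, but which of them a given vendor populates is an assumption, and the file name is hypothetical):

```python
import pydicom

def dump_acquisition_info(path):
    """Print a few acquisition parameters from a mammography DICOM header."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    for keyword in ("Manufacturer", "ManufacturerModelName", "KVP",
                    "Exposure", "BodyPartThickness", "CompressionForce"):
        # Dataset.get() returns the default when the element is absent
        print(f"{keyword}: {ds.get(keyword, 'n/a')}")

dump_acquisition_info("cdmam_2d_hmode.dcm")  # hypothetical file name
```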
To investigate the impact of the compression paddle height setting on exposure factors and breast average glandular dose (AGD) in Fujifilm Amulet Innovality and the effect of field size, two additional sets of CDMAM images were acquired using the auto 2D and DBT acquisition modes (only 1 exposure per acquisition mode) with the compression paddle positioned at 60 and 45 mm, and one more set (only 1 exposure per acquisition mode) with the large compression paddle (24 cm × 30 cm) positioned at 60 mm.
Results
In the following, the results of the IQ evaluation of the new Fujifilm Amulet Innovality DBT system are reported in terms of the fit-to-predicted human gold thickness (TFit) values. These are shown in Figures 3-5 and in Table 4, where, along with the IQ evaluation results, the exposure factors, AGD, and pixel size of the 2D and DBT images are reported. All acquisitions were performed with the compression paddle set at 60 mm and manually selected exposure factors to match the respective exposure factors selected by the AEC system for imaging 50 mm of PMMA plates with the compression paddle set at 60 mm.

Figure 3 shows the contrast-detail curves obtained from the 2D images. It can be observed that the curves obtained using H-mode and N-mode practically coincide for detail diameters in the range 0.3 to 0.6 mm. However, it is obvious that H-mode offers better IQ according to the respective IQFInv and total detected values, as shown in Table 4. Both curves lie below the achievable EUREF curves (with the exception of the first point of the curve for N-mode).
Figure 4 depicts the contrast-detail curves obtained from the s2D (called S-view) images. It is apparent that s2D images have inferior IQ, as only H-mode (ST) nearly satisfied the EUREF acceptable value limits (except for the first data point, namely for 0.1 mm diameter details). It is noticeable that HR mode produced s2D images of lower quality (larger threshold thicknesses) than ST mode, for both N-mode and H-mode. Again, an increase in IQ scores using H-mode compared to N-mode was observed irrespective of the disk diameter size, although only for the ST resolution mode. The respective IQFInv and total detected values shown in Table 4 verified that HR s2D images are inferior to ST s2D images and that the increase in IQ using H-mode is more pronounced for ST mode.
Finally, Figure 5 shows the contrast-detail curves obtained from the TL (tomographic layer) images. It must be noted that for each DBT acquisition mode, at least five TL images around the actual position of the CDMAM phantom aluminum base were scored. The results made evident that the phantom's base was best focused at a height of 22 mm above the breast support table (TL22), which corresponded to the 23rd image of the DBT image set, since image numbering starts from the TL image that corresponds to a layer height of 0 mm (the surface of the support table). Scores were maximum for the TL22 images and deteriorated for layer images above or below this plane.
In Figure 5, it can be seen that TL images had very good IQ, as all four curves satisfied the EUREF acceptable value limits. In fact, a few scores, e.g., for H-mode (ST) and for diameters 0.25 and 0.5 mm, were even better than the respective scores of the 2D images. Furthermore, and partially in contrast to what was observed for s2D images, for TL images the HR mode resulted in reduced threshold thicknesses (i.e., increased detectability) compared to the ST mode, by ~30% for disk diameter 0.1 mm (both for N- and H-modes) and 10% for disk diameter 0.25 mm (N-mode). For other gold disk diameters, the HR mode resulted in increased threshold disk thicknesses, by up to about 17%. For TL images obtained with standard resolution, the H-mode resulted in higher IQ scores compared to the N-mode. A similar trend was observed for HR mode, except for the smallest disk diameters, where the H-mode resulted in a slightly larger threshold thickness than the N-mode. The respective IQFInv and total detected values shown in Table 4 verified that the IQ of TL images increases with H-mode for both resolution modes, and suggested that TL images with HR are superior to those obtained with ST mode.

As mentioned in the footnote of Table 4 and the legend of Figure 5, the results for H-mode (HR) are based on only one image. The remaining seven images of the set produced erratic results, an example of which is shown in Figure 6. Unlike the contrast detail score diagram shown in Figure 2b, where, as expected, both the central and peripheral disks of larger thicknesses and diameters are detected first, and disks of smaller diameters and thicknesses are progressively missed, in Figure 6 disk detection follows a rather random pattern. The reason why these images were rejected could not be explained. It was initially thought that this could be attributed to wrong phantom positioning in the small field, but visually the images were perfect, gold details were conspicuous, and there were no missing areas of the phantom. Moreover, it was rather strange that the respective images at the other focal planes (i.e., TL20, TL21, TL23, and TL24) did not produce erratic results. However, since the same problem was observed with the TL22 image acquired with H-mode (HR) and the 24 cm × 30 cm compression paddle, it became clear that the problem was not the field size. It must be noted that the results of the single H-mode (HR) image scores were considered reliable, because similar results were obtained for two more images acquired with auto-dose mode and the compression paddle positioned at 45 and 60 mm.
From the additional sets of CDMAM images acquired using the auto modes, it was seen that, for CBT = 60 mm, the automatically selected exposure factors with CDMAM were practically identical (the mAs were only 2-4% lower) to those determined using 50 mm of PMMA and CBT = 60 mm. Therefore, in this DBT system, the CDMAM does not increase the exposure factors, and CDMAM phantom images can also be acquired using the AEC mode. With CBT set at 45 mm, the automatically selected kVp was 1 kV less than that with CBT = 60 mm, for both 2D and DBT acquisitions. For 2D acquisitions, the mAs and AGD values were respectively 27% and 31% larger. By comparison, for the DBT acquisition modes, the mAs and AGD values were only increased by 1 to 5%.
Discussion
Concerning the DBT images, it was seen that the IQ of tomographic layers of the CDMAM phantom is, in general, comparable (although usually slightly inferior) to that of 2D images and satisfies the EUREF acceptable value limits. However, the IQ of s2D (S-view) images was lower than that of TL images and did not satisfy the EUREF limits. A noticeable observation was that although HR acquisition mode, in comparison with ST mode, resulted in images with half the pixel size, it worsened the IQ of s2D images; in contrast, for TL images, improvement was seen only in the detection of smaller disk diameters (<0.4 mm for N-mode and <0.2 mm for H-mode). It must be also noted that for DBT acquisitions with H-mode (HR), the AGD is 3.15 mGy (as can be seen in Table 4), higher than the limiting value of 3 mGy for DBT (60 mm breast) [15]. Finally, it was seen that the use of H-mode (ST) instead of N-mode (ST) results in better quality 2D images (for disk diameters <0.5 mm), and better s2D and TL images (for all disk diameters), at the expense of an increase in AGD (40% for 2D and 24% for DBT). Table 4 shows that the two additional IQ indices calculated by the software, IQF Inv and total detected (%), are both larger in H-mode than in N-mode, and increase in HR mode compared to ST mode for TL images but decrease for s2D images. Overall, larger IQF Inv and total detected values were observed for the 2D images produced with H-mode and the second-largest values for TL images acquired with H-mode (HR).
As previously mentioned, in the DBT QC protocol, concerns about the suitability of the CDMAM phantom for IQ evaluation of DBT images have been expressed [15]. Therefore, the results of this study should be interpreted with caution and are not intended to be used to demote the actual diagnostic IQ of TL or s2D images. However, the reason that s2D images and, in part, tomographic images are inferior to the original 2D images of the CDMAM could be attributed to the fact that s2D images and tomographic images are the result of complex reconstruction procedures applied to the DBT data set, which inevitably introduce some inaccuracies, unlike the 2D projections, the production of which is quite straightforward.
Concerning the performance comparison of s2D and 2D images, in a study by Stacampiano et al. [22], where the CDMAM phantom was used, it was shown that the IQ of s2D images from a Hologic DBT system (called c-view) were clearly inferior to the IQ of 2D images. Indeed, the contrast-detail curve for s2D images was well above the acceptable EUREF curve, whereas for 2D images, most parts of the contrast-detail curve were below the achievable EUREF curve. In the same study, the IQ inferiority of s2D compared to 2D images was also documented using other phantoms. Nelson et al. [23], using the ACR and a novel 3D anthropomorphic phantom, concluded that s2D images from a Hologic Selenia Dimensions DBT system, although providing enhanced visualization of medium and large microcalcification objects, provided poorer overall resolution and noise properties. Indeed, it was reported that 50% to 70% of ACR phantom images failed to satisfy the ACR accreditation requirements, primarily due to fiber breaks. The results of both of these studies are in agreement with the results of the present study. In contrast, Wahab et al. [24], based on the results of a comparison of FFDM (2D) and s2D images of actual breast images, concluded that radiologists interpreting s2D and FFDM digital mammography images have similar frequencies of detection of calcifications and BIRADS assessment, and, therefore, a synthetic 2D mammogram may be a sufficient replacement for FFDM at screening.
Digital 2D mammography is the current standard, as far as screening mammography is concerned, but DBT is gaining ground in clinical practice and there have been many studies presenting the benefits of DBT in the detection of cancer over 2D mammography, based on some of which, FDA approval was initially granted for the use of DBT in clinical practice [2]. However, the evolution of DBT continues and some manufacturers have already incorporated iterative reconstruction techniques (as in the DBT system evaluated in this study) instead of filtered back projection, to improve the IQ of tomographic and s2D images [2]. Since most radiologists have been trained in and are accustomed to relying on 2D images for diagnosis, the need to meet the demand for high-quality s2D images remains imperative. However, it should be stressed that s2D images are not intended to be a standalone examination like 2D mammography and should be always interpreted along with the tomographic layer images [2].
Although, in this study, s2D images (and partly TL images) of the CDMAM phantom exhibited inferior IQ compared to the respective 2D images, this does not mean that DBT alone may not be adequate for diagnosis. Unlike the CDMAM phantom, where all the details are found within a layer of just 0.5 mm, real breasts contain structures critical for diagnosis that extend over several layers within the compressed breast. Therefore, the diagnostic benefits that arise from the separation of superimposing layers in clinical practice, in comparison to 2D mammography, cannot be fully assessed with the CDMAM phantom. However, the fact that, in this study, TL images exhibited better IQ scores than the s2D images (although both were produced utilizing the same DBT data set), is an indication of such an advantage of tomographic images.
Finally, it is worth mentioning that image enhancement techniques based on deep learning have started to emerge. For instance, a very recent report [25] describes the utilization of a convolutional neural network (CNN) for image denoising, based on PCA sparsity estimation, which has been applied to cerebral microbleed detection in susceptibility weighted magnetic resonance images. Despite the effectiveness of similar methods on certain imaging modalities, the purpose of our work was to assess the quality of phantom mammographic images acquired under conditions identical to the acquisition of clinical images. The possible application of several image enhancement algorithms on clinical s2D and DBT images of all kinds of lesions (cancerous, non-calcified, etc.) and the subsequent measurable effect on the quality of the CDMAM phantom images is very important and requires extensive further work.
Conclusions
The automatic evaluation of CDMAM phantom images acquired with a DBT system demonstrated that 2D images exhibit better IQ than synthesized 2D images and, in most cases, than tomographic images. Tomographic layers clearly exhibited better IQ than synthesized 2D images and satisfied the EUREF limits, unlike the synthesized 2D images, which presented inferior IQ compared to the EUREF requirements; these requirements are currently applicable only for actual 2D projections. For both TL and s2D images, improvement in IQ was observed when H-mode was used instead of N-mode. In contrast to expectations, HR mode only resulted in improvement in IQ in TL images, and mainly for small diameter-sized details, whereas for large-diameter details, the opposite effect was observed. Furthermore, HR mode produced inferior s2D images compared to ST mode for all detail sizes. | 7,537.6 | 2022-08-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Parallelized shifted‐excitation Raman difference spectroscopy for fluorescence rejection in a temporary varying system
Abstract A fluorescence background is one of the common interference factors in Raman spectroscopic analysis in the biological field. Shifted‐excitation Raman difference spectroscopy (SERDS), in which a slow (typically 1 Hz) modulation of the excitation wavelength is coupled with a sequential acquisition of alternating shifted‐excitation spectra, has been used to separate Raman scattering from excitation‐shift insensitive background. This sequential method is susceptible to spectral change and thus is limited only to stable samples. We incorporated a fast laser modulation (200 Hz) and a mechanical streak camera into SERDS to effectively parallelize the SERDS measurement in a single exposure. The developed system expands the scope of SERDS to include temporally varying systems. The proof of concept is demonstrated using highly fluorescent samples, including living algae. Quantitative performance in fluorescence rejection and the robustness of the method to dynamic spectral change during the measurement are demonstrated.
| INTRODUCTION
In recent decades, the application of Raman spectroscopy to biological and medical studies has been expanded vastly, especially in the field of a living cell and tissue characterization [1][2][3][4][5][6]. One of the key driving forces of this recent expansion is the drastic advancement in the various chemometric data mining techniques for data decompositions, clustering and discriminant analyses [7][8][9][10][11]. During the mining processes of Raman spectroscopic data set, the purity of the data is one of the decisive factors of the analytical performance. Inclusion of irrelevant signals such as a background to the input data set often disturbs the analysis and alters the output, which could lead to a vague or even wrong conclusion. Therefore, preclusion of the extraneous signal prior to the actual analysis is necessary for achieving a reliable outcome. In a biological application of Raman spectroscopy, fluorescence from either endogenous or exogenous biomolecules is usually the primary source of background. Although the magnitude of fluorescence can be greatly reduced by tuning the excitation laser to longer wavelength [12], the complete suppression of fluorescence from bio-originated samples is extremely difficult even by 1064 nm excitation [13,14]. Hence, the development of techniques which facilitates the separation of Raman signal from the fluorescence background is deemed important. Currently, the most frequently used branch of fluorescence eliminating techniques is an ad hoc baseline subtraction by polynomial curve fitting [12,15]. These methods attribute slowly varying spectral components to fluorescence based on the tendency of its broader bandwidth than Raman peaks. However, this assumption does not always hold for biological samples because ubiquitous biomolecules, such as proteins, DNAs and polysaccharides, are polymers, and these biopolymer Raman spectra exhibit broadened bandwidth and unclear baselines in the fingerprint region due to the presence of numerous vibrational modes and to their structural diversity. Moreover, the order of polynomial functions is often arbitrarily (or statistically) chosen so as to make the resultant spectrum have the flattest baseline. There is usually little physical justification in the process; thus, they are considered as less objective and expedient methods. Also, as we will show later in the result section, polynomial subtraction methods are not always compatible with chemometric analyses and may introduce an additional complication to the data set.
To overcome the above issues, a number of fluorescence rejection methods based on solid physical principles, such as temporal [16,17] or spectral response differences [18-20], have been proposed. Of these methods, shifted-excitation Raman difference spectroscopy (SERDS) [19] and its variants [20-24] are based on the principle that Raman and fluorescence spectra respond differently to small shifts in the excitation wavelength; Raman bands shift concertedly with the excitation light, whereas fluorescence does not. Previous studies have shown the effectiveness of the principle in extracting Raman signals from various fluorescent subjects, including cultured cells [23], human tissue [25] and tooth [24]. Despite these successes, the application of SERDS methods to biological samples is still technically limited. The samples are constrained to be stable, giving a stationary response. This restriction exists because these methods consecutively record a series of Raman spectra excited by shifted wavelengths, and the samples are required to be unchanged during the whole set of measurements. In this sequential scheme, the spectral acquisition rate determines the wavelength switching frequency, which is typically on the order of seconds or longer. However, for many biological samples, especially live ones, spectral changes may occur on a much faster time scale through photobleaching [25,26] and through the physical movement of components in and out of the laser focus.
In this study, we took a further step to expand the scope of the SERDS technique to include unstable, dynamic samples by effectively parallelizing the measurement. Herein, we first outline the principle of the developed parallelized SERDS (P-SERDS). Then, proof-of-concept studies using a stationary fluorescing artificial dye solution and dynamic living algae cells are reported. High quantitative performance in fluorescence rejection, as well as the robustness of the method to dynamic spectral change during signal accumulation, is demonstrated. Finally, we demonstrate the superior compatibility of the proposed method with multivariate chemometrics in comparison with the conventional fluorescence removal method using polynomial baseline fitting.
| PARALLELIZED SHIFTED-EXCITATION RAMAN DIFFERENCE SPECTROSCOPY
Similar to other SERDS instruments, the P-SERDS utilizes a wavelength tunable laser for fast wavelength switching. Parallelization of the measurement is realized by synchronization of the fast laser tuning with a modulation of a mechanical streak camera. The mechanical streak camera is composed of a galvano mirror and a spectrograph, in which the galvano unit is used to control the height of the signal light entering the spectrometer. Without invoking a detector readout, the device can designate and modulate the vertical position on the two-dimensional detector pixels onto which the signal light is exposed. By driving the streak device to switch detection pixels repeatedly in conjunction with the laser tuning, it is possible to record multiple spectra excited by the different laser wavelengths in parallel in a single exposure.
The advantage of the present parallel configuration is the independence of the spectral acquisition rate and the modulation frequency. Long (seconds to minutes) exposures for weak Raman scatterers can be performed, while employing orders of magnitude faster wavelength switching frequency (up to 200 Hz). Owing to this fast wavelength switching, a dynamically changing sample may be considered virtually unchanged during a single set of shifted-excitation in 5 ms. Slower changes in the spectrum during the total exposure will be recorded equally in all the parallelly accumulated spectra, therefore will not affect the subsequent fluorescence separation.
We note that, recently, the concept of parallelization of SERDS has also been demonstrated by Sowoidnich et al. [27,28] independently, based on a totally different implementation from the present study. They modified the detector circuits to electrically manipulate the accumulated charges on the detector. Rejection of fluorescence and environmental background such as ambient light has been demonstrated.
| MATERIALS AND METHODS
3.1 | Experimental setup

3.1.1 | Optical setup

Figure 1A shows a schematic of the developed system. The light source was a CW wavelength tunable diode laser equipped with an external cavity (TOPTICA Photonics AG, DL Pro, 488 nm). First, the beam diameter of the laser output was expanded by a telescope and then introduced into an inverted microscope (Olympus, IX-71) by a dichroic filter (LPD01-488RU; Semrock). The incident light was focused onto a sample by an objective (40×, NA 1.3, oil immersion), and the backward-scattered Raman signal was collected by the same objective. The backpropagating signal was separated from the excitation laser by transmitting through the dichroic filter, and then focused onto a confocal pinhole at the microscopic imaging plane by a second objective. After passing through the pinhole, the signal was re-collimated and transmitted through a pair of Rayleigh rejection filters (488 LPF; Iridian Spectral Technologies), then guided into a mechanical streak camera unit for the recording of Raman spectra. The laboratory-built streak unit was composed of a uniaxial galvano mirror (VM500plus; Cambridge Technology), a focusing lens, a polychromator (iHR-320; HORIBA Jobin Yvon) and a charge-coupled device (CCD) detector (Spec-10 2 KB-EV/LN; Princeton Instruments). The galvano mirror was placed at the back focal plane of the focusing lens to the polychromator with its scanning axis aligned to the opening of the entrance slit of the spectrometer.
| Synchronization of the laser and the mechanical streak camera
The output wavelength of the laser system and the vertical detection position of the streak unit were both controlled by external voltage signals. The laser tuning and the streak scanning were synchronized by the application of concerted voltage waveforms produced by a PC-controlled function generator (NI USB-6343; National Instruments), shown in Figure 1B.
Waveforms were designed so that each spectrum excited at different wavelengths was separately recorded at each designated vertical position of the CCD pixels.
Here, we note that although the DL Pro laser system was capable of mode-hop free continuous frequency tuning over a 60 GHz (0.05 nm or 2 cm −1 ) range with the help of a built-in feed-forward adjustment circuit, we deliberately operated the laser without the feed-forward function to achieve repeated mode-hopping between two discrete frequencies with a wider wavelength difference. By optimizing the center voltage and the modulation amplitude of the input damped square waveform, we managed to generate a mode-hopping cycle between two wavelengths of ca. 0.1 nm (5 cm −1 ) difference with a 5 ms period (200 Hz), which was appropriate for P-SERDS experiments.
We applied the same modulation frequency of 200 Hz to the streak unit. In order to avoid spectral mixing due to timing jitter, the input waveform was cycled through three voltage levels with unequal timing. The higher and lower voltage levels, of longer duration, were used to direct the signal light onto the regions of interest (ROIs) of the CCD readout, while the short-duration intermediate level was used to steer the emission outside the ROIs during the laser wavelength transition, so that it was discarded. The typical spatial distribution of charges on the detector after a single P-SERDS exposure is represented in Figure 1C, in which the two rectangles designate the ROIs. The vertical pixels within each region are binned before the digitization step to yield a single spectral output for each ROI.
FIGURE 1 A, Schematic of the developed system. DM, dichroic mirror; LLF, laser line filter; LPF, long-wavelength pass filter; OBJ, objective; P, pinhole. B, Input voltage waveforms for the laser and the streak camera modulations. C, A spatial distribution of accumulated charges on charge-coupled device (CCD) detector pixels after a single parallelized shifted-excitation Raman difference spectroscopy (P-SERDS) exposure. White rectangles mark the regions of interest for the detector readout.
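As a minimal sketch of how the two synchronized 200 Hz command waveforms could be tabulated before being written to the analogue outputs of the function generator, the snippet below builds one modulation period with NumPy (the voltage levels, duty fractions and sample rate are illustrative assumptions, not the instrument settings actually used):

```python
import numpy as np

fs = 100_000             # analogue-output sample rate (Hz), assumed
f_mod = 200              # laser / streak modulation frequency (Hz)
n = fs // f_mod          # samples in one 5 ms modulation period
frac = np.arange(n) / n  # phase within the period, 0..1

# Laser piezo command: two-level (square) waveform whose levels are tuned so
# that the external-cavity laser mode-hops between lines ~0.1 nm apart.
v_lo, v_hi = 1.0, 1.6                       # assumed voltage levels
laser_wave = np.where(frac < 0.5, v_lo, v_hi)

# Galvano (streak) command: two long dwell levels that steer the signal onto
# ROI 1 and ROI 2 of the CCD, separated by short guard segments that park the
# beam outside both ROIs while the laser wavelength is switching.
dwell, guard = 0.40, 0.10                   # fractions of the period, assumed
roi1, park, roi2 = -0.5, 0.0, +0.5
galvo_wave = np.select(
    [frac < dwell, frac < dwell + guard, frac < 2 * dwell + guard],
    [roi1, park, roi2],
    default=park,
)

# Both arrays would then be looped continuously on two synchronized analogue
# output channels (e.g. of the NI USB-6343) for the duration of the exposure.
```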
| Parallelized shifted-excitation Raman difference spectroscopy measurement conditions
The P-SERDS measurements of artificially prepared dye samples were performed under the synchronized modulation frequency of 200 Hz, with the excitation wavelength shift of 0.07 nm (3.0 cm −1 ). The average laser power at the sample and the exposure duration were 1.0 mW and 60 seconds, respectively. For the time-lapse P-SERDS measurements of algae cells, the excitation wavelength shift and the laser power were set to 0.13 nm (5.3 cm −1 ) and 2.7 mW, respectively. Other measurement parameters were the same as those for the dye samples.
| Spectral analysis
All the data analysis, except for multivariate curve resolution-alternating least squares (MCR-ALS), was performed using laboratory-developed analysis programs on Igor Pro 7 or 8 (Wavemetrics, Inc.). MCR-ALS with the non-negativity constraint was performed using the scikit-learn package (sklearn.decomposition.NMF, version 0.20.3) [29] in Python.
| Correction and calibration
Prior to SERDS analysis, the acquired spectra were subjected to spectral preprocessing for correction and calibration. The spectral abscissa of each ROI was individually calibrated to wavelength by reference to the atomic emission lines of a neon lamp, and to Raman shift by the Raman peaks of indene. The spectra were then individually processed for dark count subtraction and pixel sensitivity correction by white light, and then resampled by interpolation onto a common wavelength series with a constant wavenumber interval (2.2 cm −1 ). Resampling was necessary because the raw spectra from different ROIs were sampled at slightly different wavelength points.
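A minimal sketch of the resampling step, assuming each ROI already carries its own wavelength calibration (the 2.2 cm −1 grid spacing follows the text; function and variable names are illustrative):

```python
import numpy as np

def resample_rois(wl1_nm, counts1, wl2_nm, counts2, step_cm=2.2):
    """Interpolate both ROI spectra onto one common grid of constant
    wavenumber spacing so that they can later be subtracted point by point."""
    wn1, wn2 = 1e7 / wl1_nm, 1e7 / wl2_nm   # nm -> absolute wavenumber (cm^-1)
    lo = max(wn1.min(), wn2.min())
    hi = min(wn1.max(), wn2.max())
    grid = np.arange(lo, hi, step_cm)
    # np.interp requires monotonically increasing x, so sort each axis first
    s1, s2 = np.argsort(wn1), np.argsort(wn2)
    y1 = np.interp(grid, wn1[s1], counts1[s1])
    y2 = np.interp(grid, wn2[s2], counts2[s2])
    return grid, y1, y2
```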
| Differential spectra
After preprocessing, each pair of Raman spectra measured simultaneously in a single exposure was subtracted one from the other and divided by the excitation frequency shift in wavenumber units (typically 3-5 cm −1 ) to yield a differential Raman spectrum. A reconstructed Raman spectrum was calculated by a simple one-dimensional numerical integration of the differential spectrum, wherein the integration constant was chosen so that the minimum value of the output spectrum became zero. Note that this choice of integration constant assumes that the observed Raman spectra contain a region with no Raman bands in the observation window. This assumption is generally valid when the "silent" region is included. However, to avoid ambiguity in the discussion, the reconstructed spectra are mainly used for the qualitative evaluation of Raman responses in this study. An alternative reconstruction algorithm based on a deconvolution calculation [20] without an apodization function was also applied to the P-SERDS results but yielded poorer results owing to a massive increase in the noise level. A P-SERDS time-lapse spectral data set is constructed from a time-lapse series of observed spectra by converting every parallel measurement pair into a differential spectrum.
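A minimal sketch of the differential-spectrum and reconstruction steps described above, assuming the paired spectra are already on a common grid (as in the previous sketch); the simple cumulative sum stands in for the one-dimensional numerical integration:

```python
import numpy as np

def serds_difference(y1, y2, shift_cm):
    """Differential spectrum: paired subtraction scaled by the excitation
    shift expressed in wavenumbers (typically 3-5 cm^-1 here)."""
    return (y1 - y2) / shift_cm

def reconstruct(diff_spectrum, step_cm=2.2):
    """Integrate the differential spectrum; the integration constant is chosen
    so that the minimum of the reconstructed spectrum becomes zero."""
    recon = np.cumsum(diff_spectrum) * step_cm
    return recon - recon.min()
```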
| Multivariate decomposition analyses
Principal component analysis (PCA) mathematically decomposes the input data, based on its covariance structure, into a set of principal components (PCs). Since the noise in the data has little covariance with the physical signals, the process can effectively separate the signals from the noise. The number of significant (non-noise) PCs was estimated based on a statistical test of the noise characteristics of both the loading (spectral) and score (temporal) vectors [30]. MCR-ALS analysis of denoised-reconstructed P-SERDS data was performed by the following procedure. First, the number of significant PCs was estimated. Then, denoised P-SERDS data were calculated by matrix multiplication of the truncated score and loading matrices, retaining only the significant components. These denoised P-SERDS data were transformed to reconstructed Raman data by numerical integration; then, after the negative values in the matrix were clipped to zero, the data were subjected to MCR-ALS.
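A minimal sketch of this denoise-then-decompose pipeline is given below. It uses scikit-learn's PCA for the truncation step, whereas the study used laboratory-developed Igor Pro code for PCA; the number of retained components (n_sig) is assumed to come from the significance test described above:

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

def denoise_and_resolve(diff_matrix, n_sig, step_cm=2.2):
    """diff_matrix: (n_spectra, n_points) array of differential P-SERDS spectra."""
    # 1) Denoise by keeping only the significant principal components
    pca = PCA(n_components=n_sig)
    scores = pca.fit_transform(diff_matrix)
    denoised_diff = pca.inverse_transform(scores)

    # 2) Reconstruct Raman spectra by cumulative integration, offset to zero minimum
    recon = np.cumsum(denoised_diff, axis=1) * step_cm
    recon -= recon.min(axis=1, keepdims=True)

    # 3) Clip residual negatives and decompose under a non-negativity constraint
    recon = np.clip(recon, 0.0, None)
    nmf = NMF(n_components=n_sig, init="nndsvd", max_iter=1000)
    temporal_profiles = nmf.fit_transform(recon)   # concentration-like profiles
    component_spectra = nmf.components_            # resolved spectral profiles
    return temporal_profiles, component_spectra
```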
| Background subtraction
Fluorescence rejection by polynomial baseline fitting was performed using the procedure outlined in the literature [15].
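For reference, a minimal sketch of a typical iterative ("modified") polynomial baseline routine of this kind is shown below; it follows the general fit-and-clamp idea and is not necessarily the exact procedure of reference [15] (polynomial order and iteration count are assumptions):

```python
import numpy as np

def poly_baseline_subtract(x, y, order=4, n_iter=100):
    """Iteratively fit a polynomial baseline: points above the current fit are
    clamped to it so that Raman peaks are progressively excluded from the fit."""
    work = y.copy()
    fit = work
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, order)
        fit = np.polyval(coeffs, x)
        work = np.minimum(work, fit)
    return y - fit   # background-subtracted spectrum
```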
| Samples
Fluorescein (Wako Special grade) and ethanol (Guaranteed Reagent grade) were both purchased from Wako Pure Chemical Corporation and used as received. The algae cells, Chlamydomonas sp. JSC4, isolated from the coast of Southern Taiwan, were obtained through the courtesy of Dr. Shih-Hsin Ho and Dr. Liang-da Chiu. The cells were cultured in modified Bold 6 N medium [31], and then treated with a 3-day N-depletion environmental stress condition for the accumulation of lipids and carbohydrates [32].

| RESULTS AND DISCUSSION

We examined the fluorescence rejection capability of our proposed method using artificially prepared samples that exhibit a stationary fluorescence background. Figure 2A shows spectra of the dilute fluorescent dye, fluorescein, in ethanol at different concentrations (0, 80, 240, 400, 800, and 2000 nmol dm −3 ), observed by the P-SERDS method. The spectra are plotted against wavelength (bottom axis) and, for reference, the corresponding Raman shift for one of the excitation wavelengths is displayed on the top axis. The inset displays an expanded view of the graph for the 2000 nmol dm −3 sample, in which paired spectra with shifted vibrational bands are made visible. As expected, each pair of spectra almost perfectly overlaps except for the small Raman peak shifts due to the excitation wavelength shift. Every spectrum exhibits the characteristic vibrational bands of ethanol at 432, 882, 1051, 1094, 1274, 1452 and 1480 cm −1 . On top of these bands, a broad fluorescence background rises concertedly with dye concentration. Figure 2B shows differential spectra calculated from the paired spectra in Figure 2A. All the spectra manifest substantial signals of dispersive band shape, and the spectral patterns are the same for all the samples irrespective of their dye concentrations, indicating adequate fluorescence rejection by the proposed method. For the evaluation of qualitative Raman responses, reconstructed spectra were calculated by numerical integration of the differential spectra and are shown in Figure 2C. The reconstructed spectrum of the neat ethanol sample (0 nmol dm −3 ) was identical to the originally observed Raman spectrum. Hence, we conclude that no Raman vibrational information was lost by the P-SERDS analysis. In the case of fluorescent samples, slight band shape distortions with a fluctuating baseline become more apparent in the reconstructed spectra as the dye concentration increases. Nevertheless, all the reconstructed spectra present the characteristic peak pattern of ethanol and are proven to be useful for qualitative analysis, such as general band assignments and spectral matching. The possible origins of the band shape distortion are discussed in a later paragraph.
In order to validate the quantitative Raman response in the differential spectra, we plotted the ethanol peak intensity at 882 cm −1 (estimated by the minimum-to-maximum intensity difference of the dispersive curve) against the background-to-signal ratio (B/S) of the original spectra. The result is shown in Figure 3, together with the results obtained by the ad hoc background subtraction method of baseline fitting with second- to fifth-order polynomials. For the sake of comparison, each plot was normalized with respect to the peak intensity of the neat solvent at B/S = 0. We assume the ethanol concentration to be approximately constant because, for all of our samples, the dye concentrations were dilute. Therefore, the Raman response plot should ideally show a constant value of unity. Our P-SERDS result (filled circles) exhibits an almost constant value of unity regardless of the background level, suggesting a robust quantification performance against the interfering background. On the other hand, the polynomial subtraction methods show results (open symbols) that depend strongly on the choice of polynomial order. When the polynomial order is low (2, 3), the estimated Raman intensity follows a linear dependence on the B/S ratio. This behavior is due to the presence of a residual background component in the Raman spectra after the background subtraction. Although the use of a higher polynomial order (4, 5) seems to resolve the problem, the result demonstrates one of the inherent drawbacks of the method, in which the analysis is susceptible to the choice of analytical parameters.
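A minimal sketch of the peak quantification used for this comparison (the min-to-max amplitude of the dispersive 882 cm −1 band in the differential spectrum, normalized to the neat-solvent value); the window limits are illustrative:

```python
import numpy as np

def dispersive_amplitude(shift_axis, diff_spectrum, lo=850.0, hi=920.0):
    """Min-to-max intensity difference of a dispersive band in a SERDS
    differential spectrum, evaluated inside a fixed window (cm^-1)."""
    window = (shift_axis >= lo) & (shift_axis <= hi)
    band = diff_spectrum[window]
    return band.max() - band.min()

# Relative Raman response plotted against the background-to-signal ratio:
# response = dispersive_amplitude(axis, sample_diff) / dispersive_amplitude(axis, neat_diff)
```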
Next, we applied our P-SERDS method to living cells in order to demonstrate its feasibility to temporally unstable samples. We chose algae cells for this purpose. Microalgae are a large group of unicellular species possessing a photosynthesis system. They are considered as a prospective source of biofuel production, and much research has been devoted to engineering novel or enhanced synthesis pathways of energy-rich metabolites, lipids and carbohydrates in algae [33]. Recently, we have reported the application of Raman spectroscopy to algae characterization for a label-free and non-destructive evaluation of biofuel production [32]. In our previous study, high fluorescence background from the photosynthesis system as well as the inhomogeneous and dynamic intracellular structures were major limiting factors which degrade the speed and accuracy of the quantification. In the present study, we applied the P-SERDS method to individual algae cells for a fast global (average) Raman spectral acquisition by continuously and randomly moving the laser spot inside the cell during a single exposure for one minute. The measurement was consecutively repeated 30 times for the same cell to record a time-lapse Raman spectra series of a single alga, and the whole measurement was replicated for six individual cells.
Time-lapse spectra series (Figure S1) of a single algae cell observed by repeated Raman measurement over a 30-minute duration show a drastic temporal change in the background level due to fluorescence bleaching. Large intensity drops between the consecutively acquired spectra indicate that this dynamic spectral change had taken place even during each one-minute exposure. We also anticipated spectral change during an exposure due to the continuous laser spot movement inside the spatially inhomogeneous cell. Despite such highly dynamic measurement conditions, the P-SERDS method successfully recorded identical baseline profiles for the parallelly accumulated spectral pairs by virtue of the use of high-frequency modulation of the excitation. Average spectra of the whole time-lapse series from a single cell are presented in Figure 4A, together with the corresponding differential spectrum (B) and the reconstructed spectrum (C). In the differential spectrum, fluorescence was effectively canceled out, and distinct dispersive bands originating from genuine Raman signals were unequivocally observed. The reconstructed spectrum exhibited a number of characteristic peaks of biomolecules, possibly proteins, lipids and glycogens, readily usable for further qualitative and quantitative analysis. Possible assignments of some of the prominent bands are as follows: the amide I band of proteins at around 1650 cm −1 , a band at 939 cm −1 , and C-O and C-C stretching and C-H bending bands at 1155, 1083 and 863 cm −1 [34-36].

FIGURE 4 Observed Raman scattering (A), differential (B) and reconstructed (C) spectra of a single alga. The blue and red spectra in A are average Raman spectra excited by different wavelengths (487.36 and 487.23 nm, respectively) and are plotted on a common wavelength axis. The corresponding Raman shift for the blue spectrum is displayed on the top axis. Differential intensity is abbreviated as diff. intensity.
Next, to assess the efficacy of fluorescence rejection in multivariate analysis, we constructed a P-SERDS time-lapse spectral data set and subjected it to PCA. We also applied polynomial baseline subtraction to the same time-lapse series. This polynomial-subtracted data set, as well as the original time-lapse series with the fluorescence left in, was also processed by PCA for comparison. Table 1 summarizes the number of significant components identified for each data set. Loading spectra of the first four PCs of the P-SERDS, original and polynomial-subtracted data sets are presented in Figure 5A–C, respectively.
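A minimal PCA sketch for such a time-lapse data set (rows = acquisitions, columns = spectral channels) is shown below; the variance threshold used to call a component significant is a placeholder, not the criterion used in this work.

```python
import numpy as np
from sklearn.decomposition import PCA

def significant_components(spectra, variance_threshold=0.01):
    pca = PCA().fit(spectra)                  # PCA mean-centres internally
    ratios = pca.explained_variance_ratio_
    n_significant = int(np.sum(ratios > variance_threshold))
    return n_significant, pca.components_     # components_ = loading spectra
```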
The PCA revealed that the original observed time-lapse data set of a single alga is somewhat more complicated than expected, carrying three independently varying components (Figure 5B). In general, direct interpretation of individual PCs is not straightforward because PCs merely represent statistical characteristics of the data set, and their relationship to chemical species or biological structures is indirect. Physically relevant components often appear as mixtures in the PCs. In our case, we could only speculate on the physical nature of two of the variations, one being a temporally decreasing (bleaching) fluorescence and the other an (almost) stationary Raman signal of the cell. The presence of the third component suggests that there is additional variation in the data set, which may be either a second fluorescence component with different bleaching kinetics or a second Raman component with non-stationary kinetics. However, because of the mixing of components in the PCs, further analysis based only on the PCA result was hampered. This situation illustrates a limitation of multivariate techniques, where the presence of unwanted additional components complicates the analysis.
In contrast to the result without fluorescence removal, the PCA of the P-SERDS data set showed the benefit of prior fluorescence rejection through a much simpler composition, yielding only one significant component (Figure 5A). This sole component resembles the differential spectrum in Figure 4B, except for its sign, which is determined arbitrarily, suggesting that the fluorescence contribution was effectively reduced to below the noise level by the P-SERDS calculation and that a single Raman spectral pattern was recorded throughout the time-lapse series. From the decrease in the number of significant components from the original data (3) to the P-SERDS data (1), we inferred that the two eliminated varying components were two fluorescence spectra with different kinetics.
Unlike the P-SERDS result, the polynomial fitting approach (Figure 5C) did not resolve the issue. The number of significant components did not change after treatment with low-order polynomials and, even worse, increased when higher-order polynomials were employed (Table 1). This result is most likely due to the inaccuracy of the baseline estimation by the fitting: the PCA exposed the residual fluorescence and the artifactual components introduced by over- or under-subtraction of the baseline.
Diff. intensity, differential intensity; PC, principal components.
Next, we further extended our study to demonstrate the application of P-SERDS to assessing the cell-to-cell individuality of algae by multivariate resolution techniques. The time-lapse Raman spectral series of six individual algae cells were concatenated to form a large matrix. This total data set was then subjected to PCA. As a result, we found two significant components (Figure S2). The presence of multiple Raman components in the PCA is an indication of cell-to-cell variation in the biomolecular composition ratio. In order to determine the relevant biomolecular species, we employed the MCR-ALS method with a non-negativity constraint [11] on a denoised, reconstructed P-SERDS data set derived from the PCA result. MCR-ALS is an alternative to PCA for finding a bilinear matrix decomposition of the input data matrix. Through the non-negativity constraint, which reflects the physical nature of Raman scattering, the method tends to find a chemically interpretable decomposition [37]. The user must supply the number of components; in the present case, we designated two components based on the PCA result.
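A bare-bones sketch of an MCR-ALS-style decomposition with a non-negativity constraint is given below; real implementations use proper non-negative least squares, normalisation and convergence checks, so the clipping used here is only a crude stand-in.

```python
# Decompose D (n_spectra x n_channels) into concentrations C (n_spectra x k)
# and pure-component spectra S (k x n_channels) by alternating least squares.
import numpy as np

def mcr_als(D, k, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    S = np.abs(rng.standard_normal((k, D.shape[1])))
    for _ in range(n_iter):
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T   # solve D ~ C S for C
        C = np.clip(C, 0.0, None)                        # crude non-negativity
        S = np.linalg.lstsq(C, D, rcond=None)[0]         # solve D ~ C S for S
        S = np.clip(S, 0.0, None)
    return C, S
```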
The decomposed Raman spectra obtained from the MCR-ALS analysis, presented in Figure 6, display characteristic peaks of proteins and glycogens (top) and of saturated lipids (bottom), with only small spectral crosstalk. The lipid spectrum exhibits a single sharp peak at 1295 cm−1, as well as a triplet at 1128, 1099 and 1061 cm−1, which is the signature of the all-trans C–C chain conformation of saturated fats in the solid phase [35], suggesting the presence of highly condensed lipid packing in some of the cells. The result indicates a diversity in the ratio of lipids to other biomolecules among the cell population, which may reflect individuality in lipid production and storage ability. We note that PCA of the original observed and polynomial-subtracted data sets yields larger numbers of significant components (Table 1) owing to fluorescence interference. Because of the complications raised by the increased number of components, further MCR-ALS analyses failed to generate as clear a resolution of the varying biomolecular species as was possible with the P-SERDS data.
Next, we remark on the applicability of the P-SERDS method to various Raman measurements. The method can also be used to reject non-fluorescent background, for example, spontaneous emission from the sample and ambient light from the environment, as long as it is independent of the excitation wavelength shift. The essential devices, the tunable laser and the galvano mirror unit, can easily be incorporated into a working single-focus Raman spectroscopic system equipped with a two-dimensional detector, with minimal modification to the optical layout. The method is therefore not limited to biological investigations and applies to a wide variety of research subjects whenever interference from background is unavoidable.
Finally, we discuss some of the limitations of the present P-SERDS method. First, as is partly seen in Figure 3, P-SERDS suffers from the common drawback of difference spectroscopy: the signal-to-noise ratio is generally poor because the method focuses specifically on variations in the signal rather than on the signal amplitude itself. In the case of a static background, if reliable background spectra can be obtained by, for example, a separate measurement, simple background subtraction may deliver better results. Second, since the spectra accumulated in parallel propagate along different optical paths after the streak camera, asymmetry in the optical configuration, possibly due to misalignment and aberrations, introduces additional differences between the observed spectra. These differences can be reduced by careful correction and calibration of the data but may not be completely canceled out in the subtraction step. In practice, the error caused by this artificial difference is negligibly small in the differential P-SERDS spectra, but it may be exaggerated in the reconstructed spectra, producing distorted Raman band shapes and a fluctuating baseline, and could therefore affect qualitative analysis. We found that aberration in the imaging spectrometer was the primary source of this error in our setup. Restricting the observation ROIs to the central part of the imaging plane, where the aberration is best corrected, helped to reduce the distortion. Alternatively, the recently proposed lock-in CCD-based SERDS technique [28] circumvents the aberration issue at the expense of a reduced effective number of CCD pixel rows.
| CONCLUSION
In this study, we have developed P-SERDS through the synchronized operation of a mechanical streak camera with a wavelength-tunable light source. The parallel detection scheme proposed by the present method enables effective and accurate fluorescence rejection from temporally varying systems whose dynamics are much faster than the spectral acquisition rate, thus extending the scope of the SERDS technique to broader and more practical biological applications. The highly accurate background rejection performance of P-SERDS for such dynamic samples is demonstrated to be beneficial, especially for multivariate analysis of Raman data sets, by reducing the number of retained components. This simplification of the analytical result not only helps in further quantitative and qualitative assessments but also promotes the development of clearer biochemical insight into the target system. We believe that the present method has enormous potential for boosting the reliability and applicability of Raman spectroscopy in biological research, and thus for empowering related discrimination, diagnosis and quantification applications in the near future.
F I G U R E 6 Decomposed Raman spectra derived from multivariate curve resolution-alternating least squares of the parallelized shifted-excitation Raman difference spectroscopy (P-SERDS) reconstructed data set | 6,780.2 | 2019-08-28T00:00:00.000 | [
"Biology",
"Chemistry",
"Engineering"
] |
Timing Does Matter: Nerve-Mediated HDAC1 Paces the Temporal Expression of Morphogenic Genes During Axolotl Limb Regeneration
Sophisticated axolotl limb regeneration is a highly orchestrated process that requires highly regulated gene expression and epigenetic modification patterns at precise positions and timings. We previously demonstrated two waves of post-amputation expression of a nerve-mediated repressive epigenetic modulator, histone deacetylase 1 (HDAC1), at the wound healing (3 days post-amputation; 3 dpa) and blastema formation (8 dpa onward) stages in juvenile axolotls. Limb regeneration was profoundly inhibited by local injection of an HDAC inhibitor, MS-275, at the amputation sites. To explore the transcriptional response of post-amputation axolotl limb regeneration in a tissue-specific and time course-dependent manner after MS-275 treatment, we performed transcriptome sequencing of the epidermis and soft tissue (ST) at 0, 3, and 8 dpa with and without MS-275 treatment. Gene Ontology (GO) enrichment analysis of each coregulated gene cluster revealed a complex array of functional pathways in both the epidermis and ST. In particular, HDAC activities were required to inhibit the premature elevation of genes related to tissue development, differentiation, and morphogenesis. Further validation by Q-PCR in independent animals demonstrated that the expression of 5 out of 6 development- and regeneration-relevant genes that should only be elevated at the blastema stage was indeed prematurely upregulated at the wound healing stage when HDAC1 activity was inhibited. WNT pathway-associated genes were also prematurely activated under HDAC1 inhibition. Applying a WNT inhibitor to MS-275-treated amputated limbs partially rescued the blastema formation defects resulting from HDAC1 inhibition. We propose that post-amputation HDAC1 expression is at least partially responsible for pacing the expression timing of morphogenic genes to facilitate proper limb regeneration.
INTRODUCTION
Axolotls have a remarkable capacity to regenerate multi-tissue structures. Following limb amputation, the exposed wound is covered rapidly by a specialized epithelium derived from keratinocytes around the wound periphery (Chalkley, 1954; Ferris et al., 2010). Rather than the cells migrating across the wound surface, this sheet of epithelial tissue is propelled from behind as cells at the periphery take up water and expand in volume (Tanner et al., 2009). Underneath the wound epidermis, progenitor cells aggregate and lead to the formation of a unique structure called the blastema. The blastema is a combination of lineage-restricted and multipotent progenitors that gives rise to the internal structures of the newly regenerated limb (Kragl et al., 2009; Currie et al., 2016; McCusker et al., 2016; Fei et al., 2017). The correct interaction between the epidermis and the underlying soft tissue (ST) is necessary for promoting blastema cell proliferation (Boilly and Albert, 1990) and stump tissue histolysis (Scheuing and Singer, 1957) and for guiding blastema outgrowth (Thornton, 1960). Changes in cellular behaviors correlate with changes in transcription (Geraudie and Ferretti, 1998). The cell lineage- and regeneration stage-specific expression patterns of various morphogenesis genes are critical for successful limb regeneration (Monaghan et al., 2009, 2012; Campbell et al., 2011; Knapp et al., 2013; Stewart et al., 2013; Wu et al., 2013; Díaz-Castillo, 2017). These data suggest that regeneration in axolotls is a highly orchestrated stepwise process requiring precise transcriptional modulation.
Many changes during injury are associated with epigenetic mechanisms that alter chromatin structure and the properties of proteins that function in transcriptional regulation and cell signaling (Paksa and Rajagopal, 2017). One such mechanism, deacetylation of histone proteins by HDACs, makes the regional chromatin structure more compact and has been shown to be important in amphibian regeneration. HDACs have also been associated with promoting growth and proliferation (Glozak and Seto, 2007). Because detailed genomic data were lacking until recently (Nowoshilow et al., 2018), few studies have examined epigenetic mechanisms during axolotl regeneration. A study on Xenopus limb bud regeneration showed that histone modifications are important for regulating genes that maintain intrinsic limb-cell identities (Hayashi et al., 2015). Pharmacological blockade of HDACs reduces HDAC activity and inhibits tail regeneration in Xenopus tadpoles and larval axolotls (Taylor and Beck, 2012). Tseng et al. (2011) also used valproic acid (VPA) and trichostatin A (TSA) to inhibit Xenopus tadpole tail regeneration. The authors observed that notch1 and bmp2, two developmentally regulated genes that are required for Xenopus tail regeneration, were aberrantly expressed upon TSA treatment, consistent with the idea that HDAC activity is required during regeneration to regulate gene expression. In our previous study (Wang et al., 2019), we observed biphasic expression of nerve-mediated HDACs during the wound healing and blastema formation stages of axolotl limb regeneration. Furthermore, we reproducibly demonstrated an obvious reduction in cell proliferation and a prominent absence of limb regeneration upon treatment with two HDAC inhibitors, MS-275 and TSA, suggesting that HDAC activity is necessary for successful limb regeneration after amputation.
To elucidate the potential mechanisms underlying the roles of HDACs in amphibian appendage regeneration, we took advantage of our axolotl limb regeneration model to analyze alterations in the transcriptome at the wound healing and blastema formation stages when HDAC activity was inhibited by MS-275. We performed unsupervised clustering of genes exhibiting similar expression patterns during early limb regeneration in the epidermis and soft tissue (ST), with or without MS-275 treatment. Using gene set enrichment analysis (GSEA), we found that post-amputation elevation of HDAC expression is required for correct gene expression timing. Notably, genes involved in tissue development, differentiation, and morphogenesis were prematurely enriched at the wound healing stage upon MS-275 treatment. Q-PCR and functional analysis of candidate genes and pathways in independent animals were performed to further validate the necessity of HDAC1 activity in preventing premature activation of regeneration stage-dependent gene expression.
Transcriptome Profiling of the Epidermis and ST During Early Limb Regeneration Upon HDAC1 Inhibition
In our previous study, we demonstrated two waves of HDAC1 expression post-amputation, corresponding to the wound healing stage and the period of blastema formation. Inhibiting HDAC activity with the HDAC inhibitor MS-275 or TSA impaired blastema formation and the subsequent limb regeneration ability (Wang et al., 2019; Supplementary Figure 1). To investigate further the transcriptional programming underlying the inhibition of regeneration by MS-275, we performed deep RNA sequencing during early axolotl limb regeneration using the Illumina sequencing platform. Gene expression in two types of tissue (the epidermis and the underlying ST) treated with DMSO (as a control) or MS-275 at 0 dpa (homeostatic control), 3 dpa (wound healing stage), and 8 dpa (blastema formation stage) was profiled (Figure 1A). Tissue samples from 4 limbs of two animals at each time point and for each treatment condition were pooled as one biological replicate. In total, 40 limbs from 20 animals were used for the transcriptome analysis.
Because the axolotl genome was incomplete, we mapped sequencing reads to the axolotl transcriptome (Nowoshilow et al., 2018; Supplementary Figure 2A). An average of 78% of sequencing reads from each sample could be aligned to known axolotl transcripts (Supplementary Table 1). To obtain more functional information for each gene, we reannotated the axolotl transcripts against the UniProt database (see Transcriptome Reannotation in the Methods).
Figure 1. The HDAC inhibitor MS-275 was injected into the amputation site every other day after limb amputation to study the effect of HDAC1 inhibition on transcriptome composition during early-stage limb regeneration. Two biological replicates each of the epidermis and soft tissue (ST) at 0, 3, and 8 days post-amputation (dpa) were collected from HDAC1-inhibited and vehicle control animals. The epidermis and the underlying soft tissues were separated from the most distal part (2 mm) and collected at 3 and 8 dpa, corresponding to the wound healing and blastema formation stages, respectively, and compared to the homeostatic control samples collected immediately after amputation.
To identify biological pathways important for critical stages of limb regeneration and those that were interfered with by HDAC inhibition, we grouped the genes from each Gene Ontology (GO) term as a unit to perform Gene Set Enrichment Analysis (GSEA) with the following comparisons. As summarized in Figure 2A, we compared genes from each GO term between the wound healing stage (3 dpa, with or without MS-275 treatment) and the homeostatic control (0 dpa) and between the blastema formation stage (8 dpa, with or without MS-275 treatment) and the homeostatic control (0 dpa). There were 1,133 GO terms significantly enriched (FDR < 0.05) in at least one of the comparisons of each regeneration stage and treatment to homeostasis. As shown in Figure 2A, we used heatmaps to illustrate the activity of pathways and biological processes at different regeneration stages. We observed that the expression of genes related to transcriptional regulation and cytoskeletal organization was increased at both 3 and 8 dpa compared to 0 dpa, whereas these changes were not observed under HDAC1 inhibition (Figure 2A and Additional File A). These differences suggest that inhibition of HDAC1 impaired the active responses to amputation and the signaling regulation characteristic of the normal early regeneration stages. The expression of genes associated with lymphocyte migration, on the other hand, was aberrantly upregulated by HDAC1 inhibition (Figure 2A and Additional File A). Notably, biological functions such as developmental growth and tissue morphogenesis, which were enriched only at 8 dpa vs. 0 dpa under normal regeneration, were observed earlier, at 3 dpa with MS-275 treatment vs. 0 dpa, in the ST (Figure 2A and Additional File A). Thus, it seems that the inhibition of limb regeneration by blocking HDAC may be due to the right process occurring at the wrong time. In addition, many pathways associated with regeneration, such as developmental cell growth, mesenchymal cell differentiation, embryonic morphogenesis, and neuron differentiation, were enriched at both 8 dpa and 8 dpa with MS-275 treatment vs. 0 dpa but were enriched earlier at 3 dpa with MS-275 treatment vs. 0 dpa (Figures 2A,B and Additional File A). Taken together, the post-amputation surge in HDAC activity is required for the correct expression timing of genes involved in tissue development, differentiation, and morphogenesis in the ST.
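As an illustration of a pre-ranked GSEA comparison of this kind (the actual analysis used the R package clusterProfiler; the Python package gseapy shown here is only a stand-in, and the gene scores and GMT file name are placeholders):

```python
# Hypothetical pre-ranked GSEA for one comparison (e.g. 3 dpa vs. 0 dpa).
# `signed_scores` maps gene symbols to a signed ranking metric (see Methods);
# "go_bp.gmt" is a placeholder gene-set file of GO biological processes.
import pandas as pd
import gseapy

signed_scores = {"MMP13": 4.2, "SOX9": 3.1, "TWIST1": 2.8, "COL1A1": -1.5}  # toy values
ranking = pd.Series(signed_scores).sort_values(ascending=False)

res = gseapy.prerank(rnk=ranking, gene_sets="go_bp.gmt",
                     permutation_num=1000, outdir=None, seed=42)
print(res.res2d.head())  # normalised enrichment scores and FDR per gene set
```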
Cluster Transition in the ST in Response to HDAC1 Inhibition
To obtain additional evidence that significant changes originally appearing at 8 dpa occurred earlier, at 3 dpa, with MS-275 treatment, we divided genes into clusters based on the trend of their expression across 0, 3, and 8 dpa. We first performed differential expression analysis between time points and considered only the genes that showed significant differential expression (p-value < 0.05) in at least one of the pairwise time-point comparisons (namely, 0 vs. 3 dpa, 0 vs. 8 dpa, and 3 vs. 8 dpa) with DMSO or MS-275 treatment (Additional File B and Supplementary Table 2). A total of 37,798 genes were subjected to unsupervised clustering analysis by Mfuzz (Supplementary Table 3 and Additional File C) and were assigned to eight groups according to the temporal pattern of their expression (Figures 3A,B). We summarized the proportions (Figure 3C), exact numbers (Supplementary Table 3), and identities (Additional File C) of genes with stage-dependent expression patterns in the ST that switched among the defined clusters after MS-275 treatment.
Figure 3 (panels E–H): (E) Heatmap of the expression values of selected genes whose expression transitioned from cluster 4 to cluster 5 after MS-275 exposure. (F) Enrichment maps of gene sets significantly enriched (FDR < 0.05) by genes whose expression transitioned from cluster 4 to cluster 6 after MS-275 exposure. (G) Heatmap of the expression values of selected genes whose expression transitioned from cluster 4 to cluster 6 after MS-275 exposure. (H) Enrichment maps of gene sets significantly enriched (FDR < 0.05) by genes whose expression transitioned from cluster 4 to cluster 7 after MS-275 exposure. The red-to-white color gradient for each GO node indicates the significance of the enrichment for that particular GO term (red being more significant).
In Figures 3E,G, we highlight genes that exhibited cluster transitions from C4 > 5 and C4 > 6, respectively. Several of these genes have been identified as critical during the early stages of appendage regeneration. The expression of most of these regeneration-correlated regulators is thought to be significantly upregulated at the blastema formation stage, at day 8 of normal regeneration (Monaghan et al., 2012; Currie et al., 2016). However, under HDAC1 inhibition, these genes were expressed too early, i.e., during the wound healing stage (mmp13, fgfr1, twist1, sox9, tgfb2, pdgfrb, msx2, fn1, sall4, and tgfb3). To validate these diverse gene expression patterns, we performed Q-PCR on independent animals (3 biological replicates for each condition; one biological replicate comprised 4 limbs from 2 animals; in total, 60 limbs from 30 animals were included in this validation study). Five out of the 6 genes tested exhibited similar premature elevation patterns when treated with MS-275, as indicated by the sequencing analysis (Supplementary Figure 3). Taken together, these results suggest that HDAC1 plays an important role in preventing premature elevation of developmental genes post-amputation, ensuring the rigorous timing required for successful blastema formation, which is critical for limb regeneration.
Inhibiting HDAC1 Activity Changes the Expression Pattern of Cell Type-Associated Signature Genes
Intact limbs are composed of various cell types originating from lineages of the epidermis, endothelium, nerves, muscle and connective tissue (CT) (Rivera-Gonzalez and Morris, 2018). The compositions and ratios of these multiple cell types may be key to successful limb regeneration. Differences in the cellular composition of the ST may be one of the possible reasons for failed limb blastema formation when post-amputation HDAC activity is blocked. Because no cell tracking system was available, we adopted a transcriptome-based approach to estimate the cell composition based on the expression levels of the cell type-enriched genes defined by a previous publication (Leigh et al., 2018; Additional File D).
We explored the CT cell composition in the ST during regeneration because CT cells are the most abundant cells contributing to the blastema (Dunis and Namenwirth, 1977; Muneoka et al., 1986; Kragl et al., 2009; Currie et al., 2016). As illustrated in Supplementary Figure 4A, CT cells could be classified into two groups based on their differential abundance across regeneration time points. The first group included fCT I, II, III, and IV cells, the numbers of which decreased at 3 dpa, while the other group contained cycling cells, fCT V cells, periskeletal cells, and tendon cells, the numbers of which increased at 8 dpa. Interestingly, except for the genes representing cycling cells, which exhibited delayed activation, the expression of genes representing the various fCT cells, periskeletal cells and tendon cells was aberrantly upregulated or failed to be downregulated at the wound healing stage under MS-275 treatment (Supplementary Figure 4B). The deduced abnormal cell compositions may be one of the causes of impaired limb regeneration. Furthermore, the delayed expression of genes representing cycling cells also hints that the progress of regeneration was limited.
According to the results of the cell composition analysis, the pattern transitions of fCT I, fCT II, fCT III, and fCT IV cells were similar to those of clusters 2 and 8 (Supplementary Figure 4C), and those of fCT V cells, periskeletal cells, and tendon cells were similar to those of clusters 4 and 6 (Figure 3F). Here, we focused on the transition from cluster 2 to 8 (C2 > 8). Genes enriched in the C2 > 8 transition were involved in various aspects of tissue remodeling, such as cellular component morphogenesis, cell junction organization, extracellular matrix (ECM) organization, and tissue migration, indicating that tissue repair and the promotion of positive matrix remodeling should decline immediately during the early stages of axolotl limb regeneration but that MS-275 treatment delayed this decline (Supplementary Figure 4C). While we cannot completely rule out the possibility that homeostatic tissues were slightly overrepresented in the sampling of MS-275-treated, regeneration-defective tissues, thus resulting in a higher transcript quantity of fCT-associated genes, the premature activation of periskeletal- and tendon-associated genes cannot be explained by such an over-representation of homeostatic tissues. Moreover, based on the morphological observations (Supplementary Figure 1), over-representation of homeostatic tissue in the sampling may take place at 8 dpa but is less likely at 3 dpa. These observations further support the main conclusion that blastema stage-expressed genes were prematurely elevated at the wound healing stage when the repressive histone modifier HDACs were inhibited.
The Post-amputation Expression of HDAC Is Also Required for Correct Gene Expression Timing in the Epidermis
The initiation of blastema formation is dependent on the formation of the epidermis. Open wounds in the axolotl tail are rapidly closed by epidermal cells that migrate from the basal layer of the epidermis (Endo et al., 2004). Although there was no significant delay in wound healing between the control and MS-275 treatment in our limb regeneration model (Wang et al., 2019), a failure of the correct epidermal layers to cover the wound edge may still be an issue. To understand whether the composition of the epidermis may be affected by MS-275, cell composition analysis was performed on the gene expression profiles of the epidermis using epidermal cell-related marker genes from a previous single-cell study (Nowoshilow et al., 2018; Supplementary Figure 5A and Additional File E). We found that expression in epidermal Langerhans cells, the intermediate epidermis and small secretory cells was affected by HDAC1 inhibition, suggesting that the priority of cell migration might be changed (Supplementary Figure 5B).
To identify the genes associated with the potential changes in cell position, clustering analysis of the 39,581 genes differentially expressed in at least one pairwise time-point comparison was performed, and these genes were classified into eight groups (Supplementary Figures 6A,B and Additional File F). The cluster transition from cluster 4 to 7 (C4 > 7) was consistent with expression in epidermal Langerhans cells, the intermediate epidermis and small secretory cells (Supplementary Figures 5B, 6C, Supplementary Table 4, and Additional File G). Immune-related biological functions, including the inflammatory response, leukocyte-mediated immunity, leukocyte differentiation, and leukocyte migration, were enriched among genes that transitioned from C4 > 7 in the epidermis (Supplementary Figure 6D). Similarly, the GO terms associated with responses to wounding and with leukocyte activation and migration were enriched only at 8 dpa vs. 0 dpa under normal regeneration but were enriched earlier, in the wound healing stage, in MS-275-treated wound epidermis at 3 dpa vs. homeostatic epidermis at 0 dpa (Supplementary Figure 7 and Additional File H). Taken together, nerve-mediated HDAC1 activity is necessary for pacing gene expression at the right stage of regeneration in both the ST and the wound epidermis.
Wnt Inhibitor Partially Rescued MS-275-Induced Blastema Formation Defects
The current study identified many developmentally relevant pathway-associated genes that were expressed earlier when HDAC1 activity was inhibited after amputation. Among them, we selected Wnt signaling pathway-associated genes for a proof-of-principle functional assay. Wnt signaling has been demonstrated to be involved in vertebrate limb regeneration; however, limb development requires spatial-temporal regulation of the Wnt signaling pathway (Yokoyama et al., 2011; Wischin et al., 2017). To examine whether premature elevation of the Wnt pathway is one of the causes of impaired regeneration upon HDAC1 inhibition, we used 15 independent juvenile axolotls and subjected their 30 limbs to 3 different treatments after amputation: control treatment, i.e., injection of vehicle; regeneration-inhibiting treatment, i.e., 25 mM MS-275 alone; and rescue treatment, i.e., 25 mM MS-275 plus 1 µM Wnt inhibitor. Limb regeneration was assessed every day until 26 dpa. A blastema could easily be seen at 8 dpa in the control group (Supplementary Figure 8B), whereas the axolotls treated with MS-275 alone displayed a lack of regeneration (Supplementary Figure 8C). The axolotls treated with MS-275 plus Wnt inhibitor exhibited a smaller blastema than the control axolotls (Supplementary Figure 8D). All control animals regenerated limbs with digits. Out of the 10 MS-275-treated limbs, 9 did not regenerate, and only 1 limb developed into the early bud blastema stage. In the MS-275 plus Wnt inhibitor group, 1 limb failed to regenerate, 8 developed into the blastema stage, and 1 limb further regenerated into the early differentiation stage. It is apparent that the addition of a Wnt inhibitor partially rescued the MS-275-induced blastema formation defects. Optimization of a regeneration stage-dependent Wnt inhibition protocol in HDAC1-inhibited amputated limbs will be necessary to evaluate the full rescue effect. This experimental model is applicable for functionally testing the regeneration stage-sensitive pathways identified in this study.
DISCUSSION
Regeneration in axolotls is initiated after wounding and depends on nerve-derived trophic factors (Vieira et al., 2019). The wound epidermis rapidly migrates and covers the wound within hours. Over the next few days, nerve fibers originating from the amputation plane innervate the wound epidermis, and the signaling loops between the nerve and wound epidermis build a signaling center called the apical epithelial cap (AEC). The AEC produces various signals that promote blastema cell dedifferentiation and proliferation (Makanae et al., 2013). The wound epidermis is a specialized epithelium required for the initiation of blastema formation and limb regeneration (Goss, 1956; Thornton, 1957, 1958; Mescher, 1976; Tassava and Garling, 1979), while its roles during the early stages of limb regeneration and the nerve-mediated HDAC molecular mechanisms mediating its transition remain largely unknown. In previous studies (Hay and Fishman, 1961; Satoh et al., 2012), the epidermis in the region proximal to the wound site remained proliferative after amputation, although at a reduced proliferation rate compared to that of the wound epidermis. Moreover, the initiating epidermal cells that rapidly cover the wound surface are derived from the basal epidermis (Endo et al., 2004; Ferris et al., 2010). We observed premature elevation of the expression of various cell type-associated signature genes when HDAC activity was inhibited (Supplementary Figure 5). Further validation by in situ characterization of marker genes representing the basal epidermis [collagen type xvii alpha 1 chain (col17a1)], proliferating epidermal cells [high levels of proliferating cell nuclear antigen (pcna)], the intermediate epidermis (krt12), and small secretory cells [SSCs; otogelin (otog)] will be needed in the future to clearly demonstrate the HDAC1-associated cell composition changes in regenerating tissues.
The evolutionary loss of complete scar-free regenerative potential for multiple tissue types in mammalian and amphibian models coincides with the development of the immune system (Mescher and Neff, 2005). A successful allograft requires a low rejection response (Kragl et al., 2009; McCusker and Gardiner, 2013; Nacu et al., 2013). The significantly higher success rate of axolotl allotransplantation is likewise associated with a lower immune response. It therefore seems that the occurrence of an immune response suppresses regeneration. Although a strong immune response and regeneration seem to contradict each other, the presence of immune cells is required for regeneration. For instance, regeneration is inhibited when macrophage numbers are low, whereas healthy regeneration occurs when macrophages are present in sufficient numbers (Godwin et al., 2013, 2017). Furthermore, we also found a significant transition in the deduced epidermal cell populations (Supplementary Figure 5B), and genes associated with immunity-related biological functions were expressed much earlier, in the wound healing stage, after MS-275 treatment (Supplementary Figure 6D). Differences in immune responses can tip the balance between scarring and regeneration (Godwin et al., 2013).
Blastema formation requires not only a specialized wound epithelium but also the coordinated proliferation and migration of progenitors derived from muscle, bone, dermal fibroblasts, CT and hitherto undiscovered populations (Muneoka et al., 1986; Kragl et al., 2009; Currie et al., 2016; McCusker et al., 2016; Fei et al., 2017; Nowoshilow et al., 2018). CT contains key cell types for deciphering the molecular program of regeneration, as these cells express factors that guide the regeneration of appropriate limb parts (Carlson, 1975; Pescitelli and Stocum, 1980; Kragl et al., 2009; Nacu et al., 2013). We found that the expression levels of genes associated with mature fibrous CT (fCT) populations (fCT I, fCT II, fCT III, and fCT IV) decreased after amputation (Figure 2A), consistent with a previous publication demonstrating the reduced representation of these cell types at the wound healing stage (Gerber et al., 2018). In contrast, cycling cells were enriched in the early stages of regeneration, and fCT V cells, periskeletal cells, and tendon cells were enriched around the blastema formation stage (Supplementary Figure 4). How CT cell compositions are modulated in the blastema has not been clarified because of the difficulty of isolating and deconstructing blastema cells. Notably, signature genes associated with most CT cell types were highly enriched upon MS-275 treatment, except for those of cycling cells (Supplementary Figure 4). Because blastema progenitors are among the early proliferating cells in stump tissues following amputation, we speculate that these cycling cells were inhibited by the blockade of HDAC1 activity. The reason for the failure of blastema formation may thus be changes in the activities of CT cells. This analysis revealed a general time-dependent progression of genes associated with different CT cell types. Future in situ staining or single-cell studies will be needed to trace the dynamic composition of each cell type.
Many candidate genes have been proposed to play essential roles in blastema initiation and maintenance based on changes in their expression patterns in bulk transcriptome studies of different stages of axolotl limb regeneration (Monaghan et al., 2009; Campbell et al., 2011; Knapp et al., 2013; Stewart et al., 2013; Wu et al., 2013; Voss et al., 2015; Bryant et al., 2017; Gerber et al., 2018). We found that MS-275 interfered with some regeneration-associated genes, which may have directly led to limb regeneration failure (Figures 3E,G). At the molecular level, the expression levels of extracellular matrix (ECM) deposition-related genes, including the spalt-like transcription factor genes (sall1 and sall4) (Stewart et al., 2013; Erickson et al., 2016; Li et al., 2021) and collagens (col1a1, col5a1, and col11a1), and of ECM remodeling-related genes, including matrix metallopeptidase 13 (mmp13) (Rayagiri et al., 2018), were low at the beginning of axolotl regeneration but increased at the blastema formation stage (Figures 3E,G). Twist1 is an early marker of the limb blastema mesenchyme (Kragl et al., 2013). Msx2 is used as a blastema marker gene (Satoh et al., 2007; Suzuki et al., 2005; Makanae et al., 2014). Sox9 is essential for sclerotome development and cartilage formation (Zeng et al., 2002). Notably, PDGFRB was identified as a chemotactic growth factor involved in wound healing or regeneration in an initial screen (Stewart et al., 2013) and was shown by in situ hybridization to be expressed in the mesenchymal blastema (Currie et al., 2016). All of these genes, which should not be upregulated until the blastema stage, were prematurely elevated at the wound healing stage when HDAC1 was inhibited.
Manipulating Wnt signaling activity at different stages of limb regeneration has different effects (Wischin et al., 2017). Chemical activation of Wnt signaling at the wound healing stage inhibits limb regeneration, and treatment administered after the establishment of the blastema and before morphogenesis also causes disorganization of the skeletal elements. In the current study, Wnt signaling was highlighted as one of the GO terms among the DEGs exhibiting cluster transitions from 4 to 5 or 6 (Figures 3D,F), indicating that genes related to the Wnt pathway were activated earlier under MS-275 treatment. We demonstrated partial rescue of blastema formation upon continuous inhibition of Wnt activity under HDAC1 inhibition (Supplementary Figure 8), indicating that aberrant Wnt activity was indeed one of the causes of the regeneration defects induced by MS-275 treatment. Future profiling of Wnt pathway-related gene expression throughout the ∼26 days of limb regeneration in control and MS-275-treated limbs would pave the way to optimizing a stage-specific, dosage-dependent protocol to further rescue HDAC1 inhibition-induced limb regeneration defects. Our preliminary results demonstrate the feasibility of performing limb regeneration rescue assays under HDAC1 inhibition to study the novel pathways that are stage-specifically modulated by post-amputation HDAC activities.
HDAC forms a complex containing distinct components that is believed to carry out different cellular functions, including regulation of the cell cycle (Rayman et al., 2002) and maintenance of stem cell pluripotency (Liang et al., 2008). HDACs control the correct timing of transcriptional programs among tissues and organs in the metamorphosis of anuran amphibians by interacting with thyroid hormone receptors (Shi, 2013). The recruitment of HDAC-containing corepressor complexes is critical for gene repression by unliganded thyroid hormone receptors in premetamorphic tadpoles and liganded thyroid hormone receptors that activate target gene transcription in metamorphosis. Such ligand switching behavior controls dramatic morphological development, and it is possible that regeneration-specific mechanisms may derepress HDAC corepressor complexes during regeneration to regulate transcription temporally. As HDAC1 is highly conserved in other vertebrates, examining whether it plays similar roles in other appendage regeneration models and whether it is differentially activated in non-regenerative systems after major limb injury may be worth exploring.
CONCLUSION
Correct timing of regeneration-associated gene expression is one of the foundations of successful axolotl limb regeneration.
Our study demonstrated the necessity of post-amputation HDAC activity in modulating regeneration stage-dependent gene expression activity. In the current proof of principle study, we demonstrated that inhibiting prematurely elevated Wnt pathway activity under treatment with the HDAC inhibitor MS-275 can partially rescue blastema formation ability. The HDAC1-mediated wound healing stage-specific repression of genes associated with tissue development, differentiation, and morphogenesis may be a prerequisite for blastema formation.
Animal Handling
Axolotls (Ambystoma mexicanum) were reared to juvenile age (12-16 cm snout to tail tip length) for all animal experiments. All axolotls were kept in modified Holtfreter's solution (118.4 mM NaCl, 1.3 mM KCl, 1.8 mM CaCl2, 1.6 mM MgSO4·7H2O). Prior to all experiments, the axolotls were anesthetized in 0.1% MS-222 (Sigma-Aldrich, St. Louis, MO, United States). Limb amputation was performed on the middle upper forelimbs on both the right and left sides, and tissues were harvested at 0, 3, and 8 dpa (Figure 1A). HDAC inhibitor injection into the amputated limbs of juvenile axolotls was performed as described by Wang et al. (2019). Briefly, 2 µl of 25 mM MS-275 (Selleckchem, Houston, TX, United States) was injected into the stumps immediately beneath the wound epidermis of juveniles every other day after amputation. The animal care and experimental procedures were approved by the Institutional Animal Care and Use Committee of National Taiwan University College of Medicine and were conducted in accordance with the approved guidelines.
cDNA Library Preparation and Illumina Sequencing
Tissues from axolotls were subjected to total RNA extraction using TRIzol reagent (Invitrogen, CA, United States) according to the manufacturer's instructions. RNA samples were purified using an RNeasy Mini Kit (Qiagen, Hilden, Germany) and then quantified on a Qubit 4 fluorometer (Invitrogen, CA, United States) and by Qsep100 capillary gel electrophoresis (BiOptic, New Taipei City, Taiwan). All samples had an RNA quality number (RQN) of more than 9.0. To reduce variation among individuals within each group, tissue from both the right and left limbs of two animals at each time point was pooled together as one replicate. Two replicates from 4 animals (8 forelimbs) in total were prepared for each condition at each time point post-amputation. The 2 biological replicates for the 5 conditions (3 dpa and 8 dpa with and without MS-275 treatment plus the homeostatic control at 0 dpa) include tissues taken from 40 limbs of 20 axolotls. Ten RNA samples each from the soft tissue (ST) and epidermis were subsequently used for cDNA library construction and Illumina deep sequencing.
Sequencing libraries were prepared using a TruSeq Stranded mRNA Preparation Kit (Illumina, CA, United States) according to the manufacturer's instructions. Briefly, 10 µg of each total RNA sample was processed via poly-A selection with oligo(dT) magnetic beads and fragmentation. The resulting fragmented mRNAs were then subjected to first-strand cDNA synthesis by reverse transcription with random primers, followed by second-strand cDNA synthesis using DNA polymerase I and RNase H (Invitrogen). Paired-end (PE) oligoadapters (Illumina) were then added to the cDNA fragments with T4 ligase. The resulting cDNA fragments were purified and enriched by polymerase chain reaction (PCR). The cDNA libraries were sequenced on the Illumina HiSeq2000 (Illumina) system, which generates PE raw reads approximately 150 bp in length.
Transcriptome Reannotation
The axolotl transcriptome was obtained from https://axolotl-omics.org and annotated using two approaches: (i) the nucleotide sequences were BLASTed (Altschul et al., 1990) against the UniProt database (BLASTx, e-value threshold 1e-5) and (ii) amino acid sequences were obtained from the open reading frames (ORFs) predicted by TransDecoder and then BLASTed against the UniProt database (BLASTp, e-value threshold 1e-5). Protein names from various organisms were mapped to human gene symbols via the R package biomaRt (Durinck et al., 2005).
Processing and Analysis of RNA-Seq Data
The raw FASTQ files were checked with FastQC (v0.11.7) and trimmed with cutadapt (v2.10). The qualified reads were aligned to the axolotl transcriptome using bowtie2 (v2.3.4) (Langmead and Salzberg, 2012). The expression of each transcript was quantified using RSEM (Li and Dewey, 2011) and presented as log2 TPM (transcripts per million) for further analyses. Normalization across all samples was performed by the trimmed mean of M-values (TMM) method implemented in the edgeR package (McCarthy et al., 2012). Differential expression analysis was performed using the limma package (Ritchie et al., 2015). First, linear models were constructed based on the gene expression profiles and the experimental conditions, including time point and MS-275 treatment, using the lmFit function. Second, the contrasts.fit function was employed to compute estimated coefficients and standard errors for a given experimental comparison. Finally, the empirical Bayes framework implemented in the eBayes function was used to compute the differential expression statistics for all genes. The temporal clusters were determined by the fuzzy c-means algorithm provided by the Mfuzz package (Futschik and Carlisle, 2005). For each tissue, genes with a p-value < 0.05 in at least one of the comparisons, i.e., 3 dpa vs. 0 dpa, 8 dpa vs. 0 dpa, 8 dpa vs. 3 dpa, 3 dpa w/MS-275 vs. 0 dpa, 8 dpa w/MS-275 vs. 0 dpa and 8 dpa w/MS-275 vs. 3 dpa w/MS-275, were considered for clustering analysis, and the time-series profiles with and without MS-275 treatment (namely, [0 dpa, 3 dpa, and 8 dpa] and [0 dpa, 3 dpa w/MS-275, and 8 dpa w/MS-275]) were merged into a single matrix for clustering. To identify the optimal number of clusters, we performed repeated soft clustering for cluster numbers ranging from 4 to 27 and calculated the minimum centroid distance between cluster centers produced by c-means clustering (Schwämmle and Jensen, 2010).
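An illustrative Python analogue of this cluster-number selection is shown below; the actual analysis used the R package Mfuzz, so scikit-fuzzy's c-means is only a stand-in here, and the fuzzifier value m = 1.25 is a placeholder.

```python
# `expr` is a (genes x time points) NumPy array of merged expression profiles.
import numpy as np
import skfuzzy as fuzz
from scipy.spatial.distance import pdist

def min_centroid_distance_per_c(expr, c_values=range(4, 28), m=1.25):
    # Standardise each gene profile, as Mfuzz does before clustering.
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    result = {}
    for c in c_values:
        cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(z.T, c, m, 1e-5, 1000)
        result[c] = pdist(cntr).min()  # minimum distance between cluster centres
    return result
```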
Function Analysis
The gene sets of GO biological processes were obtained from MSigDB (v7.0). Genes that do not exist in the axolotl transcriptome were identified and eliminated from the analysis. Overrepresentation analysis (ORA) and GSEA were performed using the functions implemented in the clusterProfiler package (Yu et al., 2012). For GSEA, the genes were ranked based on the log-transformed p-values derived from the limma test, with the sign set to positive or negative for a fold change of > 1 or < 1, respectively. An enrichment map was constructed using in-house R scripts (R statistical environment version 3.5.2; Additional File I) and was visualized with Cytoscape.
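A minimal sketch of this ranking metric, assuming a data frame with hypothetical columns `p_value` and `log_fc` exported from the limma output, is:

```python
# Signed ranking metric for GSEA: -log10(p) with the sign of the fold change.
import numpy as np
import pandas as pd

def gsea_ranking(stats: pd.DataFrame) -> pd.Series:
    score = -np.log10(stats["p_value"]) * np.sign(stats["log_fc"])
    return score.sort_values(ascending=False)
```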
Cell Abundance Estimation
The transcriptomic markers of the different cell types were defined by previous studies (Gerber et al., 2018; Leigh et al., 2018); the gene lists are summarized in Additional Files 3 and 4 for ST- and epidermis-associated cells, respectively. The relative abundance of each cell type was estimated as the arithmetic mean of the normalized read counts of all its signature genes.
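A minimal sketch of this estimate, assuming a genes × samples table of normalized counts and a dictionary of signature gene lists (both hypothetical names), is:

```python
# Cell-type score = arithmetic mean of the normalized read counts of the
# signature genes for that cell type, computed per sample.
import pandas as pd

def estimate_cell_abundance(norm_counts: pd.DataFrame, signatures: dict) -> pd.DataFrame:
    scores = {}
    for cell_type, genes in signatures.items():
        present = [g for g in genes if g in norm_counts.index]
        scores[cell_type] = norm_counts.loc[present].mean(axis=0)
    return pd.DataFrame(scores).T  # rows: cell types, columns: samples
```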
RT-Q-PCR for Validation
RNA was prepared using TRIzol Reagent (Invitrogen). RNA samples from the epidermis and ST were harvested at 0, 3, and 8 dpa for Q-PCR analysis. At the 0 dpa time point, the proximal 2 mm of the amputated parts were harvested immediately after amputation.
The epidermis and underlying soft tissue were collected separately. Total RNA from the collected tissues was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, United States). Reverse transcription using SuperScript III reverse transcriptase (Invitrogen) was performed at 50 °C. The first-strand cDNAs were diluted 10-fold with nuclease-free water and served as templates for Q-PCR. Reactions were performed in a total volume of 10 µl using the SYBR Green kit (Stratagene, La Jolla, CA, United States) with 0.8 µM of each primer, following the manufacturer's instructions. The sequences of the gene-specific primers (Supplementary Table 4) were designed based on our next-generation transcriptome sequencing data. Q-PCR was performed and analyzed with the ABI StepOne Real-Time PCR System using StepOne software version 2.1 (Applied Biosystems, Foster City, CA, United States).
Local Injection of MS-275 and a Wnt Inhibitor
Among the biological/signaling pathways modulated by the post-amputation surge in HDAC1 activity, the Wnt signaling pathway was one of the pathways determined to be prematurely activated in the wound healing stage when amputated limbs were treated with MS-275. To examine whether the shift in Wnt signaling-associated gene expression from 8 to 3 dpa may be causative of blastema formation defects upon HDAC1 inhibition, we performed rescue experiments with a Wnt inhibitor. Two microliters of 1 µM Wnt inhibitor (Calbiochem, Billerica, MA, United States) was injected simultaneously with MS-275; alternatively, 25 mM MS-275 only or vehicle only was administered. The timing of inhibitor injection into the amputated limbs of juvenile axolotls was determined according to Wang et al. (2019) (Figure 1 and Supplementary Figure 8).
DATA AVAILABILITY STATEMENT
RNA-seq data from this study were deposited in the Gene Expression Omnibus (GEO) under accession number GSE157716.
ETHICS STATEMENT
The animal study was reviewed and approved by the IACUC, NTU.
ACKNOWLEDGMENTS
We thank the staff of the Sequencing Core and Second Core Laboratory, Department of Medical Research, National Taiwan University Hospital, for technical support during the study, and the National Center for High-Performance Computing for computer time and facilities. We would also like to express our gratitude for the constructive comments and English editing from members of the SPL laboratory. | 8,982.4 | 2021-05-10T00:00:00.000 | [
"Biology",
"Medicine"
] |
Flavour anomalies from a split dark sector
We investigate solutions to the flavour anomalies in $B$ decays based on loop diagrams of a "split" dark sector characterised by the simultaneous presence of heavy particles at the TeV scale and light particles around and below the $B$-meson mass scale. We show that viable parameter space exists for solutions based on penguin diagrams with a vector mediator, while minimal constructions relying on box diagrams are in strong tension with the constraints from the LHC, LEP, and the anomalous magnetic moment of the muon. In particular, we highlight a regime where the mediator lies close to the $B$-meson mass, naturally realising a resonance structure and a $q^2$-dependent effective coupling. We perform a full fit to the relevant flavour observables and analyse the constraints from intensity frontier experiments. Besides new measurements of the anomalous magnetic moment of the muon, we find that decays of the $B$ meson, $B_s$ mixing, missing-energy searches at Belle-II, and LHC searches for top/bottom partners can robustly test these scenarios in the near future.
Introduction
Several flavour anomalies have been observed in the last few years in various B-meson decay measurements by different experimental collaborations. Some of the anomalous measurements involve semileptonic b → s transitions and include: (1) the suppression with respect to the Standard Model (SM) expectation of the ratios R_K and R_K* — the ratios of the branching fractions for B-meson decays into a K or K* meson and muons over the decays to the same kaon and electrons — which were observed at LHCb [1-4] and which imply a possible violation of lepton-flavour universality (LFUV); (2) an enhancement in the angular observable P'_5, first measured by the LHCb [5] and Belle [6] collaborations and later observed also by ATLAS and CMS [7,8]; and (3) a suppression in the branching ratios for the decays B_s → φµ+µ− [9] and B → K(*)µ+µ− [10,11].
The solutions based on a light dark sector can be divided roughly into two categories, depending on whether the masses involved lie above or below the characteristic window for LFUV observables, which is commonly identified as ranging roughly from the dimuon threshold up to √6–√8 GeV. The first category involves a new light vector with mass m_V ≳ 2 GeV, coupled to the b–s and muon currents and interfering negatively with the SM amplitude [18,33]. The light particle can contribute strongly to the physical observables thanks to the proximity of a resonance to the experimental bins [33], and NP effects can be parametrised in this case by Wilson coefficients with an explicit q² dependence. Interestingly, the corresponding resonance in the dimuon spectrum from B-meson decay could be hidden by the sizeable hadronic uncertainty and the presence of the J/ψ resonance in the same region [33,37]. A second class of models, featuring instead the exchange of light particles below the dimuon threshold, has been considered in refs. [18,32,34,35]. These scenarios are strongly constrained by a number of observations. On the one hand, a light scalar particle with effective couplings to the s and b quarks and to leptons yields a positive contribution to the decay rate. One thus requires a sizeable coupling to electrons (to enhance the electron channel rather than suppress the muon one), a scenario that is in most cases [34] in tension with the measurement of B → K(*)e+e− processes, which agrees with the SM [38]. On the other hand, a light vector boson with effective couplings to the s and b quarks and to muons can interfere negatively with the SM process but is strongly constrained by the measurement of the anomalous magnetic moment (g − 2)_µ, which limits its coupling to muons. This in turn requires a sizeable coupling to the b and s quarks, leading to strong bounds from B_s mixing. Finally, in ref. [35] it was pointed out that a resonance coupling to electrons can actually be used to reproduce the low-q² bin of R_K*. A vector boson very close to — but below — the dimuon threshold, where BR(B → K(*)V) can be as small as 1 × 10⁻⁷, can explain the anomaly and escape the limits from ref. [38].
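As a sketch of how such a q²-dependent coefficient behaves (the Breit–Wigner propagator is the standard assumption for a narrow vector resonance; the overall normalisation, which involves CKM factors and the operator basis, is collected into a single constant here and is not the paper's), one can write:

```python
# Effective, q^2-dependent C9-like coefficient induced by a light vector V
# exchanged in b -> s mu mu. All dimensionful quantities in GeV.
import numpy as np

def c9_eff(q2, m_v, gamma_v, g_bs, g_mumu, norm=1.0):
    propagator = 1.0 / (q2 - m_v**2 + 1j * m_v * gamma_v)  # Breit-Wigner form
    return -norm * g_bs * g_mumu * propagator

# Example: scan q^2 across the [1.1, 6] GeV^2 window for an illustrative 4 GeV mediator.
q2_grid = np.linspace(1.1, 6.0, 50)
values = c9_eff(q2_grid, m_v=4.0, gamma_v=0.1, g_bs=1e-5, g_mumu=1e-2)
```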
A common trait of the constructions mentioned above is the presence of an effective off-diagonal coupling g_bs to the quark current. Since the quarks carry SU(3)_c charge, this must arise from particles with colour integrated out in the UV theory, which must be heavy to avoid exclusion by the strong limits from the LHC. This also means that in realistic scenarios g_bs will be suppressed by loop effects and/or a small mixing angle between the SM quarks and these heavy particles.
In this paper we explore in detail the concrete possibility that the flavour anomalies emerge from UV-complete models giving rise to the effective b−s coupling via loop effects involving the particles of both the heavy and light dark sectors. We will show that, under these assumptions, several phenomenological challenges must be faced in order to generate a g_bs large enough to provide a good fit to the LFUV observables once the constraint from (g−2)_µ is taken into account, while at the same time maintaining perturbativity of the couplings.
In the specific cases mentioned above, we will show that solutions belonging to the first category, m_V ≳ 2 GeV, can be made viable with appropriate UV completions, even though they are subject to a combination of increasingly tightening bounds that include, in the UV, direct LHC constraints on hadronic and leptonic new states and Drell-Yan dimuon constraints from the Z lineshape, and, in the IR, an appropriate width- and bin-dependent treatment of the bounds from BR(B → K + invisible) and BR(B → Kµµ). Conversely, complete solutions belonging to the second category mentioned above, characterised by m_V below the dimuon threshold, are much less likely to survive, as they are strongly constrained by a combination of bounds from BR(B → K + invisible), neutrino trident production, intensity frontier limits on kinetic mixing, and BR(B_s → µµ).
More generally, we perform in this work a detailed Monte Carlo scan of the broad m_V range and of the loop-induced couplings. We identify and characterise the specific properties of the viable models and indicate possible strategies for a timely detection of the associated NP states. The paper is organised as follows. In Sec. 2 we recall the expressions for the loop-generated Wilson coefficients from box and penguin diagrams. We provide the quantum numbers of the particles entering the loops and estimate the characteristic size of the couplings required to fit the flavour anomalies. In Sec. 3 we list the complete set of constraints we apply to the models. Section 4 is dedicated to the main results, with a description of the fitting procedure and a discussion of the allowed parameter space. We present our conclusions in Sec. 5. Appendix A is dedicated to the detailed treatment of the recasting procedure for B → K + inv. limits.
2 One-loop Wilson coefficients from split dark sector models
Our goal is to investigate the extent to which the flavour anomalies can be explained by generic loop effects involving light new particles, in association with a heavy sector above the EWSB scale. Possible constructions for the Wilson coefficients of the effective Hamiltonian descending from loops of TeV-scale new particles have been investigated, e.g., in refs. [39-46]. They involve either box diagrams of scalar and fermionic states as in Fig. 1(a), or penguin diagrams like the ones represented in Fig. 1(b).
2.1 Box diagrams
In the presence of direct Yukawa couplings between the quarks b, s, the muons, and a NP sector composed of fermions ψ_i and scalars φ_j, the relevant Lagrangian consists of the corresponding Yukawa interaction terms, where a sum over repeated indices is implied.
Since at least one of the fermion or scalar fields in the box diagram must necessarily carry the colour charge, the bounds from LHC searches for coloured-particle production with b-tagged jets will contribute to pushing these states above the 1−1.2 TeV scale [48-51]. At the same time there exist strong bounds from the measurement of B_s mixing [52] that either strongly limit the available size of the Yukawa couplings or push one of the new states coupled to b, s to a very large scale. In the case where there is a single pair of particles per vertex, ordered as in Fig. 1(a), the bounds from B_s mixing require the product of the Yukawa couplings to the b and s quarks to be small. In the expression of Eq. (2) we suppress subscript indices in the Yukawa couplings and define x = (m_1/m_4)², y = (m_2/m_4)², z = (m_3/m_4)², with F a loop function which equals approximately 1 when m_1 ≈ m_2 ≈ m_3 ≈ O(100 GeV). As shown, e.g., in refs. [39,40,46,47], it follows from Eq. (2) that C_9^µ ≲ −0.6 requires large Yukawa couplings, a condition that does not guarantee the validity of perturbation theory up to scales much larger than EWSB.
In line with the recent interest in light dark sectors, one can perhaps envision enhancing the contributions of the muon Yukawa coupling by reducing the mass of the colour-singlet particles in the box. However, this is not possible without incurring severe bounds from multiple sources.
One in fact infers from Eq. (3) that F receives a logarithmic enhancement of a few units if x, y, and z are all at the same time significantly smaller than 1. But at least one among m_1, m_2, m_3 cannot be much smaller than m_4. There are two reasons for this. On the one hand, multi-lepton searches at the LHC via Drell-Yan production constrain the particles carrying SU(2)_L × U(1)_Y quantum numbers to masses above the 400−600 GeV range [53-55]. On the other, if one assumes that the uncoloured particles in the box are lighter than the mass of the muon, so as to possibly appear invisible at the LHC, they still necessarily contribute at one loop to the anomalous magnetic moment of the muon. The measured 2σ upper bound, δ(g−2)_µ ≲ 4 × 10⁻⁹ [56-58], implies that the muon Yukawa coupling must be correspondingly small. This is too small to fit the flavour anomalies unless, as we said above, m_{1,2} ∼ O(100 GeV) at least. We conclude that there is arguably no common parameter space for the b → s anomalies and (g−2)_µ with perturbative Yukawa couplings and NP masses below O(100 GeV).
2.2 Penguin diagrams
It is more promising to look at another type of loop-induced coupling giving rise to the effective operators O_9^µ and O_10^µ: penguin diagrams. A penguin contribution mediated by the SM gauge bosons is flavour-universal in the lepton couplings and cannot be used to explain the LFUV anomalies. We will therefore be interested in models realising the penguin diagram topology presented in Fig. 1(b), in which a new vector particle (referred to henceforth as a dark photon for simplicity) couples to the muons and to the b and s quarks.
The particle that carries the colour charge in the loop might be a scalar multiplet Φ or a heavy vector-like (VL) fermion Ψ. In the former case the loop is closed by a light (Dirac) fermion χ, whereas in the latter by a light scalar π. In order to avoid charging the b and s quarks under the dark gauge group U(1) D (with gauge coupling g D ), we assume that the dark charge is confined within the loop, i.e., Q χ(π) = −Q Φ(Ψ) . We thus avoid the strong bounds on V from multi-lepton searches at the LHC.
Without much loss of generality we will focus henceforth on the case where the heavy coloured particle is Φ. The case with Ψ does not present very significant differences, barring a few percent suppression of C_9^µ due to swapping the role of the light and heavy mass in the loop functions (see below). We thus have a scalar doublet Φ = (φ_u, φ_d)ᵀ and light fermion singlets χ_i, with the corresponding quantum numbers assigned under SU(3)_c × SU(2)_L × U(1)_Y × U(1)_D. Note that with this charge assignment the mass matrix of the dark fermions is not a priori diagonal, unless the off-diagonal entries are forbidden by a flavour symmetry or suppressed by some other mechanism. We will assume for simplicity that this is always the case, without entering into the specifics of such constructions. We confine ourselves to the treatment of left-handed b−s currents, which give rise to the operators O_{9,10}^µ. Below the EWSB scale the Lagrangian of the hadronic NP sector contains the Yukawa interactions of Φ and the χ_i with the quarks (Eq. (5)), where a sum over repeated indices is implied and the Yukawa couplings are related by y_u^{ij} = y_d^{ik} (V†_CKM)^{kj}. We further confine ourselves to the basis where the only nonzero Yukawa couplings of the down-like type are those of the second and third generation, y_s^i and y_b^i. We finally introduce an effective interaction of the dark photon with the muons, with vector and axial-vector couplings g_µ^V and g_µ^A. The relative size of the g_µ^V and g_µ^A couplings is governed by the UV model-building, and so is the eventual size, if nonzero at all, of the coupling of the dark photon to neutrinos. This point is of particular relevance when it comes to the constraints on the model from neutrino trident production [59] at CCFR [60] and CHARM-II [61], to which we come back in Secs. 3 and 4. We shall see that, while it is desirable to embed the effective model in a framework with forbidden or strongly suppressed couplings to the neutrinos (see Footnote 1), we are able to find in our numerical analysis some viable parameter space lying below the bound from CCFR and CHARM-II.
The penguin diagram contributions to the effective operators O_9^µ, O_10^µ are given in Eq. (7) [46], where m_V is the mass of the dark photon and Γ_V is its total width, which takes the form of Eq. (9) when all light decay channels are kinematically open; the relevant loop functions are defined accordingly. Equation (7) allows one to define the effective coupling g_bs^V of V to the left-handed quarks, Eq. (11), where we have introduced the dimensionful coupling g̃ and left explicit the dependence of the coupling on the momentum of V. This corresponds to the effective operator O_6 defined in ref. [35] and also used in ref. [34].

Footnote 1: For example, one can construct a Type-I 2-Higgs-doublet model where the additional Higgs doublet and extra VL heavy leptons all carry U(1)_D charge: Θ: (1, 2, 1/2, Q_Θ), E: (1, 1, 1, Q_Θ). Yukawa couplings with the SM left-handed muon doublet L_µ, of the type λΘ†L_µE, generate a left-chiral coupling g_µ^L of V to the muons once U(1)_D is broken. The coupling to neutrinos is absent. Typically one gets g_µ^L ≈ −g_D Q_Θ λ v_Θ²/2m_E². More than one family of VL singlet fermions, and an additional complex scalar singlet charged under U(1)_D, can be introduced to generate the right-chiral couplings and to push the mass of the scalars with electric charge above the current bound from LEP and LHC searches [62].
The typical size of g_bs^V for representative choices of the input parameters is shown in Fig. 2.

Equation (7) shows that the penguin-generated Wilson coefficients depend on q² and on the size of the dark-photon width. They enter nontrivially in the calculation of the flavour observables and of the relative constraints when the dark-photon mass lies in the vicinity of the experimental bins. The corresponding expressions should thus be compared directly to the experimental data, as we do in the numerical scan presented in Sec. 4.

We can nonetheless provide a rough estimate of the typical size required for the q²-independent coupling g̃ in the limiting cases where the dark photon mass m_V becomes much larger or much smaller than the experimental energy (for the LFUV observables we indicatively take this to be q² ≈ 0.04−8 GeV²).
If m_V ≳ 10 GeV one obtains C_9^µ ≈ −0.6 roughly for couplings of the size given in Eq. (12), where we have set q² at the mean momentum transfer for the 1−6 GeV² bin of the R_K^(*) observables.
The two couplings g̃ and g_µ^V are independently constrained by two powerful probes. On the one hand, the measured 2σ bound on R_BB from B_s mixing [52] has a direct impact on g̃ when the vector V is exchanged in the s-channel,² with the characteristic experimental scale coinciding with the B_s mass, q² = M_Bs². On the other hand, the already mentioned 2σ upper bound on the anomalous magnetic moment of the muon places an upper bound on g_µ^V. The current bounds thus leave a very narrow window for explaining the flavour anomalies with penguin diagrams and a heavy dark photon, m_V ≳ 10 GeV. At the opposite side of the spectrum, m_V < 200 MeV, the dark photon is much lighter than the experimental energy scale. Equation (7) shows that the Wilson coefficients then become independent of the dark photon mass and of q². Obtaining C_9^µ ≈ −0.6 requires approximately the coupling size given in Eq. (15). The most severe constraint in this mass range comes again from the 2σ upper bound on the anomalous magnetic moment of the muon, which requires the bound of Eq. (16) on the muon coupling, where the upper value refers essentially only to the region m_V ≳ 100 MeV, and the lower value to all other masses below that. In light of Eqs. (15) and (16) one needs |g̃| ≳ 10⁻⁶ GeV⁻² to fit the flavour anomalies. As one can see in Fig. 2, effective couplings of this size are not easy to obtain in the penguin setup.
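To make the two limiting regimes concrete, the following sketch evaluates the modulus of the q²-dependent factor q²/(q² − m_V² + i m_V Γ_V) that the dark-photon exchange imprints on the effective b−s coupling (cf. Eq. (11) and the q²/(q² − m_V²) dependence discussed later in the text); the width values are illustrative assumptions, not fitted numbers.

import numpy as np

def propagator_weight(q2, mV, gammaV):
    # |q^2 / (q^2 - m_V^2 + i m_V Gamma_V)|: the momentum dependence of the
    # effective coupling g_bs^V; tends to 1 for m_V << q and to q^2/m_V^2 for m_V >> q.
    return np.abs(q2 / (q2 - mV**2 + 1j * mV * gammaV))

q2 = np.linspace(1.0, 6.0, 6)  # GeV^2, roughly the central R_K(*) bin
for mV, gam in [(0.1, 0.001), (5.0, 0.5), (20.0, 1.0)]:   # light, intermediate, heavy
    print(f"m_V = {mV:5.1f} GeV:", np.round(propagator_weight(q2, mV, gam), 3))

The light-mediator case gives an essentially constant factor, while the heavy case is suppressed as q²/m_V², illustrating why the two regimes require couplings of very different size.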
It is well known (cf., e.g., ref. [33]) that one can increase the size of g_µ^V while respecting the 2σ upper bound on (g−2)_µ by introducing the axial-vector coupling g_µ^A, thus creating a negative contribution to (g−2)_µ at the price of a fine-tuned cancellation. This, however, also induces an extremely strong contribution to the B_s → µµ decay rate, and will ultimately prove impracticable, as will become clear in the next sections.
Altogether, this discussion leads us to conclude that the most natural solutions are likely to be situated inside the window m V ≈ 0.2 − 10 GeV. The remainder of this paper is thus dedicated mostly to this mass range. Note that when the dark photon mass approaches M B , additional resonant enhancement can be obtained to open up the parameter space, albeit at the cost of additional limits from B → K processes, which we will describe in detail in the next section.
3 Constraints on the model

3.1 Flavour constraints

B_s-mixing

Strong limits on the Yukawa couplings to b and s quarks arise from box-diagram contributions to B_s-mixing. We recall that for exclusively left-handed couplings, see Eq. (5), the only relevant operator is O_1 [40,46]. The corresponding dimensionful Wilson coefficient involves a sum over all possible dark fermions, with x_{i(j)} = m_{χ_{i(j)}}²/m_Φ² and a loop function F(x, y). In the small x, y limit, the loop function can be approximated as F(x, y) ≈ 1, so that C_1 ∝ 1/m_Φ² and the limit essentially saturates. This has the unexpected consequence that, in the presence of several dark fermion states, one can readily get a large suppression of the B_s-mixing contribution in the limit where Σ_ij y_s^{i*} y_b^i y_s^{j*} y_b^j ≪ 1, which can easily be obtained with, e.g., two light states χ_1, χ_2 and y_s^{1*} y_b^1 ≈ −y_s^{2*} y_b^2. Note that while C_1 receives a strong suppression from the addition of approximately equal and opposite-sign couplings, there is no equivalent suppression for C_9^µ, as in the limit of small x_1, x_2 the effective coupling g̃ in Eq. (11) becomes proportional to ln(x_1/x_2).
Limits from B_s decay

When the axial-vector coupling to the muon g_µ^A is present, it induces a contribution to the operator O_10^µ but also to the axial current operator O_P^µ, which in turn induces a contribution to the direct decay B_s → µ⁺µ⁻ [63-66].
Adopting the same convention for the scalar operator as in ref. [46], we obtain the corresponding Wilson coefficient C_P. The typical bounds on C_P are significantly more stringent than those on C_10. They are likely to have a strong impact on our results, so we include them directly in the full numerical scan.
Given the presence of the heavy scalar doublet Φ and of the dark fermions χ, one can construct the tree-level decay process B → Kχχ based on the b-quark three-body decay b → sχχ mediated by an off-shell Φ. In the limit where m_χ, M_K ≪ M_B, the branching ratio takes a simple form, with f_+² ≈ 0.3 the average value of the form factor over the range of integration of the differential decay rate. This typically leads to the constraint |y_s* y_b| ≲ 10⁻² (m_Φ/TeV)² on the Yukawa couplings of any new fermion with mass 2m_χ < M_B − M_K. As Eq. (11) shows, the upper bound on the Yukawa couplings strongly limits the available range of g_bs^V, even when the gauge coupling g_D is large. This means that in order to fit the flavour observables one has to resort to a large g_µ^V value, which, as we shall see in Sec. 4, is severely constrained by Z-lineshape bounds (in addition to requiring a fine tuning of the axial-vector contribution to avoid exceeding δ(g−2)_µ). To avoid these problems the minimal particle content will have to include at least one dark fermion with mass m_χ > (M_B − M_K)/2.
On the other hand, there is a case to be made for the presence of additional light states with mass below the (M_B − M_K)/2 threshold and Yukawa couplings to Φ and to the b and s quarks that are small enough to avoid the tree-level B → Kχχ bound.

• Dark matter: for a fermion χ of mass above the (M_B − M_K)/2 threshold, the direct s-wave annihilation channel χχ → V → µµ is strongly constrained by CMB bounds [72]. Introducing one additional lighter state with the same quantum numbers provides instead a viable candidate for forbidden dark matter (we come back to this point in more detail at the end of Sec. 4).

• The presence of additional light states directly affects the total width of the dark photon V, opening up additional parameter space for a solution to the flavour anomalies.
In cases where at least one very light state χ_1 appears, two qualitatively different regimes of applicability should be considered for the invisible B → K transition. The first is the on-shell decay B → KV, occurring for m_V < 2M_µ < M_B − M_K < 2m_χ1, in which V escapes undetected. It is typically suppressed in the low-m_V limit due to the momentum dependence of the effective coupling g_bs^V defined in Eq. (11). The corresponding width is given in Eq. (22), where we have assumed m_V, M_K ≪ M_B, f_+(0) ≈ 0.3 is a form factor (the full expression, used in the numerical analysis of Sec. 4, can be found in ref. [73]), and λ(x, y, z) is the standard Källén function. Exchange of a virtual V may instead dominate the invisible decay width for small m_V and large dark coupling g_D. The full width follows from integrating over the invariant mass of the virtual V, where Γ_KV(s) and Γ_{V→χ1χ1} are obtained by replacing m_V by √s in Eq. (22) and in the corresponding V decay width to fermions, and Γ_V is the total width of V, cf. Eq. (9). Note that with more than one dark fermion in the spectrum one ought to sum over all individual off-shell contributions.
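As a minimal numerical sketch of the kinematics entering the on-shell mode, assuming only that λ denotes the usual Källén (triangle) function and taking illustrative masses in GeV:

import math

def kallen(x, y, z):
    # Källén (triangle) function lambda(x, y, z) = x^2 + y^2 + z^2 - 2xy - 2yz - 2zx
    return x**2 + y**2 + z**2 - 2*(x*y + y*z + z*x)

def kaon_momentum(mB, mK, mV):
    # Magnitude of the kaon three-momentum in the B rest frame for B -> K V:
    # |p_K| = sqrt(lambda(mB^2, mK^2, mV^2)) / (2 mB); zero if the channel is closed.
    lam = kallen(mB**2, mK**2, mV**2)
    return math.sqrt(lam) / (2*mB) if lam > 0 else 0.0

mB, mK = 5.279, 0.494
for mV in (0.5, 2.0, 4.0, 4.7):
    print(f"m_V = {mV:4.1f} GeV  ->  |p_K| = {kaon_momentum(mB, mK, mV):.3f} GeV")

The available momentum, and with it the B → KV rate, shrinks as m_V approaches M_B − M_K, which is where the invisible-decay limits in Fig. 3 lose their power.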
Altogether, the combination of the real and virtual contributions to the B → Kχ_1χ_1 decay implies a nontrivial kinematic shape in missing energy, which may differ significantly from the SM-like B → Kνν decay. This has the direct consequence that the experimental results of Belle and BaBar, which are optimised for the neutrino process, should not be directly applied to our scenario. We therefore perform a conservative recasting of these analyses, described in detail in Appendix A.
Finally, in the mass regime m_V > 2M_µ, the light vector state can directly decay into a muon pair. This opens up the resonant channel B → K*V, V → µµ. The decay width into K* is controlled by a form-factor function F_1(x_K*, x_V), whose explicit expression can be found in Appendix B of ref. [35]. Note that in the limit where m_V ≪ M_B, F_1(x, y) simplifies to F_1(x, y) ≈ 0.1 √x (y + 1.2x), so that Γ_K*V is typically suppressed compared to Γ_KV. Furthermore, the branching ratio to muons is inversely proportional to the total width Γ_V and can thus be strongly suppressed if the coupling of V to the dark fermions χ_i is large.
Note that the limits from LHCb [71] on this process were derived for a narrow, or even long-lived, new resonance, with limits based on invariant mass bins of a few MeV. This hypothesis is especially problematic when the mediator is around the GeV range, since a large dark gauge coupling implies a very large width for V. We therefore simply model the resonance via a Breit-Wigner distribution and compare bin by bin with the limit of ref. [71], retaining the strongest bin as the main limit.³

The overall impact of the limits from B → K^(*) transitions on the size of the effective coupling g_bs^V is summarised in Fig. 3 as a function of the vector mass m_V, for three representative choices of the pair (m_χ, g_D). For m_χ = 5 GeV, the decay B → Kχχ is kinematically forbidden, so that invisible B decays can only proceed via the on-shell process B → KV. This results in a weak bound in the small-m_V regime (blue line). Note that when m_V > 2M_µ, the constraints from resonant searches using B → K*µµ strongly exclude this setup, since V → µµ is in this case the only accessible decay channel.
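The bin-by-bin comparison described above can be sketched as follows; the bin edges and per-bin limits are placeholders rather than the actual values of ref. [71], and the lineshape is the simple Breit-Wigner ansatz stated in the text.

import numpy as np

def bw_lineshape(m_mumu, m_V, gamma_V):
    # Relativistic Breit-Wigner shape in the dimuon invariant mass (unnormalised)
    s = m_mumu**2
    return 1.0 / ((s - m_V**2)**2 + (m_V * gamma_V)**2)

def strongest_bin_ratio(br_total, m_V, gamma_V, bin_edges, bin_limits):
    # Spread a total branching ratio over dimuon-mass bins according to the
    # Breit-Wigner shape and return the most constraining signal/limit ratio.
    grid = np.linspace(bin_edges[0], bin_edges[-1], 20000)
    pdf = bw_lineshape(grid, m_V, gamma_V)
    pdf /= np.trapz(pdf, grid)                       # normalise over the covered range
    ratios = []
    for lo, hi, lim in zip(bin_edges[:-1], bin_edges[1:], bin_limits):
        mask = (grid >= lo) & (grid < hi)
        frac = np.trapz(pdf[mask], grid[mask])
        ratios.append(br_total * frac / lim)
    return max(ratios)                               # > 1 means the point is excluded

# Hypothetical few-MeV bins (GeV) with a flat per-bin limit
edges = np.arange(0.9, 1.6, 0.01)
limits = np.full(len(edges) - 1, 2e-9)
print(strongest_bin_ratio(br_total=1e-8, m_V=1.2, gamma_V=0.05,
                          bin_edges=edges, bin_limits=limits))

For a broad resonance the signal is diluted over many bins, so the strongest single bin gives a conservative but much weaker constraint than for a narrow state.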
In the presence of dark fermions with a small mass, 2m_χ < M_B − M_K, the decay channel V → χχ opens up and the relative strength of the different bounds depends on the size of g_D. In the case of large g_D (orange line), if 2m_χ < m_V < M_B − M_K both the on-shell and off-shell invisible decays contribute to the width. For a light vector mediator the off-shell decay takes over due to the q² dependence of the coupling, and the bound on g_bs^V saturates. If, on the other hand, g_D is smaller (green line) and m_V > 2M_µ, the constraints from resonant searches using B → K*µµ typically overcome the invisible decay limits, suppressing the allowed g_bs^V by a factor of order one.
Finally, note that for large m_V the limit arises from invisible searches via the off-shell decay B → Kχχ, when it is kinematically allowed.
Precision physics constraints
Muon anomalous magnetic moment

The couplings of V to muons can be constrained by the measurement of the anomalous magnetic moment of the muon. The corresponding one-loop contribution to (g−2)_µ from the vector and axial-vector couplings is given in refs. [74,75]. As was mentioned in Sec. 2, given the limited range achievable in penguin constructions for g_bs^V, when only vector-like couplings to the muons are present it becomes difficult to find a g_µ^V value large enough to allow for a reasonable agreement with the flavour anomalies and at the same time not too large a deviation from the measured value of (g−2)_µ. A certain level of cancellation with the contribution from the axial-vector coupling must take place in most situations [33]. For a GeV-scale vector mediator, this occurs for g_µ^A ≈ −0.44 g_µ^V. Note, however, that including the axial-vector contribution triggers the strong bounds from B_s → µµ discussed above.
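For orientation, here is a sketch of the pure vector-coupling piece only, assuming the standard one-loop expression for a neutral vector boson coupled to the muon; the axial-vector piece, which enters with the opposite sign and drives the cancellation discussed above, and the precise coefficients of Eqs. (25)-(26), are not reproduced.

import math
from scipy.integrate import quad

M_MU = 0.1057  # muon mass in GeV

def delta_amu_vector(gV, mV):
    # Standard one-loop shift of a_mu from a neutral vector with pure vector coupling gV:
    # delta a_mu = gV^2/(8 pi^2) * Int_0^1 dx 2 x^2 (1-x) / (x^2 + (mV/m_mu)^2 (1-x)).
    r = (mV / M_MU) ** 2
    integrand = lambda x: 2.0 * x**2 * (1.0 - x) / (x**2 + r * (1.0 - x))
    integral, _ = quad(integrand, 0.0, 1.0)
    return gV**2 / (8.0 * math.pi**2) * integral

# Illustrative couplings for a GeV-scale dark photon, compared to the 2-sigma bound 4e-9
for gV in (1e-3, 5e-4):
    print(f"gV = {gV:.0e}:  delta a_mu = {delta_amu_vector(gV, mV=1.0):.2e}")

The formula reduces to the familiar gV²/(8π²) in the massless limit and to gV² m_µ²/(12π² m_V²) for a heavy mediator, which is the behaviour relevant for the bounds quoted in the text.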
Z physics and intensity frontier limits

The coupling of the Z boson to the muon is modified at the one-loop level [76,77] within UV constructions of the type discussed in Footnote 1. However, due to the smallness of the Z-boson coupling to charged leptons in the SM, the limit is typically subdominant with respect to the (g−2)_µ bound.
A powerful method for discerning light resonances through precision measurements of Drell-Yan dimuon production was proposed in ref. [78]. For m_V = 1−5 GeV an upper bound on the muon coupling of V can be derived in this way. Finally, the Belle-II Collaboration recently provided a bound on the final-state-radiation process e⁺e⁻ → µ⁺µ⁻V, V → invisible, based on 0.28 fb⁻¹ of data from the 2018 run [79], which applies directly to our model. While the current limit can hardly compete with the Drell-Yan bound, the 2019 run has stored ∼10 fb⁻¹ and moreover a few ab⁻¹ should be collected in 2020, so that future data will rapidly become relevant.
Notice that, more generically, for dark photon masses above ∼10 GeV the phenomenology of the light fermions in intensity frontier experiments can be obtained by integrating out the dark photon and considering the resulting fermion-portal four-fermion operators [80]. Similarly, the limit from the tree-level b → sχχ decay mediated by an off-shell Φ can also be obtained by integrating out the heavy scalar Φ and using the existing bounds on the fermion-portal operator b̄γ_µ s χ̄γ^µ χ. While we cover the relevant limits directly in this section, the latter approach could be particularly fruitful for studying and constraining the possible couplings between new light fermions and the other SM generations.
Neutrino trident production

If the gauge boson features a coupling g_ν to muon neutrinos (cf. Sec. 2.2), one expects a strong enhancement of neutrino trident production in scattering on atomic nuclei N: νN → νN µ⁺µ⁻ [59].
The cross section for this process has been measured by the CCFR [60] and CHARM-II [61] collaborations and found to be in agreement with the SM prediction. In the range m_V > 1 GeV this results in a generic upper bound on g_ν, which roughly saturates for smaller masses at about g_ν ≲ 0.001.
When the bound appears in the plots of Sec. 4 it is obtained under the assumption g ν ≡ g V µ (we repeat that whether or not the bound is relevant depends on the UV completion).
Kinetic mixing

In the presence of states charged both under the U(1)_Y and the U(1)_D symmetry groups, kinetic mixing between the photon and the vector V will be generated at the loop level. The corresponding one-loop contributions from fermions and scalars depend on the fields' hypercharges Y_{f,s}, their dark charges Q_{f,s}, and the dimensions N³_{f,s} of their SU(3)_c representations. The fields that contribute to the kinetic mixing are Φ, µ_L, and µ_R which, when g_µ^{V,A} ≪ g_D, results in the mixing of Eq. (29). Such a kinetic mixing is at the limit of exclusion given the current intensity frontier searches (see, e.g., ref. [81] for a recent review), especially when invisible decay channels are not available for the vector mediator. However, the precise value of the kinetic mixing is strongly dependent on the UV physics, and additional VL fields can modify the prediction of Eq. (29), although not by many orders of magnitude.
LHC constraints on t → cµµ

We work in this paper under the assumption that the only nonzero Yukawa couplings of the down-like type are y_s^i, y_b^i. However, as Eq. (5) shows, the corresponding Yukawa couplings of the up-like type, y_c^i, y_t^i, do not receive CKM suppression. They can thus generate non-negligible contributions to processes involving t → cµµ transitions.
Effective operator analyses of LHC bounds from rare top decays [82,83], derived originally for the very-high-mass regime, m_V ∼ O(TeV), impose a fairly weak bound on the coupling product when m_V ≪ M_t. A rough comparison with Eq. (12) shows that this is not likely to be constraining for our scenarios. However, given that Eq. (7) presents a nontrivial q² dependence, for the light mass range investigated in this paper one should rather perform a detailed recast of the experimental searches. This task exceeds the scope of the present paper, in view of the fact that, as we will show in Sec. 4, the flavour and intensity frontier constraints discussed above already provide a set of powerful and often inescapable constraints on the dark sector.
4 Fitting procedure and results
Fitting procedure

We perform a multidimensional fit of the free parameters of the model, which include the dark photon mass m_V, the muon coupling g_µ^V, and a coupling combination r (with only one light dark fermion contributing, with Yukawa couplings y_s and y_b). r is employed here as a proxy for the effective coupling g̃ once the mass parameters are fixed. We choose m_χ = 2.5 GeV and m_Φ = 1 TeV; under these assumptions r relates to g̃ as g̃ = 1.1 × 10⁻⁸ GeV⁻² · r. Since the fitted flavour observables depend on g̃ rather than on the couplings composing it, our results can be extended straightforwardly to the case with more light fermions. The prior ranges of the fitted parameters are presented in Table 1. Separate fits are performed depending on whether m_V lies above or below the relevant bins for the B anomalies. This is required by the fact that, in order to reproduce the desired sign for C_9^µ, the product r · g_µ^V needs to have a different sign in these two regions. We observe that m_V = 2 GeV works as a satisfactory threshold to separate the two regimes.
The first step of our numerical analysis is to perform a fit of the free parameters of the model to the available experimental data reporting anomalies in B-meson decays, namely the LFUV ratios R_K, R_K*, the angular observables in the B → K*µ⁺µ⁻ decay and the branching ratio of B_s → µµ.⁴ In the high-mass regime we include in the likelihood function the experimental 2σ upper bound on the anomalous magnetic moment of the muon, δ(g−2)_µ ≲ 4 × 10⁻⁹.
To carry out the fit, we employ the HEPfit package [84], performing a Markov Chain Monte Carlo (MCMC) analysis by means of the Bayesian Analysis Toolkit (BAT) [85]. A set of O(500K) points is generated for the two scenarios described in Table 1, and for each case the subset of points reproducing the B anomalies, BR(B_s → µµ), and the upper (g−2)_µ bound (high-mass only) at the 2σ level is stored. These initial subsets of points are then subjected to additional constraints coming from B_s mixing, Drell-Yan production and, when applicable, B → K + inv. searches.
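The actual analysis relies on HEPfit and BAT; purely as an illustration of the sampling logic, the following sketch runs a simple Metropolis-Hastings chain over (m_V, g_µ^V, r) with a stand-in Gaussian likelihood in place of the real flavour likelihood. All numbers here are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta):
    # Stand-in log-likelihood: in the real analysis this is the comparison of the
    # model predictions for R_K(*), the angular observables, BR(Bs->mumu) and
    # (g-2)_mu with data; here a toy Gaussian centred on an arbitrary point.
    mV, gmu, r = theta
    return -0.5 * (((mV - 6.0) / 2.0) ** 2 + ((gmu - 5e-3) / 2e-3) ** 2 + ((r - 0.5) / 0.2) ** 2)

def in_prior(theta):
    mV, gmu, r = theta
    return 2.0 < mV < 15.0 and 0.0 < gmu < 2e-2 and 0.0 < r < 1.0   # flat prior box

def metropolis(n_steps=20000, start=(6.0, 5e-3, 0.5), step=(0.5, 1e-3, 0.05)):
    chain, theta = [], np.array(start)
    logl = log_likelihood(theta)
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step)
        if in_prior(prop):
            logl_prop = log_likelihood(prop)
            if np.log(rng.uniform()) < logl_prop - logl:   # accept/reject step
                theta, logl = prop, logl_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis()
print("posterior means (m_V, g_mu, r):", chain[len(chain)//2:].mean(axis=0))

In the real scan the stored points are then filtered a posteriori against B_s mixing, Drell-Yan and B → K + inv. constraints, exactly as described above.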
Results
We start by addressing solutions in the range m_V = 2−15 GeV, in cases where the dark photon features a non-negligible invisible width, which could stem from the presence of an additional light fermion χ_1 in the spectrum that does not participate significantly in the loop diagram. The results of the scan in the (m_V, g_µ^V) plane are presented in Fig. 4. The yellow points are obtained in the scanning procedure described above and correspond to the models in which the B anomalies, BR(B_s → µµ), and δ(g−2)_µ are fitted at the 2σ level. The green points are those that remain allowed after the limits from B → K + inv. transitions are applied, following the recasting procedure outlined in Appendix A. The on-shell process B → KV, V → χ_1χ_1 typically proceeds unsuppressed for a GeV-scale mediator, so that all the yellow points below the M_B − M_K threshold are excluded. On the other hand, the limits on g̃ are dramatically weakened above the M_B − M_K threshold (see Fig. 3), so that in this regime one can easily fit simultaneously both (g−2)_µ and the flavour anomalies. Incidentally, we exclude m_V ≈ 2.5 GeV, corresponding to the solution found in ref. [33], due to the q² dependence of g_bs^V, which induces an (m_V/GeV)² enhancement in Eq. (22) with respect to the effective coupling of ref. [33].

Footnote 4: Due to the explicit q² dependence of C_{9(10)}^µ it is not possible to match the Wilson coefficients to the model-independent bounds obtained by the global fits. Instead, C_{9(10)}^µ have to be fitted directly to the experimental data. On the other hand, if the dark photon mass is in the MeV range, it is possible to directly match the Wilson coefficients to the model-independent bounds obtained by the global fits.

Figure 4 (caption): Yellow points: the B anomalies, BR(B_s → µµ) and δ(g−2)_µ are fitted at the 2σ level. Superimposed points are allowed after the limits from B → K + inv. are applied, as reported by CLEO [86] (purple) or using our recast procedure (green), cf. Appendix A. The grey shaded region is excluded by precision measurements of Drell-Yan at the LHC [78]. The purple dotted line indicates the exclusion bound from neutrino trident production [59]. The dotted black line indicates the upper bound from Belle II [79]. In light blue we mark the region in which the (g−2)_µ constraint is satisfied at 2σ with g_µ^A = 0.
An additional bound on the parameter space of the model is derived from the Z lineshape from Drell-Yan at the LHC, which strongly affects the maximal allowed value of g V µ [78]. The corresponding exclusion region is depicted in Fig. 4 in dark grey. The shading is obtained under the assumption g A µ = −0.44 g V µ , a relation that induces destructive interference in the calculation of (g − 2) µ (cf. Eqs. (25) and (26)). It is well known [33] that the above relation between the vector and axial-vector coupling requires some level of fine tuning. The grey dashed lines in Fig. 4 trace the value of g V µ that corresponds to the indicated level of fine tuning in g A µ ≈ −0.44 g V µ required to avoid exceeding the 2σ upper bound from the measurement of δ(g − 2) µ .
Note, however, that the tuning of the vector and axial-vector muon couplings is a priori not needed for the dark photon in the considered mass regime, as confirmed by the large number of green points within the blue shaded band, corresponding to the region satisfying the (g−2)_µ constraint at 2σ with g_µ^A = 0. This is an attractive feature of our model, in which we can obtain relatively low values of g_µ^V and consequently avoid the Drell-Yan limit. In Fig. 4 the limits from neutrino trident production derived in ref. [76] for g_µ^A = 0 are shown as a dashed purple line. The constraint applies only if the mediator couples directly to neutrinos, cf. the discussion in Sec. 2.2. It should be stressed that, even when the neutrino trident bound applies, solutions that escape the experimental limit exist, with m_V = 5−7 GeV and r at the upper end of the scanned range.
Finally, it is instructive to compare the results obtained within our UV-complete setup to those derived in a simplified Z′ model with an effective coupling g_bs to the b−s current. A solution to the b−s anomalies (without the (g−2)_µ constraint) was found in ref. [18] with m_Z′ = 10 GeV, g_µ^V = −g_µ^A ≈ 10⁻², and g_bs ≈ 5 × 10⁻⁶. Given the bound from BR(B → K + inv.), which limits the value of g_D Q_Φ, and the requirement of perturbativity of the couplings, we never reach such high values of g_bs ≡ g_bs^V in our scenario, see Fig. 2.
We conclude the discussion of Fig. 4 by pointing out that the picture does not change substantially if the light fermion χ_1 is not introduced in the theory. If there remains only one dark fermion of mass m_χ = 2.5 GeV, the B → K + inv. bound does not apply. However, all points with mass m_V < M_B − M_K become in that case subject to the strong resonant B → K*µµ limit, which cuts the parameter space drastically and leads to solutions not dissimilar to the area delimited in green in Fig. 4.
Note that the LFUV and angular observables in the fit depend only indirectly on the NP Yukawa couplings, via the effective coupling g̃. As discussed in Sec. 3.1, the former are thus constrained only by the tree-level contribution to B → Kχχ, which is kinematically forbidden for m_χ ≥ (M_B − M_K)/2, and by B_s mixing, which dominantly proceeds via box diagrams involving dark fermions and heavy coloured scalars. In the presence of several light fermions, a possible way of suppressing B_s mixing in the limit m_χi/m_Φ ≪ 1 is obtained when Σ_ij y_s^{i*} y_b^i y_s^{j*} y_b^j ≪ 1, even if one of the χ_i is relatively heavier than the others.
The dependence of the B_s-mixing bounds on the amount of fine tuning among the NP Yukawa couplings and particle masses is illustrated in Fig. 5. In the (γ_V, g_bs^V) plane, green points correspond to the solutions also marked in green in Fig. 4, which were consistent with the flavour anomalies and also escaped the B → K + inv. limits (assuming that the invisible decay of the mediator is ensured by the presence of a very light fermion with small Yukawa couplings, as discussed above). Yellow points trace instead the solutions respecting the resonant B → K*µµ bound when there is no extra light fermion in the spectrum. The parameter space is very similar, showing perhaps a slight preference for small width when the bound from B → K + inv. applies. A third dark fermion with large Yukawa couplings is introduced to cancel the strong contribution to B_s mixing.

The results of the scan in the (m_V, g_µ^V) plane for the low-mass range, m_V = 0.6−2 GeV, are presented in Fig. 6. This region of the parameter space is obtained by performing a sign switch in the product r · g_µ^V, which allows one to fit R_K^(*) correctly by means of destructive interference with the SM value of C_9^µ below the experimental bin. The colour code is the same as in Fig. 4. Note that the mediator must have a sizeable invisible width in this region to avoid stringent constraints from a visible dimuon resonance in the B → K* spectrum, as discussed in Sec. 3.1.
A few takeaways emerge from the scan in the low-mass region. The first is that all solutions with mass m_V ≲ 1.4 GeV are cut out by the BR(B_s → µµ) constraint, which is directly implemented in the likelihood function. As a consequence, the surviving points are all characterised by fairly large values of R_K^(*), which lie 2−3σ away from the central value measured at LHCb and closer to the SM expectation. It is for the same reason that the plot also appears much sparser than in the high-mass case: there are few model points within 2σ of the measured values of the LFUV observables and BR(B_s → µµ). We thus identify a mild tension in this part of the parameter space. We then notice that the models surviving the bound from B → K + inv. searches (green points) require a large coupling g_µ^V to the muon (g̃ is directly constrained by the invisible search) and thus a large level of fine tuning in the corresponding g_µ^A value necessary to cancel δ(g−2)_µ. Overall, the whole region is under siege from a combination of complementary bounds, but at present it is not entirely excluded.

Figure 6 (caption): Results of the scan in the low-m_V region. The colour code is the same as in Fig. 4.
Let us finish this section by mentioning the case of a very light mediator, with a dark photon mass in the MeV range and GeV-scale dark fermions. An interesting property of this regime is that, in the limit where 2m_χi > M_B − M_K, the mediator V is essentially long-lived, since it does not have any available tree-level decay channel. The invisible B decay is then driven exclusively by the B → KV process, which is strongly suppressed at low m_V. Furthermore, the dependence of the Wilson coefficients on q²/(q² − m_V²) converges to a constant when m_V ≪ q, and it closely resembles the standard electromagnetic penguin contribution to the flavour anomalies. However, given that the limit from the B_s → µµ decay forbids such a light vector mediator from having a significant axial-vector coupling to the muon, the upper limit on δ(g−2)_µ leads to a stringent bound on the vector coupling, g_µ^V ≲ 7 × 10⁻⁴. Concretely, in order to obtain C_9^µ ≈ −0.7 a coupling g̃ ≳ 10⁻⁶ GeV⁻² is required, which can only be achieved while satisfying B → K + inv. limits if m_V ≲ 5 MeV, cf. Fig. 3. A vector mediator this light is already excluded by standard searches for a long-lived dark photon. We conclude that no such solution is available in penguin-generated scenarios.
Dark matter

As an interesting aside, the lightest dark fermion can provide a good example of a forbidden dark matter candidate when its mass is below the muon mass. The dominant annihilation channel for such a dark matter candidate would be χχ → V* → µµ̄. The typical relic density resulting from this process depends exponentially on ∆ = 1 − m_χ/M_µ, with x_f ≈ 20 for the relevant masses and g_D = √(4π). When m_χ drops below the muon mass threshold the relic density is exponentially enhanced, since the annihilation process can only occur due to the thermal velocity of the dark matter particles in the early universe [87,88]. This ensures that the thermal target is matched for one coupling-dependent mass below M_µ, typically around ∼M_µ/2. Furthermore, all other annihilation processes are exponentially suppressed as the temperature of the universe decreases, ensuring that the CMB limits on late-time annihilating sub-GeV dark matter are automatically escaped.
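The exponential sensitivity to ∆ can be illustrated with the characteristic Boltzmann factor of a forbidden annihilation channel, exp(−2 x_f ∆), with ∆ defined as in the text; prefactors of the relic-density formula are omitted and the numbers are purely indicative.

import math

M_MU = 0.1057  # GeV
X_F = 20.0     # freeze-out value x_f = m/T, as quoted in the text

def forbidden_suppression(m_chi):
    # Characteristic Boltzmann suppression of the forbidden rate chi chi -> mu mu
    # for m_chi < M_mu: exp(-2 x_f Delta) with Delta = 1 - m_chi/M_mu.
    delta = 1.0 - m_chi / M_MU
    return math.exp(-2.0 * X_F * delta)

for m_chi in (0.105, 0.095, 0.080, 0.060):
    print(f"m_chi = {m_chi*1e3:5.1f} MeV:  rate suppression ~ {forbidden_suppression(m_chi):.2e}")

Since the relic density scales inversely with the annihilation rate, even a modest decrease of m_chi below M_mu rapidly overshoots the observed abundance, which is why the thermal target singles out one mass value not far below the threshold.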
Conclusions
We have presented in this work a solution for the b → s flavour anomalies based on the presence of a split dark sector with a light vector mediator as well as new light Dirac fermions which may constitute all or part of the dark matter. The interaction with the b and s quarks is generated at the loop level via the addition of a coloured scalar particle, resembling a supersymmetric squark. We analysed numerically and analytically the resulting low-energy effective theory, which in particular possesses a q 2 -dependent interaction of the vector mediator with b and s quarks. Varying the mass of the vector mediator from the MeV scale to the tens of GeV, we find two scenarios satisfying all experimental constraints while providing a good fit to the anomalies. In particular, the region with a GeV-scale mediator above the B − K mass threshold appears particularly promising, requiring little to no tuning in the low-energy effective parameters, and to the best of our knowledge it has not been considered previously.
Since our model is partially embedded in a UV completion, we additionally pointed out several constraints that can challenge its viability. We highlighted the constraints from the B → K + inv. decay rate and from B_s mixing and, in the latter case, provided an example of a mechanism to escape it. We did not make any assumption in this paper on the nature of the dark Dirac-fermion interactions with the neutrinos. Indeed, since the former are complete SM singlets, it would be very interesting to investigate whether or not they could behave as right-handed neutrinos (for instance via the coupling to a dark charged new Higgs doublet), and in that case to investigate their relationship with the strong neutrino trident limits.
While the experimental constraints on models addressing the flavour anomalies with light mediators are already quite stringent, we have identified several observables that can either exclude these scenarios entirely or provide a smoking-gun signature of their detection. Chief among those are the limits from B → K and B → K* transitions. While the former play a critical role via the B → K + inv. bounds, experimental searches are typically optimised for the SM process B → Kνν. An analysis based on a light, and potentially broad, invisible resonance in B → KV could likely strengthen the existing limits significantly, especially when the mediator is light. Similarly, the latest search for B → K*µµ has focused on a very narrow resonance, and should be properly recast for the case of a large and invisible width. Finally, it is important to note that limits on a light dark photon are expected to improve in the next few years, and will further constrain the case of a mediator at and below the GeV scale.
A Appendix: Invisible decay limits
We present in this appendix a more detailed treatment of the recasting procedure performed to extract conservative B → K + inv. limits on our model. While the CLEO Collaboration searched explicitly for an on-shell light particle mediating the B → K decay [86], their 2σ limit BR(B → K + inv.) < 4.9 × 10 −5 is relatively weak compared to the ones from B factories. In the following we will base our limit both on the BaBar result [67] -which provides a differential branching ratio limit in bins of q 2 -as well as on an older analysis from the same collaboration [68], which also had some differential limits, albeit on a much larger range for q 2 . Note that the current bounds from the Belle Collaboration [69,70] are typically of the same order as for BaBar. They are however strongly optimised for the SM-like B → Kνν signal and only present their bounds in the total integrated branching ratio. We therefore concentrate on the two BaBar analyses.
First, using the BaBar hadronic-tagging analysis [67], we calculate the branching ratio BR(B → Kχ_1χ_1) in bins of s_B = q²/M_B², where for low m_V most of our NP signal is concentrated in the lowest bin, s_B < 0.1. We then compare it with the 2σ limits from Fig. 6a of ref. [67]. While this approach leads to a strong bound when the real process B → KV, V → χ_1χ_1 dominates, these limits can be significantly weakened when the virtual process dominates, since the branching ratio is then spread over a broader range of s_B bins.
We therefore also include partially integrated limits from the BaBar semileptonic-tagging analysis [68], which combined the world-leading limit on B⁺ → K⁺νν with detailed information about the signal efficiencies as a function of the momentum of the K⁺ (and hence of the missing energy). We select the q² ranges [3.4², 4²] GeV² and [0, 2.4²] GeV² (corresponding to p_K in the range [1, 1.5] GeV and above 2 GeV, respectively), where the Boosted Decision Tree (BDT) efficiencies presented in Fig. 3 of ref. [68] are larger than ∼0.3. This ensures that the signal efficiencies for our NP kinematics are of the same order of magnitude or higher than the ones for the SM signal. We then compare both regions with the low-q² and high-q² 95% C.L. limits, BR(B → K + inv.) < 1.1 × 10⁻⁵ and BR(B → K + inv.) < 4.6 × 10⁻⁵, respectively.
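A schematic version of this recast, with a stand-in differential spectrum in place of the full expressions of Sec. 3.1 and using the two windows and 95% C.L. limits quoted above, could look as follows; the normalisation and lineshape parameters are arbitrary placeholders.

import numpy as np

MB = 5.279  # GeV

def dBR_dsB(sB, mV=0.5, gammaV=0.05, norm=2e-5):
    # Toy differential branching ratio in s_B = q^2/M_B^2 for B -> K + invisible,
    # with a simple Breit-Wigner-like q^2 shape standing in for the model prediction.
    q2 = sB * MB**2
    return norm * q2 / ((q2 - mV**2) ** 2 + (mV * gammaV) ** 2)

def br_in_window(s_lo, s_hi, n=4000):
    s = np.linspace(s_lo, s_hi, n)
    return np.trapz(dBR_dsB(s), s)

# (a) hadronic-tag style comparison: signal in the lowest s_B bin
print("BR(s_B < 0.1) =", br_in_window(0.0, 0.1))

# (b) semileptonic-tag windows quoted in the text, with their 95% CL limits
low_q2  = br_in_window(0.0, 2.4**2 / MB**2)              # p_K above ~2 GeV
high_q2 = br_in_window(3.4**2 / MB**2, 4.0**2 / MB**2)   # p_K in [1, 1.5] GeV
print("low-q2 window excluded: ", low_q2 > 1.1e-5)
print("high-q2 window excluded:", high_q2 > 4.6e-5)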
"Physics"
] |
Small area estimation using reduced rank regression models
Abstract Small area estimation techniques have received considerable attention during the last decades due to their important applications in survey studies. In this paper, mixed linear models and reduced rank regression analysis are used jointly when considering small area estimation. Estimators of the parameters are presented, as well as predictions of the random effects and of unobserved area measurements.
Introduction
Small area estimation is an active research area in statistics with many applications. Today there exist several approaches to deal with small area problems. For recent contributions to the topic, see Chandra et al. (2018) for discrete data and Baldermann (2018) for continuous data; in both articles many works of other authors are presented and discussed. For those who want a solid introduction to the subject, the book by Rao and Molina (2015) can be recommended. Usually a common basis for small area estimation problems is that a survey study (finite population case) has been conducted, and based on the survey one intends to extract information concerning small domains by adding specific local information which was not accounted for in the comprehensive survey.
In this article a case is considered where the units of a survey sample are investigated several times, leading to dependent observations. It is assumed that observed background information exists, which is then incorporated via a linear model, giving a model which includes variance components. Moreover, it is assumed that latent processes exist, which often show up when different types of systems are studied, for example when weather impact is studied. In such cases a huge number of variables is often studied, but we do not know the effects of the individual variables; for example, we cannot directly relate temperature to plant growth. This leads us to exploit reduced rank regression models. Thus, the model which will be discussed in this article will include time-dependent fixed effects, i.e. structured mean responses, random parameters and rank restrictions on parameters, which to our knowledge is a new type of model. In Section 2 more motivating details are given, including basic references, and in Section 3 the assumptions are summarized together with all technical details. Briefly, in Section 4 the estimation of all unknown parameters is considered and finally two prediction results are presented in Section 5.
Background
In this section, via an example, we will introduce our way of thinking about extracting information from a survey sample. In particular, a number of different sources of uncertainty are presented, which will then be included in a statistical model. It turns out that the model will comprise three different types of uncertainty. Section 3 includes all details of the model, and in the subsequent section likelihood-based estimators are derived. In the last section these estimators are utilized when finding predictors.
Example 2.1. Let there be a national survey which collects data about some specific production from a certain type of company. The survey is followed up once per year and it is repeated for, say, five years.
Altogether there exist 20 regions and each region consists of 10 subregions. The survey sample (units) consists of four companies from each subregion. Thus, in the survey, in total 800 companies are followed for five years. With the models presented in this article the aim is to obtain information about the companies in the subregions. Note that in reality the companies should be classified into strata, and then at least one company should be sampled from each stratum and each subregion. To simplify the presentation we will only have one stratum. However, in each subregion we assume that there is a lot of background information which we would like to take into account when predicting production in subregions or predicting production of unobserved units.
Small area models, also called small domain models, can be classified into two categories. One category is the area-level models category (Fay and Herriot 1979) and the other category is the unit-level models category (Battese et al. 1988). Both model categories are discussed in detail in a well-written book (Rao and Molina 2015). In this article we are thinking of area levels but if unit-level information would be available the proposed model can also be used.
Example 2.1 indicates the following use of notation. Let h_ij, i = 1, ..., m; j = 1, 2, ..., n_i; n = Σ_{i=1}^m n_i, be the direct estimates from the survey for the jth subarea within the ith area. The estimates are based on the survey design, where units have been sampled at the subarea level. Note that we will not discuss the underlying survey in this article; we only use the outcome of the survey, i.e. the direct estimators of the subareas. Statistics is partly about quantifying uncertainty. Often this is carried out by assuming that an estimate is a realisation of an estimator which follows some distribution. However, we also have cases where uncertainty is not described via distributions. Examples are rounding errors, specific types of truncations, identification of influential observations, outlier detection, model comparisons, the number of observations in a sample, etc. One choice of distribution for the estimator ĥ_ij would be the one generated by the survey design, i.e. how many units have been sampled, and under what model, would determine the distribution. However, for our purposes it is difficult to work with the probability space generated by the survey design. Instead, we think of the direct estimates ĥ_ij as misspecified quantities, since the survey sample distribution does not take into account those covariables we are going to use in the adjustment of ĥ_ij. The covariables can be used to correct for bias as well as to diminish the uncertainty in the estimate. For example, returning to our example, if the survey design does not take into account the size of companies, small companies can be overrepresented in the sample and then h_ij is wrongly estimated.
In order to derive an applicable model, let us start with the relation y_ij = h_ij + ε_ij, (2.1), where for notational convenience ĥ_ij has been replaced by y_ij. The joint distribution for {ε_ij} is specified via a zero-mean normal distribution with dispersion matrix V, where ε = (ε_11, ..., ε_{1,n_1}, ..., ε_{m,n_m}) is a row vector and V is supposed to be known and determined by the survey design. Often V is solely a function of the number of sampled units. Moreover, ε is defined on another probability space than the one generated by the survey design. However, the uncertainty in the direct estimates, due to the survey, is incorporated in the dispersion of ε. Furthermore, one basic assumption in this article is that some of the h_ij will be the same, i.e. the survey estimators are the same for clusters of subregions. Such an assumption is often realistic but has to be checked in practice. Under this assumption, instead of (2.1), the model can be written in matrix notation as y = b′C + ε, (2.3), where y = (y_11, ..., y_{1,n_1}, ..., y_{m,n_m}), (h_11, ..., h_{1,n_1}, ..., h_{m,n_m}) = b′C, b: k × 1 consists of unknown parameters and C: k × n is a usual design matrix describing the relations among the elements {h_ij}. Usually C consists of blocks of one-vectors, but in principle it can be arbitrary as long as it fits together with the covariate structures which will be presented later. For example, with three areas and four subareas in each area, C equals the 3 × 12 block matrix in which each row contains ones for the four subareas of the corresponding area and zeros elsewhere. The model means that a probability space is only connected to the modeling of {ĥ_ij}, i.e. the set of estimated h_ij, but we include in the model extra variation due to the survey design.
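A small numerical sketch of such a design matrix, for the example with three areas and four subareas per area; the numbers in b are of course only illustrative.

import numpy as np

# Block design matrix C (k x n) linking the n subareas to the k clusters of subareas
# that share the same survey parameter: blocks of one-vectors, one block per area.
m, n_per_area = 3, 4
C = np.kron(np.eye(m), np.ones((1, n_per_area)))   # shape (3, 12)
print(C)

# y' = b'C + eps': each element of b is shared by the four subareas of one area
b = np.array([10.0, 12.0, 9.0])
h = b @ C                                          # the n = 12 subarea means
print(h)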
In the introductory example it was stated that the survey was repeated five times. When survey studies are repeated, it will in this article be assumed that if there exists a linear trend, it is of the same form for all subareas. In this case there will be a matrix Y where each row follows Equation (2.3), apart from an unknown scalar multiplier. Thus, Equation (2.3) leads to the model Y = ABC + E, (2.4), where A is a design matrix which models the p repeated surveys (the within-subarea design matrix), B is a matrix of unknown parameters and E ∼ N_{p,n}(0, R, V), i.e. E is matrix normally distributed with a separable dispersion structure D[E] = D[vec E] = V ⊗ R, where ⊗ stands for the Kronecker product and R is an unknown positive definite matrix. The parameter R is used to model the dependency due to the repeated measurements, which exists because the observed units of the design have been repeatedly measured. No particular structure in R is assumed, since it would only make sense if we had precise knowledge about the mean structure, which is not the case in this article. For example, it does not make sense to assume an autocorrelated structure if the mean has not been established. The model in Equation (2.4) is often called the Growth Curve model, GMANOVA, or, more recently, the bilinear regression model (see von Rosen 2018, for basic results and references).
As an example of A we can take a matrix whose columns correspond to a constant, a linear and a quadratic term in time, meaning that there is a second order trend which is observed over five years. When expressing h_ij as a linear function of an unknown parameter vector b, as was done in Equation (2.3), we usually cannot say that the model is completely true for all subareas. Instead we believe that there is some random variation around a true imaginary model, i.e. we introduce a variance component. Thus, instead of Equation (2.4), the following model is set up: Y = ABC + UZ + E, (2.5), where U ∼ N_{p,l}(0, R_u, I_l) is independent of E, and Z is related to C because, in the model, usually via C, the observations are linked to the subareas, and Z helps to describe the random variation (errors) in the subareas. Therefore, formally, C(Z′) ⊆ C(C′), where C(·) denotes the column vector space. Additionally we will standardize these random effects by assuming ZV⁻¹Z′ = I_l, meaning that the model has been adjusted for a different number of subareas within an area, taking into account the uncertainty in the survey estimates. This has two important implications. One is that it simplifies the mathematical treatment of the model, and the second is that it becomes easier to compare predicted random effects. Now observed covariables will be introduced in the model. There exist different types of covariables, but we are not going to distinguish between them. For example, one type of covariables are those which are accounted for in the survey design (e.g. variables defining different strata), another type can be covariables observed in registries, and a third type can be baseline data which were available when a survey study with repeated measurements on units was initiated. The effect of the covariates will be modeled by a term B_1C_1, where B_1 stands for the unknown effect and C_1 collects the observed covariables, leading to the model Y = ABC + B_1C_1 + UZ + E. Moreover, suppose that there are a number of latent variables. These can be large clusters of variables, e.g. weather variables, socio-economic variables, chemical characteristics of soil and water, etc. A typical phenomenon of these clusters is that one knows that there should be an effect on the survey estimates, but it is difficult to express this via some functional relationship. Instead we will model the relationship via rank restrictions on parameter matrices and will consider the following model: Y = ABC + B_1C_1 + WF + UZ + E, where F: t × n is known and W is unknown of rank r(W) = r ≤ min(p, t). The idea is that all clustered variables appear in F and that these variables are governed by r unobserved latent variables (processes) which are incorporated in the model via the rank restrictions on W. It is important to note that p has to be larger than r, meaning that the number of underlying latent variables that can be modeled depends on how many times the survey has been repeated. Thus, we really see an advantage of performing repeated surveys when latent variables are supposed to exist, which indeed we believe is a common phenomenon.
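The following sketch simulates data from the random-effects part of the model, Y = ABC + UZ + E, with a second-order trend over five years and V = I assumed for simplicity; the covariate term B_1C_1 and the rank-restricted term WF are omitted, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
p, q, m, n_per = 5, 3, 3, 4            # 5 years, quadratic trend, 3 areas x 4 subareas
n, k, l = m * n_per, m, m

t = np.arange(1, p + 1)
A = np.vstack([np.ones(p), t, t**2]).T          # p x q within-survey design (2nd order trend)
C = np.kron(np.eye(m), np.ones((1, n_per)))     # k x n between-subarea design
Z = C / np.sqrt(n_per)                          # l x n, satisfies Z V^{-1} Z' = I_l when V = I

B   = np.array([[5.0, 6.0, 4.5],                # q x k fixed-effect parameters (illustrative)
                [0.8, 0.5, 1.0],
                [-0.05, -0.02, -0.08]])
R_u = 0.25 * np.eye(p)                          # random-effect row dispersion
R_e = 0.50 * np.eye(p)                          # error row dispersion (V = I assumed)

U = rng.multivariate_normal(np.zeros(p), R_u, size=l).T    # p x l, U ~ N(0, R_u, I_l)
E = rng.multivariate_normal(np.zeros(p), R_e, size=n).T    # p x n, E ~ N(0, R_e, I_n)
Y = A @ B @ C + U @ Z + E                                  # Eq. (2.5) without covariates/rank term
print(Y.shape)   # (5, 12): five repeated surveys for twelve subareas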
Detailed model specification
Let Y consist of the direct estimates of the subareas where each column corresponds to a subarea including p estimators from the p repeated measurements. When performing likelihood inference the direct estimates are used but in the presentation we will not distinguish between estimators and estimates.
Definition 3.1. Let

Y = ABC + B_1C_1 + WF + UZ + E,

where A: p × q, C: k × n, C_1: k_1 × n, F: t × n are all known matrices, C(F′) ⊆ C(C′), r(W) = r < min(p, t), Z: l × n with l < n, ZV⁻¹Z′ = I_l, U ∼ N_{p,l}(0, R_u, I_l), E ∼ N_{p,n}(0, R_e, V), and U and E are independently distributed. The parameters R_u and R_e are supposed to be unknown and positive definite, whereas V is known and determined from the survey.

When in Definition 3.1 B_1C_1 = 0, WF = 0 and UZ = 0, we have the classical growth curve model (GMANOVA) (see Potthoff and Roy 1964; von Rosen 2018). When WF = 0 and UZ = 0, the growth curve model with covariate information appears, which sometimes is identified as a mixture of the GMANOVA and MANOVA models; for references to this model, as well as to more general models, see Chinchilli and Elswick (1985), Verbyla and Venables (1988), von Rosen (1989) and Bai and Shi (2007). If B_1C_1 = 0 and WF = 0 we say that we have the growth curve model with random effects (see Ip et al. 2007), whereas if only WF = 0 holds, see Yokoyama and Fujikoshi (1992) and Yokoyama (1995), where similar models are considered and where references to earlier works can be found. However, usually in these works structures are put on the covariance matrices, which will not be discussed in this article.
Estimation of parameters
Let Q be a matrix of basis vectors such that C(Q) = C(V^{-1/2}C' : V^{-1/2}C_1' : V^{-1/2}Z'), where V^{1/2} is a symmetric square root of V in E ~ N_{p,n}(0, Σ_e, V). Further, let Q^o be any matrix of full rank satisfying C(Q^o) = C(C' : C_1' : Z')^⊥. We assume Q'Q = I_v, where v = r(C' : C_1' : Z'), v > l. A one-to-one transformation of the model in Definition 3.1, using Q, yields Equation (4.8). The identity in Equation (4.8) is postmultiplied by Γ, leading to the model in Equation (4.10), which is then split into two models. Let Γ = (Γ_1 : Γ_2): v × l, v × (v−l). Then we have three models which will be used when finding estimators: (i) YV^{-1/2}QΓ_1 = ABCV^{-1/2}QΓ_1 + B_1C_1V^{-1/2}QΓ_1 + WFV^{-1/2}QΓ_1 + UZV^{-1/2}QΓ_1 + EV^{-1/2}QΓ_1, with UZV^{-1/2}QΓ_1 ~ N_{p,l}(0, Σ_u, I_l) and EV^{-1/2}QΓ_1 ~ N_{p,l}(0, Σ_e, I_l) (4.11), together with the two models in Equations (4.12) and (4.13). The idea is to utilize Equations (4.12) and (4.13) when estimating B, B_1, W and Σ_e. To estimate Σ_u these estimators are inserted in Equation (4.11). Suppose that the estimators B̂, B̂_1, Ŵ and Σ̂_e have been obtained; under the assumption of no randomness in B̂, B̂_1 and Ŵ, an estimator Σ̂_u is formed. Here it is assumed that the corresponding difference is positive definite. If Σ̂_u is not positive definite, which should occur with some positive probability, this indicates that the data do not support the model presented in Definition 3.1 or that the estimation procedure has some deficiencies. It can be noted that the problem of negative variance components in mixed linear models has a long history; an early reference is Nelder (1954), and a more recent contribution to the discussion of the problem is provided by Molenberghs and Verbeke (2011). If we were to start with the model in Equation (2.5) and only consider D[Y] = Z'Z ⊗ Σ_u + V ⊗ Σ_e, we could avoid assuming Σ_u to be positive definite. However, in our model derivation, via U ~ N_{p,l}(0, Σ_u, I_l), Σ_u has to be positive definite. Thus, if a non-positive definite Σ̂_u is obtained, we have to either reformulate the model or change the estimator. One way to modify Σ̂_u is to work with its eigenvalues: if there are a few "small" (by absolute value) non-positive eigenvalues, these can be replaced by small positive quantities. This should of course take place with caution and common sense, but as far as we know there is no general recipe for how to handle a non-positive definite Σ̂_u. To estimate B̂, B̂_1, Ŵ and Σ̂_e, Equations (4.12) and (4.13) are merged into Equation (4.14), where Ẽ ~ N_{p,n−l}(0, Σ_e, I_{n−l}). After introducing suitable notation, Equation (4.14) is identical to Equation (4.15), which is an extended Growth Curve model (GMANOVA + MANOVA) with a reduced rank regression component. Furthermore, the likelihood function which corresponds to the model in Equation (4.15) equals L(B, B_1, W, Σ_e) = (2π)^{−p(n−l)/2} |Σ_e|^{−(n−l)/2} exp{−(1/2) tr[Σ_e^{−1}(X − ABD_1 − B_1D_2 − WD_3)( )']} (4.16), where we have used the convention that instead of (H)(H)' we write (H)( )' for any arbitrary matrix expression H. Our first observation is that the likelihood in Equation (4.16) is smaller than or equal to (2π)^{−p(n−l)/2}|Σ_e|^{−(n−l)/2} times a factor not depending on B_1, with equality if and only if a condition holds which, under some full rank conditions on D_2, determines B_1 as a function of B and W. Thus, if B and W can be estimated, the covariate effect described by B_1D_2 can also be estimated. The density in Equation (4.17) corresponds to the model in Equation (4.18), which is a Growth Curve model with latent variables (reduced rank effect).
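The ad-hoc eigenvalue adjustment mentioned above can be sketched as follows (Python with NumPy; the function name and the tolerance are our own illustrative choices, and the article itself notes that no general recipe exists):

```python
import numpy as np

def repair_covariance(sigma_hat, eps=1e-6):
    """Replace small non-positive eigenvalues of a symmetric matrix by a
    small positive value, one ad-hoc way to obtain a positive definite
    estimate (the threshold eps is an arbitrary illustrative choice)."""
    sigma_hat = (sigma_hat + sigma_hat.T) / 2          # symmetrize first
    vals, vecs = np.linalg.eigh(sigma_hat)             # real eigendecomposition
    vals_clipped = np.where(vals <= 0, eps, vals)      # lift non-positive eigenvalues
    return vecs @ np.diag(vals_clipped) @ vecs.T

# Example: an estimate with one slightly negative eigenvalue
sigma_u_hat = np.array([[1.0, 0.9], [0.9, 0.8]])
print(np.linalg.eigvalsh(sigma_u_hat))                 # one eigenvalue is negative
print(np.linalg.eigvalsh(repair_covariance(sigma_u_hat)))
```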
The model has been considered by Reinsel and Velu (1998) and von Rosen and von Rosen (2017), where also other references can be found. Now it is briefly described how B and W can be estimated. For notational convenience the model in Equation (4.18) is written in a more compact form. Note that C(D_3') ⊆ C(D_1') because C(F') ⊆ C(C'), which is essential for being able to obtain explicit estimators.
The upper bound of the likelihood function in Equation (4.20) is well known (see Srivastava and Khatri 1979, Theorem 1.10.4) and is given in Equation (4.21); equality in Equation (4.21) holds if and only if (n−l)Σ_e equals the corresponding residual sum-of-squares matrix. Hence, if B and W are estimated, then Σ_e is also estimated. Maximizing the log-likelihood function is equivalent to minimizing the determinant of that residual matrix, which will now be carried out. It is worth noting that, since r(W) = r, the matrix W can be factored into two terms, i.e. W = W_1W_2, where W_1 and W_2 are of size p × r and r × t, respectively. Depending on the knowledge about the model, W_1 and W_2 can be interpreted, but there are also cases where the factorization has no clear interpretation.
Following the approach presented in Kollo and von Rosen (2005, Chapter 4), one obtains Equations (4.22) and (4.23), and equality in Equation (4.22) holds if and only if ABD_1 = P_{A,S_1}(XP_{D_1'} − W_1W_2D_3). Throughout the article it is supposed that S_1 is positive definite, which holds with probability 1. Thus, B as a function of W = W_1W_2 can be obtained. Moreover, Equation (4.25) holds, with the quantities defined in Equation (4.26), and the equality in Equation (4.25) holds if and only if Equation (4.27) is satisfied. Given W_1, the linear system of equations in Equation (4.27) is consistent and hence W_2 can always be estimated as a function of W_1, as in Equation (4.28), under the assumption that r(D_3) = k_1 and C(A) ∩ C(W_1) = {0}. The only parameter on the right-hand side of Equation (4.25) which is left to estimate is W_1. The right-hand side of Equation (4.25) can be factored so that H: p × r(A) satisfies T_1'S_2^{-1}T_1 = HH', M: (p − r(A)) × r is semi-orthogonal of rank r, and the remaining factor is a positive definite matrix. The square root in M is supposed to be symmetric. Due to the Poincaré separation theorem (see Rao 1979), a lower bound is obtained in terms of the eigenvalues λ_1 ≥ ⋯ ≥ λ_{p−r(A)} of this positive definite matrix, which do not depend on any unknown parameter. The lower bound is then attained if M consists of the corresponding eigenvectors v_1, ..., v_r. Let M̂ = (v_1, ..., v_r). Therefore, a maximum likelihood estimator of W_1 is found if we can find W_1 satisfying the equality in Equation (4.29). Since M̂'M̂ = I_r, an appropriate choice of W_1 can be written down explicitly. There remains an unresolved problem of presenting all solutions to Equation (4.29), but this is not considered here.
Together with Equation (4.28), this yields that ŴD_3 = Ŵ_1Ŵ_2D_3. Now, the obtained results will be stated in the following theorem.
Theorem 4.1. Let the direct estimates of a survey follow the model given in Definition 3.1. Moreover, all matrices in the statements have earlier been defined in this section.
Prediction
Small area estimation is very often about prediction of unobserved units (subareas). For our model, due to the normality assumptions, it is relatively straightforward to construct predictors, in particular if the explicit estimators of the previous section are utilized. The strategy will be to first predict U, based on the observed data, and thereafter predict the unobserved units. For the prediction of U the conditional mean E[U|Y] is crucial, and thus the joint distribution of U and Y will be derived. According to Definition 3.1, U and E are independently distributed, so (vecU, vecY) is normally distributed; from its mean and dispersion matrix, the conditional expectation of vecU is obtained in terms of vec(Y − ABC − B_1C_1 − WF), and the next proposition can be stated.
Proposition 5.1. Let the model be given in Definition 3.1 and use the notation introduced earlier in Section 4.
i. The predicted value Û is given by an expression involving Σ̂_u, Σ̂_e, the estimated ABC, B̂_1C_1 and ŴF presented in Theorem 4.1, and Y, which consists of the subarea estimates.
ii. Let Y_o consist of the unobserved direct estimates, corresponding to the non-sampled units. The corresponding covariables are available and are denoted C_o, C_{1o}, F_o and Z_o. Then the predictor of Y_o follows, where vecÛ is given in (i).
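As a rough illustration of how such a plug-in predictor can be assembled once the estimators are available, consider the following sketch (Python with NumPy). It only mirrors the structure of the model; the exact expression of Proposition 5.1 is given in the article, and all argument names are our own.

```python
import numpy as np

def predict_unobserved(AB_hat, B1_hat, W_hat, U_hat, C_o, C1_o, F_o, Z_o):
    """Schematic plug-in prediction of non-sampled subareas: the estimated
    fixed part is evaluated at the new covariables and the predicted area
    effects U_hat are carried over through Z_o."""
    return AB_hat @ C_o + B1_hat @ C1_o + W_hat @ F_o + U_hat @ Z_o
```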
Funding
This research has been supported by The Swedish Foundation for Humanities and Social Sciences (P14-0641:1) and The Swedish Natural Research Council (2017-03003). | 5,482.4 | 2020-07-02T00:00:00.000 | [
"Mathematics"
] |
Threshold-activated transport stabilizes chaotic populations to steady states
We explore Random Scale-Free networks of populations, modelled by chaotic Ricker maps, connected by transport that is triggered when the population density in a patch is in excess of a critical threshold level. Our central result is that threshold-activated dispersal leads to stable fixed populations for a wide range of threshold levels. Further, suppression of chaos is facilitated when the threshold-activated migration is more rapid than the intrinsic population dynamics of a patch. Additionally, networks with a large number of nodes open to the environment readily yield stable steady states. Lastly, we demonstrate that in networks with very few open nodes, the degree and betweenness centrality of the node open to the environment have a pronounced influence on control. All qualitative trends are corroborated by quantitative measures reflecting the efficiency of control and the width of the steady state window.
Introduction
Nonlinear systems, describing both natural phenomena as well as human-engineered devices, can give rise to a rich gamut of patterns ranging from fixed points to cycles and chaos. An important manifestation of our understanding of a complex system is the ability to control its dynamics, and so the search for mechanisms that enable a chaotic system to maintain a fixed desired activity has attracted enormous research attention [1,2]. In early years the focus was on controlling low-dimensional chaotic systems, and guiding chaotic states to desired target states [3][4][5][6]. Efforts then moved on to the arena of lattices modelling extended systems, and the control of spatiotemporal patterns in such systems [7]. With the advent of network science to describe connections between complex sub-systems, the new challenge is to find mechanisms or strategies that are capable of stabilizing these large interactive systems [8].
In this work we consider a network of population patches [9,10], or "a population of populations" [11]. Now, in analogy with reaction-diffusion processes, diffusive coupling has been very widely studied as a model of connections between population patches, with most models of metapopulation dynamics considering density dependent dispersal [12][13][14][15][16]. However, here we will investigate a different class of coupling, namely threshold-activated transport. The broad scenario underlying this is that each population patch has a critical population density it can support, and when the population in the patch, due to its inherent growth dynamics (which may be chaotic) exceeds this threshold, the excess migrates to neighbouring patches. The neighbouring patch on receiving the migrant population may become over-critical too, triggering further migrations. So this form of coupling is pulsatile and inter-patch transport occurs only when there is excessive build-up of population density in a patch, which may initiate a cascade of transport events [6,17]. Though much less explored, in many situations this form of coupling may be expected to offer a more appropriate description of the connections between spatially distributed population patches.
In this study we will then aim to obtain broad insights on the dynamics of a complex network under threshold-activated transport, through the specific illustrative example of spatially distributed populations connected by threshold-activated migrations. Our principal question will be the following: what is the effect of threshold-activated dispersal on the dynamical patterns emerging in the network, and in particular, can threshold-activated coupling serve to stabilize the intrinsically chaotic populations in the network to regular behaviour, such as steady states or regular cycles? In the sections below, we will first discuss details of the nodal dynamics, as well as the salient features of pulsatile transport triggered by threshold mechanisms. We will then go on to demonstrate, through qualitative and quantitative measures, that such threshold-activated connections manage to stabilize chaotic populations to steady states. Further we will explore how the critical threshold that triggers the migration, and the timescales of the nodal dynamics vis-a-vis transport, influences the emergent dynamics.
Model
Consider a network of N sub-systems, characterized by a variable x_n(i) at each node/site i (i = 1, . . ., N) at time instant n. Specifically, we study a prototypical map, the Ricker (exponential) map, at the local nodes. Such a map has been considered as a reasonably accurate model of population growth of species with non-overlapping generations [18]. It is given by a functional form in which r is interpreted as an intrinsic growth rate and the (dimensionless) x_n(i) is the population scaled by the carrying capacity at generation n at node/site i. We consider r = 4 in this work, so that an isolated uncoupled population patch displays chaotic behaviour. Note that the results we subsequently present hold qualitatively for a wide class of unimodal nonlinear maps, of which the Ricker map is a specific example. The coupling in the system is triggered by a threshold mechanism [6,[19][20][21]. Namely, the dynamics of node i is such that if x_{n+1}(i) > x_c, the variable is adjusted back to x_c and the "excess" x_{n+1}(i) − x_c is distributed to the neighbouring patches. The threshold parameter x_c is the critical value the state variable has to exceed in order to initiate threshold-activated coupling. So this class of coupling is pulsatile, rather than the more usual continuous coupling forms, as it is triggered only when a node exceeds the threshold.
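For concreteness, the local update and the threshold clipping can be sketched as follows (Python; we assume the standard Ricker form x_{n+1} = x_n exp(r(1 − x_n)), since the article only specifies a unimodal exponential map with growth rate r that is chaotic at r = 4):

```python
import numpy as np

def ricker(x, r=4.0):
    # Standard Ricker form (our assumption; the article only states that a
    # unimodal exponential map with intrinsic growth rate r is used).
    return x * np.exp(r * (1.0 - x))

def threshold_update(x, x_c):
    """Chaotic update of one patch followed by threshold clipping.
    Returns the clipped state and the excess to be sent to neighbours."""
    x_new = ricker(x)
    excess = max(x_new - x_c, 0.0)
    return min(x_new, x_c), excess
```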
Specifically, we study such population patches coupled in a Random Scale-Free network, where the network of underlying connections is constructed via the Barabasi-Albert preferential attachment algorithm, with the number of links of each new node denoted by parameter m [22]. The resultant network is characterized by a fat-tailed degree distribution, found widely in nature. The underlying web of connections determines the "neighbours" to which the excess is equi-distributed. Further, certain nodes in the network may be open to the environment, and the excess from such nodes is transported out of the system. Such a scenario will model an open system, and such nodes are analogous to the "open edge of the system". We denote the fraction of open nodes in the network, that is the number of open nodes scaled by system size N, by f open . In this work we also consider closed systems with no nodes open to the environment, where nothing is transported out of the system, i.e. f open = 0.
So the scenario underlying this is that each population patch has a critical population density x_c it can support, and when the population in the patch, due to its inherent chaotic growth dynamics, exceeds this threshold, the excess population moves to a neighbouring patch. The neighbouring patch, on receiving the excess, may exceed the threshold too. Thus a few over-critical patches may initiate a domino effect, much like an "avalanche" in models of self-organized criticality [23] or a cascade of failures in models of coupled map lattices [24]. So the main mechanisms for mitigating excess are the redistribution of excess, which ensures that under-critical nodes will absorb some excess population, and the transport of excess out of the network via the open nodes. All transport activity in the network stops, namely the cascade ceases, when all patches are under the critical value, i.e. all x(i) < x_c.
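A minimal sketch of this relaxation cascade is given below (Python with networkx; the sweep order, the cap on relaxation sweeps standing in for T_R, and the network size and seed are all our own illustrative choices):

```python
import networkx as nx

def relax(state, graph, open_nodes, x_c, max_sweeps):
    """Redistribute excess until all patches are under-critical or the
    relaxation budget (max_sweeps, playing the role of T_R) runs out.
    Excess released at an open node simply leaves the system."""
    for _ in range(max_sweeps):
        over = [i for i in graph if state[i] > x_c]
        if not over:
            break
        for i in over:
            excess, state[i] = state[i] - x_c, x_c
            if i in open_nodes:
                continue                          # excess exits via the open edge
            nbrs = list(graph.neighbors(i))
            for j in nbrs:
                state[j] += excess / len(nbrs)    # equi-distribute to neighbours
    return state

# Example: Barabasi-Albert network with m = 1 and one open node (illustrative values)
G = nx.barabasi_albert_graph(100, 1, seed=0)
state = {i: 1.5 for i in G}                       # start every patch over-critical
state = relax(state, G, open_nodes={0}, x_c=1.0, max_sweeps=5000)
```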
So there are two natural time-scales here. One time-scale characterizes the chaotic update of the populations at node i. The other time scale involves the redistribution of population densities arising from threshold-activated transport. We denote the time interval between chaotic updates, namely the time available for redistribution of excess resulting from thresholdactivated transport processes, by T R . This is analogous to the relaxation time in models of selforganized criticality, such as the influential sandpile model [23]. T R then indicates the comparative time-scales of the threshold-activated migration and the intrinsic population dynamics of a patch.
Results
We have simulated this threshold-coupled scale-free network of populations, under varying threshold levels x c (0 x c 2). We considered networks with varying number of open nodes, namely systems that have different nodes/sites open to the environment from where the excess population can migrate out of the system. Further, we have studied a range of redistribution times T R , capturing different timescales of migration vis-a-vis population change [25]. With no loss of generality, in the sections below, we will present salient results for Random Scale-Free networks with m = 1, and specifically demonstrate, both qualitatively and quantitatively, the stabilization of networks of chaotic populations to steady-states under threshold-activated coupling.
Emergence of steady states
First, we consider the case of large T R , where the transport processes are fast compared to the population dynamics, or equivalently, the population dynamics of the patch is slow compared to inter-patch migrations. Namely, since the chaotic update is much slower than the transport between nodes, the situation is analogous to the slow driving limit [23]. In such a case, the system has time for many transport events to occur between chaotic updates, and avalanches can die down, i.e. the system is "relaxed" or "under-critical" between the chaotic updates. So when the transport/migration is significantly faster than the population update (namely the time between generations), the system tends to reach a stationary state where all nodal populations are less than critical.
An illustrative case of the state of the nodes in the network is shown in Fig 1. Without much loss of generality, we display results for a network of size N = 100, for a representative large value of the redistribution time T_R = 5000. It is clear that all the nodes in the network get stabilized to a fixed point, namely all population patches evolve to a stable steady state.
The next natural question is the influence of the critical threshold x c on the emergent dynamics, and this will be demonstrated through a series of bifurcation diagrams. Note that in all the bifurcation diagrams presented in this work we will display on the vertical axis the state x of a representative site in the network, over several time steps after transience, with respect to threshold x c which runs along the horizontal axis.
It is clearly evident from the bifurcation diagrams in Fig 2 that a large window of threshold values (0 ≤ x_c < 1) yields spatiotemporal steady states in the network [26][27][28]. It is also apparent that the degree of the open node does not affect the emergence of steady states here. Further, for threshold values beyond the window of control to fixed states, one obtains cycles of period 2. Namely, for threshold levels 1 < x_c < 2 the populations evolve in regular cycles, where low population densities alternate with high population densities. This behaviour is reminiscent of the field experiment conducted by Scheffer et al [29], which showed the existence of self-perpetuating stable states alternating between blue-green algae and green algae. We discuss the underlying reason for this behaviour in S1 Appendix, and offer analytical reasons for the range of period-1 and period-2 behaviour by considering a single threshold-limited map.
So our first result can be summarized as follows: when the redistribution time T_R is large and the critical threshold x_c is small, we have very efficient control of networks of chaotic populations to steady states. This suppression of chaos and quick evolution to stable steady states occurs irrespective of the number of open nodes.
Influence of the redistribution time and the number of open nodes on the suppression of chaos
Now we focus on the network dynamics when T R is small, and the time-scales of the nodal population dynamics and the inter-patch transport are comparable. So now there will be nodes that may remain over-critical at the time of the subsequent chaotic update, as the system does not have sufficient time to "relax" between population updates. The network is then akin to a rapidly driven system, with the de-stabilizing effect of the chaotic population dynamics competing with the stabilizing influence of the threshold-activated coupling. So for small T R , the system does not get enough time to relax to under-critical states and so perfect control to steady states may not be achieved.
Importantly now, the fraction of open nodes f open is crucial to chaos suppression. In general, a larger fraction of open nodes facilitates control of the intrinsic chaos of the nodal population dynamics, as the de-stabilizing "excess" is transported out of the system more efficiently. We investigate this dependence, through space-time plots of representative networks with varying number of open nodes and redistribution times (cf. Fig 3), and through bifurcation diagrams of this system with respect to critical threshold x c (cf. Fig 4).
As a limiting case, we also studied the spatiotemporal behaviour of threshold-coupled networks without open nodes. Here the network of coupled population patches is a closed system. Again the intrinsic chaos of the populations is suppressed to regular behaviour for large ranges of threshold values. However, rather than steady states, one now obtains period-2 cycles. This is evident through the bifurcation diagram of a closed network (cf. Fig 5) vis-a-vis networks with at least one open node (cf. Fig 2). Also, note the similarity of the bifurcation diagram of the closed system with that of a system with low T_R and few open nodes. This similarity stems from the underlying fact that in both cases the network cannot relax to completely under-critical states by redistribution of excess between the population updates, either due to paucity of time for redistribution (namely low T_R) or due to the absence of open nodes to transport excess out of the system. Further, we explore the case of networks with very few (typically 1 or 2) open nodes, and study the effect of the degree and betweenness centrality of these open nodes on the control to steady states. Betweenness centrality b(i) of a node i is determined by the fraction of shortest paths between pairs of other nodes that pass through i. If an open node has more links to other nodes (namely, is of high degree), this would naturally facilitate the transport of excess to it, in parallel, through its many links. Also, an open node with high betweenness centrality implies that the node lies on many shortest paths connecting pairs of nodes. So this too should aid the process, as excess can reach the open node in fewer time steps.
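Both node properties are straightforward to compute; a small sketch for choosing the open node by either criterion is shown below (Python with networkx; the network size and seed are arbitrary illustrative values):

```python
import networkx as nx

# Rank candidate open nodes by degree and by betweenness centrality
# (in the study the control is then compared across high and low centrality choices).
G = nx.barabasi_albert_graph(100, 1, seed=1)
deg = dict(G.degree())
btw = nx.betweenness_centrality(G)

open_by_degree = max(deg, key=deg.get)       # open node with highest degree
open_by_betweenness = max(btw, key=btw.get)  # open node on most shortest paths
print(open_by_degree, open_by_betweenness)
```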
Our expectations above are indeed verified through extensive simulations, where we observe the following: when there are very few open nodes, the degree and betweenness centrality of the open node are important, with the region of control being large when the open node has a high degree/betweenness centrality, and vice versa [30]. This interesting behaviour is clearly seen in the bifurcation diagrams shown in Fig 6a-6d, which demonstrate that the degree and betweenness centrality of the open node have a pronounced influence on control.
Quantitative measures of the efficiency of chaos suppression
We now investigate a couple of quantitative measures that provide indicators of the efficiency and robustness of the suppression of chaos in the network. The first quantity is the average redistribution time ⟨T⟩, defined as the time taken for all nodes in a system to be under-critical (i.e. x(i) < x_c for all i), averaged over a large sample of random initial states and network configurations. So ⟨T⟩ provides a measure of the efficiency of stabilizing the system, and reflects the rate at which the de-stabilizing "excess" is transported out of the network. Fig 7 shows the dependence of ⟨T⟩ on system size N. Clearly, while larger networks need longer redistribution times in order to reach steady states, this increase is only logarithmic. This can be rationalized as follows: the average redistribution time needed for all nodes in a system to be under-critical reflects the average time taken by the excess from any over-critical node in the network to reach some open edge. So this should be determined by the diameter of the random scale-free graph, namely the maximum of the shortest path lengths over all pairs of nodes in the network, which scales with network size as ln N. This is further corroborated by calculating the average fraction of nodes in the network that go to steady states with respect to the redistribution time T_R, for networks of different sizes, with varying number of open nodes (cf. Fig 8). Clearly, for small systems with sufficiently high T_R, essentially all nodes reach steady states. These trends are corroborated quantitatively by the dependence of ⟨R⟩ (the average range of threshold values yielding steady states) and ⟨T⟩, displayed in Figs 11 and 12(b). The effect of the degree of the open node is less pronounced, though it also has a discernible effect on the suppression of chaos. As evident from Fig 12(a), when the open node has a higher degree, it has a higher ⟨R⟩, indicating that open nodes with higher degree yield larger steady state windows.
Finally, note that the different centrality measures are most often strongly correlated and therefore do not offer new insights. For instance, we have also studied the network with respect to open nodes of varying closeness centrality, where closeness centrality is the average length of the shortest path between the node and all other nodes in the graph. We find that qualitatively same broad trends emerge with respect to closeness centrality, as observed for betweenness centrality (cf. Fig 11 and its inset).
These results can be understood intuitively as follows: the emergence of steady states is crucially dependent on the efficacy with which the excess is transported out of the network. Namely, excess population from any over-critical node in the network needs to reach an open node within T_R steps. So if an open node has high degree, transport of excess is facilitated, as the excess can flow to the node simultaneously through its many links. Further, one can rationalize the effect of the betweenness centrality of an open node on the stabilization of the steady state, as betweenness is a measure of centrality in a graph based on shortest paths. If an open node has high betweenness centrality, a large number of shortest paths pass through it. This naturally aids the cascading process, as excess reaches the open node in fewer time steps. The trends expected from these arguments are corroborated in the results from simulations shown in Figs 11 and 12.
Conclusions
We have explored Random Scale-Free networks of populations under threshold-activated transport. Namely we have a system comprising of many spatially distributed sub-populations connected by migrations triggered by excess population density in a patch. We have simulated this threshold-coupled Random Scale-Free network of populations, under varying threshold levels x c . We considered networks with varying number of open nodes, namely systems that have different nodes/sites open to the environment from where the excess population can migrate out of the system. Further, we have studied a range of redistribution times T R , capturing different timescales of migration vis-a-vis population change.
Our first important observation is as follows: when the redistribution time T_R is large and the critical threshold x_c is small (0 ≤ x_c < 1), we have very efficient control of networks of chaotic populations to steady states. This suppression of chaos and quick evolution to stable steady states occurs irrespective of the number of open nodes. Further, for threshold values beyond the window of control to fixed states, one obtains cycles of period 2. Namely, for threshold levels 1 < x_c < 2 the populations evolve in regular cycles, where low population densities alternate with high population densities. This behaviour is reminiscent of field experiments [29] that show the existence of alternating states. We offer an underlying reason for this behaviour through the analysis of a single threshold-limited map.
For small redistribution time T_R, the system does not get enough time to relax to under-critical states and so perfect control to steady states may not be achieved. Importantly, now the number of open nodes is crucial to chaos suppression. We clearly demonstrate that when there are enough open nodes, the network relaxes to the steady state even for low redistribution times. So more open nodes yield better control of the intrinsic chaos of the nodal population dynamics to fixed populations. We corroborate all qualitative observations by quantitative measures such as the average redistribution time, defined as the time taken for all nodes in a system to be under-critical, and the range of threshold values yielding steady states.
We also explored the case of networks with very few (typically 1 or 2) open nodes in detail, in order to gauge the effect of the degree and betweenness centrality of these open nodes on the control to steady states. We observed that the degree of the open node does not have a significant influence on chaos suppression. However, the betweenness centrality of the open node is important, with the region of control being large when the open node has a high betweenness centrality, and vice versa. The emergence of steady states in this system not only suggests potential underlying mechanisms for the stabilization of intrinsically chaotic populations, but also has bearing on the broad problem of control in complex networks. When a steady state is the desired state of the nodal populations in the network, the threshold mechanism offers a very simple and potent strategy for achieving this, as we have demonstrated clearly. If the aim is to prevent steady states, as may be the case in variants of this model relevant to neuronal dynamics, our results suggest what threshold levels need to be avoided in order to prevent evolution to global fixed points. Note that a large class of control strategies entail complicated algorithms to calculate feedback, and these require knowledge of the global network topology and details of the network dynamics, which are often unknown. Here, on the other hand, the nodes respond independently at the local level to a simple threshold limiter condition, requiring knowledge of only the local state at any point in time.
Lastly, interestingly, analogs of this class of coupling have been realized in CMOS circuit implementation using pulse-modulation approach [32,33]. So some of these results may be of potential interest to the engineering community as well. In the biological context, some experiments have studied similar dynamics in replicate laboratory metapopulations of Drosophila [34]. So our results have the potential to be demonstrated in extensions of such experiments in the future.
In summary, threshold-activated transport yields a very potent coupling form in a network of populations, leading to robust suppression of the intrinsic chaos of the nodal populations on to regular steady states or periodic cycles. So this suggests a mechanism by which chaotic populations can be stabilized rapidly through migrations or dispersals triggered by excess population density in a patch.
Supporting information S1 Text. Appendix. Analysis of a single population patch under threshold-activated transport. (PDF) | 5,324.2 | 2017-04-27T00:00:00.000 | [
"Biology"
] |
Synergy of Tackling Grand Challenges – From the Business Diplomacy's Perspective
In a fast-developing and uncertain world, grand challenges emerge as a collection of programs encouraging creativity to address significant worldwide development and health issues. It is essential that grand challenges are debated at various levels comprising the spheres of policymaking, publicity, and academics worldwide. Depending on the urgency of a challenge, the opinion and prioritisation of stakeholders may vary. Nevertheless, stakeholders demand collective business actions to handle these grand challenges jointly. A synergistic approach to business strategy draws the attention of scientific researchers and practitioners in the way that it could combine many crucial factors to create a dynamic in business. By applying a critical and integrative literature review, this paper aims to conceptualise the creation of competitive advantages and innovative dynamics from the perspective of business diplomacy. The consonance between business diplomacy, which is about stakeholders' engagement and negotiation, and shared value creation will ensure the operation of a company. This is a synergistic approach, as business diplomacy and shared value creation capture the significant profit of synergy. This research will analyse the interconnection between business diplomacy and shared value creation based on synergy.
Introduction
A synergistic approach to business strategy draws the attention of scientific researchers and practitioners in the way that it can combine many crucial factors to create a dynamic in business. It is essential in the context that grand challenges are debated throughout the world. Stakeholders demand more collective actions from firms to handle these grand challenges jointly. Internationalising and internationalised companies risk losing legitimacy and their social license to operate, and acquiring a bad reputation. To safeguard their reputational image, many companies have opted for CSR activities to avoid being confronted by stakeholders.
Nonetheless, the reliability of CSR is decreasing because scholars are exposing the dark side of CSR, namely economic benefits only for companies rather than for society and companies as a whole. Creating shared values is an excellent choice to prove that a company is thinking of values for its stakeholders and society when implementing its business activities. It is also cost-beneficial for companies. Moreover, the consonance between business diplomacy, which is about stakeholders' engagement and negotiation, and shared value creation will ensure the operation of a company. This is a synergistic approach, as business diplomacy and shared value creation capture the significant profit of synergy. This research conceptualises how firms could create their competitiveness and innovative dynamics as synergistic consequences of jointly tackling grand challenges. While creating shared values implies mutual understanding and common objectives between businesses and stakeholders, business diplomacy-related studies explain how this implication could be achieved.
Innovative dynamics and competitive advantages
Market globalisation offers business opportunities for firms to contribute to the development of the national economy and of the places where they have business operations (Ordeix-Rigo, 2009). The internationalisation of firms is a complex strategy in the present day, as competitive advantages are changed by non-business stakeholders (Ruël, 2020). Historically, dynamic capabilities were considered a driver for enterprise-level competitive advantage in rapidly changing technological environments (Teece, 2007). As part of a dynamic capacity-building approach, organisations can learn to adapt to changing environments by defining new products and processes (Teece, 2018). Competitive advantages will be subject to changes by economic and non-economic stakeholders from the business environmental spheres (Henisz, 2016). Egea et al. (2020) claimed that drivers of competitiveness changes were related to sustainability factors. Yeow et al. (2018) suggested that enterprises with solid dynamic capabilities could profitably adapt to environmental changes by aligning organisational changes and competencies. Notably, dynamic capabilities in the management literature are related to the use and distribution of organisational resources. In this sense, resources are directed to enhance competitiveness and address the challenges posed by rapid changes in the environment (Lin et al., 2016; Zollo & Winter, 2002). Scholars proposed that dynamic capabilities stemmed from the resource-based view, where firms allocated their resources to innovate processes or technologies to survive business environmental changes (García-Leonard et al., 2023; Peteraf et al., 2013; Yeow et al., 2018).
In the wake of environmental changes, companies face increased demand for addressing social and political issues (Fitzpatrick et al., 2020). These demands are addressed by allocating firms' resources; however, they must be aligned with the firms' business interests (Bolewski, 2018; Marques, 2018). Stakeholders' demands increase, so firms must allocate resources to address them. This assists companies in ensuring their responsibility and accountability towards society.
In this way, firms can survive environmental changes by employing business diplomacy. Firms that apply a business diplomacy agenda will likely allocate their resources to pursue diplomatic aims and initiatives (Ordeix-Rigo and Duarte, 2009).
In a highly competitive landscape, business diplomacy helps firms gain competitiveness to secure their position in the market (Henisz, 2016). Nobre (2017) and Willigen (2020) claimed that, through the lens of diplomatic research, a firm acts as its own diplomatic agent to pursue and protect its business interests. Tran (2023) stated that corporate social responsibilities play a role as the soft power of firms to enhance their influence in the market. Hence, firms will acquire more dynamic capabilities if they implement good business diplomacy strategies. Dynamic capabilities define the extent to which organisational resources are allocated to scan and seize new business opportunities during environmental changes (Arifin and Frmanzah, 2015). Firms acting as their own diplomatic agents will be discussed in the section below.
A conceptualisation of business diplomacy from the perspective of mainstream diplomacy
The conceptualisation of diplomacy is constantly evolving around negotiation, building and maintaining relationships from one state to other states, mediation, and creating a network with a specific subject according to the type of diplomacy. Based on the above essences, business diplomacy is conceptualised with reference to the business sector, where either private or public companies implement business diplomatic activities.
The conceptualisation would stretch the notion of diplomacy out to the business sector. The nature of diplomacy changed from state-to-state to state-to-multistakeholders with the emergence of globalisation. In addition to the proliferation of parties, negotiations must address the diversification of political identities and ways of life (Scholte, 2008). Heine (2013) stated that globalisation and the development of communication led to the growth of several international actors such as NGOs, companies, social communities, etc. Modern diplomacy is where the government and other parties collaborate in various diplomatic networking activities (Cooper, 2013). Ruel (2013) and Tran (2023) defined business diplomacy as building and maintaining a positive relationship and network among host state representatives and nongovernmental representatives. This is to preserve MNCs' legitimacy and license to operate. Interactions between firms and stakeholders also affect firms' reputational capital and capacity to form and impact the operational environment (Alammar and Pauleen, 2022; Saner and Yiu, 2014). Alammar and Pauleen (2016) and Marschlich and Ingenhoff (2022) have expanded the topic by including internal and external business players. Therefore, MNCs would leverage the premise of CSR, engaging with stakeholders to protect their reputational image and validity (Amann et al., 2007; Tran, 2023).
Synergy approach from business diplomacy and value creation
Andersen et al. (1959) defined synergy as a concept describing the acquisition of resources for more competitiveness and the ability to adapt to external changes and pressures in the business environment. Weber and Dholakia (2000) justified the M&A strategy by using a marketing synergistic approach to financial development from the standpoints of related-industry companies, with a wide range of financial measurement indicators such as stock prices, revenues, or investments. Scholars examined the economic benefits of synergy, conceived of as return on investment via four major types: sales, operations, investment, and management (Holtström and Anderson, 2021; Zollo and Meier, 2008). Holtström and Anderson (2021, p. 29) outlined the list of "market power synergy, operational synergy, management synergy, and financial synergy". Scholars also conceptualised synergy toward two broader and more precise separate ramifications, namely "a complementing effect (using resources more efficiently) and a synergy effect (using the companies' unique resources)".
From this point, synergy could capture companies' societal and value-creation facets in line with recent social demands. Social well-being must be maintained at a particular level in industrialised nations through consistent economic expansion.
Business diplomacy plays a role in harmonising companies and society in the host country (White, 2015; Windsor, 2018). Businesses have always encountered great difficulty managing the alteration of various perspectives on their position in a broader political and social environment in the host country, especially when those perspectives contrast with their economic goals or corporate interests (Muldoon, 2015). The company's operations and strategies must be more directly correlated with the larger social values and principles of CSR (Tran, 2023a; Ingenhoff and Marschlich, 2019). CSR is now considered crucial leverage for negotiating for and enhancing international companies via business diplomacy (Tran, 2023b; Yiu and Saner, 2017).
The business strategies of numerous top firms in major worldwide businesses progressively include social and economic challenges. CSR integrated with business diplomacy could be a perfect synergistic factor that companies could utilise to optimise their operation and gain legitimacy and social license (Saner, 2019; Weber and Larsson-Olaison, 2017). The diplomatic agenda toolbox includes research and development programs, funding or sponsorship activities, partnerships with local communities, etc., in the host countries (Saner and Yiu, 2014). CSR activities act as either instrumental or political tools to address societal sustainability issues, enhancing visibility and influence (Marschlich and Ingenhoff, 2021). Accordingly, CSR can be regarded as a diplomatic action due to its impacts on policymaking, public opinion agendas, and societal transformation (White et al., 2011).
To this end, firms engaging in a business diplomacy agenda could create synergy effects which lead to positive outcomes for both firms and stakeholders. Firms could address and jointly solve social, environmental and economic issues from the stakeholders' side. On the other hand, firms could gain and maintain legitimacy and a social license to operate, along with an increase in reputational capital and market competitiveness. Hence, companies must enhance their dynamic capabilities and innovations to cope with environmental changes and meet stakeholders' demands (Breznik, 2012). These outcomes are a result of synergy effects for both firms and stakeholders. Delić et al. (2016) perceived that the synergy effect resulted in multiple positive outcomes for companies, but that it was vital for them to recognise common objectives with stakeholders. The above section indicated that CSR could be regarded as a political and instrumental tool to influence public opinion, policymakers, civil communities, etc. Oetzel and Doh (2009) have recognised that NGOs and social communities could potentially increase risks by drawing attention to the detrimental effects of foreign investment and influence. In nations where innovative incentives fail, the social costs of certain items and the legal responsibility they cause outweigh the value of the business assets, causing negative externalities (Gande et al., 2020). Corporate shared value (CSV) is not generally corporate social responsibility but rather a novel strategy for achieving economic prosperity and emphasising the link with social welfare (Baldo, 2014). From this perspective, CSV is intrinsic to a company's prosperity and competitive advantage. This strategy could entail an innovative and improved level of cooperation.
Business diplomacy and creating shared values as a synergy for grand challenges.
Global challenges have captured the attention of scholars in business management as well as companies. Aside from lucrative operations, companies are joining hands in dealing with international issues. Furthermore, society, communities, NGOs, and governments have stricter demands for collective actions from the business sector.
Embeddedness
Entrepreneurs are seen as deeply ingrained in their environment and as facilitators of social learning. Thus, companies should be embedded in socio-economic issues.
Stakeholders
The "embeddedness" could formulate "performance criteria" that involve stakeholders who could justify companies' legitimacy based on companies' practices of business activities.
Institutions
This theory could facilitate integrating companies' business activities into the host's societal and economic issues.
Design
This approach aims to structuralise a connective and progressive governance and management system attached to sociopolitical and economic issues in the host country.
Process
This term refers to cooperating with stakeholders to create an ecosystem that could leverage co-evolution in accordance with SDGs.
Effectuation
This approach reflects companies' perspective on the grand challenges and the opportunities around them. Furthermore, flexibility and the capacity to adapt in uncertain contexts are emphasised.
Source: built based on Ricciardi et al. (2021). The research by Ricciardi et al. (2021) shares similarities with creating shared values, as both could accelerate an ecosystem that benefits businesses and society. Dynamic capabilities exist where companies can re-design their business model and processes to deal with grand challenges and respond to stakeholders' demands. This transition will increase a company's reputational capital and enhance its influence in the host market.
If we collate the perspective of Itami and Roehl (1991) on the two effects of synergy with the six strategic approaches to handling grand challenges from Ricciardi et al. (2021), the first three approaches correlate with the "complementing effect (using resources more efficiently)". This describes the external factors complementing the synergy strategy to construct collective solutions to grand challenges, with external participating factors such as the framework of the SDGs or viewpoints contributed by stakeholders. The second group consists of the three other factors, aligning with the "synergy effect (using the companies' unique resources)", which portrays the internal factors shaping the paradigm shift in business strategy and activities toward the inclusion of societal and economic issues in the host country (Holtström and Anderson, 2021, p. 27). Combining the three means for companies to create shared value of Porter (2011) with the two synergistic effects of Itami and Roehl (1991), the relationship pairs approaches such as Process with the "synergy effect (using the companies' unique resources)" and Design with "reinventing products and markets", with Effectuation belonging to the same group.
Source: built based on Itami and Roehl (1991); Porter (2011); Ricciardi et al. (2021). Synergy from the side of shared value creation is defined based on the emergence of grand challenges and how companies need to cope with them. However, it is essential to recognise the importance of the stakeholders who were mentioned and involved in the synergy. Business diplomacy serves the synergy as a concept contributing to protecting a company's image, reputation, and social license to operate via dialogues and communication with internal or external stakeholders. Marschlich and Ingenhoff (2019) reaffirmed the critical importance of stakeholder engagement. They stipulated that firms' actions must align with the stakeholders' expectations in the host nations by actively taking part in decision-making processes regarding sociopolitical concerns and fostering constructive connections. Knowledge transfer to multiple stakeholders is essential and should be implemented through dialogues. These diplomatic strategies could help stakeholders to acknowledge a company's business strategy and practices. Nevertheless, companies need to hear opinions from stakeholders. Therefore, companies should create two-way communication to learn about the socio-economic and political contexts in the host country. A company communicates its business strategy and practices to stakeholders together with its shared values, which correlate with socio-economic concerns and the political context. In this sense, a company could prove that the shared values contribute to the development of the host's economy and society. In exchange, that company could acquire more knowledge of the market's context and stakeholders' perceptions, including customers, competitors, and governmental actors in the host country, in order to shift the company's strategy accordingly. Information gathered from the host market could translate into competitive advantages for firms: "simultaneous global integration, local and professional differentiation, and worldwide learning and knowledge-sharing" (Søndergaard, 2014, p. 357). Salam et al. (2023) highlighted the exchange of market information and intelligence between businesses and stakeholders as the core agenda of business diplomacy. From that point, business diplomacy helps companies to reconcile their own and stakeholders' values (Salam et al., 2023). Business diplomacy handles the strategy of interactions between a multinational company and its "external non-business counterparts", which affect the company's reputational capital and capacity to shape its organisational context (Saner and Yiu, 2014). The reconciliation of shared value promotes firms' innovative capabilities to respond to stakeholders' demands. Furthermore, innovation or innovative capabilities could be regarded as a driver of competitive advantages (Egea et al., 2020).
Conclusion
It is undeniable that grand challenges have risen strongly in recent years, especially after the Covid-19 pandemic. The energy crisis, the Ukraine War, and poverty and economic regression cause uncertainties which demand close collaboration between companies, governments, and stakeholders. Implementing international business is demanding due to the harsh public pressure in the host and home countries where global companies operate. Business diplomacy is a concept describing the representation of companies in the host and home markets. This concept mirrors the nature of conventional diplomacy, which is state-centric. This concept will help businesses recognise the importance of stakeholders in their operations. Businesses could gather more information and gain legitimacy from these stakeholders.
However, leverage is a must for exchange. CSR activities are usually regarded as a good advantage as they involve elements of sustainability. Nevertheless, when scholars find out about political and instrumental CSR, the credibility of companies decreases within the communities. Porter (2011) proposed the transition from CSR to CSV. The synergistic nature emerges from this stage, when global companies can embrace both CSV and business diplomacy. This resonance between the two factors could ensure an effective operation in the host market and create shared values for the host country's society to deal with grand challenges. A compelling business diplomatic agenda could assist businesses in forming a market entry process by acquiring market intelligence. Business diplomacy forms a win-win business environment and mutual understanding between firms and stakeholders. In this case, firms must reconcile their values with the stakeholders' values and demands. In the modern world, businesses are required to be involved in tackling grand challenges. These grand challenges will enhance innovation and dynamic capabilities, leading to competitive advantages. As a result, a company could gain more competitiveness in the context of internationalisation.
Figure 1: proposed communication roadmap (Source: proposed by the author)
Table 1: "bridging approaches" to deal with grand challenges (Approaches and Interpretation)
Grand challenges call for cross-sectoral cooperation, utilising many sectors' varied perspectives, experiences, skills, and capabilities (Gorissen et al., 2014). George et al. (2016) stated that grand challenges have been formed by global issues that are only being solved by collective activities in the form of collaboration. Grand challenges emerged from the involuntary minimisation of businesses' unsustainable behaviour rather than from drastically altering the behaviour of all actors and levels to increase the sustainability of the entire system (Gorissen et al., 2014). There are "bridging approaches" discovered by Ricciardi et al. (2021). Quayle et al. (2019) saw that grand challenges are complicated, all-encompassing issues | 4,149.6 | 2024-05-20T00:00:00.000 | [
"Business",
"Political Science",
"Economics"
] |
Sensitivity analysis for reproducible candidate values of model parameters in signaling hub model
Mathematical models for signaling pathways are helpful for understanding molecular mechanism in the pathways and predicting dynamic behavior of the signal activity. To analyze the robustness of such models, local sensitivity analysis has been implemented. However, such analysis primarily focuses on only a certain parameter set, even though diverse parameter sets that can recapitulate experiments may exist. In this study, we performed sensitivity analysis that investigates the features in a system considering the reproducible and multiple candidate values of the model parameters to experiments. The results showed that although different reproducible model parameter values have absolute differences with respect to sensitivity strengths, specific trends of some relative sensitivity strengths exist between reactions regardless of parameter values. It is suggested that (i) network structure considerably influences the relative sensitivity strength and (ii) one might be able to predict relative sensitivity strengths specified in the parameter sets employing only one of the reproducible parameter sets.
Introduction
Mathematical models for signal transduction pathway can support the understanding of molecular mechanism in the pathway and predict the dynamic behavior of molecular activity [1][2][3][4][5][6]. To construct a complete mathematical model, we require information pertaining to the experimentally known pathway, time-course and dose response of molecular activity, and model parameters such as phosphorylation and binding rates in a system. However, some of this information, in particular, the model parameters, is difficult or impossible to obtain or measure experimentally. Therefore, we must estimate the model parameter values to recapitulate experiments in simulations [7][8][9].
Signal molecules in the signal transduction pathway transmit extra-cellular information to transcription factors by activation, such as phosphorylation and ubiquitination. We can measure such activities, but their values are relative abundances and not absolute abundances. A mathematical model must recapitulate the dynamic behaviors based on such experimentally relative abundances (Fig 1) [2,3,10]. However, several candidate parameter sets that can recapitulate the dynamic behavior of activities in experiments can be estimated, because combinations of parameter values with the same dynamic behavior exist or the experimental data include noise and fluctuation.
To analyze the robustness of a model, sensitivity analysis has been implemented previously [11]. Local sensitivity analysis investigates an infinitesimal change in the target of a parameter set that can recapitulate experiments and can support features under a specific condition with known experiments. However, the sensitivity depends on the parameter values of the model. The common features for models with various reproducible candidates of model parameters are unclear.
In this study, we estimate diverse reproducible parameter values by parameter evaluation and analyze their characterization using local sensitivity analysis, focusing on the different and common features of sensitivity from reproducible parameter sets. The results show that although different reproducible model parameter values have absolute differences with respect to sensitivity strengths, specific trends of some relative sensitivity strengths exist between reactions regardless of parameter values. To the best of our knowledge, this is the first study to quantitatively investigate sensitivity and its relationships in reproducible parameter sets.
Mathematical models and parameter estimation
We used four models, as seen in the signaling pathway model (Fig 2A) [12]. These network structures resemble signaling hubs in well-known signaling pathways, such as the p53, MAPK, or NF-κB pathways, and involve a reversible reaction (M1), a cycle (M2), a negative feedback loop (M3), and an incoherent feedforward loop (M4). The models are formulated using Michaelis-Menten and mass-action kinetics. These models have input signal patterns s of 10 different stimulations (Fig 2B). These input signal patterns express different combinations of "fast" and "slow" initiation and decay phases and can have specific respective effects on reactions in signaling hubs [12]. The functions and parameters of the input signal patterns are defined in S1 Fig. X* is the output. First, we performed stochastic simulations using the Chemical Langevin equation (CLE) [13] with the original parameter values reported in Behar et al. [12] (Fig 2A), and generated activity value sets every 30 min as control data. For the stochastic simulation, the CLE was integrated using the Euler-Maruyama algorithm [14], which reproduces the discrete Wiener process.
dX(t) = Σ_{j=1}^{M} ν_j a_j(X(t)) dt + Σ_{j=1}^{M} ν_j √(a_j(X(t))) dW_j(t), where ν_j indicates the stoichiometry, M is the number of reactions, a_j(X(t)) is the propensity function for reaction j, and W_j(t) for 1 ≤ j ≤ M are independent Wiener processes with Gaussian noise N(0, 1).
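As an illustration, below is a minimal sketch of integrating the CLE with the Euler-Maruyama scheme. The stoichiometry matrix, propensity functions, input signal, step size and initial state are hypothetical placeholders, not the values used in the paper.

```python
import numpy as np

def euler_maruyama_cle(x0, nu, propensities, t_end, dt, rng=None):
    """Integrate the Chemical Langevin Equation with the Euler-Maruyama scheme.

    x0           : initial state vector (species copy numbers)
    nu           : stoichiometry matrix, shape (n_species, n_reactions)
    propensities : function mapping state x and time t to the vector a_j(x)
    """
    rng = rng or np.random.default_rng()
    n_steps = int(t_end / dt)
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for step in range(n_steps):
        t = step * dt
        a = np.maximum(propensities(x, t), 0.0)   # propensities a_j(X(t)), clipped at 0
        drift = nu @ a * dt                        # deterministic part: sum_j nu_j a_j dt
        dW = rng.normal(0.0, np.sqrt(dt), size=a.shape)
        noise = nu @ (np.sqrt(a) * dW)             # stochastic part: sum_j nu_j sqrt(a_j) dW_j
        x = np.maximum(x + drift + noise, 0.0)     # keep copy numbers non-negative
        trajectory.append(x.copy())
    return np.array(trajectory)

# Hypothetical reversible reaction X <-> X* driven by a placeholder input signal s(t)
nu = np.array([[-1,  1],
               [ 1, -1]])                          # columns: forward, backward reaction
def propensities(x, t):
    s = 1.0 if t < 100 else 0.1                    # placeholder input signal pattern
    return np.array([0.5 * s * x[0], 0.2 * x[1]])

traj = euler_maruyama_cle(x0=[100.0, 0.0], nu=nu, propensities=propensities,
                          t_end=300.0, dt=0.01)
```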
To obtain diverse parameter values, the control data used in each round of parameter estimation were the stochastic simulation results. We then used asynchronous genetic local search with distance-independent diversity control (AGLSDC), which combines local search with global search [15], as the parameter estimation method, and estimated 1000 candidate sets for the control data. The fitness function was the cosine error [3], which evaluates the dynamic behavior rather than the absolute values of the control data: fitness = Σ_{i=1}^{N} (1 − x⃗_simulation,i · x⃗_control,i / (|x⃗_simulation,i| |x⃗_control,i|)), where N is the number of stimulation patterns (10 in this case); x⃗_simulation,i is the vector of values of X* in stimulation pattern i, obtained every 30 min for 300 min of the simulation; x⃗_control,i is the corresponding vector of control data in stimulation pattern i; and |·| denotes the magnitude of a vector. A low fitness means that the dynamic behaviors of the molecules in the simulation are similar to those in the control data. A scaling factor α = |x⃗_control,i| / |x⃗_simulation,i|, where |x⃗_control,i| is taken from the mean of the 1000 stochastic simulations of the control data, can be calculated to scale the simulation to the closest scale of the control data (Fig 1B) [2,3,16].
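A minimal sketch of the cosine-error fitness and the scaling factor over the stimulation patterns is given below; the array layout and the summation over patterns are illustrative assumptions rather than the exact implementation used with AGLSDC.

```python
import numpy as np

def cosine_error_fitness(sim, ctrl):
    """Cosine error between simulated and control time courses.

    sim, ctrl : arrays of shape (N_patterns, N_timepoints), X* sampled every 30 min.
    Returns the sum over stimulation patterns of (1 - cosine similarity),
    so 0 means identical dynamic behavior up to a scaling factor.
    """
    fitness = 0.0
    for x_sim, x_ctrl in zip(sim, ctrl):
        cos_sim = np.dot(x_sim, x_ctrl) / (np.linalg.norm(x_sim) * np.linalg.norm(x_ctrl))
        fitness += 1.0 - cos_sim
    return fitness

def scaling_factor(x_ctrl, x_sim):
    """Scale a simulation to the closest scale of the control data: alpha = |x_ctrl| / |x_sim|."""
    return np.linalg.norm(x_ctrl) / np.linalg.norm(x_sim)
```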
In addition to the parameter estimation with AGLSDC, we performed random sampling to collect 10000 parameter sets that could and could not recapitulate the control data. Here, we defined positive data (reproducible parameter sets) as follows: a simulation produces positive data if, at every 30-min time point, it falls within the standard errors around the averages x̄_control,i of the control data, which are calculated from the 1000 stochastic simulations; otherwise, the data are negative. In this study, all the parameters were estimated on a logarithmic scale in the range of −15 to 5.
Sensitivity analysis
The sensitivity of the output X* to the i-th parameter was calculated as S_i = (∂q(p)/∂p_i) · (p_i / q(p)), where p_i is the value of the i-th parameter, p is the vector (p_1, p_2, . . ., p_n), and q(p) is a target function. In this study, the target function is the time-course integral of X*, which is a representative value of the dynamic behavior [17]. The sensitivity was numerically calculated with a 0.1% increase in the reaction rates.
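A sketch of the normalized local sensitivity computed by the 0.1% finite-difference perturbation described above follows; `simulate_integral` is a hypothetical helper that returns the time-course integral of X* for a given parameter vector, not a function from the paper.

```python
import numpy as np

def local_sensitivity(params, simulate_integral, delta=0.001):
    """Relative local sensitivity of the target q(p) (time-course integral of X*)
    to each parameter, using a 0.1% (delta) increase of the parameter value."""
    p = np.asarray(params, dtype=float)
    q0 = simulate_integral(p)
    sens = np.empty_like(p)
    for i in range(len(p)):
        p_pert = p.copy()
        p_pert[i] *= (1.0 + delta)       # 0.1% increase in the i-th reaction rate
        dq = simulate_integral(p_pert) - q0
        sens[i] = (dq / q0) / delta      # (relative change in q) / (relative change in p_i)
    return sens
```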
Implementation
We used the CVODE solver (http://computation.llnl.gov/casc/sundials/main.html) to perform the numerical integration in the simulations. Parameter sets for which CVODE produced calculation errors were excluded from the above analyses.
Distribution of reproducible parameter sets in parameter space
We obtained positive data using AGLSDC and random sampling as follows: 546 in M1, 504 in M2, 112 in M3, and 169 in M4. These simulations can recapitulate the dynamic behaviors but are not necessarily consistent with the absolute values of the control data (Fig 3A). The estimated parameter sets were distributed over a wide range of values (Fig 3B). In M3, the parameter values of k2b were correlated with those of k3 or k4. In M4, the parameter space of k2b was expanded by km2. These results indicate that the reproducible parameter values were correlated, or partially correlated, and balanced so as to recapitulate the control data. In particular, rate parameters such as k1 and k2 exhibit correlations that allow the dynamic behavior of the control data to be reproduced, and the Michaelis constants can provide an expansion of the parameter space. These results are consistent with those obtained when we carried out parameter estimation manually.
Difference in distributions of sensitivity among signal patterns
This section describes the results of the sensitivity analysis of the time-course integral of X* for the positive and negative data and presents their distributions across signal patterns (Fig 4). In these models, the sensitivities for each reaction were clearly separated into positive or negative values in all the parameter sets; all the sensitivities of reactions D, DS, FBA, and FFA exhibited negative values. This indicates that an increase in a reaction rate always causes either an increase or a decrease in X*, depending on the specific reaction, in these network structures.
In these models, the distributions of sensitivity in the positive data differed between signal patterns (Fig 4). For example, the distributions of sensitivity of reactions A and D at signal pattern S4 in M1 were smaller than those at S6 in M1. This indicates that the range of sensitivity strength in the positive data depends on the signal patterns. Furthermore, the distributions of sensitivity among the rate parameters were different, which indicates that the influences of the rate parameters on X* were different. Overall, the distribution of sensitivity for reproducible parameter sets depends on the signal patterns and network structures.
Statistical separation of sensitivity strengths between reactions at different signal patterns
Next, we performed principal component analysis (PCA) of the sensitivity strengths in each model to analyze the trends of sensitivity among reactions and signal patterns (Fig 5). When all positive and negative data were considered (bottom part of Fig 5), the principal components (PCs) of sensitivity were more clearly separated by reaction than those of the positive data alone (top part of Fig 5). This result indicates that the sensitivity strengths of each reaction for the output X* were statistically different for all input signals, although the ratios of sensitivity strengths between reactions differed between parameter sets (S3 Fig). In fact, the ranges of sensitivity strength in the negative data differed more between reactions than between signal patterns (Fig 4). Moreover, for the positive data, the PCs of sensitivity differed between input signal patterns except in the case of M1. For M1, the sensitivity strengths of reactions A and D at each signal pattern were similar (Fig 5); this is because the PCs were almost the same between reactions at a given signal pattern (S3 Fig). In the other models, the sensitivity strengths did not exhibit clear trends among reactions and signal patterns.
Dependence of sensitivity strength on parameter values and its relative trends in a model
Next, we performed PCA of the parameter values to investigate the relative sensitivity strength of the reactions for each parameter set (Fig 6). We present heatmaps of the z-score of the sensitivity strengths in each parameter set to compare the relative sensitivity strengths between reactions for a given signal pattern. The z-score is calculated from the sensitivity strengths of all reactions at a signal pattern in a model for a parameter set, to check whether the sensitivity for a given reaction is higher or lower than that for the other reactions. At signal pattern S3 in M1, the sensitivity strengths for reaction A were higher than those for reaction D, but the strengths for reaction D were higher when k2a was high; these strengths were, however, nearly equivalent, as shown in S3 Fig. In M2, the relative sensitivity strengths between the reactions were the same for any given parameter set. In M3, when k2b was high, the sensitivity of reaction FBA was higher, whereas in the other parameter sets the sensitivity of reaction A was higher. In M4, when k3 and k4 were high, the sensitivity of reaction D was slightly lower. This indicates that the relative sensitivity strengths depend on a balance or combination of parameter values. To gain further insight, we qualitatively investigated the percentages at which a given reaction has a higher sensitivity than another reaction (Fig 7), showing that the sensitivities of some reactions in all models were always higher or lower than those of other specific reactions, and concluding that these relative sensitivity strengths may be features of reproducible parameter sets. Overall, the results suggest that the relative sensitivity strengths between all reactions in a model are not necessarily the same for positive data and depend on the balance of parameter values and signal patterns (Fig 6, S4 Fig), whereas identical trends of the relative sensitivity strengths for specific reactions are obtained for all reproducible parameter sets (Fig 7, S5 Fig).
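For illustration, a sketch of the z-score used in the heatmaps, computed across the reactions of one model for each parameter set at a given signal pattern; the array shape is an assumption.

```python
import numpy as np

def zscore_across_reactions(sensitivities):
    """sensitivities : array of shape (n_parameter_sets, n_reactions) for one signal pattern.
    Returns, for each parameter set, how high or low each reaction's sensitivity is
    relative to the other reactions of the same parameter set."""
    mean = sensitivities.mean(axis=1, keepdims=True)
    std = sensitivities.std(axis=1, keepdims=True)
    return (sensitivities - mean) / std
```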
Correlation between sensitivity and its target integral
Finally, we calculated the Pearson correlation coefficient between the integral and the sensitivity to investigate the dependency of the sensitivity strength on the sensitivity target (Fig 8). A negative correlation means a higher sensitivity strength at a lower integral and a lower sensitivity strength at a higher integral, while the opposite is true for a positive correlation. The correlation coefficients at signal pattern S2 and for reaction R in M2 were relatively high; for these reactions, the sensitivity strengths depend on the absolute integral without scaling (Fig 3). However, in most cases, the correlation coefficients exhibited low values. These results suggest that the influence of the absolute dynamic behavior on sensitivity is slight in most models.
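A small sketch of the correlation computed here, assuming one array of time-course integrals and one of sensitivity strengths collected over the positive parameter sets for a single reaction and signal pattern.

```python
import numpy as np

def integral_sensitivity_correlation(integrals, sensitivities):
    """Pearson correlation between the time-course integral of X* and the sensitivity
    strength of one reaction, over all reproducible (positive) parameter sets."""
    return np.corrcoef(integrals, sensitivities)[0, 1]
```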
Discussion
We examined diverse parameter sets that can or cannot recapitulate experimental data to investigate the features of sensitivity in specific network structure and dynamics. We found that such reproducible parameter sets show different distributions and ranges of sensitivity strength between reactions and signal patterns. The relative trends of sensitivity strength in positive data were not necessarily the same between every reaction, but specific relative trends of sensitivity strength between specific reactions were always observed for reproducible parameter sets (Figs 6 and 7 and S4 and S5 Figs). These specific relationships of relative sensitivity strengths were found to be reliable sensitivity features for model prediction. Furthermore, these features imply that the prediction of relative sensitivity strengths specified in positive data may be accomplished using only one reproducible parameter set, because the trends of relative sensitivity strength are identical across reproducible parameter sets. In previous work [18], the sensitivity from network topology in steady-state was analytically solved. It was shown that the network structure or topology determines the qualitatively positive or negative value of sensitivity. In the present study, all the values of sensitivity at each reaction also showed positive or negative trends (Fig 4). This may be approximately proved by the method given in a previous work [18]. Recently, quantitative or relative sensitivity strengths between reactions have become indicators for the target molecules of a disease [19]. Usually, the signaling system is important to the dynamic behavior after stimulation. When steady-state is assumed, we cannot know the dynamics of activity such as damped oscillations [3]. To understand the effect of reaction on such transient responses is an advantage of sensitivity analysis such as the numerical analysis performed herein. In our analysis, the absolute sensitivity strength was different for the reproducible parameter sets, but we could confirm that there are relative trends between sensitivity strengths in positive data.
In numerical analysis, the estimated parameter values depend on the parameter estimation method employed. Therefore, we cannot know whether all parameter sets that recapitulate the experimental data are accurately estimated. In this study, we checked the ranges and distributions of the parameter values (Fig 3B, S2 Fig and S5 Figs). The results suggest that the positive data used in this study might be sufficient to determine the features of sensitivity in these models.
In this study, we focused on local sensitivity analysis. However, global sensitivity analysis, which investigates changes over a wide range of parameter values, is also effective for understanding how the dynamic behavior of a model changes across that range [20]; we will investigate this in the near future. Another task to be considered in more detail is the parameter estimation. Current parameter estimation methods, such as genetic algorithms, find it difficult to estimate a large number of unknown parameter values in a large-scale and complex model. Thus, we used simple models in this study to evaluate the sensitivity features in estimable reproducible parameter sets. A better parameter estimation method is, however, required to understand a complex model. | 3,756.2 | 2019-02-12T00:00:00.000 | [
"Mathematics",
"Computer Science",
"Biology"
] |
Measurement Uncertainty Evaluation of Digital Modulation Quality Parameters: Magnitude Error and Phase Error
In the traceability of digital modulation quality parameters, the Error Vector Magnitude (EVM), Magnitude Error and Phase Error must be traced, and the measurement uncertainty of these parameters needs to be assessed. Although the calibration specification JJF1128-2004 Calibration Specification for Vector Signal Analyzers has been published domestically, its measurement uncertainty evaluation is unreasonable, the selected parameters are incorrect, and not all error terms are included in the evaluation. This article lists the formulas for the magnitude error and phase error, then presents the measurement uncertainty evaluation processes for the magnitude error and phase error.
Measuring method
According to JJF1128-2004 Calibration Specification for Vector Signal Analyzers, digital modulation quality parameters are calibrated by the direct comparison method. The schematic diagram is shown in Figure 1.
Connect the calibrated Digital Signal Generator (DSG) to a Vector Signal Analyzer (VSA) or to a spectrum analyzer with a vector signal analysis function. The time base output of the Vector Signal Generator (VSG) should be connected to the time base input of the VSA, and the time base of the VSA should be set to external. Set the output frequency of the calibrated DSG; generally, the output level is set to -10 dBm. Select the required standard format or the customized modulation parameters, such as modulation, filter type, filter factor α and symbol rate. Set the input frequency of the VSA equal to that of the calibrated DSG, with an input level of -10 dBm, or select the Level Auto Range button to regulate the input level of the VSA automatically. Select the required modulation standard format, or set the modulation system, filter type, filter factor α and symbol rate consistent with the VSG. Read the digital modulation quality parameters from the VSA.
Definition of digital modulation quality parameter
For the various vector modulations, the signal magnitude and phase can both be represented by points on the constellation diagram. However, due to effects such as the non-ideal performance of the vector modem hardware and the precision of the vector modulation algorithm, errors occur between the real and ideal signal vectors. The vector modulation error defines the error between the real constellation points and the corresponding ideal points in every symbol period; it is also called the Digital Modulation Quality Parameter. The major parameters are the magnitude error (MagErr), phase error (PhaseErr) and EVM, shown in Figure 2, and they are usually represented by root-mean-square (RMS) values ([2-3]). MagErr_rms, PhaseErr_rms and EVM_rms are expressed by Eqs. (1)~(3), where the subscript rms denotes the root-mean-square value, S is the real signal vector, and R is the reference signal vector.
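For illustration, here is a sketch of the commonly used RMS definitions of these quantities from measured (S) and reference (R) symbol vectors; the exact normalization used in Eqs. (1)~(3) of the specification may differ, so this is an assumption rather than a transcription of those equations.

```python
import numpy as np

def modulation_quality(S, R):
    """S, R : complex arrays of measured and reference constellation points (one per symbol).
    Returns (MagErr_rms in %, PhaseErr_rms in degrees, EVM_rms in %), normalizing the
    magnitude-based quantities by the RMS magnitude of the reference vector."""
    ref_rms = np.sqrt(np.mean(np.abs(R) ** 2))
    mag_err = np.sqrt(np.mean((np.abs(S) - np.abs(R)) ** 2)) / ref_rms * 100.0
    phase_err = np.sqrt(np.mean(np.angle(S * np.conj(R)) ** 2)) * 180.0 / np.pi
    evm = np.sqrt(np.mean(np.abs(S - R) ** 2)) / ref_rms * 100.0
    return mag_err, phase_err, evm
```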
Evaluation of measurement uncertainty of magnitude error
EVM is first introduced before the measurement uncertainty of the digital modulation quality parameters is evaluated.
An ideal vector modulation signal expression is shown in Eq. (4).
Assume that an I/Q gain imbalance degree g (g = 1 means complete balance), a phase error φ, and an interference signal D·sin(2πf_i·t) exist. The phase noise θ(t) is a random variable with variance σ_P, and n(t) is Gaussian white noise with variance σ_G.
Then Eq. (4) turns into Eq. (5). It can be concluded that the EVM and the error factors can be obtained by derivation ([3]).
In Eq. (7), the I/Q gain imbalance degree g is a linear value, not a logarithmic value.
The unit of the phase error φ is the radian. ISR is the power ratio of the interfering signal to the useful signal, and is also a linear value.
If φ = 0, the EVM in Eq. (6) equals MagErr_rms. Eq. (8) is the expression of the measurement uncertainty of the magnitude error in digital modulation quality parameter traceability.
Uncertainty source
In JJF1128-2004 Calibration Specification for Vector Signal Analyzers, an example of measurement uncertainty evaluation is also given. For the magnitude error and phase error, the uncertainty sources are the error of the vector signal generated by the VSG, the resolution of the VSA and the measurement repeatability. When the VSG is to be calibrated, the VSA is the standard. The connection diagram is the same as Figure 2, which means that the calibration of the digital modulation quality parameters of the VSG and of the VSA forms a mutual cycle.
From Eq. (8) it can be concluded that, when using the DSG to calibrate the digital modulation magnitude error of the VSA, the magnitude error is related to the following factors: I/Q gain imbalance (including in-band flatness) g, signal-to-noise ratio ISR, phase noise, I/Q origin offset, resolution and measurement repeatability ([2, 4, 5, 6]).
Uncertainty analysis
The evaluation process of the measurement uncertainty components is introduced below. Because Eq. (8) is complex, the impact of each individual component is considered in the calculation of its uncertainty, and the components are finally combined.
Standard measurement uncertainty components produced by I/Q gain imbalance g
I/Q gain imbalance g includes in-band flatness, which means g is related to the signal bandwidth. As the bandwidth of the signal becomes wider, the I/Q gain imbalance increases, and the magnitude error also increases.
Considering only the effect of the I/Q gain imbalance on the magnitude error, it can be obtained from Eq. (8). According to the technical specification, the I/Q gain imbalance is 1.01. From Eq. (9), a₁ = 0.5%. Assuming a uniform distribution, the coverage factor is √3.
Standard measurement uncertainty components produced by signal-noise ratio
The signal-to-noise ratio is the ratio between the signal power and the noise power. The greater the SNR, the stronger the receiver's receptivity.
Considering only the effect of the SNR on the magnitude error, it can be obtained from Eq. (7).
Standard measurement uncertainty components produced by I/Q origin offset
Demodulator imbalance causes carrier leakage, which generates an interference signal. This interferes with the signal vector and deviates it from its correct position, so the origin drifts. Therefore, the origin offset is the carrier-leakage component of the demodulated signal.
Ignoring other factors, it can be obtained from the effect of the interference signal on the magnitude error, using Eq. (8). ISR is the I/Q origin offset. According to the technical specification of the VSA, the I/Q origin offset is -60 dB. From Eq. (14), a₄ = 0.1%. Assuming a uniform distribution, the coverage factor is √3, and u₄ = 0.06%.
Standard measurement uncertainty components produced by resolution
When the vector signal is demodulated by the VSA, the minimum resolution is 0.01%, so a₅ = 0.005%. Assuming a uniform distribution, the coverage factor is √3, and u₅ = 0.003%.
Standard measurement uncertainty components produced by measurement repeatability
The magnitude error was measured 10 times when the standard value was 4.5%.
The synthesis of the standard uncertainty
Each standard uncertainty component is independent of the others with no correlation; the larger of the resolution and measurement repeatability components is taken into the synthesis of the combined standard uncertainty.
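As an illustration, here is a sketch of combining independent components by root-sum-of-squares, with the larger of the resolution and repeatability components included. Only the a₁, u₄ and u₅ figures come from the text above; the repeatability value is a hypothetical placeholder, since it is not given numerically.

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent, uncorrelated standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

# Example with some of the magnitude-error components quoted in the text (in %):
u_gain = 0.5 / math.sqrt(3)   # a1 = 0.5%, uniform distribution -> divide by sqrt(3)
u_offset = 0.06               # u4, from the -60 dB I/Q origin offset
u_resolution = 0.003          # u5
u_repeat = 0.05               # hypothetical repeatability value (not given in the text)

u_c = combined_standard_uncertainty([u_gain, u_offset, max(u_resolution, u_repeat)])
```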
Uncertainty source
Phase error is related to the following factors: I/Q gain imbalance, I/Q origin offset, resolution and measurement repeatability.
Standard measurement uncertainty components produced by I/Q gain imbalance
The effect of the I/Q gain imbalance on the phase error is obtained from the corresponding formula. Taking Figure 3 as an example, the effect of the I/Q origin offset on the measurement results is introduced. According to the analysis, the magnitude of the error vector is perpendicular to the X axis, and the I/Q origin is shifted along the X axis, which has the greatest influence on the phase error. Assuming the origin offset is -60 dB, which is 0.1%, the influence on the phase error is, according to Figure 3, approximately equal to the origin offset value expressed in radians. It needs to be multiplied by 180/π to convert to degrees, giving a₂ = 0.06°. Assuming a uniform distribution, the coverage factor is √3, and u₂ = 0.04°.
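A small sketch of the conversion described above: the carrier-leakage level in dB is turned into a linear amplitude ratio, taken (small-angle approximation) as the phase deviation in radians, and converted to degrees. The -60 dB value is the one quoted in the text; everything else is an illustrative assumption.

```python
import math

def origin_offset_phase_error(offset_db):
    """Approximate phase error (degrees) caused by an I/Q origin offset given in dB."""
    ratio = 10 ** (offset_db / 20.0)     # -60 dB -> 0.001 (0.1%) amplitude ratio
    return ratio * 180.0 / math.pi       # small-angle: error in radians -> degrees

print(origin_offset_phase_error(-60.0))  # ~0.057 degrees, rounded to 0.06 in the text
```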
Standard measurement uncertainty components produced by resolution
The resolution is 0.01°, so a₃ = 0.005°. Assuming a uniform distribution, the coverage factor is √3.
The synthesis of the standard uncertainty
Each standard uncertainty component is independent of the others with no correlation; the larger of the resolution and measurement repeatability components is taken into the synthesis of the combined standard uncertainty, giving u_c = √(u₁² + u₂² + u₃²) = 0.174°.
Conclusion
In JJF1128-2004 Calibration Specification for Vector Signal Analyzers, the measurement uncertainty evaluation of the magnitude error is unreasonable and the measurement uncertainty evaluation formula for the phase error is not perfect. In this article, the uncertainty sources of the magnitude error and phase error are comprehensively analyzed.
Figure 1. Schematic diagram of digital modulation quality parameter calibration.
Figure 3. Phase error with origin offset.
Table 2. Phase error comparison result (1800 MHz, 16QAM modulation, symbol rate 4 Mbit/s, RRC filter, filter coefficient 0.22). From Table 1 and Table 2, the evaluation of measurement uncertainty is basically the same as the result assessed by the comparison leading laboratory (NIM). | 2,000.6 | 2016-01-01T00:00:00.000 | [
"Physics"
] |
Image Processing Technique for Zinc Ion Sensing Using a Crystalline Fiber Sensor
Image processing was used with the fiber to reduce the cost of the work and increase the speed of obtaining results. The results of the image were translated into an intensity curve and plotted as a power curve. The sensitivity of the sensor was obtained and found to be equal to 73.47%. In this paper, crystalline optical fibers were used as a sensor for sensing the zinc ion concentration using an image processing technique. An image of the laser spot transmitted through the crystalline optical fiber sensor was taken for each concentration of zinc ion solution. The sensor was made by welding a piece of LMA-10 crystal optical fiber to a single-mode optical fiber at both ends to obtain an SM-PCF-SM type sensor. It is possible to distinguish between one concentration and another by studying the change in the images obtained as a result of changing the zinc ion concentration. The sensitivity of the manufactured sensor was about 73.47%.
Introduction
Microstructured fibers, or photonic crystal fibers (PCFs), also known as holey fibers, have an array of microholes running in parallel along the entire length of the fiber. There are two types of photonic fibers: the first type has a hollow core with an air cladding around it in silica glass, and the other has a solid core surrounded by an air cladding in silica. Light transmission in the first type is based on the photonic band gap (PBG) effect, whereas the light-guiding mechanism of the second type is modified total internal reflection (MTIR) [1]. There are elements in the environment that are considered heavy metals and have a dangerous impact as environmental pollutants; zinc is considered one of the elements polluting the environment when it exceeds the permissible levels [2].
Optical fiber sensing is one of the most interesting and advanced fields. Fiber sensors are becoming more attractive than other sensors day by day, as they are immune to electromagnetic interference, offer high accuracy, need no electrical source at the sensing point, can be operated remotely without contact, are uncomplicated to install, explosion-proof, small and lightweight; hence the importance of replacing other sensors with optical fiber sensors [3][9][10][11].
The novelty of this paper is the use of image processing to analyze the laser spots produced by a PCF photonic fiber sensor exposed to zinc-ion environmental pollution, using the Mach-Zehnder technique.
Image processing
An image can be described as an area of pixels with a specific height and width measured in pixels [5]. Images are captured by optical devices such as mirrors, cameras, telescopes, binoculars, lenses, etc. [6]. With low-cost imaging, recent developments and the increasing capabilities of hardware storage, there are increasing demands for high image quality in various types of applications, including video and image processing [7]. Images can be divided into multiple parts, which is called image segmentation; it is commonly used to identify objects or other relevant details in digital images, and different methods lead to image segmentation [8]. Image processing is based on one of the following methods: 1. Otsu's method from the thresholding methods.
Practical Side
The practical aspect included a basic part, as can be seen in Figure 1. The first step was the manufacture of the crystalline optical fiber sensor, which was made by welding a piece of crystal optical fiber of type PCF LMA-10 with a length of 5.2 cm to a single-mode optical fiber on both sides. The sensor is installed in a plastic case fixed on both sides. The experimental setup block diagram of the practical work performed is shown in Figure 1.
Figure 1: The experimental setup
The wavelength of the laser source is 450 nm. The manufactured sensor is immersed in a reference sample of distilled water to take an image of the reference laser spot. A CCD camera was used with the following specifications: 20.7 MP, auto/manual focusing, 10x optical zoom (24-240 mm), OIS, Xenon and LED flash. The spots obtained from the end of the sensor far from the laser source are shown in Figure 5 (normal condition) and Figure 6 (abnormal condition). These images are sent via a router so that the room can be kept closed and dark. The images are then processed using ImageJ. The sensitivity of the sensor was studied with decreasing concentration of the zinc solution. The zinc solution was prepared with different concentrations in the range of (1%, 2% and 3%) mg/L, and the reference used was distilled water.
Using ImageJ (version 1.8.0_172) for image processing
Image processing was the best way to compute the percentage of the laser spot's pixels affected by the outside concentration. ImageJ, an application developed at the National Institutes of Health and the Laboratory for Optical and Computational Instrumentation (LOCI, University of Wisconsin), was used for the image processing.
Image processing algorithm (a minimal sketch of these steps is given below):
1- Reading the image.
2- Converting the image to grayscale.
3- Converting the 2D image to a 3D surface image.
4- Reading the image of the spot as a power curve.
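The following is a minimal sketch of these steps using common Python imaging libraries rather than ImageJ itself; the file name and the row chosen for the intensity profile are hypothetical placeholders.

```python
import numpy as np
from PIL import Image

def spot_intensity_profile(path):
    """Read a laser-spot photo, convert it to grayscale, and return the 2D intensity
    surface together with a 1D intensity ('power') curve through the spot centre."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)  # steps 1-2: read + grayscale
    surface = gray                                                  # step 3: pixel values = surface height
    centre_row = gray[gray.shape[0] // 2, :]                        # step 4: profile through the centre
    return surface, centre_row

surface, profile = spot_intensity_profile("spot_1_percent_zinc.png")  # hypothetical file name
print(profile.max())   # peak intensity used for the concentration comparison
```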
Results
The laser distribution inside the PCF LMA-10, surrounded by air or by the different concentrations, can only be seen with the CCD camera of a smartphone.
The first step is to convert the RGB images to 3D surface images, as shown in Figure 2, to compare the height of the entire image surface at different focuses. This image shows the laser spot regions on the outer surface of the photonic fiber end that were surrounded by different concentrations of zinc solution, compared with the reference image; therefore, this image must first be read to calculate the total number of pixels of the unaffected regions. After reading the image, it must be converted from RGB to grayscale, and the grayscale image is then converted into an information image. In this image, the white-colored regions indicate the spot size and the part affected by the outside medium surrounding the PCF, as shown in Figure 3, which describes the grayscale images. This helps to calculate the total number of pixels of the unaffected regions of the laser spot. When different concentrations of zinc were used to immerse the fiber sensor, different parameters were obtained as the concentration surrounding the sensor changed.
The 3D surface of the laser spot image at the output of the fiber sensor with distilled water is shown in Figure 4. The 3D surface of the laser spot image of the output light with a 1% zinc solution in water is shown in Figure 5. The 3D surface of the laser spot image at the output of the sensor with a 2% zinc solution in water is shown in Figure 6, and with a 3% zinc solution in water in Figure 7. Converting the spot intensity in the images shown in Figure 8 provides a method for comparing the light output in image processing, using a CCD camera to capture the image and software to compare the results.
By comparing the results obtained, we find that there are differences in the intensity distribution of the laser radiation for each concentration compared with the source radiation without any external effects. These changes act as a sensor for the changes in the materials surrounding the PCF. It is important to make the fiber sensitive to every change, no matter how small, to ensure quality and monitor changes.
The sensitivity of the sensor was obtained by taking the slope of the curve of maximum intensity versus concentration, and it was found to be equal to 73.47%.
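A small sketch of estimating the sensitivity as the slope of maximum intensity versus concentration is given below; the intensity values are hypothetical placeholders, and only the concentrations (plus the distilled-water reference) follow the text.

```python
import numpy as np

concentrations = np.array([0.0, 1.0, 2.0, 3.0])         # distilled-water reference + zinc %
max_intensity = np.array([255.0, 230.0, 205.0, 178.0])   # hypothetical peak intensities per image

slope, intercept = np.polyfit(concentrations, max_intensity, 1)
print(f"sensitivity (slope): {slope:.2f} intensity units per % zinc")
```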
Conclusions
PCF is used as a general sensor for contamination, and zinc is one of the important and variable factors that directly affect the wave outside the fiber as a result of direct adhesion to the fiber. The use of PCFs as fiber sensors produces new results that are useful for future work and lead to developments in the field of high-sensitivity optical fiber sensors. The high sensitivity of the fiber towards zinc enables its detection at levels close to the normal level, so it is now possible to detect when its concentration is higher than normal values. Image processing accelerates the work of fiber sensors without the need for complex devices to analyze the external signal: once the signal is translated into digital images and compared with the source, clear results that can be calculated are obtained. The sensor proved highly sensitive in distinguishing between the concentrations of zinc-contaminated solution and distilled water.
"Materials Science",
"Engineering"
] |
Epigenetics Control Microglia Plasticity
Microglia, resident immune cells of the central nervous system, fulfill multiple functions in the brain throughout life. These microglial functions range from participation in innate and adaptive immune responses, involvement in the development of the brain and its homeostasis maintenance, to contribution to degenerative, traumatic, and proliferative diseases; and take place in the developing, the aging, the healthy, or the diseased brain. Thus, an impressive level of cellular plasticity, appears as a requirement for the pleiotropic biological functions of microglia. Epigenetic changes, including histone modifications or DNA methylation as well as microRNA expression, are important modifiers of gene expression, and have been involved in cell phenotype regulation and reprogramming and are therefore part of the mechanisms regulating cellular plasticity. Here, we review and discuss the epigenetic mechanisms, which are emerging as contributors to this microglial cellular plasticity and thereby can constitute interesting targets to modulate microglia associated brain diseases, including developmental diseases, neurodegenerative diseases as well as cancer.
INTRODUCTION
Microglia are the immune cells of the brain; they derive from myeloid precursors that migrate into the brain during early embryonic development and play a major role in maintaining a healthy environment in the brain (Ginhoux et al., 2010; Prinz and Mildner, 2011). Microglia constantly screen the brain environment by using their surface receptors to detect damaged neurons, plaques, and infectious agents. In this steady or surveying state, microglia present a large number of ramified processes that perform constant screening of the central nervous system (CNS) environment. When activated by a stimulus, microglia can act as potent immune cells able to mediate innate and adaptive responses and to perform different functions during CNS disease or injury. Microglia are of major importance for brain homeostasis, but uncontrolled or over-activated microglia can also contribute to brain diseases. Over-activated microglia can promote neuronal cell death in the course of neurodegenerative pathologies such as Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD) as well as amyotrophic lateral sclerosis (ALS) (Sarlus and Heneka, 2017). Uncontrolled microglia will also lead to increased tumor cell migration and invasion in the case of brain tumors (Saijo and Glass, 2011).
One of the main characteristics of microglia is their ability to adapt to their microenvironment and to acquire a specific phenotype based on the stimuli they perceive. As innate immune cells, they become activated in response to injury, infection or death of cells in their vicinity. Microglia can harbor different phenotypes depending on the signals they receive/detect from the microenvironment. Since microglia are frequently defined as the macrophages of the brain, their classification and phenotypes have been established and named based on macrophage characteristics. Microglia are activated into a pro-inflammatory phenotype, or so-called classical activation (M1 phenotype), in response to viral or bacterial infection; conversely, microglia harbor an anti-inflammatory phenotype, or alternative activation (M2 phenotype), in the case of neurodegenerative diseases, for example when detecting neuronal cell death. Of note, the concept of microglial M1/M2 polarization has been heavily questioned. In fact, it is now recognized that microglia display a continuum of activation states, which are largely oversimplified in the M1/M2 classification. However, since the M1/M2 paradigm has been heavily used in the literature, and for simplicity, references are made in this review to microglial M1 and M2 phenotypes.
Recently, it has also been shown that microglia can acquire an alternative, pro-tumoral phenotype, in which microglial cells are involved in increasing tumor invasiveness (Hambardzumyan et al., 2016). With the discovery of the ability of microglia to support tumor growth, it was shown that microglia are able to adapt to the microenvironment and that their activation process and phenotype acquisition are more complex than previously thought. Indeed, microglia harbor different transcription factor activation signatures depending on the phenotype they acquire (e.g., neuroprotective, lipid response, neuropathic pain), which confirms the extreme plasticity of these cells and the complexity of microglial phenotypical changes (Holtman et al., 2017). Thus, microglia fulfill multiple and contrasting functions across development and adulthood, in steady-state conditions, but also in the context of diseases. To perform their multiple functions, microglia appear to exhibit various phenotypes. Microglia thus appear to display extreme plasticity, and are able to modify their structure and functions based on their role and location. But how can microglia attain this level of cell plasticity?
In response to different stimuli, cells can change their state of activation and adapt their identity. These modifications are regulated by finely tuned modulation of gene expression. This coordinated regulation is mainly a result of changes in the composition and the structure of the chromatin, driven by epigenetic modulators. Recently, it has been shown that cellular and genomic reprogramming is highly dependent of chromatin modifications in order to lead and maintain the fidelity of target cell states. Epigenetic modifications such as histone modifications (e.g., methylation, acetylation, phosphorylation), DNA methylation or gene expression regulation by non-coding RNAs are crucial for normal development but can also be involved in diseases (Esteller, 2008). Epigenetic modifications do not modify the genome per se of the cell but they do regulate gene expression and thereby contribute to the definition of cell phenotype. For example, DNA methylation patterns, which are reversible heritable marks conserved during cell division, are involved in cell reprogramming processes (De Carvalho et al., 2010). Epigenetic modifications have been involved in cell phenotype regulation like the reprogramming of stem cells. Indeed, stem cells have the ability to self-renew or differentiate and these changes in cell phenotype involve a fine-tuned regulation by epigenetic mechanisms (Chung and Sidhu, 2008;Zhou et al., 2011;Katsushima and Kondo, 2014). Microglia can be compared to stem cells in their ability to adapt to the microenvironment and to differentiate into a specific cell phenotype in response to the signal activation. Thus, these cells are very plastic, however, until now, even if microglia have been known for almost a century, the mechanisms leading to their activation toward a specific phenotype are not yet fully established, but it seems obvious that epigenetic changes should contribute to the microglia plasticity.
Here, we review the roles that epigenetic alterations, including histone modifications, DNA methylation or microRNA expression, as well as the enzymatic systems regulating those modifications, may have in regulating microglia plasticity and polarization toward unique phenotypes. We also consider the contribution of the epigenetic control of microglia to their activation states in the context of health and disease, as well as possible long-term and lasting microglial effects such as those observed upon microglial priming/memory or even transgenerational microglial effects.
HISTONE MODIFICATIONS IN MICROGLIA
The structure of the chromatin is regulated by its compaction, which depends on the organization of the histone proteins. The chromatin is composed of DNA tightly packed around histone proteins organized in units called nucleosomes. A nucleosome is composed of two H3-H4 dimers surrounded by two H2A-H2B dimers, corresponding to an octameric core of histone proteins. The amino-terminal parts of the histone tails protrude from the nucleosome, which makes them accessible for post-translational modifications. Depending on the nucleosome spacing, the chromatin structure is defined as heterochromatin or euchromatin. Heterochromatin corresponds to a condensed state of the chromatin, while euchromatin is non-condensed or open chromatin. This open state of the chromatin allows nuclear factors to access the chromatin. Modifications occurring on the histone tails and on the DNA are involved in the regulation of the chromatin structure and of gene accessibility by the transcriptional machinery. Histones can be methylated, acetylated or phosphorylated on specific amino acid residues located on the histone tails. Chromatin accessibility is altered by histone acetylation, which allows the interaction of DNA-binding proteins with accessible sites in order to activate gene transcription. Histone acetylation is carried out by histone acetyltransferases (HATs), which acetylate lysine residues on the histone tails or core; conversely, the role of histone deacetylases (HDACs) is to remove the acetyl groups from those lysine residues. Histone methylation is associated either with transcription activation or with repression, depending on the amino acid on which the modification occurs. Histone methyltransferases (HMTs) promote the mono-, di- or tri-methylation of target histone residues, whereas histone demethylases (HDMs) counteract the effects of the HMTs.
The potential use of HDAC inhibitors in inflammatory/neurodegenerative diseases has been extensively investigated, since histone acetylation was shown to regulate the extent of inflammatory response (Blanchard et al., 2002;Ito et al., 2002). In the recent years, HDAC inhibitors have been widely used to target microglia with the aim of reducing inflammation. Valproic acid (VPA), defined as a non-selective HDAC inhibitor, is a FDA approved drug used to treat epilepsy and bipolar disorders. In the context of spinal cord injury, the ability of this drug to reduce the inflammatory response after injury and to avoid the appearance of exacerbating pathogenic events was assessed. In their study, Abdanipour et al. (2012) observed a reduction of the local inflammation and a reduction of microglia activation (as illustrated by reduction of the ED1 lysosomal marker), which was associated with an improvement in the animal, behavior, in rats treated with VPA after the injury. The use of VPA to target microglia has also been studied in the context of inflammation and innate antiviral gene expression (HIV for example) using a model of primary human microglia and astrocytes treated with TLR3 or TLR4 ligand. VPA has a direct effect on microglia since it suppresses the expression of chemokine and cytokine gene expression, of innate antiviral molecules (like IFNβ) and of protein related to the activation of the TLR3-TLR4 signaling pathway (Suh et al., 2010).
The potential use of TSA to target microglia has also been widely explored. Its effect was tested on lipopolysaccharide (LPS)induced inflammatory response in microglia. TSA treatment has been described first as strongly potentiating the inflammatory response induced by LPS treatment in different models: from the N9 cell line and primary microglia cells to hippocampal slices culture as well as in neural co-cultures (Suuronen et al., 2003). More recently, TSA has been defined as a suppressor of the inflammatory phenotype of microglia after LPS treatment. Indeed, HDAC inhibition by TSA or SAHA (also known as Vorinostat) leads to a suppression of cytokine expression and release after LPS induction in primary microglia cultures (Kannan et al., 2013), but also in the mouse brain where cognitive dysfunction induced by LPS (weight loss, anorexia, and social withdrawal) was also attenuated (Hsing et al., 2015). Moreover, hyperacetylation induced by TSA treatment has a neuroprotective effect in the female neonatal mouse after LPS and hypoxia-ischemia and correlates with an improvement of long-term learning (Fleiss et al., 2012).
Like VPA and TSA, sodium butyrate (SB) has also been studied for its potential role in inflammation regulation through microglia targeting. In this respect, it has been shown that SB induces changes in microglial shape, with the appearance of elongated microglial processes in inflammatory and normal conditions, associated with changes in pro-inflammatory and anti-inflammatory microglia markers. Patnala et al. (2017) also observed an alteration of the H3K9ac enrichment and transcription at the promoters of genes related to microglia activation (Tnf-α, Nos2, Stat1, IL6, and IL10) after SB treatment. In a microglia model of middle cerebral artery occlusion (MCAO), an upregulation of H3K9ac levels was observed, which links H3K9ac upregulation to microglial activation in vivo. The same observation was made in post-seizure microglia cells, where SAHA pre-treatment reduced the levels of H3 and H3K9 acetylation to baseline levels (Hu and Mao, 2016).
Inhibitors of HDACs decrease the inflammatory response of isolated microglia exposed to inflammogens such as LPS. HDAC inhibitors treatment also produces a rapid and sustained increase in the level of histone H4 acetylation in microglia (Durham et al., 2017). Likewise, specific knockdown of Hdac1 and Hdac2 using siRNA was found to reduce LPS-induced microglia activation in vitro (Durham et al., 2017). In the context of regulating the inflammatory response of microglia, these two HDACs show functional redundancy, as an upregulation of HDAC2 expression appears to be able to compensate for HDAC1 deficiency. Of note, these two HDACs are reported to exert both distinct and redundant functions. In fact, with few exceptions, targeted deletion of either Hdac1 or Hdac2 does not cause obvious phenotype in most tissues or cell types (Kelly and Cowley, 2013). Recently, Datta et al. (2018), taking advantage of Cx3cr1Cre Hdac1 fl/fl Hdac2 fl/fl mouse model, demonstrated that the combined Hdac1 and Hdac2 gene depletion in vivo differentially affected microglia during development, homeostasis and during neurodegeneration. Indeed, whereas Hdac1-2 deletion in adult microglia has no effect on cell number or morphology during homeostasis, in the course of neurodegeneration such as in an AD mice model, Hdac1-2 deletion positively affected microglial phagocytosis of amyloid plaques. Identical genetic intervention in microglia prior to birth also strongly impaired on their development, and resulted in reduced cell number and altered morphology (Datta et al., 2018). Transcriptome analysis revealed that at birth 4,338 genes were differentially expressed in Hdac1-2 null mice as compared to wild-type mice, the number of differentially regulated genes appeared to decrease overtime, suggesting that HDAC1 and HDAC2 are key players for microglia development.
Genome-wide profiling for histone H3 lysine K9 and K27 acetylation, revealed that the global levels of H3K9ac and H3K27ac are not significantly affected in microglia lacking Hdac1 and Hdac2. Only a deeper inspection revealed that the abundance of H3K9ac and H3K27ac are increased at the proximal promoters of genes regulating cell cycle and cell activation, e.g., Sema6d, Cdkn2c, Ifnar2, and Kcna3 (Datta et al., 2018). Thus, which histone post-translational modification(s) may account for the significant transcriptional effects of Hdac1-2 deletion in microglia remain to be identified. This identification is a challenge as there is no direct binding of HDAC1/2 to the DNA and also because those proteins are part of several multiprotein complexes in the nucleus i.e., the NuRD (nucleosome remodeling and deacetylation), the Sin3 and the CoREST (co-repressor for element-1-silencing transcription factor) complexes (Kelly and Cowley, 2013).
Enhancer of zeste homolog 2 (EZH2) histone methyltransferase and the catalytic subunit of the Polycomb repressive complex 2 mediate transcriptional silencing through the tri-methylation of the lysine 27 in histone H3 (H3K27me3). The histone H3K27me3 demethylase Jumonji domain containing 3 (JMJD3, also known as KDM6B) counteracts the effect of EZH2, and therefore the H3K27me3 levels are regulated by the balance between activities of EZH2 and JMJD3. In microglia, EZH2 gene expression is found to be significantly and rapidly increased upon a pro-inflammatory stimulus such as LPS-mediated TLR4 stimulation Zhang et al., 2018). Remarkably, JMJD3 expression is found to be up-regulated upon IL-4 treatment which promotes alternative microglial activation (Tang et al., 2014), but as well LPS-treatment (Lee et al., 2014;Das et al., 2017). These observations suggest that M1-and M2-stimuli differentially regulate microglial H3K27me3 expression level, and thereby these microglia activation states (Figure 1). In fact, inhibition of EZH2 with the selective small molecule inhibitors EPZ-6438, GSK343, or GSK126 suppressed the expression of genes mediating the pro-inflammatory response (Koellhoffer et al., 2014(Koellhoffer et al., , 2015Arifuzzaman et al., 2017;Zhang et al., 2018). Moreover, inhibition of EZH2 with GSK343 in the presence of an IL-4 stimulus results in a significant upregulation of M2related genes (Koellhoffer et al., 2014(Koellhoffer et al., , 2015. Furthermore, Ezh2 deficiency in microglia in vivo, achieved by crossing Ezh2 floxed mice with Cx3cr1-CreER mice, confirmed in an animal model of experimental autoimmune encephalomyelitis, that EZH2 facilitates activation of microglia toward a pro-inflammatory M1 phenotype . Mechanistically, EZH2 has been proposed to directly target and repress the expression of gene encoding for suppressor of cytokine signaling 3 (SOCS3), known to promote the proteosomal degradation of tumor necrosis factor receptor-associated factor 6 (TRAF6), key component of the TLR-induced MyD88-dependent NF-kB activation contributing to acquisition of the microglial pro-inflammatory phenotype . In addition, EZH2 also appears to regulate positively the expression of the transcription factors interferon regulatory factor (IRF) 1, IRF8, and signal transducer and activator of transcription 1 (STAT1), which have known roles in regulating inflammation . In summary, the H3K27 histone tri-methyltransferase activity of EZH2 promotes M1 microglia polarization but represses M2 microglia polarization.
In contrast, the histone H3K27me3 demethylase activity of JMJD3 appears to promote M2 microglia polarization but represses M1 microglia polarization. At first, the observation that JMJD3 expression is found to be increased in microglia upon stimulation with an inflammogen may sound contradictory to a role in repressing a pro-inflammatory activation of these cells. However, if one considers a negative-regulatory loop induced upon M1 stimulation, the effects of JMJD3 can be explained and supported by the existing literature. Indeed, JMJD3 depletion or inhibition in microglia leads to an inhibition of the polarization of microglia into a M2 phenotype and increases the M1 inflammatory response of microglia in vitro (Tang et al., 2014;Das et al., 2017). In addition, dehydroepiandrosterone, one the most abundant circulating steroid hormone in humans, is reported to inhibit LPS-induced microglia-mediated inflammation by further increasing JMJD3 expression level above the one induced by LPS treatment alone (Alexaki et al., 2017), suggesting that JMJD3 expression levels are of importance in regulating microglia polarization. Interestingly, knockdown of Jmjd3 gene per se is sufficient to compromise the expression of various M2 genes, including Arg1, CD206, and Igf1 genes, suggesting that the regulation of microglia M2 polarization by JMJD3 is an intrinsic and cell autonomous mechanism. Suppression of JMJD3 expression per se also induces an exaggerated polarization of microglia into M1 phenotype as illustrated by the expression of genes of pro-inflammatory factors including iNOS, IL1β, and IL6. Finally, in vivo, in an animal model of PD, the suppression of Jmjd3 in the substantia nigra was shown to promote microglial over-activation and thereby exacerbate dopamine neuron loss (Tang et al., 2014). Of note, controversial observation of JMJD3 role has been observed in microglia. Indeed, by transcriptional sequencing analysis of the effect of the JMJD3 inhibitor GSK-J4 on primary microglia cells and BV2 microglia cells, Das et al. (2017) showed that this inhibitor have a selective effect on the expression of genes induced by LPS since it suppresses the induction of chemokines and cytokines, of transcription factors and interferon-stimulated genes. Moreover, it is shown that STAT1 and STAT3 are regulating JMJD3 transcription and cooperates with this transcription factor to induce pro-inflammatory genes expression (Przanowski et al., 2014).
The H3K9ac histone mark, which is essentially related to transcription activation when located near the transcription start site, has been linked to microglia activation (Figure 1). Indeed, in primary and BV2 microglia cells, acetate treatment induces H3K9 hyperacetylation, reverses LPS-induced H3K9 hypoacetylation (Soliman et al., 2012) and increases the amount of acetylated H3K9 bound to the promoter regions of the Cox1, Cox2, IL1β, and NFκB p65 genes (Soliman et al., 2013). An increase in H3K9ac has also been found in microglia in a model of neuropathic pain. In this model, the authors studied the benefit of exercise on neuropathic pain. By investigating the nuclear expression of H3K9ac in microglia, it was shown that, in a mouse model of partial sciatic nerve ligation (PSL), running exercise significantly increased the number of microglia expressing acetylated H3K9 compared with sedentary mice, suggesting a role of H3K9 acetylation in producing exercise-induced hypoalgesia (Kami et al., 2016).
Methylation of histone 3 on lysine 4 residue is linked to active transcription. H3K4me2 is defined as a specific mark for promoters and enhancers (He et al., 2010;Kaikkonen et al., 2013), while acetylation of H3K27 corresponds to their transcriptional activity (Creyghton et al., 2010). By analysis of the transcriptomes and enhancer landscapes of resident macrophages and microglia using ATAC-seq and ChIP-seq approaches, Gosselin et al. (2014) revealed the existence of specific pattern for H3K4me2 deposition in microglia, which substantially differs from that of large peritoneal macrophages. Moreover, by identifying a new type of microglia, Disease associated microglia or DAM, Keren-Shaul et al. (2017) suggested a role of H3K4me2 in the priming of microglia. Indeed, by using a specific method of ChIP-sequencing with high sensitivity and comparing enhancers from DAM and microglia in wild-type or Alzheimer disease mouse model, they observed a highly similar global pattern of H3K4me2, which is present at promoters and enhancers regions. For DAMspecific genes, the authors also observed that active H3K4me2 regions are present in DAM but also in microglia. This suggests that homeostatic microglia already contains the DAM program (Keren-Shaul et al., 2017).
In addition to the JMJD3 enzyme discussed above, other histone lysine demethylase (KDM) enzymes catalyze the removal of methyl marks from histone lysine residues, thereby regulating chromatin structure and gene expression through epigenetic changes. Upon LPS stimulation, BV2 cells and primary microglia show an upregulation of the JMJD2 (also known as KDM4A) and KDM1A enzymes, respectively (Das et al., 2015b, 2016). Moreover, stimulation of microglia by TLR3 and TLR4 ligands results in an upregulation of the JMJD2 enzyme in the BV2 microglia cell line (Das et al., 2015a).
Little is known concerning histone phosphorylation in microglia. One study focused on endocannabinoids, which are thought to reduce neuronal damage when released after brain injury. In this context, Eljaschewitsch et al. (2006) showed that, in microglia, the cannabinoid receptor pathway (CB1/2) and the production of MKP-1 (via phosphorylation of histone H3) lead to the suppression of iNOS expression and NO production. The authors concluded that the endocannabinoid AEA activates phosphorylation of histone H3 and subsequently the expression of MKP-1, which blocks the release of NO only in microglia treated with LPS, leading to neuroprotection (Eljaschewitsch et al., 2006).
Sirtuin 1 (SIRT1) is involved in different cellular processes such as inflammation and aging/senescence (Gan and Mucke, 2008; Libert and Guarente, 2013). SIRT1 acts as a deacetylase enzyme with different intracellular targets, including histones (Michan and Sinclair, 2007; Zhang et al., 2011). Cho S.H. et al. (2015) reported a reduction of SIRT1 with the aging of microglia. This SIRT1 reduction is associated not only with aging but also with Tau-mediated memory deficits via the upregulation of IL-1β in mice (Cho S.H. et al., 2015). Moreover, we recently studied the epigenetic changes occurring in microglia under the influence of brain tumor (glioma) cells. Using a co-culture system of microglia and glioma cells, we observed that the activation of microglia by glioma cells induces an increase of H4K16ac in microglia. This is due to increased SIRT1 in the nucleus of microglia, which deacetylates hMOF (an H4K16 acetyltransferase), leading to its recruitment to the chromatin at the promoters of specific microglial target genes (Saidi et al., 2018; Figure 1).
MICRORNA AND LONG NON-CODING RNA
Non-coding RNAs (ncRNAs), including long non-coding RNAs (lncRNAs) and microRNAs (miRNAs/miRs), serve important roles in regulating the expression of certain genes (Patil et al., 2014) and have emerged as epigenetic regulators of biological processes in microglia.
MicroRNAs are small (20-30 nucleotides in length), highly conserved ncRNAs that regulate gene expression post-transcriptionally. They bind to the 3′-UTR (untranslated region) of their target mRNA(s) and thereby downregulate gene expression through the RNA interference pathway. Of note, miRNAs are involved in the fine-tuning of the expression of around 30% of all mammalian protein-coding genes.
The role of miRNAs in regulating microglial activation has been investigated in the context of multiple stimuli, brain challenges and diseases. For simplicity, the M1/M2 paradigm is used in this review when reporting the effect of various miRNAs on microglial activation states; for most reports, the current literature does not allow a more detailed definition of the microglial phenotypes. In order to elucidate the role miRNAs may exert on the acquisition of the M1 versus M2 microglial phenotypes, Freilich et al. (2013) performed miRNA expression profiling on primary murine microglia exposed to lipopolysaccharide (M1-like condition) or interleukin-4 (M2-like condition). Using a mouse miRNA array that interrogates 690 pre-miRNAs and 722 mature miRNAs, they revealed that: (1) upon lipopolysaccharide stimulation of microglia, 12 miRNAs were increased and 35 were reduced; (2) upon interleukin-4 stimulation of microglia, 16 miRNAs were increased and 28 were decreased. In this study, miR-155 and miR-145 were identified as the most significantly up-regulated miRNAs upon lipopolysaccharide stimulation and interleukin-4 stimulation of microglia, respectively. The differential regulation of miR-155 and miR-145 expression in microglial M1 versus M2 phenotypes has been confirmed in the context of other stimuli promoting those phenotypes (Cardoso et al., 2012; Woodbury et al., 2015; Qi et al., 2017; Xie et al., 2017). The microRNA miR-155 has gained particular attention in the context of ALS. Indeed, in the SOD1 G93A mouse model of ALS, which overexpresses the human SOD1 gene carrying a glycine-to-alanine point mutation at residue 93 (G93A), microglial miR-155 expression is increased in pre-symptomatic mice, suggesting that this miRNA could be used as a marker to track ALS at an early stage. In contrast, other microRNAs, such as miR-125b, miR-146a, and miR-124, were only found to be upregulated at the symptomatic stage (Cunha et al., 2018). Furthermore, targeting miR-155 appears to restore microglia function and prolongs survival of the SOD1 G93A mice (Koval et al., 2013; Butovsky et al., 2015). Further, significant down-regulation of miR-689 was found to be associated with the M1-activation phenotype, whereas down-regulation of miR-711 was associated with the M2-activation phenotype. Remarkably, reduced miR-124 expression was observed upon activation toward both phenotypes, suggesting that miR-124 expression may be associated with the microglial steady state and could thus be added to the microglial miRNA signature described by Butovsky and coworkers (Butovsky et al., 2014).
Of the potential epigenetic regulators that can control microglia biology, miRNAs are probably the most investigated. These considerable efforts are illustrated in Table 1, which lists the different miRNAs, together with their target genes, that have been reported to affect microglia activation.
In addition to miRNAs, lncRNAs, a type of ncRNA exceeding 200 nucleotides in length, have also emerged as potential factors contributing to the regulation of microglia activation states. The lncRNAs H19, MALAT1, Gm4419, and SNHG14 have been reported to stimulate the activation of microglia toward the M1 phenotype and thereby promote neuroinflammation (Qi et al., 2017; Wang et al., 2017; Wen et al., 2017; Han et al., 2018; Zhou H.J. et al., 2018), whereas the lncRNA GAS5 has an inhibitory effect on M2 polarization of microglia and increases demyelination.
DNA METHYLATION
DNA methylation is a chemical chromatin modification occurring on cytosine residues within CpG dinucleotides: a methyl group is added to the targeted cytosine, which is then referred to as 5-methylcytosine (5-mC). DNA methylation leads to repression of gene transcription when CpG islands (regions of the genome containing a high density of CpGs) are methylated (Goll and Bestor, 2005). During gametogenesis and embryogenesis, DNA methylation plays a major role in regulating chromatin organization and gene expression (Goll and Bestor, 2005; Surani et al., 2008). DNA methylation patterns are established and maintained by specific enzymes called DNA methyltransferases (DNMTs), which can act as de novo or maintenance DNA methylation enzymes.
De novo DNA methyltransferases establish new methylation marks, while maintenance methyltransferases act on hemi-methylated DNA during DNA replication (Prokhortchouk and Defossez, 2008). Ten-eleven translocation (TET) family enzymes catalyze the conversion of 5-mC into 5-hmC (5-hydroxymethylcytosine), which corresponds to an active demethylation process leading to an increase of gene expression (Tahiliani et al., 2009). DNA methylation is involved in different cellular processes: it regulates X chromosome inactivation, the silencing of centromeric and repetitive sequences, and mammalian imprinting, showing the importance of DNA methylation in terms of stable and heritable epigenetic regulation (Bestor, 2000; Li, 2002; Reik, 2007).
In contrast to the extensive investigations of the effects of HDAC inhibitors on microglial gene expression, the regulation of gene expression by DNA methylation is poorly studied in these cells. Two different approaches have been used to investigate the impact of DNA methylation in microglia: this epigenetic modification has been investigated either at the global level, looking for changes in total DNA methylation, or at selected targeted sites, looking at specific gene methylation and related gene expression.
Few studies have focused on the global level of DNA methylation in the brain in association with specific brain injuries or diseases. By immunochemistry, global hypermethylation has been observed in the AD brain, together with a significant increase of 5-hmC in the middle frontal gyrus and middle temporal gyrus of the human AD brain (Coppieters et al., 2014). The authors also observed that the levels of 5-mC and 5-hmC were low in microglia in both control and AD brains. This confirms the observation made by Phipps et al. (2016), who showed no differences in 5-mC or 5-hmC in microglia or interneurons in AD. Moreover, no differences were observed in 5-mC or 5-hmC in cells in plaque-free regions or near plaques in late AD (Phipps et al., 2016). So even if AD is associated with microglia activation, global DNA methylation changes occurring during the disease need to be investigated further in order to confirm a role of DNA methylation in microglia. In contrast, global methylation changes in microglia have been observed in a rat model of traumatic brain injury (TBI). In this model, using immunohistochemistry and double-staining approaches, a sub-population of reactive microglia was identified and characterized as the major source of hypomethylated cells (Zhang et al., 2007).
When focusing on specific gene methylation, it has been shown that IL1β gene expression is regulated by DNA methylation in aging microglia. Indeed, IL1β gene hypomethylation is associated with upregulation of the cytokine in two different models of aging. In the first, the authors associated SIRT1 deficiency with an upregulation of IL1β through hypomethylation of specific CpG sites in the IL1β proximal promoter (Cho S.H. et al., 2015). Matt et al. (2016) confirmed that IL1β upregulation is due to its hypomethylation and validated the role of DNA methylation by treating BV2 and primary microglia cells with the DNA methylation inhibitor 5-azacytidine, which increased IL1β gene expression.
In AD, S-adenosylhomocysteine (SAH), a potent inhibitor of methyltransferases, increases the production of amyloid-β in BV2 microglia cells, possibly by increasing the expression of amyloid precursor protein (APP) via the promotion of hypomethylation of the APP and PS1 gene promoters (Lin et al., 2009). Although SAH only significantly increased the expression of BACE1 (beta-site APP cleaving enzyme 1) at the highest concentration used, Byun et al. (2012) uncovered two specific methylation sites of Bace1 in BV2 microglia cells. Using a bisulfite sequencing approach, they showed that after treatment of BV2 cells with 5-azacytidine, the Bace1 gene presented specific demethylation of two CpG sites (+298 and +351) in its 5′-UTR region. Mitochondrial dysfunction has been implicated in the pathogenesis of PD; indeed, a reduction of the expression of PGC-1α (peroxisome proliferator-activated receptor gamma coactivator-1) has been observed in brain tissue from PD patients. Su et al. (2015) investigated PGC-1α promoter methylation after activation of neuroinflammation by the pro-inflammatory fatty acid palmitate. They observed that palmitate induces PGC-1α promoter methylation in microglia cells as well as in mouse primary cortical neurons and astrocytes, which reduces the expression of the gene and the mitochondrial content. Finally, a specific DNMT has been shown to be involved in microglia activation: by transcriptome sequencing approaches, DNMT3L was found to be upregulated after microglia activation either by LPS treatment or by stimulation of microglia with TLR3 and TLR4 ligands (Das et al., 2015a,b).
PERSPECTIVES
For a long period of time, microglia were considered as cells capable of polarization into two distinct phenotypes (the so-called M1 and M2 phenotypes); however, it is now clear that microglia can adopt multiple phenotypes and that the regulation of microglia activation is far more complicated than initially described. The transcriptional and epigenetic machineries have emerged as important regulators of microglia phenotype acquisition (Figure 2A). Indeed, specific transcriptome profiles have been shown to define the distinct microglial phenotypes acquired in response to various stimuli or challenges within the CNS (Holtman et al., 2017). Collectively, the above-mentioned reports strongly support the contribution of epigenetic mechanisms to the acquisition of these different microglial activation states. Of note, even in the context of the unchallenged brain, microglia exhibit different transcriptomes throughout life (Butovsky et al., 2014; Figure 2B). Moreover, surveying as well as activated microglia can be characterized by distinct miRNA signatures (Butovsky et al., 2014). Thus, it seems that unique epigenomes and transcriptomes can define the different microglial activation states in the developing, aging, and diseased brain. Furthermore, a recent study illustrated the idea that, since microglia possess different epigenomes and associated transcriptomes throughout life and in the course of diseases, interventions on the epigenetic machinery could have different impacts on these cells. In fact, Hdac1 and Hdac2 depletion in microglia led to contrasting effects in the developing, homeostatic and diseased brain (Datta et al., 2018). The involvement of epigenetic mechanisms in the control of microglia polarization toward a specific phenotype or activation state raises the question of how long microglia can be affected by such epigenetic changes. Since microglia are capable of both innate and adaptive responses, it has been proposed that they can acquire a memory of particular events. Indeed, there is compelling evidence for a microglial epigenetic memory, illustrated by the differential response to a challenge of microglia that have previously been exposed to the same or a different challenge, as compared to the response of naive microglia. This process is known in the field as microglia priming (Figure 2C). For example, priming of microglia with an LPS stimulus leads to a different response to a second stimulus (Liu et al., 2012). This epigenetic memory of primed microglia has been confirmed in a model of long-lasting pro-inflammatory suppression, in which microglia present an immune-suppressed phenotype acquired by epigenetic changes after LPS-preconditioning (with a reduction in H3K4me3 at the IL1-β and TNF-α promoters) to prevent excessive damage (Schaafsma et al., 2015). This acquired memory of microglia after priming is also observed in vivo during development and aging, where the IL10 gene, for example, is specifically regulated by methylation in the adult after an early-life drug experience (Schwarz et al., 2011). Epigenetic regulation is also involved in microglial memory during development, since microglia pre-exposed to LPS in vivo show different patterns of gene expression, including HDACs, before and after a second exposure to LPS in vitro. Recently, immune memory of microglia has also been observed in vivo after inflammatory stimuli.
In their model, Wendeln and coworkers observed that priming of microglia leads to acute immune training and tolerance in the brain, that microglia are reprogrammed epigenetically, and that these changes could be involved in differential responses to neuropathology (Wendeln et al., 2018).
Epigenetic modifications such as DNA methylation are heritable marks that can be sustained over time and even across generations, raising the question of a long-term regulation of microglia by epigenetic modifications or signatures. It is very interesting to observe the heritable phenotypes of microglia across generations. In mammals, maternal immune activation (MIA), which can be triggered by an inflammatory stimulus such as bacterial or viral infection but also by allergy during pregnancy, has been shown to correlate positively with the risk of developing neuropsychiatric disorders such as autism in the offspring (Vogel Ciernia et al., 2018). This link may reflect an impact of MIA on fetal microglia, leading to the acquisition of functional changes that are maintained in the adult (Mattei et al., 2017; Prins et al., 2018; Figure 2D). In the majority of animal models of maternal immune activation, rat dams or pregnant mice are treated with either a viral mimetic (poly I:C) or LPS. Microglia in offspring from MIA-challenged rodents show an altered transcriptome signature, with increased expression of genes related to pro-inflammatory signaling pathways but reduced expression of genes related to proliferation and the cell cycle (Ben-Yehuda et al., 2017; Mattei et al., 2017). The transgenerational effect of MIA has led to the proposal that microglial epigenetic changes, which can induce long-term modification of the microglial phenotype, could be responsible for the observed effects in progeny (Nardone and Elliott, 2016). Indeed, a recent study showed alteration of the microglial DNA methylome in a mouse asthma model of MIA; genome-wide analysis identified alterations in the expression of genes involved in the control of microglial sensitivity to the environment and in the shaping of neuronal connections in the developing brain (Vogel Ciernia et al., 2018).
Finally, one should keep in mind that it is highly improbable that the control of microglia plasticity is limited to epigenetic mechanisms. For example, metabolic changes in microglia and their microenvironment have been linked to microglial phenotype polarization and associated with various diseases (Gimeno-Bayon et al., 2014; Kalsbeek et al., 2016; Orihuela et al., 2016). Thus, there is a multifactorial regulation of microglia activation states, and this level of complexity should be taken into account when designing potential therapeutic strategies targeting microglial epigenetic mechanisms.
AUTHOR CONTRIBUTIONS
MC and BJ planned and wrote the manuscript.
FUNDING
This work was supported by the TracInflam grant from ERA-NET NEURON Neuroinflammation and by grants from the Swedish Research Council, the Swedish Childhood Cancer Foundation, the Swedish Cancer Foundation, the Swedish Cancer Society, and the Swedish Brain Foundation.
ACKNOWLEDGMENTS
We apologize to authors whose primary references could not be cited owing to space limitations.
"Biology",
"Medicine"
] |
Using helium 10830 Å transits to constrain planetary magnetic fields
Planetary magnetic fields can affect the predicted mass loss rate for close-in planets that experience large amounts of UV irradiation. In this work, we present a method to detect the magnetic fields of close-in exoplanets undergoing atmospheric escape using transit spectroscopy at the 10830 Å line of helium. Motivated by previous work on hydrodynamic and magneto-hydrodynamic photoevaporation, we suggest that planets with magnetic fields that are too weak to control the outflow's topology lead to blue-shifted transits due to day-to-night-side flows. In contrast, strong magnetic fields prevent this day-to-night flow, as the gas is forced to follow the magnetic field's roughly dipolar topology. We post-process existing 2D photoevaporation simulations to test this concept, computing synthetic transit profiles in helium. As expected, we find that hydrodynamically dominated outflows lead to blue-shifted transits on the order of the sound speed of the gas. Strong surface magnetic fields lead to unshifted or slightly red-shifted transit profiles. High-resolution observations can distinguish between these profiles; however, eccentricity uncertainties generally mean that we cannot conclusively say velocity shifts are due to the outflow for individual planets. The majority of helium observations are blue-shifted, which could be a tentative indication that close-in planets generally have surface dipole magnetic field strengths $\lesssim 0.1$ gauss. More 3D hydrodynamic and magneto-hydrodynamic simulations are needed to confirm this conclusion robustly.
INTRODUCTION
Planets that reside close to their host stars receive significant amounts of high-energy flux. Photoionization of their upper atmospheres results in heating that can drive mass loss in the form of a hydrodynamic wind (Lammer et al. 2003; Muñoz 2007; Murray-Clay et al. 2009; Owen & Jackson 2012; Kubyshkina et al. 2018). For small, low-mass planets (1−4 R⊕, ≲ 20 M⊕), photoevaporation can completely strip their primordial hydrogen/helium envelopes, leaving behind a "stripped core" with a bulk composition similar to Earth's (Owen & Wu 2013; Lopez & Fortney 2013). This photoevaporation-driven evolution has been used to explain the lack of short-period Neptune-sized planets (e.g. Szabó & Kiss 2011; Lundkvist et al. 2016; Owen & Lai 2018), and the bimodal radius distribution of small planets (Fulton et al. 2017; Owen & Wu 2017; Van Eylen et al. 2018; Rogers et al. 2023b). However, other interpretations have been proposed to explain the demographics of close-in, low-mass exoplanets: for example, other mass loss mechanisms such as core-powered mass-loss (e.g. Ginzburg et al. 2018; Gupta & Schlichting 2019) or impact-powered mass-loss (e.g. Wyatt et al. 2020); late-time gas accretion in the formation phase (e.g. Lee & Connors 2021; Lee et al. 2022); or separate populations of terrestrial planets and water worlds (e.g. Zeng et al. 2019; Luque & Pallé 2022).
Given this debate, we need to test photoevaporation models using direct observations of atmospheric escape. An unquantified uncertainty when testing these models is the role that magnetic fields play in controlling the outflow and the mass loss (e.g. Owen & Adams 2019). The photoevaporation models used to match the observed location of the radius valley have not included magnetic fields (e.g. Owen & Wu 2017; Wu 2019; Rogers & Owen 2021; Rogers et al. 2023a,b). Photoevaporative outflows are highly ionized, so they should be influenced by any planetary magnetic fields. The degree to which the planetary magnetic field controls the outflow is determined by the ratio of magnetic pressure to thermal/ram pressure of the outflow (e.g. Adams 2011). If the magnetic pressure dominates near the planetary surface, then the gas is constrained to the topology imposed by the planet's magnetic field. For this "magnetically controlled" case, analytic and numerical studies have demonstrated that in a region near the equator, the magnetic field lines are closed, and the gas is trapped in a "dead-zone" (e.g. Trammell et al. 2011; Adams 2011; Trammell et al. 2014; Owen & Adams 2014; Khodachenko et al. 2015; Matsakos et al. 2015; Arakcheev et al. 2017). Hence, the gas can only escape near the poles, where the gas pressure can open magnetic field lines and drive a transonic wind. Alternatively, if the thermal/ram pressure dominates everywhere, the gas will open all field lines (dragging the magnetic field with it). In this "hydrodynamic case", the outflow should resemble those seen in purely hydrodynamic simulations (e.g. Stone & Proga 2009; Owen & Adams 2014; Khodachenko et al. 2015; Tripathi et al. 2015; Carroll-Nellenback et al. 2016; McCann et al. 2019; Debrecht et al. 2020; Carolan et al. 2021a).
In the magnetically controlled case, only a fraction of the planetary surface supports open field lines, and the mass loss rate is suppressed compared to the hydrodynamic case. 2D MHD simulations have suggested that surface magnetic field strengths as low as ∼ 0.3−3 Gauss can suppress the mass loss rate of hot Jupiters by an order of magnitude (Trammell et al. 2014; Owen & Adams 2014; Khodachenko et al. 2015). The critical magnetic field strengths required to suppress mass loss rates are smaller than or comparable to many of the planetary magnetic field strengths in our Solar System (e.g. Jupiter's dipole moment is ∼4 Gauss; Stevenson 2003; Connerney et al. 2022). Therefore it is important to consider the effects of magnetic fields on escape for exoplanets and to understand how to determine their impact observationally. 3D MHD simulations that include the stellar wind also found that the planetary mass loss rate decreases with planetary magnetic field strength (Khodachenko et al. 2021). However, we note that similar simulations run by Carolan et al. (2021b) show the opposite dependence, and the differences are yet to be fully understood.
For lower-mass planets, strong planetary magnetic fields that suppress mass loss could significantly affect their evolution. Owen & Adams (2019) studied the effect that magnetic fields would have on the location of the radius valley. If these planets had surface magnetic field strengths ≳ 3 Gauss at the age of a few hundred Myr, then their cores would need to be ice-rich in order to match the observed radius valley. However, since the planets below the radius gap (that have well-constrained masses) generally have densities consistent with an Earth-like composition (Dressing et al. 2015; Dorn et al. 2018; Van Eylen et al. 2019), they argue that this implies that the low-mass, close-in exoplanet population does not possess strong magnetic fields. Specifically, Owen & Adams (2019) argued that the dynamo-generated fields in the hydrogen/helium envelopes of mini-Neptunes must be weak (B ≲ 0.3 Gauss) at young ages.
Unlike small planets, hot Jupiters are thought to be stable to atmospheric mass loss independent of whether or not magnetic fields suppress escape (e.g. Hubbard et al. 2007; Owen & Wu 2013; Jin et al. 2014). However, the strength of their magnetic fields is still of much interest. Hot Jupiters have radii larger than expected from interior structure models (Laughlin et al. 2011; Thorngren & Fortney 2018). In order to explain this discrepancy, these planets require an extra source of energy that must be deposited into the interior (e.g. Bodenheimer et al. 2001; Guillot & Showman 2002; Arras & Socrates 2010; Youdin & Mitchell 2010; Batygin & Stevenson 2010; Komacek & Youdin 2017). One popular explanation for this extra energy is Ohmic dissipation, in which the interaction of zonal winds and the planetary magnetic field generates electrical currents that resistively heat the interior (Batygin & Stevenson 2010; Perna et al. 2010; Batygin et al. 2011; Menou 2012; Rauscher & Menou 2013; Ginzburg & Sari 2016). Current modelling suggests that explaining the hot Jupiter radius anomaly in this way requires hot Jupiters to possess sizeable magnetic fields. For example, using 3D MHD simulations of the atmosphere of HD209458b, Rauscher & Menou (2013) argue that the magnetic field must be greater than 3−10 Gauss for Ohmic dissipation to be responsible for its inflated radius. For a few stars hosting hot Jupiters, Cauley et al. (2019) found tentative evidence of magnetic star-planet interactions. From variations in the Ca II K line, they inferred these planets to have large magnetic fields of ∼ 20-100 Gauss. However, we note that Cauley et al. (2019) caution that the detected variability may be unrelated to the presence of a planet, and hence do not constrain planetary magnetic fields. Independently, using a scaling law relating the magnetic energy density to the density and intrinsic luminosity of bodies ranging from Earth to low-mass stars, Christensen et al. (2009) estimated hot Jupiters to have magnetic field strengths of order 20-50 Gauss. In contrast, Grießmeier et al. (2004) argued that tidally locked hot Jupiters should have a much smaller magnetic field than Jupiter due to their slow rotation rate.
To determine whether the effects of magnetic fields need to be accounted for in evolutionary modelling, we need to compare theoretical predictions of "magnetically controlled" and "hydrodynamic" outflows to direct observations of atmospheric escape. Additionally, these observations can be used to put constraints on the magnetic field strength of planets. The two most commonly used lines to directly observe atmospheric escape are Lyman-α (e.g., Vidal-Madjar et al. 2003; Lecavelier Des Etangs et al. 2010; Kulow et al. 2014) and He I 10830 Å (e.g., Spake et al. 2018; Allart et al. 2018; Nortmann et al. 2018), which originates from the metastable 2³S state of helium. Kislyakova et al. (2014) and Ben-Jaffel et al. (2022) inferred the magnetic field properties of HD209458b and HAT-P-11b, respectively, by matching the transit spectrum deduced from 3D outflow models to Lyman-α transits. In general, this approach is challenging because absorption in the interstellar medium generally obscures the line core (e.g. Landsman & Simon 1993), and therefore Lyman-α transits probe material at high radial velocities and at distances far from the planet (≳ 10 R_p) where the outflow is affected by the circumstellar environment (e.g. Owen et al. 2023). This causes observations to be degenerate with properties of the circumstellar environment, making it difficult to link observations to fundamental planetary quantities, such as the planetary magnetic field strength.
The helium line may provide a better tool in this endeavour. It probes material closer to the planet (e.g., MacLeod & Oklopčić 2022) and observations can be done at high spectral resolution (R ≳ 30,000) since it is accessible from the ground. This provides a unique opportunity to study the kinematics of the outflow near the planet. Oklopčić et al. (2020) proposed using spectropolarimetry of this line to detect magnetic fields; however, this method is not currently feasible with 10m class telescopes but may be possible in the future with 30m class telescopes.
Here, we propose a method for detecting the presence of magnetic fields strong enough to control the topology of the outflow using transmission spectroscopy in the 10830 Å line of helium. Motivated by previous HD and MHD simulations, we suggest that "hydrodynamic" outflows (i.e. outflows where the planetary magnetic fields are dynamically unimportant) should lead to blue-shifted transmission spectra due to gas flowing from the hot day side to the cold night side (towards the observer during transit) as a result of pressure gradients from the anisotropic heating. In contrast, strong magnetic fields prevent the day-to-night outflow and cause the gas to move away from the observer during transit. As a result, we suggest that "magnetically controlled" outflows will produce transmission spectra that are (weakly) red-shifted.
THE EFFECT OF MAGNETIC FIELDS ON PHOTOEVAPORATION
In general, the planetary outflow is sensitive to both planet properties and the stellar environment. Following Owen & Adams (2014), one can coarsely partition the outflow into regimes based on the relative strength of the stellar magnetic field, stellar wind ram pressure, planetary magnetic field and planetary wind ram pressure. In this work, we consider regimes where either the ram pressure of the planetary wind or the planetary magnetic field pressure dominates the stellar components close to the planetary surface, where most of the helium absorption occurs. We can estimate whether the outflow is magnetically controlled by considering the dimensionless quantity Λ_p, the ratio of the ram pressure of the wind to the magnetic pressure of the planet. For a planet with a dipolar magnetic field, B(r) = B_0 (R_p/r)³, this ratio is

Λ_p = ρu² / [B(r)²/8π] = 2Ṁu r⁴ / (B_0² R_p⁶),

where Ṁ is the outflow rate, u is the outflow speed, B_0 is the surface magnetic field strength of the body, R_p is the radius of the body and r is the distance to the centre of the body. When Λ_p ≪ 1, the flow is magnetically controlled, and the outflowing gas is forced to follow magnetic field lines. When Λ_p ≫ 1, the outflow will drag magnetic field lines and should follow a flow structure similar to a pure hydrodynamic outflow. Using an "energy-limited" estimate of the outflow rate (e.g. Baraffe et al. 2004), we can estimate the threshold magnetic field at which magnetic pressure and ram pressure are equal (i.e. Λ_p = 1) for fiducial parameters appropriate for a typical hot Jupiter.
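As a rough numerical illustration of this criterion (not code from the original work), Λ_p can be evaluated for representative hot-Jupiter numbers. The expression follows the reconstruction of Λ_p written above, and the mass-loss rate, outflow speed and field strength used here are placeholder values:

```python
import numpy as np

R_JUP = 7.149e9   # Jupiter radius [cm]

def lambda_p(mdot, u, r, b0, r_p):
    """Ram pressure of the planetary wind over the dipole magnetic pressure.

    mdot : mass-loss rate [g s^-1]
    u    : outflow speed at radius r [cm s^-1]
    r    : distance from the planet's centre [cm]
    b0   : surface dipole field strength [Gauss]
    r_p  : planetary radius [cm]
    """
    ram = mdot * u / (4.0 * np.pi * r**2)          # rho u^2, with mdot = 4 pi r^2 rho u
    mag = (b0 * (r_p / r)**3)**2 / (8.0 * np.pi)   # B(r)^2 / 8 pi for a dipole
    return ram / mag

# Placeholder hot-Jupiter numbers: Mdot ~ 1e12 g/s, u ~ 10 km/s, evaluated at the surface
print(lambda_p(1e12, 1e6, R_JUP, 1.0, R_JUP))      # << 1, i.e. magnetically controlled at 1 G
```

For these illustrative numbers the outflow is magnetically controlled at a 1 Gauss surface field, consistent with the ∼ 0.3−3 Gauss suppression thresholds found in the 2D MHD simulations discussed above.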
Post-processing existing simulations
To assess whether "hydrodynamic" and "magnetically controlled" outflows produce blue- and red-shifted transit spectra, respectively, we want to compare the synthetic transit spectra produced from simulations. There are no 3D radiation-hydrodynamics photoevaporation simulations that include helium and magnetic fields. Given the complexity of post-processing the simulations (as detailed in Section 2.2), which would be difficult in 3D, as a first step we choose to use the existing 2D photoevaporation simulations of a hot Jupiter presented in Owen & Adams (2014). These radiation-hydrodynamic simulations were run both with and without magnetic fields for the same setup and are ideal for our experiment. To analyze the case where the ram pressure of the planetary outflow dominates over the magnetic field of the planet (Λ_p > 1), we use a purely hydrodynamic simulation of the outflow. In the opposite case, where magnetic pressure dominates over the ram pressure of the planetary outflow (Λ_p < 1), we use MHD simulations of the outflow.
In these simulations, the planet has a dipolar magnetic field. The planetary dipole was chosen to be parallel to the orbital angular momentum vector of the planet. There are currently no constraints on the geometry of the magnetic field of exoplanets. However, since these planets are very close to their host stars, they are often assumed to be tidally locked. Thus, as a starting point, it is sensible to guess that any dynamo-generated magnetic field may be similar to Jupiter's or Saturn's and (almost) points along the rotation axis.
In some of the simulations, an additional component corresponding to the stellar magnetic field was added. This contribution is modelled as a constant field that points along the pole of the planet, B_* = β_* B_0 ẑ, where β_* controls the relative strength of the stellar and planetary magnetic fields.
The original simulations were performed on a spherical polar grid (r, θ), which we can convert to Cartesian coordinates (x, z), where the x axis points along the line joining the star and the planet and the z axis is parallel to the planet's orbital angular momentum axis. It is important to state that atmospheric escape is generally a 3D process, and therefore several assumptions need to be made in order to be able to perform sensible 2D simulations. Firstly, the planet needs to be tidally locked to have a constant day and night side. At the same time, rotational effects are ignored. Inside the Hill sphere, these effects are small (Murray-Clay et al. 2009) and can be ignored; however, they are important for shaping the flow on the largest scales (e.g. McCann et al. 2019). Since absorption by helium generally comes from close to the planet, this approximation is reasonable.
In the hydrodynamic case, these approximations mean that the outflow is rotationally symmetric around the line joining the planet to the sub-stellar point. The presence of the magnetic field stops this setup from being rotationally invariant, such that the flow is no longer globally axisymmetric. Therefore, the simulations required each cell to locally have no azimuthal variation. Since the magnetic field suppresses the azimuthal flow, this approximation is realistic. Thus, the simulation can be considered a thin slice of the domain in the x-z plane. Since the magnetic field predominately governs the outflow geometry, we expect the flow structure on the day side to be similar as we rotate around the dipole axis. On the night side, the outflow is suppressed since heat is not efficiently transported across magnetic field lines. As a result, only the day side of the atmosphere was simulated (see sec. 5.1 of Owen & Adams 2014 or Trammell et al. 2014).
Addition of Helium
The lifetime of the metastable triplet level in the outflow is determined by the rates of the collisional, radiative and photoionisation processes that depopulate it. As radiative transitions into the ground state are highly suppressed (Drake 1971), depopulation of this state is dominated by transitions due to collisions and photoionization (e.g., Oklopčić & Hirata 2018; Oklopčić 2019). In the outflow, the lifetime of the metastable state may be comparable to the flow timescale (see Figure 4 of Oklopčić 2019). Thus, the helium metastable level is often not in local statistical equilibrium. This means traditional post-processing techniques of simply using the gas' local properties to compute absorption cross-sections are inappropriate. Therefore, we must compute the level populations by solving the time-dependent radiative transfer equations for gas parcels moving in the outflow.
We assume that the gas is composed of solar abundances of hydrogen and helium; however, we note that the mass fraction of helium in the outflow can be adjusted depending on how efficiently helium is expected to be dragged along in the predominantly hydrogen flow (e.g. Malsky & Rogers 2020). Helium is not expected to contribute to the cooling of the gas, and therefore the predominant way that it affects the outflow is by raising its mean molecular weight. For a solar abundance of helium, the sound speed of the gas can be modified by ∼ 25%. Ultimately, though, adding helium should not have a large effect on the geometry of the outflow. Therefore, it is reasonable to consider the flow structure of the pure hydrogen outflow simulated by Owen & Adams (2014) as representative of a hydrogen-helium outflow. Similarly, the distribution of helium ions and atoms in each electronic state has little impact on the outflow structure.
Helium Distribution in the Outflow
The first task is to calculate the distribution of helium atoms in the outflow. We track the fraction of helium in the neutral singlet and triplet states along streamlines in the outflow. We only track the number of neutral helium atoms in the ground 1¹S state and the metastable 2³S state, since excited states quickly radiatively decay to one of these two states with a matching singlet/triplet configuration. Our problem has two parts: (i) determining streamlines in the flow and (ii) calculating the fraction of helium in the triplet state along these streamlines.
In the case of the pure hydrodynamic flow, we solve for the 2D position of a streamline as a function of the arc length, s, along the streamline using

dr/ds = u_r/|u|,    (5)
dθ/ds = u_θ/(r|u|),    (6)

where u_r and u_θ are the radial and angular components of the gas velocity. We then derive the steady-state distribution of helium atoms along streamlines using Equations 7 and 8, which account for the major transitions in and out of the singlet and triplet states of helium (Oklopčić & Hirata 2018):

|u| df_1/ds = (1 − f_1 − f_3) n_e α_1 + f_3 A_31 − f_1 Φ_1 e^(−τ_1) − f_1 n_e q_13 + f_3 n_e (q_31a + q_31b) + f_3 n_H Q_31,    (7)

|u| df_3/ds = (1 − f_1 − f_3) n_e α_3 − f_3 A_31 − f_3 Φ_3 e^(−τ_3) + f_1 n_e q_13 − f_3 n_e (q_31a + q_31b) − f_3 n_H Q_31.    (8)
Here f_1 and f_3 are neutral helium's singlet and triplet fractions, respectively; n_e and n_H are the number densities of electrons and hydrogen atoms; and Φ_1 and Φ_3 are the photoionization rates for the singlet and triplet states, respectively. We calculate the photoionisation rates for the singlet and triplet states by assuming that all the photoionizing flux is concentrated at photon energies hν of 24.6 eV and 4.8 eV, respectively.
Based on the spectral energy distribution for late K stars from the MUSCLES survey (France et al. 2016; Loyd et al. 2016; Youngblood et al. 2016), we estimate the total singlet-ionising flux to be 0.2 of the hydrogen-ionizing flux and the total triplet-ionising flux to be about the same as the hydrogen-ionizing flux. As the simulations were run at high UV fluxes ∼ 10⁶ erg cm⁻² s⁻¹, we set F = 2 × 10⁵ and 10⁶ erg cm⁻² s⁻¹ for the singlet and triplet states, respectively. We varied these fluxes by an order of magnitude in both directions and found that it did not qualitatively affect our results. σ is the photoionization cross-section of the corresponding state; for these photons the cross-sections are 5.48 × 10⁻¹⁸ cm² (Hummer & Storey 1998) and 7.82 × 10⁻¹⁸ cm² (Brown 1971), respectively. In this work, we set both τ_1 = 0 and τ_3 = 0, i.e. we neglect attenuation of the ionizing flux within the outflow. Since hydrogen also absorbs 24.6 eV radiation, we expect the τ = 1 surface to 24.6 eV photons to lie above the τ = 1 surface to 4.8 eV photons (and close to the hydrogen ionization front in the simulation). Since the main pathway to populate the triplet state is by photoionization of the singlet state and then recombination into the triplet state, we expect that this will underestimate the triplet fraction below the τ = 1 surface to 24.6 eV photons. However, since the hydrogen ionization front is deep in the atmosphere of the planet (in these simulations, it is at ∼ 1.03 R_p), the gas below this surface contributes minimally to the overall transit signal.
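Under the monochromatic approximation described above, the photoionization rates follow directly from the quoted fluxes and cross-sections as Φ = Fσ/hν. The short sketch below simply evaluates these numbers; it is illustrative only and is not taken from the original work:

```python
EV = 1.602e-12                                        # erg per eV

flux   = {'singlet': 2e5,      'triplet': 1e6}        # erg cm^-2 s^-1 (values quoted above)
sigma  = {'singlet': 5.48e-18, 'triplet': 7.82e-18}   # cm^2
energy = {'singlet': 24.6,     'triplet': 4.8}        # eV, monochromatic approximation

for state in ('singlet', 'triplet'):
    rate = flux[state] * sigma[state] / (energy[state] * EV)   # Phi = F sigma / (h nu)
    print(f"{state}: Phi ~ {rate:.1e} s^-1")
```

Under these assumptions the rates come out to Φ_1 ≈ 3 × 10⁻² s⁻¹ and Φ_3 ≈ 1 s⁻¹.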
The recombination and collisional excitation/de-excitation coefficients depend on the temperature of the gas. As the simulations were run for high UV fluxes (≳ 10⁶ erg cm⁻² s⁻¹), the gas is in the radiative-recombination regime (Murray-Clay et al. 2009) and therefore thermostats to ∼ 10⁴ K above the ionization front. As a result, we use all coefficients at 10⁴ K. α_1 = 2.16 × 10⁻¹³ cm³ s⁻¹ and α_3 = 2.25 × 10⁻¹³ cm³ s⁻¹ are the case A recombination coefficients for the singlet and triplet states, respectively (Osterbrock & Ferland 2006). The electron density (n_e) is calculated in the Owen & Adams (2014) simulations, which only accounted for electrons produced by the ionisation of hydrogen atoms since this is the dominant source. A_31 = 1.272 × 10⁻⁴ s⁻¹ is the radiative transition rate between the metastable triplet state and the ground singlet state (Drake 1971). The transitions between the triplet and singlet states due to collisions with electrons are described by temperature-dependent coefficients that depend on ΔE, the energy difference between the states from Martin (1973), g, the statistical weight of the state, and Υ, the effective collision strength from Bray et al. (2000). At 10⁴ K, the two transitions from the metastable state to singlet levels are given by q_31a = 2.7 × 10⁻⁸ cm³ s⁻¹ and q_31b = 5.2 × 10⁻⁹ cm³ s⁻¹. The transition from the ground singlet state to the metastable state is given by q_13 = 5.7 × 10⁻¹⁹ cm³ s⁻¹. Finally, transitions out of the metastable state due to collisions with hydrogen atoms via associative ionisation and Penning ionisation have a combined rate of Q_31 ∼ 5 × 10⁻¹⁰ cm³ s⁻¹ (Roberge & Dalgarno 1982).
Equations 5-8 form a set of coupled first-order differential equations. The streamlines are initialised at a constant radius of 1.02 R_p with initial singlet and triplet fractions that are in equilibrium at 500 K with Φ_1 = Φ_3 = 0. We solve these differential equations numerically using the implementation of the LSODA method (Petzold 1983) provided in the scipy library (Virtanen et al. 2020). For numerical stability, we solve Equations 7 and 8 as a function of log(s). The absolute and relative tolerances used are 1 × 10⁻¹³ and 1 × 10⁻¹⁰, respectively, and the maximum step size chosen is 1 × 10⁻³. All gas properties required to solve these equations (e.g. density, velocity, etc.) were interpolated from the original simulations using bivariate spline interpolation over the polar grid. To calculate the metastable helium fraction at any point in the outflow, we interpolate the values for fifty streamlines, linearly spaced in angle, originating on the planet's day side. Testing indicates this is sufficient to calculate the transmission spectrum accurately. Figure 1 shows how streamlines originating on the planet's day side wrap around to the night side.
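A minimal sketch of this integration step is given below. It is not the code used in this work: the outflow profile is a toy, spherically symmetric stand-in for the interpolated simulation data, the photoionization rates are the Φ ≈ Fσ/hν estimates from above, and the right-hand sides follow the simplified level-population equations written out as Equations 7 and 8 with τ_1 = τ_3 = 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rate coefficients at 10^4 K quoted in the text (cgs units)
A31 = 1.272e-4                          # s^-1, 2^3S -> 1^1S radiative decay
ALPHA1, ALPHA3 = 2.16e-13, 2.25e-13     # cm^3 s^-1, case A recombination
Q13 = 5.7e-19                           # cm^3 s^-1, collisional excitation 1^1S -> 2^3S
Q31 = 2.7e-8 + 5.2e-9                   # cm^3 s^-1, collisional de-excitation (electrons)
QH31 = 5e-10                            # cm^3 s^-1, de-excitation by H atoms
PHI1, PHI3 = 2.8e-2, 1.0                # s^-1, Phi = F sigma / (h nu) for the quoted fluxes

def toy_outflow(s, r0=1.02 * 7.1e9):
    """Placeholder outflow: n_H falls as r^-2, 30% ionized, constant 10 km/s speed."""
    r = r0 + s
    n_h = 1e9 * (r0 / r) ** 2
    return 0.3 * n_h, n_h, 1e6          # n_e [cm^-3], n_H [cm^-3], u [cm/s]

def rhs(log_s, f, outflow):
    """d(f1, f3)/d(log s) along a streamline (Equations 7 and 8 divided by u)."""
    f1, f3 = f
    s = np.exp(log_s)
    n_e, n_h, u = outflow(s)
    f_ion = 1.0 - f1 - f3
    df1 = (f_ion * n_e * ALPHA1 + f3 * A31 - f1 * PHI1
           - f1 * n_e * Q13 + f3 * n_e * Q31 + f3 * n_h * QH31) / u
    df3 = (f_ion * n_e * ALPHA3 - f3 * A31 - f3 * PHI3
           + f1 * n_e * Q13 - f3 * n_e * Q31 - f3 * n_h * QH31) / u
    return s * np.array([df1, df3])     # chain rule for the log(s) variable

# Start with essentially all helium in the neutral singlet state (cold, unirradiated gas)
sol = solve_ivp(rhs, [np.log(1e5), np.log(2e11)], y0=[1.0, 0.0],
                method='LSODA', args=(toy_outflow,),
                atol=1e-13, rtol=1e-10, max_step=1e-3)
print(sol.y[1, -1])                     # triplet (metastable) fraction at the end point
```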
In the MHD case, we have to consider both the regions centred on the poles, where the magnetic field lines are opened by the outflow, and a region centred on the equator, where magnetic pressure dominates over thermal pressure so that the field lines are closed. Since the fluid is required to follow magnetic field lines (and magnetic field lines are dragged along by the fluid), we can determine the streamlines of our gas in the open region by tracing magnetic field lines. This is helpful because we can simultaneously determine the region in which the field lines are closed. These are calculated analogously to Equations 5 and 6, replacing velocity with magnetic field:

dr/ds = B_r/|B|,
dθ/ds = B_θ/(r|B|),

where B_r and B_θ are the radial and angular components of the magnetic field, respectively. Figure 1 shows the magnetic field lines for a fiducial planetary magnetic field strength of 3 Gauss. We use this value of the planetary magnetic field to illustrate the magnetically controlled case throughout the rest of the paper.

Figure 1. Left: Streamlines of the outflow for the case of a planet without a magnetic field. The star is located to the right of the planet along the line z = 0. The planet is shown by a black circle centred on the origin. The orientation is the same for all subsequent 2D plots. Due to anisotropic heating, the day side is much hotter than the night side. The resulting pressure gradients cause the outflow to wrap around the planet, creating gas travelling towards an observer in transit in the transmission region. Right: Magnetic field lines for a planet with a surface magnetic field strength of 3 Gauss. Orange lines are closed field lines, where the magnetic field is strong enough to confine the flow. The red lines are open magnetic field lines, which are also gas streamlines. The direction of the arrows shows the direction of the outflow. In this case, the magnetic field prevents the day-to-night-side outflow and causes the gas to move away from the observer in transit in the transmission region.
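The field-line tracing step can be illustrated with a pure, z-aligned point dipole, for which a traced line should satisfy the analytic relation r = L sin²θ. This is only a sketch under that assumption: the simulated field is additionally distorted by the outflow (and, in some runs, by the stellar field), and the 3 Gauss value and Jupiter-like radius below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

R_P = 7.1e9                                   # Jupiter-like planetary radius [cm]

def dipole_field(r, theta, b0=3.0):
    """(B_r, B_theta) of a point dipole aligned with the z axis; theta is the colatitude."""
    b_r = 2.0 * b0 * (R_P / r) ** 3 * np.cos(theta)
    b_t = b0 * (R_P / r) ** 3 * np.sin(theta)
    return b_r, b_t

def field_line(s, y):
    """dr/ds and dtheta/ds along the local field direction (s is arc length)."""
    r, theta = y
    b_r, b_t = dipole_field(r, theta)
    b_mag = np.hypot(b_r, b_t)
    return [b_r / b_mag, b_t / (r * b_mag)]

def back_at_surface(s, y):                    # terminate when the line returns to r = R_P
    return y[0] - R_P
back_at_surface.terminal = True
back_at_surface.direction = -1

theta0 = np.radians(30.0)                     # footpoint colatitude
sol = solve_ivp(field_line, [0.0, 100 * R_P], [1.02 * R_P, theta0],
                events=back_at_surface, max_step=0.05 * R_P)

# A pure dipole field line obeys r = L sin^2(theta); compare the traced apex with L
L = 1.02 * R_P / np.sin(theta0) ** 2
print(sol.y[0].max() / R_P, L / R_P)          # both should be ~4.1
```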
In the regions where the streamlines are open, we solve for the steady-state distribution of helium atoms using Equations 7 and 8, as in the hydrodynamic case. In the regions in which they are closed, the gas is approximately in magneto-hydrostatic equilibrium, and as a result the helium level populations are in statistical equilibrium. Therefore, we solve for the distribution of helium atoms using Equations 7 and 8 with the advection term (the left-hand side) set to zero. The numerical method is the same as in the hydrodynamic case, with the addition of a step that checks whether the field line is open or closed in order to decide whether to solve the non-equilibrium or the equilibrium equations for the distribution of helium atoms.
In Figure 2, we show the number density profiles of helium in the triplet state for the hydrodynamic and magnetically controlled outflow.
Computing Synthetic Transit Profiles
Our goal is to calculate the absorption of stellar radiation at 10830 Å due to the presence of the escaping helium. To elucidate the basic principles, we produce synthetic transmission spectra at mid-transit for a planet transiting with an impact parameter of zero. Since the simulations used to generate the densities and velocities of the outflow are 2D, it is only at mid-transit and with an impact parameter of zero that our approximate symmetry is applicable.
With the assumptions described in Section 2.1, the outflow in the purely hydrodynamic case is rotationally invariant around the line connecting the star and planet (the x-axis). This symmetry means that we can identify the z coordinate of our grid with the radial coordinate on the stellar disc. Therefore, the optical depth of the outflow for a ray emanating at any point on the stellar disc is only a function of z and is given by Equation 14. This can be integrated over the stellar disc to give the excess absorption caused by the escaping gas (Equation 15):

τ_ν(z) = ∫ n_3 σ_ν dx,    (14)

δ_ν = (2/R_*²) ∫ z [1 − e^(−τ_ν(z))] dz,    (15)
where n_3 is the number density of helium atoms in the metastable state, R_p and R_* are the radii of the planet and star, respectively, and σ_ν is the absorption cross-section of helium to photons at a frequency ν, which is a function of the line-of-sight velocity of the gas, v_los, and the temperature. The absorption cross-section is given by

σ_ν = (π e²)/(m_e c) f Φ_ν,

where e and m_e are the charge and mass of the electron, c is the speed of light, f is the oscillator strength of the transition and Φ_ν is the Voigt line profile. The Gaussian part of the Voigt profile has a standard deviation of (ν_0/c)(k_B T/m_He)^(1/2); the Lorentzian part has a half-width at half-maximum of A/4π, and the line centre has been Doppler shifted to ν_0(1 − v_los/c).
Here ν_0 is the rest frequency of the helium line, and A = 1.022 × 10⁷ s⁻¹ is the Einstein coefficient taken from the NIST atomic database (Kramida et al. 2022). It is worth noting that the 2³S → 2³P transition in helium actually consists of three distinct lines, at 10830.34 Å, 10830.25 Å and 10829.09 Å, due to fine-structure splitting. The first two of these transitions are blended; however, the third can be distinguished at the spectral resolution of current instruments. The oscillator strengths for these transitions, taken from the NIST atomic database (Kramida et al. 2022), are 0.300, 0.180, and 0.060, respectively. The absorption cross-section at any wavelength is calculated by summing the contributions from these three transitions.
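The cross-section described above can be sketched as follows, using scipy's Voigt profile. This is an illustrative implementation only: the sign convention for v_los (positive meaning motion away from the observer) and the numerical constants are our assumptions, not taken from the original code.

```python
import numpy as np
from scipy.special import voigt_profile

C = 2.998e10        # speed of light [cm/s]
KB = 1.381e-16      # Boltzmann constant [erg/K]
M_HE = 6.646e-24    # helium atom mass [g]
SIGMA_CL = 0.02654  # pi e^2 / (m_e c) [cm^2 Hz]
A_UL = 1.022e7      # Einstein A coefficient [s^-1]

# Fine-structure components of the 2^3S - 2^3P transition and their oscillator strengths
WAVELENGTHS = np.array([10830.34, 10830.25, 10829.09]) * 1e-8   # [cm]
F_OSC = np.array([0.300, 0.180, 0.060])

def cross_section(nu, v_los, temperature):
    """He 10830 A absorption cross-section [cm^2] at frequency nu [Hz].

    v_los > 0 is taken to mean gas moving away from the observer (red-shifted line).
    """
    sigma = 0.0
    for lam, f in zip(WAVELENGTHS, F_OSC):
        nu0 = C / lam
        nu_centre = nu0 * (1.0 - v_los / C)                       # Doppler-shifted centre
        doppler = (nu0 / C) * np.sqrt(KB * temperature / M_HE)    # Gaussian std dev [Hz]
        lorentz = A_UL / (4.0 * np.pi)                            # Lorentzian HWHM [Hz]
        sigma += SIGMA_CL * f * voigt_profile(nu - nu_centre, doppler, lorentz)
    return sigma

# Example: static gas at 10^4 K, evaluated at the strongest line's rest frequency
print(cross_section(C / 10830.34e-8, 0.0, 1e4))
```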
Similar to Oklopčić & Hirata (2018), when computing the optical depth we only took into account gas within the Hill sphere of the planet, which we choose to have a radius of 4 R_p. This approximately corresponds to the Hill sphere radius of a Jupiter-mass planet around a solar-mass star at 0.04 AU. Beyond the Hill sphere, the outflow is significantly affected by the Coriolis force, the stellar tidal field and the stellar wind. Here, the model is no longer applicable. However, unless the stellar wind is very strong, these regions do not contribute greatly at mid-transit (e.g., MacLeod & Oklopčić 2022). If the planet is further away from the star or the magnetic field is strong, the planetary outflow may not be affected by the circumstellar environment at larger distances from the planet. In Appendix A, we provide transit spectra taking into account gas out to 8 R_p to illustrate this effect.
To do this calculation, we construct a linearly spaced 300 × 600 cell grid in the (x, z) plane, with x ranging over [−4 R_p, 4 R_p] and z ranging over [R_p, 4 R_p]. The integrals in Equations 14 and 15 are computed on this grid using a middle Riemann sum.
The method we use in the magnetic case is different. As discussed above, the rotational symmetry around the line connecting the star and planet (the x-axis) present in the purely hydrodynamic case is broken by the dipolar structure of the magnetic field. Therefore, we cannot simply compute the 3D density structure by rotating the 2D density distribution around the x-axis. The presence of a planetary magnetic field shuts off the outflow on the night side of the planet, since heat cannot be transported effectively across magnetic field lines. Therefore, we set the density of helium on the night side of the planet to zero. On the day side, we build the 3D density distribution by rotating the 2D helium density distribution around the z-axis (the symmetry axis of the dipole). In order to account for the reduction of EUV flux incident on the atmosphere away from the substellar line due to geometrical effects, we scale the number density of helium in the triplet state in each 2D slice by the square root of the reduction in incident EUV flux at the base of the slice. We do this because the simulations are done at high EUV fluxes such that the outflow is in ionization-recombination equilibrium. In this regime, the temperature of the outflow thermostats to 10⁴ K and the density at the base of the outflow is proportional to the square root of the incident EUV flux (e.g. Murray-Clay et al. 2009; Owen & Alvarez 2016).
To perform synthetic observations, we create a 3D cylindrical grid from the Cartesian product of a 2D grid on the stellar disc and a 1D grid along the line connecting the star and planet. The 1D grid has 150 cells and ranges over [0, 4 R_p]. The 2D grid on the stellar disc has 400 cells. We calculate the optical depth of the outflow at the centre of each cell on the stellar disc by integrating through the 3D density distribution using a middle Riemann sum. This can then be translated into an excess depth by integrating over the stellar disc using a double Riemann sum,

δ_ν = (1/π R_*²) Σ_i (1 − e^(−τ_ν,i)) A_i,

where A_i is the area of the i-th cell.
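For the axisymmetric (hydrodynamic) case, the chord-integration and disc-integration steps of Equations 14 and 15 can be sketched as below. The n3 and sig functions are toy placeholders for the post-processed metastable-helium density and the velocity-dependent cross-section; the grid sizes mirror those quoted above, and the stellar radius is an assumed solar value.

```python
import numpy as np

R_P = 7.1e9        # planetary radius [cm]
R_STAR = 7.0e10    # stellar radius [cm]

def excess_depth(n3_func, sigma_func, nx=300, nz=600):
    """Excess transit depth for an outflow axisymmetric about the star-planet line.

    n3_func(x, z)    -> metastable-helium number density [cm^-3]
    sigma_func(x, z) -> absorption cross-section [cm^2], already evaluated at the
                        frequency of interest (it may fold in the local v_los)
    """
    # Mid-point (middle Riemann) grid in the (x, z) plane
    x = np.linspace(-4 * R_P, 4 * R_P, nx + 1)
    z = np.linspace(R_P, 4 * R_P, nz + 1)
    xm, dx = 0.5 * (x[:-1] + x[1:]), np.diff(x)
    zm, dz = 0.5 * (z[:-1] + z[1:]), np.diff(z)

    # Optical depth along each chord at height z above the orbital plane (Eq. 14)
    X, Z = np.meshgrid(xm, zm, indexing='ij')
    tau = np.sum(n3_func(X, Z) * sigma_func(X, Z) * dx[:, None], axis=0)

    # Integrate (1 - exp(-tau)) over the obscured annulus of the stellar disc (Eq. 15)
    return np.sum(2.0 * np.pi * zm * (1.0 - np.exp(-tau)) * dz) / (np.pi * R_STAR**2)

# Toy example: spherically symmetric density profile and a constant cross-section
n3 = lambda x, z: 1e4 * (R_P / np.hypot(x, z)) ** 3
sig = lambda x, z: 7e-13 + 0.0 * x
print(excess_depth(n3, sig))
```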
RESULTS
Figure 4 shows the excess transit depth for the hydrodynamic and magnetically controlled outflows. As expected, we find the hydrodynamic outflow leads to a blue-shifted spectrum. Figure 3 shows that most of the absorption comes from the terminator, where gas is moving towards the observer in transit. The peak absorption is at −8.3 km s⁻¹, which is of order the sound speed of the outflowing gas. It is worth noting that we still see significant absorption in the red wing due to absorption from gas moving towards the star (away from the observer in transit), which has originated from low latitudes on the planet. As a result, the spectrum is asymmetric and broader than expected from thermal broadening.
In contrast, a planet with a 3 Gauss magnetic field shows a symmetric transit spectrum, with a peak absorption centred at 0.5 km s⁻¹. Most of the absorption comes from the dead zone, where the gas is essentially stationary, and from an area near the poles, where the line-of-sight velocity is small (see Figure 3). As a result, the spectrum only shows a small velocity shift. Furthermore, since almost all the absorption comes from gas at low velocities, the width of the spectrum is primarily set by the thermal velocity of the gas.
In Figure 5, we show how the transit spectrum varies with planetary magnetic field strength. Interestingly, the transit depth does not decrease monotonically with magnetic field strength, even though the fraction of open field lines (and the mass loss rate) decreases smoothly as a function of magnetic field strength. As the magnetic field strength is increased, the size of the region where the magnetic field lines are closed increases. This region then blocks more of the stellar disc, contributing more to the transit depth. At the same time, since the mass loss decreases, the column density of gas in the outflowing regions near the poles decreases, contributing less to the transit depth. These two contributions have opposite dependences on the strength of the magnetic field, with the result that the transit depth is not a monotonic function of the magnetic field strength.
The overall velocity shift of the transmission spectrum also varies as the magnetic field strength changes. At lower magnetic field strengths, gas pressure is able to open up field lines at lower latitudes. The gas coupled to these field lines has a higher line-of-sight velocity, resulting in a red-shifted spectrum. In the 0.3 Gauss case, shown in Figure 5, this velocity shift is 3.8 km s⁻¹.
For the 3 Gauss case, we also include a situation where the stellar magnetic field is non-zero, and the ratio of stellar to planetary magnetic field strength is β_* = 0.001. For a planet at 0.04 AU around a solar-radius star, this corresponds to a stellar surface magnetic field of 1 Gauss. In general, the stellar magnetic field is able to open out more field lines. As mentioned above, this can either increase or decrease the transit depth depending on how the contributions from the closed region and the outflowing regions change. However, in this case we actually see no change in the transit spectrum. This is because the opening of field lines is completely dominated by the outflow rather than the stellar field. This effect is explained in Figure 4 of Owen & Adams (2014).

Figure 2. The number density of helium in the 2³S state in the outflow for the hydrodynamic case (left) and the magnetically controlled case (right). In the hydrodynamic case, the density appears to be fairly spherically symmetric. In the magnetically controlled case, the region with closed magnetic field lines has a higher number density of metastable helium atoms than the outflowing regions. This is primarily because the total number density of gas in this region is higher, as it is in hydrostatic equilibrium.
DISCUSSION AND CONCLUSIONS
We have post-processed 2D HD and MHD simulations of a hot Jupiter undergoing photoevaporation and calculated the in-transit absorption signal in the 10830 Å line of helium. If the planet has no magnetic field, the transit signal is blue-shifted by Δv = −8.3 km s⁻¹ due to absorption from gas that flows from the hot day side to the cold night side. When a planetary magnetic field (between 0.3 and 10 Gauss) is included, the transit signal has either no apparent velocity shift or a very slight red-shift, as escaping gas from the day side is tied to magnetic field lines and moves away from the observer in transit.
At the current resolution of ground-based instruments, this difference in line-centre velocity is observable and may be used to test whether outflows are magnetically controlled or not and, consequently, to put constraints on the magnetic field strengths of planets. As the observed velocity difference is due to large-scale changes in the geometry of the flow, this method should be fairly robust to the specifics of the photoevaporative flow (e.g. mass loss rate, temperature). However, further work is required to ensure that our results still hold in full 3D simulations over a more extensive region of parameter space and for various magnetic field configurations. We will discuss this in Section 5.1.

Figure 5. Calculated excess absorption as a function of wavelength for different planetary magnetic field strengths. For a planetary magnetic field of 3 Gauss, we also include a case in which the stellar magnetic field is non-zero. The velocity scale is the same as in Figure 4.
For magnetically controlled flows, observationally distinguishing between different planetary magnetic field strengths is a more difficult task. The clearest difference is that lower field strengths, with more open field lines, lead to a more red-shifted transit. More open field lines can also increase high-velocity gas absorption far from the planet, leading to asymmetric transits with an extended red wing. This is only visibly apparent in the 0.3 Gauss case when we consider gas within 8 planetary radii instead of 4 (see Figure A1 in Appendix A). Observations of the escaping atmosphere of WASP-52b show that the transit spectrum is unshifted with respect to the planet's rest frame and has a slight asymmetry in the red wing. We speculate that this might indicate that the outflow is magnetically controlled. Overall, we caution that since the differences in these spectra are relatively minor, model and observational uncertainties most likely make different magnetic field cases indistinguishable at the resolution of current instruments.
Use of Owen & Adams (2014) simulations and 3D effects
In reality, atmospheric escape is a 3D process, and 3D radiation magneto-hydrodynamics simulations that include rotational effects are necessary to confirm the validity of this work. The Owen & Adams (2014) simulations are simplistic when compared to current state-of-the-art simulations; however, as discussed above, they are the only simulation set that includes radiative transfer, models with and without magnetic fields, and a reasonable parameter study within the same framework. This allowed us to make meaningful comparisons between simulations that did and did not include magnetic fields. However, it is important to discuss some of the limitations of these simulations so that our results are not conflated with specific quantitative results arising from our post-processing of this simulation set. The Owen & Adams (2014) simulations were the first multidimensional simulations of photoevaporation that included on-the-fly radiative transfer. As such, they used a simplified algorithm which assumed an ionization-recombination balance for the hydrogen and took the ionized gas to be at 10⁴ K, where the temperature was computed based on the local ionization fraction (e.g. Gritschneder et al. 2009). This ionization-recombination balance is well known to be applicable to planets around young stars (e.g. Murray-Clay et al. 2009; Owen & Alvarez 2016), yet most observations of helium outflows are for old planets where this approximation does not hold. The approximations of the Owen & Adams (2014) simulations are likely driving two quantitative results of our simulations: firstly, the high fluxes produce a high rate of recombinations, and thus our helium transit depths are likely overestimated compared to older systems; secondly, the outflow velocity (and hence the magnitude of the velocity shift) depends directly on the gas temperature, with higher temperatures giving higher outflow velocities. Outflows around lower-mass, older planets are more likely to be in the "energy-limited" regime (Owen & Alvarez 2016), where the outflow temperatures are necessarily lower, and hence we suspect our simulated ∼8 km s⁻¹ blue-shifts are likely overestimates. However, as discussed above, based on the results of more recent 3D simulations, we are confident our qualitative conclusions about blue-shifts arising in the hydrodynamic cases are robust. Hence, the detection or lack of a blue-shift can be used to make inferences about the strength of any planetary magnetic field.
We also re-emphasize that we focus on a particular area of parameter space where the ram pressure of the planetary wind or the magnetic pressure of the planet dominates over the stellar magnetic pressure or the stellar wind ram pressure close to the planet. Whether the stellar wind or the stellar magnetic field has a larger influence at the orbit of the planet depends on the mass loss rate of the star, the magnetic field of the star, the stellar wind velocity and the semi-major axis of the planet. For hot Jupiters, either the case where the stellar magnetic field dominates or the case where the stellar wind dominates is possible for sensible stellar and planetary parameters. As the magnetic pressure of a dipolar field drops as a⁻⁶, compared to the stellar wind ram pressure which drops as a⁻², the stellar wind is more likely to dominate for sub-Neptunes, which are generally further from their star than hot Jupiters.
The primary effect of increasing the strength of the stellar magnetic field is to open more planetary field lines. At low stellar magnetic field strengths, this can increase the mass loss rate. At high stellar magnetic field strengths, the flow cannot smoothly transition through the sonic point along these open field lines, and the mass loss rate can actually decrease (Owen & Adams 2014). These changes can vary the transit depth. However, we don't expect it to significantly change the velocity of the absorbing gas and, therefore, the velocity shift of the transit.
On the other hand, the stellar wind may have a significant effect on the velocity shift of the transmission spectrum. If the planet is outside the Alfvén radius of the star, the ram pressure of the stellar wind dominates over the magnetic pressure of the stellar wind, and the magnetic field lines in the stellar wind follow the stellar wind plasma.
The effect of the stellar wind on the geometry of the planetary outflow has been studied in both hydrodynamic and MHD simulations. As the strength of the stellar wind is increased compared to the planetary outflow or planetary magnetic field, the stellar wind is able to confine the planetary outflow closer to the surface of the planet. On scales larger than the Hill sphere of the planet, the stellar wind will shape the outflow into a cometary tail that is being radially accelerated away from the star (e.g., Matsakos et al. 2015; Khodachenko et al. 2019; McCann et al. 2019). In essentially isothermal 3D hydrodynamic simulations including helium, MacLeod & Oklopčić (2022) showed that this could result in both blue-shifted absorption during the optical transit and significant blue-shifted post-optical absorption. This is also apparent in the 3D radiation-hydrodynamic simulations performed by Wang & Dai (2021).
MHD simulations, including the radial magnetic field of the stellar wind, have also shown that the stellar wind causes planetary magnetic field lines to bend towards the night side (e.g., Kislyakova et al. 2014; Carolan et al. 2021b; Khodachenko et al. 2021; Ben-Jaffel et al. 2022). It is possible that this could result in a blue-shifted absorption signature during the transit of the planet, even if the planet had a reasonably strong magnetic field. However, it is likely that this would also cause a significant blue-shifted post-optical absorption like that seen in the hydrodynamic case. More 3D simulations are necessary to understand how degenerate observations are over the full parameter space.
Eccentricity Uncertainty
To definitively assert that the velocity shift is due to the outflow, we must know the planet's eccentricity. Since highly irradiated planets are close to their host stars, they are generally assumed to have undergone orbital circularization. Small deviations away from a circular orbit can cause an apparent velocity shift of Δv_los ≈ v_K e cos ω, where v_K is the Keplerian velocity, e is the eccentricity and ω is the argument of periastron of the planet (e.g. Murray & Correia 2010). Van Eylen et al. (2019) determined the eccentricities for a sample of Kepler planets through a combination of asteroseismology and transit light-curve analysis. However, uncertainties in the eccentricity for individual planets were rarely < 0.1. For a planet with a typical orbital velocity of ∼100 km s⁻¹, an eccentricity uncertainty of ∼0.1 can lead to a velocity shift of ∼10 km s⁻¹, which is of the order of the velocity shift due to the outflow geometry. This is also true when TTVs are used to determine eccentricities in multi-planetary systems (Van Eylen & Albrecht 2015).
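A minimal sketch of this estimate is given below, assuming the reconstructed relation Δv_los ≈ v_K e cos ω above; the function name and the planetary parameters chosen (a solar-mass star and a 0.04 AU orbit) are illustrative, not taken from the text.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

def apparent_shift_kms(m_star_msun, a_au, ecc, omega_deg):
    """Apparent line-of-sight velocity shift at mid-transit, ~ v_K * e * cos(omega)."""
    v_k = np.sqrt(G * m_star_msun * M_SUN / (a_au * AU))   # circular Keplerian speed
    return v_k * ecc * np.cos(np.radians(omega_deg)) / 1e3  # km/s

# Illustrative hot Jupiter: 0.04 AU around a solar-mass star (v_K ~ 150 km/s).
# An unrecognised eccentricity of 0.1 with omega = 0 mimics a ~15 km/s shift,
# comparable to or larger than the geometric shifts discussed in the text.
print(apparent_shift_kms(1.0, 0.04, 0.1, 0.0))
```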
The detection of a secondary eclipse can be used to measure e cos ω of individual planets precisely (e.g. Winn 2010), such that we can rule this out as a source of the observed velocity shift. Due to their high temperatures and large sizes, this is often possible for hot Jupiters (e.g. Charbonneau et al. 2005; Baskin et al. 2013). Previously, this has not generally been the case for sub-Neptunes. However, with JWST, detecting the secondary eclipse of some sub-Neptunes might be possible.
Implications for Real Systems
Ideally, we would be able to use helium observations to constrain these planets' magnetic fields. As highlighted in the previous section, eccentricity uncertainties generally mean that we cannot make claims about the magnetic field strengths of individual planets based on these observations. However, we can speculate about trends in the population.
Table 1 lists spectroscopic detections of helium escape. For each planet, we quote the observed velocity shift of the absorption relative to the planet's rest frame, which is often determined by assuming a circular orbit if the planet is not known to be eccentric. These are not directly comparable to our models, as the observed transit depth is generally computed by integrating over all in-transit phases. We also estimate a critical surface magnetic field strength at which the flow transitions from "hydrodynamic" to "magnetically controlled", beyond which we no longer expect to see a blue-shifted transit spectrum. We approximate this value as the maximum surface magnetic field for which the outflow opens all field lines, although we caution that this is likely to be an overestimate. Since the outflow may be able to wrap around to the night side in the terminator region before all field lines are opened, 3D MHD simulations will be required to investigate this quantitatively.
This critical magnetic field strength is found by requiring that the thermal pressure exceeds the magnetic pressure at all radii beyond the planetary surface. We estimate the thermal pressure by i) assuming the outflow is an isothermal Parker wind with a fixed sound speed and mass loss rate, or ii) assuming the outflow is in photoionization equilibrium. In both cases, the detailed procedure used to calculate the critical magnetic field is explained in Appendix B. In Table 1, we give three critical magnetic field strength values. The first two are calculated assuming the isothermal Parker wind case, where the sound speed corresponds to a temperature of 0.3 × 10⁴ K or 10⁴ K, and the mass loss rate is given by the energy-limited mass loss rate (Equation 2) with an efficiency of 0.1. The final estimate is given by the photoionization equilibrium case, where the gas temperature is 10⁴ K. In general, this last method gives a slightly larger value for the critical magnetic field but is the least applicable to old planets.
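The detailed procedure lives in the paper's Appendix B (not reproduced here); the sketch below is only an order-of-magnitude illustration of the idea, assuming an isothermal wind whose thermal pressure, evaluated at the sonic point, must exceed the dipole magnetic pressure there. The function name, the mean molecular weight, and the planet parameters in the example are assumptions.

```python
import numpy as np

G = 6.674e-8          # cgs
K_B = 1.381e-16
M_H = 1.673e-24
M_JUP = 1.898e30      # g
R_JUP = 7.149e9       # cm

def critical_field_gauss(mp_g, rp_cm, mdot_gs, T_K, mu=1.3):
    """Rough critical surface dipole field for an isothermal outflow.

    Assumptions (a sketch, not the Appendix B procedure):
    - isothermal wind at temperature T with mean molecular weight mu;
    - density at the sonic radius r_s = G M_p / (2 c_s^2) from mass conservation
      with v = c_s there;
    - the flow opens the field if thermal pressure exceeds the dipole magnetic
      pressure B(r)^2 / (8 pi) at the sonic point.
    """
    cs = np.sqrt(K_B * T_K / (mu * M_H))           # isothermal sound speed
    rs = max(G * mp_g / (2.0 * cs**2), rp_cm)      # sonic radius (not inside the planet)
    rho_s = mdot_gs / (4.0 * np.pi * rs**2 * cs)   # density at the sonic point
    p_th = rho_s * cs**2                           # thermal pressure there
    b_at_rs = np.sqrt(8.0 * np.pi * p_th)          # field matching that pressure
    return b_at_rs * (rs / rp_cm)**3               # scale the dipole back to the surface

# Illustrative hot Jupiter: 0.7 M_Jup, 1.4 R_Jup, Mdot = 1e11 g/s, T = 1e4 K -> ~2 Gauss.
print(critical_field_gauss(0.7 * M_JUP, 1.4 * R_JUP, 1e11, 1e4))
```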
To calculate the critical magnetic field strength, we needed to estimate the XUV flux received by each planet. For this, we used values given in each of the cited detection papers (see Table 1). If no estimate was present, we used estimates from Kirk et al. (2022), in which the XUV flux was found by comparing to stars of similar spectral type from the MUSCLES survey (France et al. 2016; Loyd et al. 2016; Youngblood et al. 2016). We note that the reconstructed XUV flux has significant uncertainties, and our methods for calculating the thermal pressure of the outflow are only approximate. Therefore, we expect that our estimates for the critical magnetic field may vary by an order of magnitude.
We also estimate the extent to which the stellar wind can confine the outflow. To effectively radially accelerate the planetary gas to produce a blue-shifted transit, the stellar wind must be able to penetrate close to the planetary surface. We approximately calculate the position of the bow shock produced when the stellar wind and planetary outflow collide. The bow shock is defined by the condition that the momentum fluxes of the stellar wind and planetary outflow balance. In the simple case where both outflows are spherical, we can approximate the radial distance of the bow shock from the planet (Equation 20) in terms of Ṁ_*, the mass loss rate of the star, Ṁ_p, the mass loss rate of the planet, v_*, the stellar wind velocity, and a, the semi-major axis of the planet. To calculate this, we estimate the mass loss rate of the planet using the energy-limited mass loss rate (Equation 2) with an efficiency of 0.1 and use solar-like wind values of Ṁ_* = 10⁻¹⁴ M_⊙ yr⁻¹ and v_* = 150 km s⁻¹. If significant absorption occurs outside this radius, then the stellar wind is likely to play an important role in determining the shape of the transit. We also note that a strong planetary magnetic field can considerably increase the bow shock distance. In this case, the bow shock distance is approximated by the point at which the stellar wind ram pressure and the magnetic pressure of the planetary field balance.
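As the original Equations 20 and 21 are not reproduced in this text, the sketch below uses the standard colliding-wind and magnetopause pressure-balance estimates under the stated assumptions; it may differ in detail from the paper's expressions. The planetary wind speed v_p, which the text does not list, is an extra assumption here, and the example numbers are illustrative.

```python
import numpy as np

M_SUN_G = 1.989e33   # g
YR_S = 3.156e7
AU_CM = 1.496e13

def bow_shock_distance_cm(mdot_p_gs, v_p_cms, mdot_star_msun_yr, v_star_cms, a_au):
    """Standoff distance of two colliding spherical winds (momentum-flux balance):
    R/a = sqrt(eta) / (1 + sqrt(eta)), with eta = (Mdot_p v_p) / (Mdot_* v_*)."""
    mdot_star = mdot_star_msun_yr * M_SUN_G / YR_S
    eta = (mdot_p_gs * v_p_cms) / (mdot_star * v_star_cms)
    return a_au * AU_CM * np.sqrt(eta) / (1.0 + np.sqrt(eta))

def magnetopause_distance_cm(b_surf_gauss, rp_cm, mdot_star_msun_yr, v_star_cms, a_au):
    """Standoff where stellar-wind ram pressure balances the planetary dipole pressure:
    rho_* v_*^2 = B_p^2 (R_p / r)^6 / (8 pi)."""
    mdot_star = mdot_star_msun_yr * M_SUN_G / YR_S
    rho_star = mdot_star / (4.0 * np.pi * (a_au * AU_CM)**2 * v_star_cms)
    ram = rho_star * v_star_cms**2
    return rp_cm * (b_surf_gauss**2 / (8.0 * np.pi * ram))**(1.0 / 6.0)

# Illustrative values: Mdot_p = 1e11 g/s, v_p ~ 10 km/s, solar-like wind at 0.04 AU.
print(bow_shock_distance_cm(1e11, 1e6, 1e-14, 1.5e7, 0.04) / 7.149e9)        # in R_Jup
print(magnetopause_distance_cm(1.0, 7.149e9, 1e-14, 1.5e7, 0.04) / 7.149e9)  # 1 Gauss case
```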
The corresponding expression for this magnetically set standoff distance is given by Equation 21. Interestingly, ten out of the twelve systems show blue-shifted absorption. It is thus highly unlikely that uncertainties in planetary motion can account for all of these, and it is likely that some of these blue-shifts can be attributed to the geometry of the outflow. There are two explanations for this. The first possibility is that these planets have relatively weak surface magnetic fields (B ≲ 0.1 Gauss), and day-to-night winds are responsible for the blue-shifted transit. For sub-Neptunes, this would agree with the findings of Owen (2019), which demonstrated that the suppression of atmospheric mass loss by surface magnetic fields ≳ 0.3 Gauss is not consistent with both the position of the radius valley and the rocky composition of planets below the valley. This is not inconsistent with dynamo-generated magnetic fields in the cores of sub-Neptunes being of the order of a few Gauss, like solar system planets, as a hydrogen/helium envelope of a few per cent by mass can greatly increase the radius of the planet and, in particular, raise the upper atmosphere where atmospheric mass loss is taking place. Since the strength of a dipole field scales like r⁻³, the magnetic field can be much weaker in these regions. For hot Jupiters, this conclusion would favour the argument of Grießmeier et al. (2004) that slow rotation of hot Jupiters due to tidal locking leads to magnetic fields that are orders of magnitude smaller than Jupiter's. This is in opposition to the argument of Christensen et al. (2009), which predicts magnetic field strengths an order of magnitude greater than Jupiter's, based on a scaling relation between magnetic field strength and energy flux.
The other possibility is that stellar winds are able to penetrate close to the planetary surface and radially accelerate the gas to produce the observed blue-shifts. We note that a comet-like tail of escaping helium extending out to ∼7 planetary radii has been detected in the case of WASP-107b (Spake et al. 2021), and to a lesser extent for WASP-69b (Nortmann et al. 2018), HAT-P-18b (Fu et al. 2022) and HD 189733b (Guilluy et al. 2020). This may be evidence that the stellar wind plays a role in shaping the outflow of some of these planets. We can set a lower limit on the extent of the outflow by assuming that the gas is optically thick to 10830 Å radiation, which can be compared to the bow shock radius to assess the importance of the stellar wind. We call this the "minimum absorption area" and calculate its value at mid-transit (see Table 1). We stress that much of the gas may be optically thin (e.g. MacLeod & Oklopčić 2022), and therefore the actual extent can be significantly larger than this estimate. A good example of this is the transit of WASP-107b, which is found to have a much smaller minimum absorption area, corresponding to ∼2 planetary radii, than is temporally observed, ∼7 planetary radii (e.g. Spake et al. 2022).
In general, if the planets have weak magnetic fields, solar or moderately super-solar winds are required to place the bow shock radius close enough to the planet to disrupt the planetary outflow significantly. This is certainly feasible based on estimates of stellar wind mass loss rates from detections of astrospheric absorption (e.g. Wood et al. 2005, 2021). However, if these planets are able to host significant surface magnetic fields (≳ 1 Gauss), then stellar winds with mass loss rates orders of magnitude greater than the Sun's would be required. Thus, we speculate that, for the population of close-in planets for which helium observations have been taken, there is no evidence that these planets possess magnetic field strengths strong enough to control the topology of any photoevaporative outflow.
Table 1. Detections of helium atmospheric escape using high-resolution spectroscopy and the observed velocity shift of the outflow in the rest frame of the planet. We have provided uncertainties on the observed velocity shift where given in the detection paper; these are generally of the order of a few km s⁻¹. We estimate the maximum surface magnetic field B_c such that all field lines are opened by the planetary outflow. The three numbers correspond to the estimated strengths when we approximate the outflow to be i) an isothermal Parker wind at 10⁴ K, ii) an isothermal Parker wind at 0.3 × 10⁴ K, and iii) in photoionization-recombination equilibrium. For the former two calculations, we estimate the mass loss rate of the planet using the energy-limited formula with an efficiency of 0.1. We also estimate the bow shock radius at which the stellar wind collides with the planetary outflow (Equation 20) and the magnetospheric standoff distance corresponding to a planet with a surface magnetic field of 1 Gauss (Equation 21). The minimum absorption area is a lower limit on the extent of the outflow assuming it is optically thick.
Figure 2. The number density of helium in the 2³S state in the outflow for the hydrodynamic case (left) and the magnetically controlled case (right). In the hydrodynamic case, the density appears to be fairly spherically symmetric. In the magnetically controlled case, the region with closed magnetic field lines has a higher number density of metastable helium atoms than the outflowing regions. This is primarily because the total number density of gas in this region is higher, as it is in hydrostatic equilibrium.
Figure 3. Colour map of the line-of-sight velocity of the outflowing gas in the hydrodynamic case (left) and the magnetically controlled case (right). Contours mark the line-of-sight optical depth integrated over a two-angstrom bin around the peak absorption wavelength. The black, magenta and cyan lines mark the τ = 0.01, 0.1 and 1 contours, respectively. The arrows show the velocity field.
Figure 4. Calculated excess absorption for a planet with i) no magnetic field (solid blue) and ii) a surface magnetic field of 3 Gauss (dashed red) at mid-transit with an impact parameter of 0. The velocity scale is centred on the blended transitions at 10830.25 and 10830.34 Å.
| 12,843.6 | 2023-02-21T00:00:00.000 | [ "Physics", "Environmental Science" ] |
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts
Prompt tuning is a parameter-efficient tuning (PETuning) method for utilizing pre-trained models (PTMs) that simply prepends a soft prompt to the input and only optimizes the prompt to adapt PTMs to downstream tasks. Although it is parameter- and deployment-efficient, its performance still lags behind other state-of-the-art PETuning methods. Besides, the training cost of prompt tuning is not significantly reduced due to the back-propagation through the entire model. Through empirical analyses, we shed some light on the lagging performance of prompt tuning and recognize a trade-off between the propagation distance from label signals to the inserted prompt and the influence of the prompt on model outputs. Further, we present Late Prompt Tuning (LPT) that inserts a late prompt into an intermediate layer of the PTM instead of the input layer or all layers. The late prompt is obtained by a neural prompt generator conditioned on the hidden states before the prompt insertion layer and therefore is instance-dependent. Through extensive experimental results across various tasks and PTMs, we show that LPT can achieve competitive performance to full model tuning and other PETuning methods under both full-data and few-shot scenarios while possessing faster training speed and lower memory cost.
Introduction
Pre-trained models (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Raffel et al., 2020; Lewis et al., 2020; Liu et al., 2022a; Qiu et al., 2020; Lin et al., 2021) have pushed most NLP tasks to the state-of-the-art. Model tuning (or fine-tuning) is a popular method for utilizing PTMs on downstream tasks that needs to tune all parameters of PTMs for every task. Despite the welcome outcome, it leads to prohibitive adaptation costs, especially for supersized PTMs (Brown et al., 2020; Wang et al., 2021a). Parameter-efficient tuning (PETuning) is a new tuning paradigm that can adapt PTMs to downstream tasks by only tuning a very small number of internal or additional parameters.
Prompt tuning (Lester et al., 2021) is a simple and popular PETuning method that prepends a sequence of soft prompt tokens to the input and only optimizes the prompt to adapt PTMs to downstream tasks. It has an absolute advantage in parameter efficiency and facilitates mixed-task inference, which makes the deployment of PTMs convenient. However, compared with other advanced PETuning methods, e.g., Adapter (Houlsby et al., 2019; Mahabadi et al., 2021), LoRA (Hu et al., 2022), and BitFit (Zaken et al., 2022), prompt tuning suffers from lower performance and convergence rate. Compared with full model tuning, although the number of trainable parameters in prompt tuning reduces by ∼17,000× (from 355M to 21K on RoBERTa LARGE), the training speed only increases by ∼1.5×, and the memory cost only reduces by 29.8%.¹ P-tuning v2 (Liu et al., 2022b) improves the performance of prompt tuning by inserting soft prompts into every hidden layer of PTMs, but it is difficult to optimize and needs more training steps to attain competitive performance.
In this paper, we explore why prompt tuning performs poorly and find there is a trade-off between the propagation distance from label signals to the inserted prompt and the influence of the prompt on model outputs. The key to prompt tuning is to make the soft prompt carry task-related information through downstream training. The trained prompt can interact with text inputs during the model forward pass to obtain text representations with task-related information. Since the prompt is inserted into the input in prompt tuning, it has a strong ability to influence the outputs of the PTM through sufficient interactions with text inputs. However, there is a long propagation path from label signals to the prompt. This leads us to ask the question: does this long propagation path cause a lot of task-related information to be lost during propagation and thus hurt performance? To verify the impact of the propagation distance on performance, we conduct pilot experiments by shortening it in Section 4 and find that the performance first increases and then decreases as the distance is shortened. This finding inspires us to present the late prompt (i.e., inserting the prompt into an intermediate hidden layer of the PTM). The late prompt not only receives more task-related information at each update, due to the shorter propagation path of task-related information, but also maintains adequate ability to influence the outputs of the PTM. Despite the higher performance and faster convergence rate of the late prompt compared with prompt tuning, the hidden states produced by the PTM before the prompt insertion layer are underutilized. To further improve performance and take full advantage of these contextual hidden representations, we introduce a prompt generator to generate the soft prompt (termed the instance-aware prompt) for each instance using the corresponding hidden states.
Based on the late and instance-aware prompt, we present Late Prompt Tuning (LPT) to improve prompt tuning. Since the soft prompt is inserted into an intermediate layer of the PTM, we have no need to compute gradients for model parameters below the prompt insertion layer, and therefore speed up the training process and reduce memory costs. Extensive experimental results show that LPT outperforms most prompt-based tuning methods and can be comparable with adapter-based tuning methods and even full model tuning. Especially in the few-shot scenario with only 100 training samples, LPT outperforms prompt tuning by 12.4 points and model tuning by 5.0 points in the average performance of ten text classification tasks. Besides, compared with model tuning on RoBERTa LARGE, it is 2.0× faster in training speed and reduces the memory cost by 56.6%. Figure 1 shows an overall comparison between LPT and its counterparts. To sum up, the key contributions of this paper are:
• We explore why prompt tuning performs poorly, find that it is due to the long propagation path from label signals to the input prompt, and present a simple variant named late prompt tuning to address the issue.
¹ Refer to Section 6.5 for details.
• Combining the late and instance-aware prompts, we present LPT, which not only attains comparable performance with adapter-based tuning methods and even model tuning but also greatly reduces training costs.
• We verify the versatility of LPT in the full-data and few-shot scenarios across 10 text classification tasks and 3 PTMs. Code is publicly available at https://github.com/xyltt/LPT.
Related Work
Adapter-based tuning. One research line of PETuning is adapter-based tuning (Ding et al., 2022), which inserts some adapter modules between model layers and optimizes these adapters in downstream training for model adaptation.
Adapter (Houlsby et al., 2019) inserts adapter modules with a bottleneck architecture between every pair of consecutive Transformer (Vaswani et al., 2017) sublayers. AdapterDrop (Rücklé et al., 2021) investigates efficiency by removing adapters from lower layers. Compacter (Mahabadi et al., 2021) uses low-rank optimization and parameterized hypercomplex multiplication (Zhang et al., 2021) to compress adapters. Adapter-based tuning methods have comparable results with model tuning when training data is sufficient but don't work well in the few-shot scenario (Wang et al., 2022).
Prompt-based tuning. Another main research line of PETuning is prompt-based tuning, which inserts some additional soft prompts into the hidden states instead of injecting new neural modules into PTMs. Prompt tuning (Lester et al., 2021) and P-tuning (Liu et al., 2021) insert a soft prompt into the word embeddings only, and can achieve competitive results when applied to supersized PTMs. Prefix-tuning (Li and Liang, 2021) and P-tuning v2 (Liu et al., 2022b) insert prompts into every hidden layer of the PTM. BBT (Sun et al., 2022b) optimizes the inserted prompt with derivative-free optimization. Some prompt-based tuning methods, like prompt tuning and BBT, formulate downstream tasks as pre-training tasks (e.g., the masked language modeling task) to close the gap between pre-training and downstream training (Sun et al., 2022a). There are also some prompt-based methods with instance-aware prompts. IDPG (Wu et al., 2022) uses a prompt generator with parameterized hypercomplex multiplication (Zhang et al., 2021) to generate a soft prompt for every instance. Context-tuning (Tang et al., 2022) uses a BERT model (Devlin et al., 2019) as the prompt generator and focuses on NLG tasks. IPL (Jin et al., 2022) first calculates relevance scores between prompt tokens and inputs, then uses the scores to re-weight the original prompt tokens; however, it tunes all parameters of the PTM. All the above methods with instance-aware prompts share the same weakness: they need to encode the inputs using an extra encoder, which slows down training and increases inference latency. There are also some other popular PETuning methods, such as BitFit (Zaken et al., 2022), which only tunes the bias terms, and LoRA (Hu et al., 2022), which optimizes low-rank decomposition matrices of the weights within self-attention layers.
Problem Formulation
Given a PTM M, in the setting of model tuning, we first reformulate inputs with a single sentence as E([CLS] ⟨S1⟩ [SEP]) and inputs with a sentence pair as E([CLS] ⟨S1⟩ [SEP] ⟨S2⟩ [SEP]), where E is the embedding layer of M. The final hidden state of the [CLS] token will be used to predict the label. In the setting of prompt tuning, we insert a randomly initialized soft prompt p into the word embeddings, and also modify the original inputs using different manual templates with a [MASK] token for different tasks. For example, a single-sentence input from a sentiment analysis task will be transformed into concat(p, E([CLS] ⟨S1⟩ It was [MASK]. [SEP])). Then, we map the original labels Y to some words in the vocabulary V of M, which formulates downstream tasks as a language modeling task to close the gap between pre-training and downstream training. The final hidden state of the [MASK] token will be used to predict the label.
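A minimal sketch of this input construction is given below, assuming the HuggingFace RoBERTa tokenizer; the template wording follows the sentiment example above, while the function name and verbalizer words are illustrative assumptions rather than the paper's exact choices (which are listed in its Tables 8 and 9).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

def build_prompted_input(sentence: str) -> dict:
    """Wrap a single-sentence input with the template 'It was <mask>.'.
    The label is later read from the LM head's prediction at the mask position,
    mapped through a verbalizer, e.g. {'positive': 'great', 'negative': 'terrible'}."""
    text = f"{sentence} It was {tokenizer.mask_token}."
    return tokenizer(text, return_tensors="pt")   # adds <s> ... </s> (the [CLS]/[SEP] analogues)

enc = build_prompted_input("A wonderfully shot, quietly moving film.")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]))
```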
In the setting of our proposed method LPT, we use a prompt generator (PG) to generate an independent prompt p for every input. In addition, the layer into which the prompt is inserted is an intermediate layer of the PTM instead of the word embeddings, and we refer to this layer as the prompt layer (PL).
Why Prompt Tuning Performs Poorly?
The workflow of prompt tuning is to make the inserted soft prompt carry task-related information through downstream training. In the inference phase, this prompt can interact with test inputs during layer-upon-layer propagation so that the hidden representations of these inputs also contain task-related information. There are strong interactions between the prompt and text inputs because prompt tuning inserts the prompt into the word embeddings. However, there is a long propagation path from label signals to the prompt. Therefore, we speculate that the poor performance of prompt tuning is due to the long propagation path of task-related information, which causes a lot of task-related information to be lost during propagation in the frozen model and thus hurts performance. To verify this conjecture, we conduct some pilot experiments on the TREC (Voorhees and Tice, 2000) and RTE (Dagan et al., 2005) datasets using RoBERTa LARGE (Liu et al., 2019).
Does shortening the propagation distance improve performance?
We start by considering a simple experimental setting where the soft prompt is inserted into different layers of RoBERTa LARGE, and we look at how performance changes as the prompt layer changes. As shown in the left plots of Figure 2, we can observe that the performance first increases and then decreases with the rise of the prompt layer, and we obtain the highest performance when the prompt layer is in the range of 12 to 14. In addition, we also explore the convergence rates at different prompt layers. For simplification, we only consider three different prompt layers: 1, 13, and 24. The middle plots in Figure 2 show that the model has the fastest convergence rate when the prompt layer is 13. The trend is consistent with the performance trend shown in the left plots. We can preliminarily identify from these results that properly shortening the propagation distance can improve performance. However, the performance starts to degrade when we extremely shorten the propagation path of task-related information. We attribute this to the interaction between the prompt and inputs becoming very weak when we unduly shorten the propagation path, which leads to a slighter influence of the prompt on model outputs and a gradual decline in performance.
Task-related information in hidden states. To quantify the task-related information carried in the soft prompt, we follow Wang et al. (2021b) and adopt the mutual information I(h, y) between the hidden states and the label of each input. The estimation method for I(h, y) is provided in Appendix A. The right plots of Figure 2 show I(h, y) at different layers. We note that I(h, y) gradually increases with the forward pass of the prompt (i.e., the effect of the prompt on the hidden states gradually increases) when the prompt layer is 13. And its I(h, y) in the last layer is the highest among the three different prompt layer settings, which means that the soft prompt carries more task-related information. The other two prompt layer settings both collapse, especially on the RTE task, because there is no good trade-off between the propagation distance and the effect of the prompt on hidden states.
The above observations suggest that our conjecture about the poor performance of prompt tuning is correct. The long propagation path of task-related information leads to poor performance and a low convergence rate. And we find that properly shortening the propagation distance can improve performance.
LPT: Late Prompt Tuning
From the experiment results in Section 4, we observe that using a late prompt can greatly improve the performance of prompt tuning. Moreover, the late prompt can bring two other advantages: (1) no gradient calculation is needed for model parameters below the prompt layer; (2) the hidden states produced by the model before the prompt layer can be used to generate a good independent prompt for each instance. Based on these advantages, we propose an efficient prompt-based tuning method, LPT, which combines late and instance-aware prompts. An illustration of LPT is shown in Figure 3. In this section, we will introduce the two different prompt generators used in LPT and how to determine the prompt layer.
Prompt Generators
Naive prompt generator (NPG). The prompt generator is a simple feed-forward layer with a bottleneck architecture. Assume the prompt length is l; then we can generate an independent prompt for each instance as p = reshape(W2 f(W1 h + b1) + b2), where h is the hidden state of the [CLS] token below the prompt layer, W1 ∈ R^(m×d) and W2 ∈ R^(ld×m) are projection matrices, b1 and b2 are bias terms, f is a non-linearity, and d is the dimension of the hidden states. Since m ≪ d, the prompt generator doesn't have too many parameters. However, the number of parameters within W2 will increase with the prompt length l.
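A minimal PyTorch sketch of such a bottleneck generator is given below. The class name, the choice of ReLU as the non-linearity, and the batch of random hidden states are assumptions; the dimensions d = 1024, m = 128 and prompt length 5 follow the RoBERTa LARGE settings reported in Appendix C.

```python
import torch
import torch.nn as nn

class NaivePromptGenerator(nn.Module):
    """Bottleneck feed-forward generator: h_[CLS] (d) -> m -> l*d, reshaped to (l, d)."""
    def __init__(self, d: int, m: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        self.down = nn.Linear(d, m)               # W1, b1
        self.up = nn.Linear(m, prompt_len * d)    # W2, b2 -- parameter count grows with l
        self.act = nn.ReLU()

    def forward(self, h_cls: torch.Tensor) -> torch.Tensor:
        # h_cls: (batch, d), the [CLS] hidden state just below the prompt layer
        p = self.up(self.act(self.down(h_cls)))
        return p.view(-1, self.prompt_len, h_cls.size(-1))   # (batch, l, d)

gen = NaivePromptGenerator(d=1024, m=128, prompt_len=5)
prompt = gen(torch.randn(2, 1024))
print(prompt.shape)   # torch.Size([2, 5, 1024])
```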
To tackle this problem, we propose the following pooling prompt generator.
Pooling prompt generator (PPG). PPG introduces a pooling operation between the down-projection and up-projection operations, which directly obtains a prompt of length l by pooling over the input sequence (i.e., pooling the input of length n down to a prompt of length l). This makes the generator more lightweight, and here h ∈ R^(d×n), where n is the length of the original input. In this paper, we consider both average pooling and max pooling, referred to as APPG and MPPG, respectively.
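The sketch below illustrates the pooling variant under the same assumptions as the NPG sketch; the use of adaptive pooling over the sequence dimension and the exact ordering of pooling versus projection are assumptions about the implementation, not statements of the paper's exact design.

```python
import torch
import torch.nn as nn

class PoolingPromptGenerator(nn.Module):
    """Down-project token states (n, d) -> (n, m), pool the sequence length n down to
    the prompt length l, then up-project back to d. Parameter count no longer depends on l."""
    def __init__(self, d: int, m: int, prompt_len: int, mode: str = "avg"):
        super().__init__()
        self.down = nn.Linear(d, m)
        self.up = nn.Linear(m, d)
        self.act = nn.ReLU()
        self.pool = (nn.AdaptiveAvgPool1d(prompt_len) if mode == "avg"
                     else nn.AdaptiveMaxPool1d(prompt_len))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, n, d), token states just below the prompt layer
        x = self.act(self.down(hidden))                    # (batch, n, m)
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)   # (batch, l, m)
        return self.up(x)                                  # (batch, l, d)

ppg = PoolingPromptGenerator(d=1024, m=128, prompt_len=20, mode="max")
print(ppg(torch.randn(2, 256, 1024)).shape)   # torch.Size([2, 20, 1024])
```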
How to Determine Prompt Layer?
Generating a good prompt needs a good contextual representation of the input. In this subsection, we explore how to choose the prompt layer so that LPT can attain a good trade-off between performance and efficiency, through some pilot experiments on the TREC (Voorhees and Tice, 2000) and RTE (Dagan et al., 2005) datasets. As shown in Figure 4, the performance of NPG declines significantly when the prompt layer is in the range from 14 to 24. However, different from NPG, APPG and MPPG retain high performance as the prompt layer approaches the output layer, especially on the TREC dataset. We believe this is because the hidden states from the higher layers can help generate a better prompt, while NPG only uses the [CLS] token as the representation of the entire input when generating the prompt, which leads to a loss of information. According to the above observations, LPT with APPG and MPPG can achieve a better trade-off for both relatively simple (TREC) and difficult (RTE) tasks. But in this work, to ensure that all methods (NPG, APPG and MPPG) can achieve good performance while maintaining relatively low training costs, we simply choose the most intermediate layer of the PTM as the prompt layer. That is, we choose the 13th layer as the prompt layer for RoBERTa LARGE.
Evaluation Datasets
We evaluate our method on 5 single-sentence and 5 sentence-pair classification tasks, including 6 tasks from the GLUE benchmark (Wang et al., 2019) and 4 other popular tasks: MPQA (Wiebe et al., 2005), MR (Pang and Lee, 2005), Subj (Pang and Lee, 2004) and TREC (Voorhees and Tice, 2000). All details about data statistics and splits can be found in Appendix B.
Experiment Settings
We evaluate our method in both full-data and few-shot scenarios on three PTMs: RoBERTa LARGE (Liu et al., 2019), DeBERTa LARGE (He et al., 2021) and GPT2 LARGE (Radford et al., 2019). Following the conclusion from Section 5.2, we choose the 13th layer as the prompt layer for RoBERTa LARGE and DeBERTa LARGE, and the 19th layer for GPT2 LARGE, unless otherwise specified. More implementation details are provided in Appendix C.
Baselines
We randomly sample the training samples from the original training sets. Besides, we randomly sample 1000 samples from the original training sets as development sets, and there is no overlap with the sampled training sets. For the tasks from the GLUE benchmark (Wang et al., 2019), the original development sets are used as the test sets, and the test sets remain unchanged for the 4 other tasks.
Tables 2 and 3 show the overall comparison of all the methods in the few-shot scenario. LPT w/ NPG outperforms all the baselines in the two different few-shot settings. Especially when the training set has only 100 samples, LPT w/ NPG outperforms model tuning by 5 points and Adapter by 7.1 points. This indicates that our method has better generalization performance when training data is very scarce. However, we note that LPT w/ MPPG and LPT w/ APPG don't perform as well in the few-shot scenario as they do in the full-data scenario. We speculate that this is because the pooling layer needs sufficient training data to reach the optimal state in which it retains only useful information. Nevertheless, both LPT w/ MPPG and LPT w/ APPG are also superior to all the baselines when the training set has 100 samples.
Results on other PTMs
To verify the generality of our conclusion about why prompt tuning performs poorly and the versatility of the proposed method LPT, we also conduct experiments on two other popular PTMs, DeBERTa LARGE (He et al., 2021) and GPT2 LARGE (Radford et al., 2019). The results are shown in Table 4. Only using the late prompt to shorten the propagation path of task-related information (i.e., LPT w/o PG) is also far superior to the traditional prompt tuning method on these two PTMs. This result enhances the reliability of our conclusion. Moreover, LPT with different prompt generators further improves the performance, closing the gap with model tuning.
Efficiency Evaluation
We compare the efficiency of our method with all the baselines on the RoBERTa LARGE (Liu et al., 2019) and GPT2 LARGE (Radford et al., 2019) models. For each backbone, we select the largest batch size such that the model tuning method can fit the fixed budget of an NVIDIA GTX 3090 GPU (24 GB), and the other methods use the same batch size as model tuning. We set the length of all inputs to 256 and evaluate accuracy in the few-shot scenario where the number of training samples is 100 for all methods.
In Table 5, we report the accuracy, tunable parameters, training speed (tokens per millisecond) and memory cost (GB) of each method. Our methods not only outperform all the prompt-based methods considered in terms of efficiency and memory cost but also obtain the highest performance. Compared with AdapterDrop, which has similar efficiency to LPT, our method LPT w/ NPG outperforms it by 20.1 and 7 points on RoBERTa LARGE and GPT2 LARGE, respectively. In addition, we also explore the impact of the choice of prompt layer on all efficiency metrics; the specific experiment results are in Appendix D. Overall, given a large-scale PTM with millions or billions of parameters, such as RoBERTa (Liu et al., 2019), DeBERTa (He et al., 2021), or GPT2 (Radford et al., 2019), higher training speed and lower memory cost are of paramount importance for practical applications, and LPT offers a better trade-off in terms of training budget and performance.
Analyses
Effect of prompt layer. To enhance the reliability of the conclusion from Section 5.2 (i.e., that the most intermediate layer of the PTM is the optimal choice of prompt layer), we also conduct the same experiments as in Section 5.2 on the other two PTMs, DeBERTa LARGE (He et al., 2021) and GPT2 LARGE (Radford et al., 2019). As shown in Figure 5, the most intermediate layer is also the optimal choice of the prompt layer on the DeBERTa LARGE and GPT2 LARGE models, especially for LPT w/ NPG. These results enhance the reliability of our conclusion that a better trade-off between performance and efficiency can be achieved by selecting the most intermediate layer of the PTM as the prompt layer.
Visualization of instance-aware prompt. We select the Subj dataset (Pang and Lee, 2004) with 1000 development samples for this analysis. For simplification, we only visualize the instance-aware prompt of the LPT w/ NPG method. As shown in Figure 6, we use the same color to mark samples whose representations are close. We can clearly observe that our method generates similar prompts for instances with relatively similar sentence representations. On the contrary, the independent prompts of instances with quite different sentence representations are also quite different. The visualization result indicates that our method learns a special prompt for each instance and can be aware of the important information in the instance to better drive PTMs.
Conclusion
In this paper, we explore why prompt tuning performs poorly and find there is a trade-off between the propagation distance from label signals to the inserted prompt and the influence of the prompt on model outputs. With this discovery, we present a more efficient and effective prompt tuning method, LPT, with late and instance-aware prompts. Experiment results in full-data and few-shot scenarios demonstrate that LPT can achieve comparable or even better performance than state-of-the-art PETuning methods and full model tuning while having higher training speed and lower memory cost.
Limitations
Although we showed that our proposed method can greatly improve performance and reduce training costs for diverse NLU tasks on three different PTMs (i.e., RoBERTa LARGE, DeBERTa LARGE and GPT2 LARGE), larger PTMs with billions or more parameters and NLG tasks were not considered. But our main idea of using late and instance-aware prompts is simple and can be easily transferred to other backbone architectures and different types of tasks. It would be interesting to investigate whether our findings hold for other backbone models and types of tasks, and we will explore this in future work.
A Details for Mutual Information Estimation
Because the mutual information cannot be calculated directly, we estimate it by training a new classifier using the hidden states h as inputs and the original labels of the inputs as outputs. Then, we estimate I(h, y) using the performance achieved by the classifier. Since I(h, y) = H(y) − H(y|h) = H(y) − E_(h,y)[−log p(y|h)] (Wang et al., 2021b), we can train a new classifier q_ψ(y|h) to approximate p(y|h), such that we have I(h, y) ≥ H(y) + E_(h,y)[log q_ψ(y|h)]. Because H(y) is a constant, we ignore it here. Based on the above, we can use −(1/N) Σ_{i=1}^{N} [−log q_ψ(y_i|h_i)], the negated loss of q_ψ(y|h), as the estimate of I(h, y) up to the constant H(y). For further simplification, we use the performance of this new classifier to estimate the mutual information I(h, y). Because RoBERTa LARGE (Liu et al., 2019) has 24 layers in total, excluding the embedding layer, we can obtain 24 hidden states for each input. Hence, we need to train 24 new classifiers for each method. To speed up the training process, we use a 6-layer RoBERTa LARGE as q_ψ.
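A minimal sketch of this probing estimate is shown below. It substitutes a logistic-regression probe for the 6-layer RoBERTa probe used above, so the absolute numbers would differ; the function name and the use of scikit-learn are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def estimate_mutual_information(h_train, y_train, h_test, y_test):
    """Lower-bound style estimate of I(h, y) in nats: H(y) minus the probe's
    average negative log-likelihood on held-out data."""
    probe = LogisticRegression(max_iter=1000).fit(h_train, y_train)
    cond_entropy = log_loss(y_test, probe.predict_proba(h_test), labels=probe.classes_)
    p = np.bincount(y_test) / len(y_test)          # empirical label distribution
    h_y = -np.sum(p[p > 0] * np.log(p[p > 0]))     # empirical label entropy H(y)
    return h_y - cond_entropy                      # higher = more task information in h

# Usage: h_* are (num_examples, hidden_dim) hidden states from one layer,
# y_* are integer class labels; repeat per layer to reproduce curves like Figure 2 (right).
```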
B Datasets
For the SST-2 (Socher et al., 2013), MNLI (Williams et al., 2018), MRPC (Dolan and Brockett, 2005), QNLI (Rajpurkar et al., 2016), QQP and RTE (Dagan et al., 2005) datasets, which are from the GLUE benchmark (Wang et al., 2019), we use their original data splits. For the 4 other datasets, we select a certain number of samples from the training set as the development set, and the number of samples for each label is determined according to its proportion in the original training set. The dataset statistics after splitting are shown in Table 6.
C Implementation Details
The search space of hyperparameters considered in this paper is shown in Table 7. As an additional note, we use the same number of training epochs or steps for all the methods. For adapter-based tuning methods, we set the down-projection size m to 16. We set the prompt length to 20 for prompt tuning (Lester et al., 2021) and P-tuning v2 (Liu et al., 2022b), and to 5 for S-IDPG-PHM (Wu et al., 2022) and LPT w/ NPG. For LPT w/ MPPG and LPT w/ APPG, due to the number of tunable parameters being invariable with prompt length changes, we also search the prompt length in the range of {10, 15, 20} for them. Besides, we set the down-projection size m of S-IDPG-PHM and LPT to 256 and 128, respectively. The hyperparameters r and α in LoRA are set to 8 and 16 on RoBERTa LARGE, and to 4 and 32 on GPT2 LARGE. The batch size of the GPT2 model listed in Table 7 refers to the number of samples in a single forward pass; due to the large scale of GPT2 LARGE, we use the gradient accumulation technique to avoid out-of-memory errors, and the accumulation step is 2 or 4. We use the AdamW optimizer (Loshchilov and Hutter, 2019) for all the methods in this work. We use the PyTorch (Paszke et al., 2019) and HuggingFace Transformers (Wolf et al., 2020) libraries to implement all the methods in this work. All experiments are conducted on 8 NVIDIA GTX 3090 GPUs. We follow Gao et al. (2021) and show the manual templates and label words used in Table 8 and Table 9, respectively. Note that, since the vocabulary of the GPT2 model doesn't have the [MASK] token, we simply use it to represent the positions that need to be predicted.
D Efficiency Evaluation on Different Prompt Layers
We select the prompt layer from the range {7, 13, 19} to explore the influence of different prompt layers on the trade-off between efficiency and performance. The experiment settings are consistent with those described in Section 6.5.
Table 10 shows the performance, the number of tunable parameters, training speed, and memory cost for LPT with three different prompt layers.
When the prompt layer is the 13th layer, both performance and training efficiency are better than when it is the 7th layer.When the prompt layer is the 19th layer, the efficiency is further improved while the performance degrades a lot.
Figure 1: Overall comparison between LPT and baselines with only 100 training samples for each task. All methods are evaluated on 10 text classification tasks using RoBERTa LARGE. The radius of every circle indicates training speed (tokens per millisecond). LPT w/ NPG and LPT w/o PG represent LPT with the naive prompt generator and without a prompt generator, respectively. The details can be found in Section 5.
Figure 2: Left: The performance achieved by inserting a soft prompt into different layers of RoBERTa LARGE. Middle: Comparison of convergence rates for different prompt layers. Right: The estimated mutual information between hidden states of each layer and the label. 'PL' denotes the prompt layer. 'PL = 1' denotes traditional prompt tuning (Lester et al., 2021). We show the mean and standard deviation of performance over 3 different random seeds.
Figure 3: An illustration of LPT. Left: Naive (NPG) and pooling (PPG) prompt generators. Right: The forward and backward pass of LPT.
Figure 4: The change trend of performance with different prompt layers for three different prompt generators. The backbone model is RoBERTa LARGE. We show the mean and standard deviation of performance over 3 different random seeds.
Figure 5: The change trend of performance with different prompt layers on DeBERTa LARGE (upper) and GPT2 LARGE (lower). We show the mean and standard deviation of performance over 3 different random seeds.
Table 1: Overall comparison in the full-data scenario. All the methods are evaluated on test sets except for the tasks from the GLUE benchmark. We report the mean and standard deviation of performance over 3 different random seeds for all the methods except model tuning. The best results are highlighted in bold and the second best results are marked with underline. Prompt Tuning-256 indicates the prompt tuning method with prompt length 256. All the results are obtained using RoBERTa LARGE.
Table 2: Results in the few-shot scenario with 100 training samples. We report the mean and standard deviation of performance over 4 different data splits for all the methods. Bold and Underline indicate the best and the second best results. All the results are obtained using RoBERTa LARGE.
For the instance-aware prompt baseline, we use S-IDPG-PHM from Wu et al. (2022), and we don't use supplementary training like Wu et al. (2022) to enhance performance.
²https://github.com/thunlp/OpenDelta
Main Results
Results in full-data scenario. The overall comparison of the results in the full-data scenario is shown in Table 1. We can observe that our method with only the late prompt, that is LPT w/o PG, can greatly improve the performance of traditional prompt tuning under the same number of tunable parameters and is even comparable with P-tuning v2, which inserts prompts into each layer of the PTM.
Results in few-shot scenario. We further evaluate our method in the few-shot scenario. Following Wu et al. (2022), we consider two settings where the number of training samples is 100 and 500, respectively.
Table 3: Results in the few-shot scenario with 500 training samples. We report the mean and standard deviation of performance over 4 different data splits for all the methods. Bold and Underline indicate the best and the second best results. All the results are obtained using RoBERTa LARGE.
Table 4: Results on two single-sentence and two sentence-pair tasks using DeBERTa LARGE and GPT2 LARGE models as the backbone. Bold and Underline indicate the best and the second best results.
Table 5: Comparison of parameter efficiency, training efficiency and memory cost for all the methods on two different backbone models. All methods are evaluated on the RTE dataset.
Table 10: Trade-off between performance and training efficiency. 'PL' denotes the prompt layer. Bold and Underline mark the best and the second best results, respectively. All methods are evaluated on the RTE dataset using the RoBERTa LARGE model.
| 7,064.4 | 2022-10-20T00:00:00.000 | [ "Computer Science" ] |
Facial Recognition of Human in A Real Time Video Using PCA Algorithm
—Human facial recognition has been an attractive research area in computer vision. Existing facial recognition systems based on still images face the complex problem of discriminating the foreground from background clutter without motion information. To overcome this problem, facial recognition in video motion is proposed. The implementation is based on Haar cascade classification for face detection and PCA (Principal Component Analysis) for facial recognition. The database is trained with 100 sample faces of 10 people with different poses, used as positive examples, and with negative examples of arbitrary images of size 20 x 20. The experimental results are compared with Fisherface and LBP (Local Binary Patterns); PCA provides 92.4% accuracy.
I. INTRODUCTION
In the field of computer vision, one of the most important and crucial areas is facial recognition. The aim of facial recognition is to dynamically analyze and recognize human faces from a video. Humans have a powerful ability to identify people from the unique features of their faces, so recognizing people through facial recognition has received a great deal of attention during the last decades. The ability to detect and recognize human faces from videos helps to monitor the behaviour of objects. There are several surveillance applications, used in ATM centers, railway stations, companies and so on. These applications are mainly used to prevent criminal activities, and they also help to monitor the state and behaviour of people. In recent studies, many algorithms have focused on face recognition, age estimation and facial emotion classification. In this context, we implemented a face recognition system based on the PCA algorithm; it consists of reading real-time video from a webcam, face detection, face extraction, face alignment, and finally recognition and representation using a database. Some of the drawbacks of face detection systems are detecting faces of people wearing glasses or hats, or having a moustache. The recognition work is based on capturing a real-time video using a webcam and clipping it into many frames. Each frame is processed by the trained Haar cascade classifier. The Haar feature-based classifier is an effective object detection method for detecting the face in a frame. The detected face is taken for face estimation. In this context, the feature alignment of the detected face is constructed through the ratios of the facial features, for instance the nose, eyes, lips, and so on. The alignment value of the detected face is used for recognition, by comparing the detected face with the database images. The database contains a set of human faces with details such as names. The details of the human face are saved in the training set and can also be deleted. This system can detect multiple faces; however, it can recognize only one face at a time.
II. EXISTING SYSTEM
An existing recognition system is based on still images; for recognition purposes, a collection of still images is taken as the input dataset. Each image is processed separately by eliminating the background and foreground of the image. The descriptors designed for this facial recognition system are based on a set of factors such as the relative positions, sizes and distances of faces for investigating human recognition [1]. Current face detection algorithms are less satisfactory in complex situations, for example views of the face at different angles, illumination, occlusion, weather changes, facial appearance, shape, etc.; at present, multi-view face detection is still a quite challenging task [17]. Recognizing faces from video is a difficult task because the detected face image is smaller than the frame size, so recognition needs a lot of improvement. For improvement, various approaches are used: template-based approaches, statistical approaches, model-based approaches, and holistic approaches. A template-based approach uses Adaptive Appearance Models. Statistical approaches to recognition include PCA (Principal Component Analysis).
CAPTURING VIDEO
The first phase of the facial recognition system is to acquire real-time video using a webcam; the input video stream is used for the face detection process. The accuracy of the video depends on the distance of the object. The video is processed by capturing frames from it.
FACE DETECTION
Face detection is a technique in computer vision used to identify human faces in real-time digital video; it refers to the process of locating and identifying human faces for various applications such as surveillance, authentication, etc. In this context, each frame of the real-time video is converted into a grey-level image, and then the Haar cascade classifier is used to locate the face in the frame. The classifier is trained with sample faces, known as positive examples, which are assigned a particular size of 20 x 20, and with negative examples, which are arbitrary images of the same assigned size. After the classifier is trained, it can be applied to a desired region of an input image. If the result of the classifier is '1', it means that the selected region depicts a face; otherwise it returns '0'. The classifier can be resized for better detection. The detection region is processed by comparing facial templates. Detection depends on factors like distance, clarity, motion, etc. Finally, the detected face is highlighted using a rectangular box.
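The paper's system is implemented in C#.NET; the sketch below is an equivalent Python/OpenCV illustration of the same pipeline, assuming the pre-trained frontal-face cascade shipped with OpenCV. The detection parameters (scale factor, minimum neighbours) are illustrative defaults, not values taken from the paper.

```python
import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)            # webcam as the real-time video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # detection runs on grey-level frames
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                          minSize=(20, 20))
    for (x, y, w, h) in faces:        # highlight each detected face with a rectangular box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```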
The key steps of the Haar cascade classifier are:
Step 1: Calculate the Integral Image. An integral image is a summed-area table of pixel values: every entry is the sum of the pixels above it and to its left, with the origin at the top-left corner [2], as shown in Figure 2. Its purpose is to allow quick computation of the sum of any rectangular area in an image with only four memory lookups and three addition or subtraction operations. This produces two equations: the integral image itself, ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y') (Eq. 1), and the rectangle sum, Sum = ii(D) + ii(A) − ii(B) − ii(C), where A, B, C and D are the corners of the rectangle.
Step 2: Apply Haar Features. Haar features are digital features used for detection. They are simple rectangular features that achieve adequate accuracy and are mainly used for their simplicity and fast computation time [2]. Each feature is a difference calculation between the white blocks and the black blocks, i.e., feature = Sum(White) − Sum(Black). There are numerous Haar-like features available; a subset of 5 of the more basic features is chosen. Based on the integral image, the edge features need six memory lookups (two points at the bottom, two points in the middle and two points at the top). The line features need eight memory lookups (two points at the bottom, four points in the middle and two points at the top). The four-rectangle features need nine memory lookups (three points at the bottom, two points in the middle, one point at the centre and three points at the top). These features are programmed with adjustable height and width dimensions for scaling [2].
Step 3: Use the AdaBoost Learning Algorithm. AdaBoost stands for adaptive boosting and is essentially the training algorithm that selects the best features out of the enormous, over-complete set of features available and creates a strong classifier. This algorithm is an aggressive approach that disregards the majority of features and keeps only a few. Each resulting weak classifier (Wj) consists of a feature (Fj), a threshold (Ɵj) and a polarity (Pj).
Only the positive images are used for training. The threshold value is set based on the average pixel intensities and their standard deviation. This allows a faster implementation and requires less data.
Step 4: Apply Cascade Filter. This process discards negative sub-windows as early as possible to reduce the overall computational time and to focus on possible positive windows. A cascade (a degenerate decision tree) is built from the strong classifiers obtained with the AdaBoost method. Only positive matches are re-evaluated at subsequent stages; any sub-window that registers a negative result at any stage is immediately rejected and exits the cascade. This reduces the computation time spent on false windows by rejecting them early. The stage thresholds can be adjusted to reach a desired accuracy: lower thresholds yield higher detection rates but increase the occurrence of false positives.
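The attentional cascade can be sketched as follows; the stage classifiers, features and thresholds are placeholders (in practice they come from the AdaBoost training step), so this only illustrates the early-rejection control flow.

```python
def weak_score(window, feature, threshold, polarity):
    """Weak classifier: vote 1 if polarity * feature(window) < polarity * threshold."""
    return 1 if polarity * feature(window) < polarity * threshold else 0

def cascade_detect(window, stages):
    """stages: list of (weak_classifiers, stage_threshold) produced by AdaBoost.
    Each weak classifier is a tuple (feature, threshold, polarity, alpha)."""
    for weak_classifiers, stage_threshold in stages:
        score = sum(alpha * weak_score(window, f, t, p)
                    for f, t, p, alpha in weak_classifiers)
        if score < stage_threshold:
            return 0          # negative sub-window: reject immediately and exit
    return 1                  # survived all stages: treated as a face
```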
TRAINING SET CREATION
The faces extracted by the Haar-Cascade classifier are stored dynamically in a database and used as the training set for the recognition process. The training set contains details about the extracted face features.
FACE RECOGNITION
Facial recognition is a biometric method to identify an individual by comparing a live capture or digital image with the database entries for that person. Facial recognition systems are commonly used for security purposes but are increasingly being used in a variety of other applications. Here, facial recognition is done using the Principal Component Analysis (PCA) algorithm. PCA is used for analyzing the data (features) and for reducing the dimensionality of the features. The major advantages of PCA are reduced noise sensitivity and increased efficiency. PCA facial recognition is performed by computing the Euclidean distance between the feature vectors of the detected face and those in the training set. The PCA algorithm uses the covariance matrix and its eigenvectors for recognition: the eigenvectors of the covariance matrix of the set of face images β(a, b) are computed, where an image with N x N pixels is considered a point in N²-dimensional space [7].
Step 1: Training process. The training faces are represented as T_1, T_2, T_3, . . . , T_M; each of the M images of size N x N is represented by a vector of size N². Each face vector is denoted β_1, β_2, β_3, . . . , β_M. The face vectors are placed into a training set Z = {β_1, β_2, β_3, . . . , β_M}.
Step 2: Compute the average face vector. The average face vector δ is calculated as δ = (1/M) Σ_{i=1}^{M} β_i.
Step 3: Covariance matrix C. Subtract the average face vector from each face vector and store the result in Φ_i in order to calculate the covariance matrix C:
Φ_i = β_i − δ, (5)
C = S Sᵀ (an N² x N² matrix), (6)
where S = [Φ_1, Φ_2, Φ_3, . . . , Φ_M] (an N² x M matrix).
Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix. The dimensionality of the covariance matrix C = S Sᵀ is N² x N², so the number of its eigenvectors far exceeds the number of training images. Instead, the eigenvectors v_i of the much smaller M x M matrix C' = Sᵀ S are computed. The result of this derivation is that, if v_i is an eigenvector of Sᵀ S, then S v_i is an eigenvector of C = S Sᵀ, so the M eigenfaces are obtained as S v_i. Step 5: Represent each face image as a linear combination of the K best eigenvectors. Each face from the training set can be represented as a weighted sum of the K eigenfaces plus the mean face, because the previously removed average face vector must be added back. The larger the fraction of eigenvectors that is retained, the more closely the reconstruction resembles the original face.
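The eigenface computation in Steps 1 to 5 can be sketched with NumPy as below, using the standard trick of diagonalising the small M x M matrix SᵀS instead of the N² x N² covariance matrix; the recognition step at the end compares projection weights by Euclidean distance. Array names and the number of retained eigenfaces K are illustrative assumptions.

```python
import numpy as np

def train_eigenfaces(faces, K):
    """faces: (M, N*N) array of flattened training images."""
    mean = faces.mean(axis=0)                      # average face vector (delta)
    S = (faces - mean).T                           # N^2 x M matrix of centred faces
    # Eigenvectors of the small M x M matrix S^T S ...
    vals, V = np.linalg.eigh(S.T @ S)
    order = np.argsort(vals)[::-1][:K]
    # ... mapped to eigenvectors of C = S S^T (the eigenfaces), then normalised.
    U = S @ V[:, order]
    U /= np.linalg.norm(U, axis=0)
    weights = (faces - mean) @ U                   # projections of the training faces
    return mean, U, weights

def recognise(face, mean, U, weights, labels):
    """Project a probe face and return the label of the nearest training face."""
    w = (face - mean) @ U
    d = np.linalg.norm(weights - w, axis=1)        # Euclidean distances
    return labels[int(np.argmin(d))]
```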
IV. RESULTS AND DISCUSSION The facial recognition system for humans in a real-time video using the PCA algorithm was implemented in C#.Net. The results of the PCA algorithm are shown in figures 9, 10, 11, 12 and 13. Sensitivity to variations in face pose and illumination changes remains a challenging problem. The performance evaluation of the facial recognition system in terms of identification rate (%) is shown in Table 1. The training set is created by assigning authentication names to each detected face. The training set images are uploaded to the database for identification. The database contains 100 images, a few of which are shown in figure 13. If a detected face matches a database image, its name is displayed above the face; otherwise it is labelled as unknown.
V. CONCLUSION The human face is detected and recognized using the Haar Cascade Classifier algorithm and the PCA algorithm. The classifier is trained with sample faces known as positive examples and with negative examples, which are arbitrary images of the same 20 x 20 size. The database contains 100 images of 10 people with different poses. Some faces were not detected, such as persons wearing glasses or a hat, or having a moustache. The advantage of PCA is that it reduces the dimensionality of the features by using the covariance matrix and its eigenvectors. The human faces are recognized against the training set of images stored in the database. The system is able to detect multiple faces. A comparative study was done with the LBP and Fisherface algorithms; in the experiments, PCA provides 92.4% accuracy. | 2,864.4 | 2017-04-30T00:00:00.000 | [
"Computer Science"
] |
Concordance Probability for Insurance Pricing Models
The concordance probability, also called the C-index, is a popular measure to capture the discriminatory ability of a predictive model. In this article, the definition of this measure is adapted to the specific needs of the frequency and severity models typically used during the technical pricing of a non-life insurance product. For the frequency model, the need for two different groups is first tackled by defining three new types of the concordance probability. Secondly, these adapted definitions deal with the concept of exposure, i.e. the duration of a policy or insurance contract. Frequency data typically have a large sample size, and we therefore present two fast and accurate estimation procedures for big data. Their good performance is illustrated on two real-life datasets. Using these examples, we also estimate the concordance probability developed for severity models.
Introduction
One of the main tasks of an insurer is to determine the expected number of claims that will be received for a certain line of business and how much the average claim will cost. The former is typically predicted using a frequency model, whereas the latter is obtained by a severity model. The multiplication of these expected values then yields the technical premium (for more information, we refer to Frees (2009); Ohlsson and Johansson (2010)). Alternatively, one can also model the frequency and severity jointly (Shi et al. 2015). Predictive analytics are a key tool to develop both frequency and severity models in a data-driven way. Note that insurers also use a variety of predictive analytic tools in many other applications such as underwriting, marketing, fraud detection and claims reserving (Frees et al. 2014, 2016; Wuthrich and Buser 2020). The main goal of predictive analytics is typically to capture the predictive ability of the model of interest. Important aspects of the predictive ability of a model are the calibration and the discriminatory ability. Calibration expresses how close the predictions are to the actual outcome, while discrimination quantifies how well the predictions separate the higher risk observations from the lower risk observations (Steyerberg et al. 2010). Even though both calibration and discrimination are of utmost importance when constructing predictive models in general, discrimination is probably considered slightly more important in the context of non-life insurance pricing. The technical premium should first and foremost capture the difference in risk that is present in the portfolio, which is exactly what discriminatory measures capture. The concordance probability is typically the most popular and widely used measure to gauge the discriminatory ability of a predictive model.
In the case of a discrete response variable Y, it equals the probability that a randomly selected subject with outcome Y = 0 has a lower predicted probability than a randomly selected subject with outcome Y = 1 (Pencina and D'Agostino 2004). Here, π(X) equals P(Y = 1|X), with X corresponding to the vector of predictors. In other words, the concordance probability C can be formulated as:

C = P(π(X_i) < π(X_j) | Y_i = 0, Y_j = 1). (1)

Furthermore, in a discrete setting and in the absence of ties in the predictions, this concordance probability equals the Area Under the ROC Curve (AUC) (Reddy and Aggarwal 2015). This ROC curve is the Receiver Operating Characteristic curve, suggested by Bamber (1975). It represents the true positive rate against the false positive rate at several threshold settings. The AUC is a popular performance measure to check the discriminatory ability of a binary classifier, as can be seen in the work of Liu et al. (2008) for example.
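Definition (1) can be estimated directly from predicted probabilities, as in the sketch below; this is a plain pairwise estimator for illustration, not one of the fast procedures proposed later in the article.

```python
import numpy as np

def concordance_binary(y, pi):
    """Estimate C = P(pi(X_i) < pi(X_j) | Y_i = 0, Y_j = 1) by comparing
    every prediction of the 0-group with every prediction of the 1-group."""
    p0 = pi[y == 0][:, None]          # predictions of subjects with Y = 0
    p1 = pi[y == 1][None, :]          # predictions of subjects with Y = 1
    comparable = (p0 != p1)           # exclude tied predictions
    concordant = (p0 < p1)
    return concordant.sum() / comparable.sum()

y = np.array([0, 0, 1, 1, 0])
pi = np.array([0.1, 0.4, 0.35, 0.8, 0.2])
print(concordance_binary(y, pi))
```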
Even if definition (1) looks very promising to assess the discriminatory ability of frequency models, it assumes that the outcome variable is a binary rather than a count random variable. Moreover, since the policy runtime or exposure of an insurance contract typically is included as an offset variable in the frequency model, definition (1) needs to be extended to accommodate the presence of such an offset variable.
When dealing with a continuous outcome Y, this basic definition is typically adapted as:

C = P(π(X_i) < π(X_j) | Y_i < Y_j). (2)

We say that the pairs (π(X_i), Y_i) and (π(X_j), Y_j) are concordant when sgn(π(X_i) − π(X_j)) = sgn(Y_i − Y_j). Hence, the probability that a randomly selected comparable pair of observations with their predictions is a concordant pair is another way of formulating the definition of the concordance probability. Note that definition (2) is a very popular measure in the field of survival analysis, where the continuous outcome corresponds to the time-to-event variable (Legrand 2021). For the severity model, it can be argued whether it is important to discriminate claims for which the observed cost hardly differs; hence, an extension of definition (2) will be considered. Since the estimation of any definition of the concordance probability is time-consuming for larger datasets, we will also consider time-efficient and accurate estimation procedures.
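For a continuous outcome, the same idea counts concordant sign patterns over comparable pairs; a minimal O(n²) sketch, suitable only for small samples, is shown below.

```python
import numpy as np

def concordance_continuous(y, pred):
    """Share of comparable pairs with sgn(pred_i - pred_j) = sgn(y_i - y_j)."""
    dy = np.sign(y[:, None] - y[None, :])
    dp = np.sign(pred[:, None] - pred[None, :])
    comparable = (dy != 0) & (dp != 0)       # ignore ties in outcome or prediction
    concordant = comparable & (dy == dp)
    return concordant.sum() / comparable.sum()

y = np.array([100.0, 250.0, 80.0, 400.0])
pred = np.array([120.0, 200.0, 90.0, 350.0])
print(concordance_continuous(y, pred))
```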
In this paper, we will focus on the concordance probability applied to frequency and severity models used to construct a technical premium P for an insurance contract. This technical premium typically corresponds to the product of the expected probability of occurrence of the event (E(Y_N)) times the expected cost of the event (E(Y_S)). Note that these expectations are often conditional on some variables, such that the technical premium corresponds to

P = E(Y_N | X_N) × E(Y_S | X_S),

with X_N and X_S the sets of variables that are used to model each random variable. From here on, (Y_N, X_N) will be referred to as the frequency data and (Y_S, X_S) as the severity data.
First, we introduce in Section 2 the real datasets that will be used throughout this article, together with the frequency and severity models based on them. Section 3 covers the required changes of the general concordance probabilities (1) and (2), such that they can be applied in an insurance context. Next, we develop several algorithms that calculate these new definitions in an accurate and time-efficient way. These algorithms will be introduced in Section 4, where they are immediately applied to the introduced models. Finally, the conclusion is given in Section 5.
Datasets and Models
In this section, we first introduce some real datasets. Next, we explain the frequency and severity models using these datasets.
Datasets
The datasets explained in this section are all obtained from the pricing games of the French Institute of Actuaries, a game that can be played by both students and practitioners. First, we discuss the dataset of the 2015 pricing game and next we consider those of the 2016 pricing game. All datasets are publicly available in the R-package CASdatasets 1 and contain data on which both a frequency and a severity model can be applied.
2015 Pricing Game
The pg15training dataset was used for the 2015 pricing game of the French Institute of Actuaries organized on 5 November 2015 and contains 100,021 third-party liability (TPL) policies for private motor insurance. Each observation pertains to a different policy and a set of variables has been collected of the policyholder and the insured vehicle.
For reasons of confidentiality, most categorical levels have an unknown meaning. This dataset can be used for the frequency and severity model, and the selected and renamed variables are explained in Appendix A. The two most important ones are claimNumb and claimCharge, which will be the dependent variables of the frequency and severity analysis respectively. The variable claimNumb shows the number of third-party bodily injury claims. For policies for which more than two claims were filed during the considered exposure, the value was set to 2. This adaptation is needed for the measures that are presented in Section 3. The variable claimCharge represents the total cost of third-party bodily injury claims, in euro. Finally, exposure will be used as an offset variable during the analysis of the frequency data. It is the percentage of a full policy year, corresponding to the run time of the respective policy. Note that 72.58% of the observations have an exposure equal to one.
2.1.2. 2016 Pricing Game
pg16trainpol and pg16trainclaim are two datasets that were used for the pricing game of the French Institute of Actuaries one year later, in 2016. Both of them can be found in the R-package CASdatasets. The first dataset contains 87,226 policies for private motor insurance and can be used for the frequency model. The pg16trainclaim dataset contains 4568 claims of those 87,226 TPL policies; combined with the pg16trainpol dataset, the severity model can be constructed. Policies are guaranteed for all kinds of material damages, but not for bodily injuries.
Once again, most categorical levels have an unknown meaning for reasons of confidentiality. The selected and renamed variables of the pg16trainpol and pg16trainclaim datasets are explained in Appendix A. The two most important ones are claimNumb and claimCharge, which will be the dependent variables of the frequency and severity analysis respectively. The variable claimNumb shows the number of claims. For policies for which more than two claims were filed during the considered exposure, the value was once again set to 2. This adaptation is needed for the measures that are presented in Section 3. The variable claimCharge represents the claim size. Moreover, exposure will be used as an offset variable during the analysis of the frequency data. It is the percentage of a full policy year, corresponding to the run time of the respective policy. In this dataset, 14.16% of the observations have an exposure equal to one.
Note that we only selected the 3969 observations with a strictly positive claim to construct the severity model. Finally, we merged the pg16trainclaim and pg16trainpol datasets based on their policy number, begin date, end date and license number.
Models
In this subsection, we construct the frequency and severity models based on the aforementioned datasets. It is important to know that the interest of this paper is not really on the construction of the models, but on the calculation of the concordance probability of the models once the predictions are available. For both models, we first split the required dataset in a training and a test set. The training set is obtained by selecting 60% of the observations of the entire dataset. The remaining 40% of the observations represent the test set.
Frequency
In order to obtain predictions of the frequency model, we consider a basic Poisson model where the variable claimNumb is the response variable. The exposure is used as an offset variable, and all other variables of the training set, apart from claimCharge, are considered as predictor variables. Applying the frequency model on the test set of the 2015 (2016) pricing game, we obtain 40,008 (34,890) pairs of observations and their corresponding predictions. However, the goal of this paper is to calculate the concordance probability of these frequency models for big datasets. Therefore, we will also consider a bootstrap of these pairs of observations and predictions, resulting in 1,000,000 pairs for each dataset.
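A frequency model of this type can be fitted, for example, as a Poisson GLM with a log-exposure offset; the sketch below uses statsmodels on a tiny synthetic stand-in for the pricing-game data, and the predictor name is a hypothetical placeholder (the actual variables are listed in Appendix A of the paper).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Tiny synthetic stand-in for the pricing-game data (hypothetical columns).
train = pd.DataFrame({
    "claimNumb": rng.poisson(0.1, 500),
    "exposure": rng.uniform(0.05, 1.0, 500),
    "driverAge": rng.integers(18, 80, 500),
})

freq_model = smf.glm(
    "claimNumb ~ driverAge",                    # hypothetical predictor
    data=train,
    family=sm.families.Poisson(),
    offset=np.log(train["exposure"]),           # exposure enters as an offset
).fit()

# Predicted expected claim counts, keeping each policy's own exposure.
pi_N = freq_model.predict(train, offset=np.log(train["exposure"]))
```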
Severity
In order to obtain predictions for the severity, we consider a gamma model where the ratio of claimCharge over claimNumb is the response variable, and the weights are equal to the variable claimNumb. This is a popular approach for severity models, as explained in Appendix B, based on the book of Denuit et al. (2007). All other variables of the training set, apart from exposure and claimNumb, are considered as predictors. Applying the severity model on the test set of the 2015 (2016) pricing game, we obtain 1837 (1588) pairs of observations and their corresponding predictions. However, the goal of this paper is to calculate the concordance probability of these severity models for big datasets. Therefore, we will also consider a bootstrap of these pairs of observations and predictions, resulting in 1,000,000 pairs for each dataset.
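Analogously, the severity model can be fitted as a gamma GLM on the average claim cost with the claim counts as weights; again a sketch with synthetic data and a hypothetical predictor name, not the exact model specification of the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
claims = pd.DataFrame({
    "claimNumb": rng.integers(1, 3, 300),
    "vehAge": rng.integers(0, 20, 300),
})
claims["claimCharge"] = rng.gamma(2.0, 800.0, 300) * claims["claimNumb"]
claims["avgCharge"] = claims["claimCharge"] / claims["claimNumb"]

sev_model = smf.glm(
    "avgCharge ~ vehAge",                                # hypothetical predictor
    data=claims,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    var_weights=claims["claimNumb"],                     # weight by claim counts
).fit()

pi_S = sev_model.predict(claims)                         # expected claim severity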
Concordance Probability in an Insurance Setting
In this section, the general definitions (1) and (2) of the concordance probability will be modified to the use for frequency and severity models.
Frequency Models
The general definition of the concordance probability will in this section be modified to a concordance probability that can be used for frequency models. The basic definition (1) requires two groups, defined by the number of events that occurred during the duration of the policy. However, non-life insurance contracts typically have an exposure of at most one year. Hence, it is unlikely that more than two events will take place during this (short) period. Therefore, three groups are defined: policies that experienced zero events, one event, and two or more events, respectively represented by the 0-, 1- and 2-group. These groups result in the following three definitions of the concordance probability for frequency models:

C_{0,1+} = P(π_N(X_i) < π_N(X_j) | Y^N_i = 0, Y^N_j ≥ 1),
C_{0,2+} = P(π_N(X_i) < π_N(X_j) | Y^N_i = 0, Y^N_j ≥ 2),
C_{1,2+} = P(π_N(X_i) < π_N(X_j) | Y^N_i = 1, Y^N_j ≥ 2), (3)

where π_N(·) refers to the predicted frequency of the frequency model and Y^N to the observed claim number. The set of definitions (3) has several interesting interpretations. First of all, C_{0,1+} (C_{0,2+}) evaluates the ability of the model to discriminate policies that did not encounter accidents from policies that encountered at least one (two) accident(s). Furthermore, C_{1,2+} quantifies the ability of the model to discriminate policies that encountered one accident from policies that encountered multiple accidents. In other words, C_{1,2+} quantifies the ability of the model to discriminate clients that could just have been unfortunate from clients that are (probably) accident-prone. However, these concordance probabilities do not take the concept of exposure into account. Exposure is the duration of a policy or insurance contract and plays a pivotal role in frequency models. In order to make sure that a pair is comparable, the definition of the concordance probability needs to be extended to deal with the concept of exposure as well. Two main possibilities can be imagined that ensure comparability of a given pair. For the first possibility, the member of the pair that experienced the most accidents needs to have an exposure that is equal to or lower than the exposure of the other member of the pair. These pairs are sort of comparable, since the member of the pair that experienced the most accidents did not have a longer policy duration than the member of the pair that experienced the fewest accidents. The set of definitions (3) can then be altered as:

C_{0,1+} = P(π_N(X_i) < π_N(X_j) | Y^N_i = 0, Y^N_j ≥ 1, λ_j ≤ λ_i),
C_{0,2+} = P(π_N(X_i) < π_N(X_j) | Y^N_i = 0, Y^N_j ≥ 2, λ_j ≤ λ_i),
C_{1,2+} = P(π_N(X_i) < π_N(X_j) | Y^N_i = 1, Y^N_j ≥ 2, λ_j ≤ λ_i), (4)

where λ_i corresponds to the exposure of observation i. However, the above set of definitions (4) runs into trouble for pairs with a considerable difference in exposure. In order to understand why this is the case, we need to look at the structure of the predictions of a Poisson regression model, which for observation i corresponds to π_N(X_i) = λ_i exp(βX_i). This reveals that the prediction is mainly determined by the exposure λ_i and the linear predictor βX_i. Therefore, when the predictions of a Poisson regression model are compared for a pair that is comparable according to the above set of definitions (4), two possibilities can occur. One member of the pair can have a higher prediction than the other member due to a difference in risk, as expressed by the linear predictor and as is desirable, or due to a mere difference in exposure, which would obscure the analysis. A possible solution would be to set the exposure values of all observations equal to 1 when making predictions, such that one only focuses on the difference in risk between the different observations.
However, this is undesirable, as we would like to evaluate the predictions of the Poisson model that are used to compute the expected cost of the insurance policy, and for this the exposure is a key ingredient. In other words, the set of definitions (4) is of little practical use within the domain of insurance and will no longer be considered. For the second possibility, the exposures λ of both members of the pair need to be more or less the same in order to ensure their comparability. Incorporated in the set of definitions (3), we get:

C^≈_{0,1+}(γ) = P(π_N(X_i) < π_N(X_j) | Y^N_i = 0, Y^N_j ≥ 1, |λ_i − λ_j| ≤ γ),
C^≈_{0,2+}(γ) = P(π_N(X_i) < π_N(X_j) | Y^N_i = 0, Y^N_j ≥ 2, |λ_i − λ_j| ≤ γ),
C^≈_{1,2+}(γ) = P(π_N(X_i) < π_N(X_j) | Y^N_i = 1, Y^N_j ≥ 2, |λ_i − λ_j| ≤ γ). (5)

Here, γ is a tuning parameter representing the maximal difference in exposure between both members of a pair that is considered to be negligible.
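A direct (quadratic-time) estimator of C^≈_{0,1+}(γ) under definitions (5) might look as follows; it is only meant to make the comparability condition |λ_i − λ_j| ≤ γ concrete and would be far too slow for the bootstrapped datasets used later in the paper.

```python
import numpy as np

def c_approx_01(y, pred, expo, gamma=0.05):
    """C of definition (5) for the 0- versus 1+-group with exposure tolerance gamma."""
    p0, e0 = pred[y == 0], expo[y == 0]
    p1, e1 = pred[y >= 1], expo[y >= 1]
    comparable = np.abs(e0[:, None] - e1[None, :]) <= gamma
    comparable &= p0[:, None] != p1[None, :]            # drop tied predictions
    concordant = comparable & (p0[:, None] < p1[None, :])
    return concordant.sum() / comparable.sum()

rng = np.random.default_rng(0)
n = 2000
expo = rng.uniform(0.05, 1.0, n)
lin = rng.normal(size=n)
pred = expo * np.exp(-2.0 + 0.5 * lin)                  # Poisson-type prediction
y = rng.poisson(pred)
print(c_approx_01(y, pred, expo, gamma=0.05))
```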
All former definitions are global measures, meaning that the concordance probability is computed over all observations of the dataset, where comparability is considered as the sole exclusion criterion for a given pair. The following definitions describe a local concordance probability, obtained by taking a subset of the complete dataset based on the exposure. In this local set of definitions, denoted C^≈_{.,.+}(λ, γ), λ is the parameter corresponding to the exposure value for which the local concordance probability needs to be computed. In practice, C^≈_{.,.+}(γ) ≈ C^≈_{.,.+}(1, γ), because the main mass of the data is located at a full exposure. The appealing aspect of this set of definitions is that it allows the construction of a (λ, C(λ, γ)) table, i.e., an evolution of the local concordance probabilities as a function of the exposure. However, the disadvantage of this plot is that one has to choose the values of λ and γ. Assume one takes γ equal to 0.05 and λ ∈ {0.05, 0.15, . . . , 0.95}. In this case, observations with for example exposure 0.49 and 0.51 will not be comparable, although their exposures are very close to each other. To eliminate this issue, we first define two groups:
• O-group: the group with the largest number of elements, hence the group with the smallest number of events,
• 1-group: the group with the smallest number of elements, hence the group containing the largest number of events.
When we consider for example C^≈_{1,2+}(λ, γ), the O-group consists of the elements with Y^N = 1 and the 1-group of the elements with Y^N ≥ 2. Next, we apply the following steps to construct a better (λ, C(λ, γ)) plot:
1. Determine the pairs of observations and predictions belonging to the O-group and the ones belonging to the 1-group.
2. Determine the unique exposures λ_i within the 1-group and apply a for-loop over them:
• Select the elements of the 1-group with exposure λ_i and the comparable elements of the O-group, i.e., those with an exposure that differs at most γ from λ_i.
• Calculate C(λ_i, γ), the concordance probability on these two subsets.
• Define m_i, the number of comparable pairs used to calculate C(λ_i, γ).
3. The global concordance probability C(γ) can be rewritten as the weighted mean of the local values:

C(γ) = Σ_i w_i C(λ_i, γ), with w_i = m_i / Σ_j m_j, (7)

where the sum runs over the unique exposures of the 1-group. Since the loop iterates over all unique exposures in the 1-group, which is the smallest one, the x-axis can have a rather rough grid. Therefore, one can also easily adapt the previous steps by looping over the unique exposures in the O-group, resulting in a plot with an x-axis that possibly has a finer grid. In Figures 1 and 2, both the rough and the fine version of the (λ, C^≈_{0,1+}(λ, γ)) plot are constructed for the test sets of the 2015 and 2016 pricing games respectively. We choose γ to be 0.05, which is approximately equal to the length of one month. For the test set of the 2015 (2016) pricing game, the maximal weight w_i is 0.96 (0.32) for the observations with exposure 1.
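The loop over unique exposures and the recombination of equation (7) can be sketched as follows (rough or fine grid depending on which group is iterated); this is an illustrative re-implementation, not the authors' code.

```python
import numpy as np

def local_weighted_concordance(p_O, e_O, p_1, e_1, gamma=0.05):
    """Return (lambdas, local C values, weights), looping over the unique exposures
    of the 1-group (rough grid); the weighted mean reproduces the global C(gamma)."""
    lambdas, cs, ms = [], [], []
    for lam in np.unique(e_1):
        p1 = p_1[e_1 == lam]                              # 1-group at exposure lam
        p0 = p_O[np.abs(e_O - lam) <= gamma]              # comparable O-group members
        comp = p0[:, None] != p1[None, :]
        if comp.sum() == 0:
            continue
        conc = (p0[:, None] < p1[None, :]) & comp
        lambdas.append(lam)
        cs.append(conc.sum() / comp.sum())
        ms.append(comp.sum())
    w = np.array(ms) / np.sum(ms)
    return np.array(lambdas), np.array(cs), w

# Global value via (7): C(gamma) = sum_i w_i * C(lambda_i, gamma), e.g.
# lam, c_loc, w = local_weighted_concordance(p_O, e_O, p_1, e_1)
# print(np.sum(w * c_loc))
```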
However, the plots are hard to interpret, since there are large differences depending on which group is iterated. Especially in Figure 2, we see that, for example, C(0.08, 0.05) is much larger when iterating over the O-group (fine grid) than when iterating over the 1-group (rough grid). For the fine grid version, we use the elements of the O-group with exposure equal to 0.08, together with the elements of the 1-group with an exposure between 0.08 and 0.13. This subset leads to a high value for C(0.08, 0.05), meaning that the selected elements of the 1-group have in general a higher prediction than the ones of the O-group. However, for the rough grid version, we use the elements of the O-group with an exposure between 0.08 and 0.13, together with the elements of the 1-group with an exposure equal to 0.08. This is yet another subset, and this time we often see higher predictions for the elements in the O-group, leading to a small value for C(0.08, 0.05). Considering different subgroups leads to a difficult interpretation of these plots. However, it is important to know that both versions of this local plot lead to the same global concordance probability, based on equality (7). A solution to the lack of interpretability of both local plots (fine and rough grid) is to consider a weighted mean of them, with the weights based on the number of comparable pairs. This weighted-mean-plot is constructed for both datasets and can be seen in Figure 3. For the interpretability, it is important to see that the weighted-mean-plot is equivalent to applying the following two steps:
1. For every observation i, construct C(λ_i, γ), with λ_i the exposure of the considered element.
2. For every considered exposure λ_i, determine the weighted mean of C(λ_i, γ), where the weights are based on the total number of comparable pairs.
Severity Models
The general definition (2) of the concordance probability will in this section be modified to a concordance probability that can be used for severity models. Since it might be of little practical importance to distinguish claims from one another that only slightly differ in claim cost, the basic definition can be extended to a version introduced by Van Oirbeek et al. (2021):

C(ν) = P(π_S(X_i) < π_S(X_j) | Y^S_j − Y^S_i > ν), (8)

where ν ≥ 0. Furthermore, π_S(·) refers to the predicted claim size of the severity model and Y^S to the observed claim size. In other words, only those pairs of claims are considered for which the claim sizes differ by at least a value ν. Hereby, pairs of claims that make more sense from a business point of view are selected. Also, a (ν, C(ν)) plot can be constructed in which different values for the threshold ν are chosen, as to investigate the influence of ν on (8). Interestingly, C(0) corresponds to a global version of the concordance probability (as expressed by definition (2)), while any value of ν > 0 results in a more local version of the concordance probability. Focusing on the datasets introduced in Section 2, we determine the value of ν such that x% of the pairwise absolute differences of the observed values is smaller than ν, with x ∈ {0, 20, 40}. Note that ν equal to zero is not a popular choice in business, since insurers are not interested in comparing claims that are nearly identical. The size of the considered test sets still allows us to consider all possible pairs between the observations in order to determine the absolute differences between observations belonging to the same pair. However, this is no longer the case for the bootstrapped versions, since this would result in 499,999,500,000 pairs and corresponding differences. Since the observations are all sampled from the original test sets, we know that the number of unique values is much lower than 1,000,000. Hence, we can use the technique discussed in , resulting in a fast calculation of the values of ν represented in Table 1. As can be seen, the difference between the values for ν determined on the original test set or on the bootstrapped dataset is very small. Therefore, we will from here on only focus on the bootstrapped versions of the test sets.
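Definition (8) translates almost literally into code; the brute-force sketch below is again only for illustration on small samples.

```python
import numpy as np

def c_severity(y_s, pred_s, nu=0.0):
    """C(nu) = P(pred_i < pred_j | Y_j - Y_i > nu), estimated over all pairs."""
    comparable = (y_s[None, :] - y_s[:, None]) > nu      # claim sizes differ by > nu
    concordant = comparable & (pred_s[:, None] < pred_s[None, :])
    return concordant.sum() / comparable.sum()

y_s = np.array([500.0, 1500.0, 800.0, 5000.0, 1200.0])
pred_s = np.array([700.0, 1400.0, 600.0, 3000.0, 1500.0])
for nu in (0.0, 500.0):
    print(nu, c_severity(y_s, pred_s, nu))
```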
Time-Efficient Computation
For a sample of size n, the general concordance probability is typically estimated as the ratio of the number of concordant pairs n_c over the total number of comparable pairs n_t:

Ĉ = π̂_c / (π̂_c + π̂_d) = n_c / n_t = [Σ_{i,j} I(Y_i < Y_j) I(π(x_i) < π(x_j))] / [Σ_{i,j} I(Y_i < Y_j) I(π(x_i) ≠ π(x_j))],

where π̂_c (π̂_d) refers to the estimated probability that a comparable pair is concordant (discordant) respectively, and I(·) to the indicator function. Note that the extra condition π(x_i) ≠ π(x_j) is added to the denominator to ensure that ties in the predictions are not taken into account (Yan and Greene 2008).
Since this estimation method is not feasible for large datasets, Van Oirbeek et al. (2021) introduced several algorithms to approximate the concordance probability in an accurate and time-efficient way. We also refer to that article for detailed information and an extensive simulation study. However, new algorithms need to be developed for the frequency setting to approximate the concordance probability dealing with the exposure, and this will be the subject of Section 4.1. For completeness, we apply the original algorithms of Van Oirbeek et al. (2021) to the severity models in Section 4.2.
In this section, the approximations will be applied to the concordance probability for the models discussed in Section 2.2. More specifically, we will use the bootstrap version such that we have 1,000,000 pairs of observations and predictions to consider.
Frequency
The goal of this section is to approximate the concordance probability C ≈ 0,1+ (0.05), as defined in (5), in a fast and accurate way. This will be done for the frequency models of Section 2.2, using the 1,000,000 bootstrapped pairs of observations and predictions. Note that the same reasoning can be used for the other concordance probabilities defined in (5).
Before we can determine the bias of the concordance probability estimates, we need to know its exact value. This can be determined by first splitting the considered dataset into the O-group and the 1-group, as defined in Section 3.1. For the rough grid approach, we iterate over the elements of the 1-group. In each iteration, we count the number of predictions in the O-group that are smaller than the prediction of the considered element of the 1-group. Summing up all these counts, divided by the number of considered pairs, results in the exact concordance probability. Conversely, we iterate over the elements of the O-group for the fine grid approach. In each iteration, we count the number of predictions in the 1-group that are larger than the prediction of the considered element of the O-group.
Summing up all these counts, divided by the number of considered pairs, results in the exact concordance probability.
In Table 2, one can see the timings that were necessary to calculate the exact value of C^≈_{0,1+}(0.05), which is 0.6670 (0.5905) for the bootstrap version of the 2015 (2016) pricing game test set. The same was done for C^≈_{0,1+}(0.10), and hence we can compare both to see the effect of the parameter γ on the run times. We cannot draw a precise conclusion on the effect of γ on the exact value of the concordance probability, since the exact value of C^≈_{0,1+}(0.10) equals 0.6658 (0.5925) for the bootstrap version of the 2015 (2016) pricing game test set. However, we clearly see larger run times when γ is 0.10. This can be explained by the fact that a larger value for γ implies that we allow more pairs to be compared. Moreover, the run times for the dataset of the 2015 pricing game are clearly larger than the ones for the dataset of 2016. This can be explained by the fact that 73% of the 2015 dataset are observations with an exposure equal to 1. Hence, these observations belong to many comparable pairs. For comparison, only 14% of the observations of the 2016 pricing game dataset have an exposure equal to 1. This is confirmed by Table 3, which shows the number of comparable pairs. From this table, one can also see that the numbers of comparable pairs for the rough and fine grid approach are equal to each other. This was expected, since both approaches result in the exact same global concordance probability. A final note on Table 2 is that it also contains the time to construct the weighted-mean-plot for C^≈_{0,1+}(γ). Since this plot is constructed as the weighted mean of the fine and the rough grid plot, the time to construct it equals the time to construct both the fine and rough grid plot.
Marginal Approximation
A first approximation of the global concordance probability C^≈_{0,1+}(0.05) is the marginal approximation Ĉ^≈_{M,0,1+}(0.05): focusing for example on the fine grid version, each local concordance probability C^≈_{0,1+}(λ_i, 0.05) is approximated marginally and the results are combined with the weights w_i of (7). A similar reasoning can be used to obtain a marginal approximation for the rough grid approach. Hence, combining both as explained in Section 3.1 results in the weighted-mean-plot approach.
Such a marginal approximation Ĉ^≈_{M,0,1+}(0.05) takes advantage of the fact that the bivariate distribution of the predictions for the considered elements of the O-group and the 1-group, F_{π_O,π_1}(π_O, π_1), is equal to the product of F_{π_O}(π_O) and F_{π_1}(π_1). Hence, when a grid with the same q boundary values τ = (τ_0 ≡ −∞, τ_1, . . . , τ_q, τ_{q+1} ≡ +∞) for the marginal distribution of both groups is placed on top of the latter bivariate distribution, the probability that a pair belongs to any of the delineated regions only depends on the marginal distributions F_{π_O}(π_O) and F_{π_1}(π_1). Important to note is that Van Oirbeek et al.
(2021) took the same q boundary values for each group. These boundary values were a set of evenly spaced quantiles of the empirical distribution of the predictions of both the O-group and the 1-group jointly. An extension of this idea is to allow different boundary values for each group. Hence, the boundary values of the O-group (1-group) equal the quantiles of the empirical distribution of its own predictions. This way of working allows the distribution of each group to be considered separately, but the disadvantage is that it increases the run time. The reason for this increase is that it becomes more difficult to determine which regions of the grid contain concordant pairs, as can be seen in Figure 4. Therefore, we compare the original and the extended marginal approximation of the concordance probability C^≈_{0,1+}(0.05) for the frequency models of Section 2.2, using the 1,000,000 bootstrapped pairs of observations and predictions. Table 4 shows the results of the original marginal approximation, hence using the same boundary values for the considered O- and 1-group when calculating Ĉ^≈_{M,0,1+}(0.05). The bias clearly decreases for a higher number of boundary values, but, of course, this coincides with a larger run time. Remarkably, the bias and run time for the marginal approximation of C^≈_{0,1+}(0.05) on the bootstrap of the predictions and observations of the 2016 pricing game dataset are lower than the ones on the 2015 pricing game dataset. A final conclusion on the run times is that, compared to the results in Table 2, the original marginal approximation reduces the run time by at least 50%. Table 5 shows the results of the extended marginal approximation (weighted-mean-plot approach), hence allowing different boundary values for each group. In Appendix C, we see similar results in Tables A1 and A2 for the fine and rough grid approach respectively. A first conclusion is that when each group has the same number of boundary values, the biases are higher than the ones of the original marginal method. Figure 4 reveals a possible cause, since we clearly see an increase of regions containing incomparable pairs for the extended approach. As a result, the concordance probability is based on fewer comparable pairs, which is confirmed in Table 6. In this situation, we also notice that the run times for the extended marginal approach are comparable with the ones for the original marginal approach, as long as the number of boundary values is smaller than 5000. For a larger number of boundary values, the extended marginal approximation has a higher run time than the original one. In general, we may conclude from Tables 5, A1 and A2 that the bias decreases for a higher number of boundaries, which coincides with a higher run time. Finally, we also construct an approximation of the weighted-mean-plot for C^≈_{0,1+}(λ, 0.05) based on the original and extended marginal approximation, respectively shown in Figures A1 and A2, using the number of boundary values that resulted in the lowest bias (in case of multiple scenarios, the one with the lowest run time). Comparing these plots with the original ones shown in Figure 3, we see that both the original and the extended marginal approximation give a weighted-mean-plot that is almost the same as the exact one. Based on these plots, the bias and the run time, we have a slight preference for the original marginal approximation, where we use the same boundary values for the O-group and the 1-group.
Table 5. Bias and run time (s), the latter between brackets, for the extended marginal approximation of C^≈_{0,1+}(0.05) on the 2015 and 2016 pricing game datasets. This is given for the weighted-mean-plot approach and for several different numbers of boundary values for the O- and 1-group.
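A minimal sketch of the grid-based (marginal) idea for two groups of predictions is given below: both groups are binned with the same quantile boundaries, and only the cell counts are used to approximate the concordance probability. Treating pairs that fall in the same cell as incomparable is one simple convention; the exact treatment of boundary cells in the paper may differ.

```python
import numpy as np

def marginal_concordance(p_O, p_1, q=100):
    """Approximate P(p_O < p_1) from q-quantile bin counts of both groups."""
    bounds = np.quantile(np.concatenate([p_O, p_1]),
                         np.linspace(0, 1, q + 1)[1:-1])   # q - 1 interior boundaries
    n_O = np.bincount(np.digitize(p_O, bounds), minlength=q)
    n_1 = np.bincount(np.digitize(p_1, bounds), minlength=q)
    pairs = n_O[:, None] * n_1[None, :]                     # cell-pair counts
    upper = np.triu(np.ones((q, q)), k=1)   # 1-group cell strictly above O-group cell
    lower = np.tril(np.ones((q, q)), k=-1)  # 1-group cell strictly below O-group cell
    concordant = (pairs * upper).sum()
    discordant = (pairs * lower).sum()
    return concordant / (concordant + discordant)

rng = np.random.default_rng(3)
p_O = rng.gamma(2.0, 0.05, 50_000)          # predictions of the O-group
p_1 = rng.gamma(2.5, 0.05, 5_000)           # predictions of the 1-group
print(marginal_concordance(p_O, p_1, q=200))
```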
k-Means Approximation
Another approximation for C^≈_{0,1+}(0.05) is based on the k-means approximation for discrete variables. More specifically, when we focus for example on the fine grid version, we approximate each local concordance probability C^≈_{0,1+}(λ_i, 0.05) by its k-means approximation, with λ_i representing the unique exposures of the O-group. These local approximations are denoted by Ĉ^≈_{kM,0,1+}(λ_i, 0.05), such that the first approximation for the global concordance probability C^≈_{0,1+}(0.05) is obtained by Ĉ^≈_{p,kM,0,1+}(0.05) = Σ_i w_i Ĉ^≈_{kM,0,1+}(λ_i, 0.05), with w_i representing the same weights as used in (7). A similar reasoning can be used to obtain a k-means approximation for the rough grid version. Hence, combining both as explained in Section 3.1 results in the weighted-mean-plot approach.
Such a k-means approximation Ĉ^≈_{kM,0,1+}(0.05) applies a k-means clustering algorithm to the considered predictions within both groups. Once the clustering algorithms are applied, only the cluster centroids are used to determine Ĉ^≈_{kM,0,1+}(0.05). Hence, a more precise estimate is obtained as k increases. Important to note is that the original approach took the same number of clusters for each group. An extension of this idea is to allow a different number of clusters for each group. The results of this extended approximation can be found in Table 7 for the weighted-mean-plot approach. In Appendix D, Tables A3 and A4 show the results for the fine and rough grid approach respectively. A first conclusion regarding the bias is that it is very low for all considered numbers of clusters, since a maximum bias of 0.14% was observed over all considered scenarios. This is clearly lower than the comparable bias of the original marginal approximation. However, due to the randomness and the very small values, we do not always see a lower bias for a higher number of clusters. The run time, however, clearly increases for a higher number of clusters. Moreover, these run times are much higher than the ones of the original marginal approximation. Sometimes, they are even higher than the run times needed to calculate the concordance probability exactly. Despite the rather high run times, the weighted-mean-plots are very close to the exact ones, as can be seen in Figures 7 and A3, the latter in Appendix D. A final approximation for C^≈_{0,1+}(0.05) is denoted by Ĉ^≈_{ep,kM,0,1+}(0.05) and is constructed to provide an approximation based on the k-means approximation for discrete variables without the high run times of Ĉ^≈_{kM,0,1+}(0.05). These high run times were the result of applying two k-means clustering algorithms for each considered exposure λ_i. To determine this new approximation Ĉ^≈_{ep,kM,0,1+}(0.05), a k-means clustering algorithm is only applied twice within both groups: first on the exposures and afterwards on the predictions. Hence, only four k-means clustering algorithms are applied. Finally, Ĉ^≈_{ep,kM,0,1+}(0.05) is obtained by applying Equation (7) to the cluster centroids instead of to the exact exposures and predictions. The results of this third approximation can be found in Table 8 for the weighted-mean-plot approach. In Appendix D, Tables A5 and A6 show the results for the fine and rough grid approach respectively.
Table 7. Bias and run time (s), the latter between brackets, for the approximation Ĉ^≈_{kM,0,1+}(0.05) on the 2015 and 2016 pricing game datasets. This is given for the weighted-mean-plot approach and for several different numbers of clusters for the O- and 1-group.
A first important remark is that there are only 275 (93) unique exposures in the 2015 (2016) pricing game dataset. Hence, for a larger number of clusters on the exposures, there is no gain in run time, since we are looping again over all unique exposures. Due to the randomness of selecting the clusters, there is not always a lower bias for a larger number of clusters. Nevertheless, the bias for all considered approximations is very low. More specifically, it is slightly higher than the bias of the corresponding Ĉ^≈_{kM,0,1+}(0.05) approximation, but still smaller than the one of the original marginal approximation. Finally, we do see an increase in the run time for a larger number of clusters.
These run times are clearly smaller than the ones of the corresponding Ĉ^≈_{kM,0,1+}(0.05) approximation, but still larger than the ones of the original marginal approximation. The weighted-mean-plots are shown in Figures 8 and A4, the latter in Appendix D. Most of these approximations are very close to the exact weighted-mean-plot, apart from the one shown in Figure 8a. There we see that the values around an exposure equal to 0.8 are estimated somewhat higher than they should be.
Since the bias of the original marginal approximation is already very low, we do not recommend the k-means algorithm: it results in a lower bias, but this comes at the cost of a larger run time. Another important reason for this recommendation is the fact that more boundary values imply a lower bias for the original marginal approximation, which is not the case for the k-means approximation and its number of clusters.
Severity
The goal of this section is to approximate the concordance probability (8) in a fast and accurate way for the severity model of Section 2.2, using the 1,000,000 bootstrapped pairs of observations and predictions.
Before we can determine the bias of the concordance probability estimates, we need to know its exact value. This can be determined by looping over all observations and selecting each time the rows with an observed value strictly larger than the considered observation plus ν. In each iteration, we store the number of selected rows in u. Next, v represents the number of predictions in this selection that are larger than the prediction of the considered element. Finally, the exact concordance probability is obtained by dividing the sum of all v values by the sum of all u values. An important note for this way of working is that we can no longer take advantage of the small number of unique values in the observations, since their predictions can differ.
For all considered values of ν, the exact concordance probability is calculated and reported in Table 9 together with its run time. As can be seen, for larger values of ν the concordance probability increases, but the run time decreases. The latter can be explained by the fact that a larger value for ν coincides with fewer comparable pairs. A general conclusion is that it takes a tremendous amount of time to calculate the concordance probability exactly, which is why we will try to approximate these values in a faster way.
Marginal Approximation
A first approximation is the marginal approximation, where a grid is placed on the (Y_S, π(X)) space. The q boundary values τ = (τ_0 ≡ −∞, τ_1, . . . , τ_q, τ_{q+1} ≡ +∞) are evenly spaced percentiles from the empirical distribution of the observed values for Y_S, and the same set of boundary values is used for the dimension π(X). The marginal approximation of the concordance probability (8) can then be computed from these grid regions, where c_{τ_ij,τ_kl} (d_{τ_ij,τ_kl}) equals the number of concordant (discordant) comparisons for the pair of regions τ_ij and τ_kl, and n_{τ_ij,τ_kl} is the product of the number of elements in regions τ_ij and τ_kl.
k-Means Approximation
Another approximation is the k-means approximation. For this approximation, the dataset is reduced to a smaller set of clusters that are jointly constructed based on their observed outcomes and predictions. As a result, (8) can be approximated by applying its definition to the cluster representations, weighted by the cluster sizes:

Ĉ_{kM}(ν) = [Σ_{l,m} w_l w_m I(y_{S,m} − y_{S,l} > ν) I(π_l < π_m)] / [Σ_{l,m} w_l w_m I(y_{S,m} − y_{S,l} > ν)],

where y_{S,l} and π_l are the observed outcome and the prediction of the representation of the l-th cluster respectively, which is the centroid in the case of k-means, and w_l is the weight of the l-th cluster, determined by the percentage of observations that pertain to the l-th cluster.
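A sketch of this k-means approximation is given below, clustering the (observation, prediction) pairs jointly and evaluating the thresholded concordance on the weighted centroids; the cluster count and the absence of any feature scaling are illustrative choices, not the settings used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def c_severity_kmeans(y_s, pred_s, nu=0.0, k=100, seed=0):
    """Approximate C(nu) using k cluster centroids of the (y, prediction) pairs."""
    X = np.column_stack([y_s, pred_s])
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    y_c, p_c = km.cluster_centers_[:, 0], km.cluster_centers_[:, 1]
    w = np.bincount(km.labels_, minlength=k) / len(y_s)   # cluster weights
    ww = w[:, None] * w[None, :]
    comparable = (y_c[None, :] - y_c[:, None]) > nu
    concordant = comparable & (p_c[:, None] < p_c[None, :])
    return (ww * concordant).sum() / (ww * comparable).sum()

rng = np.random.default_rng(4)
y_s = rng.gamma(2.0, 1000.0, 20_000)
pred_s = y_s * rng.lognormal(0.0, 0.3, 20_000)            # noisy predictions
print(c_severity_kmeans(y_s, pred_s, nu=500.0, k=100))
```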
The results of the aforementioned approximations can be found in Table 10. There is clearly a smaller bias for a larger number of boundary values or clusters. The disadvantage is that this coincides with a larger run time. There is no considerable connection between the bias and the chosen value for ν. Nevertheless, we do see a shorter run time for higher values of ν, which was already noticed during the exact calculations of the concordance probability and can be explained by the smaller number of comparable pairs. For severity models, we prefer the k-means approximation due to a much smaller run time, combined with a very small bias.
Conclusions
Various discrepancy measures and extensions thereof have already been presented in the actuarial literature (Denuit et al. 2019). However, the concordance probability is seldom used in actuarial science, although it is very popular in the machine learning and statistical literature. In this article, we extend the concordance probability to the needs of frequency and severity data in an insurance context. Both are typically used to calculate the technical premium of a non-life insurance product. For the frequency model, we adapt the concordance probability with respect to the exposure and the fact that the number of claims is not a binary variable. For the severity model, we made sure that claims that are nearly identical in claim cost are not taken into account. The concordance probability measures a model's discriminatory power and expresses its ability to distinguish risks from each other, a property that is particularly important in non-life insurance. Since it is very time-consuming to estimate the above measures for the sizes of frequency and severity data that are typically encountered in practice, several approximations based on computationally efficient algorithms are applied. For the frequency models, we prefer the so-called original marginal approximation, since it has the smallest run time. For these frequency models, it is also possible to visualize the introduced concordance probability as a function of the exposure in the so-called weighted-mean-plot. For the severity models, we prefer the k-means approximation due to a small run time combined with a very small bias.
Table A3. Bias and run time (s), the latter between brackets, for the approximation Ĉ^≈_{kM,0,1+}(0.05) on the 2015 and 2016 pricing game datasets. This is given for the fine grid approach and for several different numbers of clusters for the O- and 1-group.
Table A6. Bias and run time (s), the latter between brackets, for the approximation Ĉ^≈_{ep,kM,0,1+}(0.05) on the 2015 and 2016 pricing game datasets. This is given for the rough grid approach and for several different numbers of clusters for the O- and 1-group. | 10,176.6 | 2021-10-08T00:00:00.000 | [
"Business",
"Mathematics"
] |
Systematic heat transfer measurements in highly viscous binary fluids
Investigations of flow boiling in highly viscous fluids show that heat transfer mechanisms in such fluids are different from those in fluids of low viscosity like refrigerants or water. To gain a better understanding, a modified standard apparatus was developed; it was specifically designed for fluids of high viscosity up to 1000 Pa∙s and enables heat transfer measurements with a single horizontal test tube over a wide range of heat fluxes. Here, we present measurements of the heat transfer coefficient at pool boiling conditions in highly viscous binary mixtures of three different polydimethylsiloxanes (PDMS) and n-pentane, which is the volatile component in the mixture. Systematic measurements were carried out to investigate pool boiling in mixtures with a focus on the temperature, the viscosity of the non-volatile component and the fraction of the volatile component on the heat transfer coefficient. Furthermore, copper test tubes with polished and sanded surfaces were used to evaluate the influence of the surface structure on the heat transfer coefficient. The results show that viscosity and composition of the mixture have the strongest effect on the heat transfer coefficient in highly viscous mixtures, whereby the viscosity of the mixture depends on the base viscosity of the used PDMS, on the concentration of n-pentane in the mixture, and on the temperature. For nucleate boiling, the influence of the surface structure of the test tube is less pronounced than observed in boiling experiments with pure fluids of low viscosity, but the relative enhancement of the heat transfer coefficient is still significant. In particular for mixtures with high concentrations of the volatile component and at high pool temperature, heat transfer coefficients increase with heat flux until they reach a maximum. At further increased heat fluxes the heat transfer coefficients decrease again. Observed temperature differences between heating surface and pool are much larger than for boiling fluids with low viscosity. Temperature differences up to 137 K (for a mixture containing 5% n-pentane by mass at a heat flux of 13.6 kW/m2) were measured.
Introduction
The demands on polymer products concerning quality and environmental legislation have significantly increased during the last years. To reach high qualities and to meet the environmental regulations, undesirable side products of the polymerization process must be separated from the product by extruders or evaporators. In food technologies, falling film evaporators and other technologies are used to thicken highly viscous intermediate products. The design of corresponding apparatuses is based on specific experiments and on experience. Both, in polymer and in food technologies, the boiling highly viscous mixtures typically consist of at least one low and one very high boiling component, whereby the high boiling component is temperature sensitive in most cases. Attempts to develop predictive relations for flow boiling of such highly viscous mixtures showed that the foundations are still missing -yet there is no correlation available that describes pool boiling of highly viscous mixtures and that properly considers all effects relevant for this most fundamental form of boiling of highly viscous mixtures [1]. In correlations for convective boiling the heat transfer coefficient α usually depends on Grashof and Prandtl number; with a number of simplified assumptions both parameters can be calculated for the mixtures considered in this article. Compared to experimental data and depending on the assumptions made, the results are in the right order of magnitude. However, relations for nucleate boiling of mixtures commonly require heat transfer coefficients for nucleate boiling of the pure components at given temperature and / or reduced pressure. This information is not available for common highly viscous components. Nucleate boiling of these pure components cannot be observed for reasons of chemical stability and reduced temperatures and pressures cannot be defined, because critical parameters are unknown. Consequently, experimental results could not be correlated properly, see e.g. [2]. To establish a data base for the development of such correlations, systematic heat transfer measurements of pool boiling heat transfer in highly viscous mixtures are required. Against this background, a modified standard apparatus was developed and set up [3] at the thermodynamics institute of Ruhr University Bochum to improve the fundamental understanding of heat transfer processes and to establish a database for correlation needs. We describe the apparatus and the experimental procedure in detail in an article [4] published in parallel to this article, which is focused on the experimental results. The heat transfer measurements discussed in the present paper were carried out with binary mixtures of polydimethylsiloxane (PDMS) 1 of three viscosities and n-pentane over a wide range of heat fluxes and temperatures. The influence of the temperature, the composition of the mixture, the surface roughness of the horizontal copper test tube, and of the different types of PDMS (with a significant variation in viscosity) on the heat transfer coefficient are systematically investigated and evaluated. With this, the impact of different parameters on the heat transfer can be separated from each other and assessed.
In the literature, studies on heat transfer at pool boiling in refrigerants with small mass fractions of oil can be found. The effects of the lubricant mass fraction in the refrigerant R143a, its viscosity and its miscibility on the heat transfer coefficient were investigated in twelve mixtures [5]. A small lubricant mass fraction, a high lubricant viscosity and a large critical solution temperature improve the heat transfer coefficient. Nucleate pool boiling of mixtures of R134a and oil on two heating surfaces was investigated [6]. Two types of oil were used, of medium and of high viscosity, with a content of up to 5% mass fraction. The heat transfer coefficients in mixtures with more than 3% mass fraction of oil are always lower than in pure R134a.
The effect of oil on pool boiling of oil/R245fa mixtures with up to 5% lubricant mass fraction was investigated [7]. At pool boiling, the heat transfer coefficient for the smooth tube decreases with increasing oil concentration. On the other hand, the heat transfer coefficient increases for the finned tube with 0.4 mm fin height because of more pronounced bubbly foam.
Apparatus description
The so-called standard apparatus [2,3] was modified to carry out heat transfer measurements in highly viscous mixtures, see [4] for details on the used apparatus and on experimental procedures. Mixtures of PDMS of significantly different viscosities 2 (PDMS M10k with η PDMS M10k = 9.7 Pa•s, PDMS M100k with η PDMS M100k = 97 Pa•s, and PDMS M1000k with η PDMS M1000k = 970 Pa•s at T = 298.15 K) and n-pentane (η n-pentane = 0.22 mPa•s) were selected as model fluids. The volatile component, n-pentane, evaporates from the surface of an exchangeable test tube inside the evaporator and is subsequently condensed. By means of a dosing pump, the n-pentane is pumped to the static mixer. Inside the evaporator, the depleted PDMS flows over a guide plate and is pumped through the static mixer by a gear pump, where it is mixed with the condensed n-pentane. Downstream of the static mixer, the refractive index of the mixture is measured. The composition is calculated applying a correlation, which considers the refractive index and the temperature [4]. Ultimately, the mixture is pumped back into the evaporator. The temperature difference between the tube surface and the bulk fluid as well as the temperature distribution in the pool are measured with thermocouples. The modified standard apparatus is mounted inside a thermal chamber. Characteristic temperatures are measured with nine platinum resistance thermometers. The relative combined expanded uncertainty in measured heat transfer coefficients is 16% (k = 2), see the more detailed discussion on uncertainty of the measured values given in [4]. Pure n-pentane was used to validate the apparatus. The influence of the flow in the pool caused by remixing of the components on the heat transfer coefficient was studied in pretests [4].
The heat transfer coefficient α is defined as α = q / ΔT = Q / (A · ΔT), with the heat flow Q, the heated area A and the superheat ΔT (the temperature difference between heated wall and bulk liquid). The heated area is calculated as A = d_o · π · l, where d_o is the outer diameter of the tube and l its heated length. The heat flux q is calculated from the electrical power dissipated in the heating element, which is determined from voltage measurements across the high-precision resistors R_1 and R_2. The heating current I_tube is calculated from the voltage drop across a third high-precision resistor R_3. The resistance R_cold causes a voltage drop at the inactive end of the heating element and is given by the manufacturer as 0.0345 Ω. The superheat ΔT_exp = (T_surface − T_fluid) is measured directly with each of 12 thermocouples in the test tube, which each have a reference junction in the bulk of the fluid below the heating tube. The experimental superheat values ΔT_exp are calculated from the arithmetic mean of the voltages measured with all 12 thermocouples, corrected by the voltage measured in equilibrium at q = 0 (ΔT_exp = 0). ΔT_exp is corrected for radial heat conduction within the tube from the groove to the surface by ΔT = ΔT_exp − ΔT_r,th, with the temperature correction ΔT_r,th = q · r_th. The correction factor r_th depends on the construction of the tube and the thermal conductivities λ of its materials (λ_copper = 360 W/mK, λ_magnesium oxide = 50 W/mK, λ_Inconel = 15 W/mK, λ_thermal grease = 3 W/mK). It is calculated from the thickness and thermal conductivity of every layer, where the thickness of every layer is defined by its outer and inner diameters d_o and d_i. The copper tubes have a correction factor of r_th,copper = 5.88·10⁻⁵ m²K/W.
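The evaluation of a single operating point can be summarised in a few lines; the numbers below are placeholders, and only the relations α = Q/(A·ΔT), A = d_o·π·l and ΔT = ΔT_exp − q·r_th from the text are used.

```python
import math

# Placeholder measurement values for one operating point (not actual data).
Q = 50.0                 # electrical heating power in W
d_o = 0.018              # outer tube diameter in m
l = 0.15                 # heated length in m
dT_exp = 60.0            # mean measured superheat in K
r_th = 5.88e-5           # radial correction factor of the copper tube in m^2 K/W

A = d_o * math.pi * l            # heated area
q = Q / A                        # heat flux in W/m^2
dT = dT_exp - q * r_th           # superheat corrected for radial conduction
alpha = q / dT                   # heat transfer coefficient in W/(m^2 K)
print(f"q = {q:.0f} W/m^2, alpha = {alpha:.1f} W/(m^2 K)")
```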
Nucleate boiling heat transfer in highly viscous fluids
The photographs in Fig. 1 show the difference in nucleate boiling between mixtures with 84 wt.-% (row A) and 95 wt.-% PDMS M10k (row B), i.e., with viscosities differing by a factor of about three and with different n-pentane contents. The experiments were conducted on a polished copper test tube at similar temperatures (T = 323 K and 321 K), and the heat flux was varied from q = 2 kW/m² to q = 16 kW/m². For the mixture with 84 wt.-% PDMS M10k, the superheat ΔT (the temperature difference between heated wall and bulk liquid) decreases from ΔT = 94 K to 22 K with decreasing heat flux. Video and photo observations show that nucleate boiling remains stable down to q = 2 kW/m². At q = 2 kW/m², bubbles formed at the heating surface recondense in the pool before they reach the phase boundary. Up to q = 8 kW/m², the number of nucleation sites increases with increasing heat flux. From q = 10 kW/m² to 16 kW/m², the number of bubbles does not increase further; small areas of the test tube surface are covered by gaseous n-pentane. With increasing heat flux, the shape of the bubbles changes from spherical to drop-shaped. At each heat flux, some bubbles collapsed before they reached the phase boundary. Coalescence was observed at heat fluxes higher than 4 kW/m². At heat fluxes between 14 kW/m² and 16 kW/m², small bubbles occurred in the fluid away from the heated surface, which is probably caused by spontaneous evaporation when hot, depleted mixture from close to the test tube mixes with fluid still rich in n-pentane.
Superheat temperatures ΔT in the mixture with 95 wt.-% PDMS M10k (Fig. 1, row B) are even higher; they increase up to ΔT = 129 K at q = 16 kW/m². The predominant heat transfer mechanism changes from convective heat transfer to nucleate boiling at heat fluxes between 1 kW/m² and 2 kW/m². In comparison to row A, the photographs in row B show a significantly lower number of bubbles, especially at higher heat fluxes. For the more viscous mixture containing less n-pentane, the bubbles remain spherical and have a larger diameter. With increasing heat flux, the mass flow of evaporating n-pentane increases. The observed phenomena indicate that the turbulence in the pool is not sufficient to compensate for this effect in highly viscous mixtures: close to the heating surface, the concentration of n-pentane is reduced and the concentration of PDMS increases.
Just as for refrigerant mixtures or other mixtures of low viscosity, convective boiling and nucleate boiling can be distinguished when boiling highly viscous mixtures. The heat and mass transfer mechanisms responsible for the characteristics of boiling are the same, but due to the high viscosity, turbulent mixing of the pool is limited and concentration and temperature gradients are larger. Moreover, the viscosity of the mixture is influenced not only by the chosen PDMS but also by the temperature and the n-pentane concentration, see [2]. Effects related to n-pentane depletion close to the heating surface will likely be less pronounced for higher average n-pentane concentrations in the pool. Boiling research on less viscous mixtures suggests that the influences of the surface structure and the material of the heating surface have to be considered as well. The goal of this work is to establish an experimental database that makes it possible to distinguish between the effects of these factors and to quantify those that are most relevant for heat transfer in boiling highly viscous mixtures.
Results and discussion
The results reported here address the variation of the heat transfer coefficient with increasing heat flux, with a focus on the influences of the evaporator temperature, the mass fraction of the volatile component, the surface structure of the test tube, and the viscosity of the PDMS. The dependence of the viscosity on the mixture composition and the temperature is discussed in detail in an article published in parallel in this journal [4].
Influence of the temperature
In fluids of low viscosity, the influence of the reduced pressure p* on boiling heat transfer was investigated [8]. The reduced pressure is commonly calculated as p* = p/p_c, with p_c the critical pressure of the boiling fluid. In convective heat transfer, the variation of all parameters, e.g., reduced pressure, test tube diameter, surface structure and heat flux, changes the heat transfer coefficient only by a factor of two [9]. In nucleate boiling, however, the heat transfer coefficient increases significantly with increasing reduced pressure. The critical pressures of the mixtures studied in the present work are unknown; for PDMS, the critical temperature is much higher than the limit of its chemical stability. Thus, reduced pressures cannot be calculated, and the temperature in the evaporator was chosen as a corresponding property. The influence of the temperature on the heat transfer coefficient in highly viscous mixtures was investigated over a temperature range from 319 to 380 K. To separate the different influences from each other, all parameters were kept constant except for the evaporator temperature. The temperature is regulated by four heating elements in a ventilation conduit installed in the climatic chamber [4]. In Fig. 2, the results of a measurement series in a mixture with 20 wt.-% M100k are plotted. The temperature was varied in ≈10 K steps from 319 to 379 K. Nucleate boiling is the main heat transfer mechanism for all data points. The general observation is that, at all temperatures, the heat transfer coefficient increases with increasing heat flux. Above a specific temperature (351 K in the case shown in Fig. 2), the heat transfer coefficient reaches a maximum and decreases again at higher heat fluxes. Below this temperature, the increase in the heat transfer coefficient merely flattens at the highest observed heat fluxes. From 0.15 kW/m² to 10 kW/m², the evaporator temperatures show an ascending order, which is in agreement with results on heat transfer in refrigerants [10]. At heat fluxes above 10 kW/m², the temperature order changes because the α(q) maximum is reached first for the highest temperatures. At this point, all nucleation sites are activated and the test tube is covered with bubbles; heat transfer by nucleate boiling is hindered. For high evaporator temperatures, this stage is reached at lower heat fluxes than for low temperatures. In Fig. 3a, results are shown for heat transfer measurements in a mixture with 73 wt.-% M100k at three different temperatures. Compared to Fig. 2, the maximum heat transfer coefficients and the slopes of the α(q)-trends are lower because of the larger PDMS fraction. The ascending order of the evaporator temperatures holds from 0.5 kW/m² to 4 kW/m², and the formation of a maximum can be observed again. For the measurement series at T = 379 K, all nucleation sites are activated at 4 kW/m², and the heat transfer coefficient decreases again with further increasing heat flux, whereas the maximum heat transfer coefficient at T = 316 K is only reached at 8 kW/m². At 0.15 kW/m², the data points at 352 K and 379 K indicate a reversed order, but at this extremely low heat flux the difference between the data points is still within the mutual experimental uncertainty. Figure 3b presents the heat transfer coefficient as a function of the heat flux at three different temperatures for mixtures with 95 wt.-% PDMS M10k on a polished copper test tube. The transition from free convection to nucleate boiling occurs at approximately 1 kW/m² ≤ q ≤ 2 kW/m².
The evaporator temperatures show an ascending order, whereby a significant temperature influence is observed only in the nucleate boiling regime. A flattening of the α(q)-trend is not observed in the range of measured heat fluxes. The influence of temperature on the heat transfer coefficient in a mixture with 95 wt.-% M1000k was studied as well (see Fig. 3c). Here, the heat transfer coefficient increases only slightly with increasing heat flux; the influence of temperature is visible but less pronounced. At low temperatures, the heat transfer coefficient is lowest, and it increases slightly with increasing temperature at constant heat flux.
The experimental data show that the heat transfer coefficient is larger at higher temperatures when being measured at constant heat flux; all data confirm this relation within their experimental uncertainty. Exceptions are possible for heat fluxes beyond the observed maximum in the heat transfer coefficient. Since this maximum is reached earlier at higher temperature, it may result in a reversed temperature dependence of the heat transfer coefficient. The evaporator temperature influences the nucleation and the bubble density on the surface of the test tube depending on the heat flux. The maximum observed for the heat transfer coefficient depends on the temperature and on the composition of the mixture. When all nucleation sites are activated and the heating surface is increasingly covered by vapor, the heat transfer coefficient does not increase further but decreases despite further increasing heat flux. At high PDMS concentrations, the impact of temperature on the heat transfer coefficient is lower. For high concentrations of PDMS, in particular for the highly viscous PDMS M1000k, the temperature related decrease in viscosity seems to become the main reason for the increase of the heat transfer coefficient with temperature.
Influence of the mass fraction of the volatile component
The composition of the mixture has a large impact on the heat transfer coefficient; with a larger n-pentane fraction, the viscosity of the mixture decreases significantly (a simple illustrative estimate is sketched below). Convective heat transfer is improved by a lower viscosity, and the bubbles must overcome less resistance while they nucleate and rise. A higher nucleation rate leads to a more turbulent boundary layer above the test tube, so the effect of n-pentane depletion close to the heating surface is reduced. Therefore, a larger n-pentane fraction leads to a higher heat transfer coefficient. In mixtures of M10k and n-pentane, heat transfer measurements for three different compositions were conducted at T ≈ 354 K (Fig. 5).
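To illustrate how strongly the n-pentane fraction lowers the mixture viscosity, the sketch below applies a simple logarithmic (Arrhenius-type) mixing rule. This rule is an assumption made only for illustration; the measured viscosity data for these mixtures are reported in [2, 4].

```python
# Illustration of how strongly the n-pentane fraction lowers the mixture
# viscosity. A simple logarithmic (Arrhenius-type) mixing rule is assumed here
# purely for illustration; the measured viscosity data are reported in [2, 4].
import math

def mixture_viscosity(w_pentane, eta_pdms, eta_pentane=0.22e-3):
    """Rough estimate of the dynamic viscosity (Pa*s) of a PDMS/n-pentane mixture,
    using the mass fraction of n-pentane as weighting factor."""
    return math.exp((1.0 - w_pentane) * math.log(eta_pdms)
                    + w_pentane * math.log(eta_pentane))

for w in (0.05, 0.16, 0.93):          # 95, 84 and 7 wt.-% PDMS M10k
    print(f"{w:.2f}  ->  {mixture_viscosity(w, 9.7):.4f} Pa*s")
```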
Influence of the surface structure of the test tube
The influence of the surface structure on the heat transfer coefficient can be calculated for pure fluids of low viscosity, such as water or other refrigerants, with empirical correlations given, e.g., in [10]. However, there are no published data and therefore no empirical correlations available for highly viscous mixtures. For this research, copper test tubes with a sanded and a polished surface were produced and analyzed by Prof. A. Luke and her team at the University of Kassel. The surface roughness can be described by the parameter P_a, which represents the arithmetic mean of the absolute values of the profile height within a single measuring section in relation to the primary profile [11]; for a more detailed description of the profiles used, see [4]. Experiments with these test tubes were conducted in mixtures with PDMS M10k and M100k. In Fig. 6, the measured heat transfer coefficients are plotted over the heat flux for a mixture with 84 wt.-% M10k at T ≈ 327 K (a) and T ≈ 359 K (b) for a sanded and a polished test tube.
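The roughness parameter P_a described above can be expressed compactly. The sketch below evaluates it as the arithmetic mean of the absolute profile heights of one measuring section relative to its mean line; the profile values are made-up numbers, and the actual profile evaluation is described in [4] and [11].

```python
# Compact sketch of the roughness parameter P_a as described above: the
# arithmetic mean of the absolute profile heights of one measuring section,
# referenced to its mean line. The profile values are made-up numbers; the
# actual profile evaluation is described in [4] and [11].
import numpy as np

def roughness_Pa(profile):
    """P_a in the same unit as the profile heights (e.g. micrometres)."""
    z = np.asarray(profile, dtype=float)
    return np.abs(z - z.mean()).mean()

primary_profile_um = np.array([0.3, -0.2, 0.5, -0.4, 0.1, -0.3])
print(roughness_Pa(primary_profile_um))
```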
As expected, the surface structure has no significant influence on the heat transfer in convective boiling; differences between the two surfaces are stochastic and within the combined uncertainty of the measured values. The transition to nucleate boiling was observed at heat fluxes 0.5 kW/m² ≤ q ≤ 1 kW/m². In nucleate boiling, the heat transfer coefficient on the sanded copper tube is larger than on the polished tube for T ≈ 327 K. At heat fluxes 10 kW/m² ≤ q ≤ 16 kW/m², the difference between the polished and the sanded surface is largest, with Δα = 16 W/m²K (10%). At a higher evaporator temperature (T ≈ 359 K, Fig. 6b), the effect of the surface structure becomes smaller. The reinforcing effect of the sanded surface can still be observed for q ≥ 4 kW/m², but it is weaker than at T ≈ 327 K.
The influence of the surface structure was also investigated for a mixture with 95 wt.-% M10k at three different temperatures. The results are plotted in Fig. 7a-c. At all three temperatures, the heat transfer coefficients measured for both surfaces in convective boiling (q ≤ 2 kW/m²) agree within the uncertainty of the measurements. In nucleate boiling, the heat transfer coefficient on the sanded copper tube is significantly higher at T ≈ 323 K and T ≈ 355 K. At T = 380 K, the surface structure appears to have no influence on the heat transfer coefficient. For T ≈ 323 K, the maximum effect of the surface structure was determined at q = 8 kW/m² and q = 10 kW/m², with a maximum difference in the heat transfer coefficient of Δα = 16 W/m²K. At T ≈ 355 K, the effect shifts to lower heat fluxes, with a maximum of Δα = 18 W/m²K at q = 4 kW/m².
In Fig. 8 the influence of the surface structure is shown based on two measurement series with mixtures with 94/95 wt.-% M100k at T = 321 K.
The sanded surface improves the heat transfer, which results in larger heat transfer coefficients. Especially in nucleate boiling at 4 kW/m² ≤ q ≤ 10 kW/m², the enhancement is Δα = 16 to 18 W/m²K, which is equivalent to the findings for mixtures with PDMS M10k. For very high heat fluxes, the effect is reduced because the test tube is completely covered with insulating bubbles.
The discussed observations suggest that even in highly viscous mixtures the surface structure has an influence on heat transfer. For nucleate boiling, the higher surface roughness of the sanded copper test tube improves the heat transfer coefficient in all fluids (84 wt.-% and 94/95 wt.-% M10k and M100k, respectively) and at various evaporator temperatures (322 K ≤ T ≤ 380 K). The absolute enhancements of the heat transfer coefficient are lower in highly viscous mixtures than in refrigerants [10], but the relative enhancements of the heat transfer coefficient are significant with a maximum of 19%.
Influence of the viscosity of the different types of PDMS
To investigate and quantify the influence of viscosity on the heat transfer coefficient, mixtures with PDMS M10k, M100k, and M1000k, but with the same or a similar fraction of n-pentane, were prepared, and heat transfer measurements were conducted. In Fig. 9, the α(q)-trends for similar mixtures with 84 wt.-% M10k and 88 wt.-% M100k at (a) T ≈ 324 K and (b) T ≈ 357 K are plotted. There is only a small difference between the measured values in convective boiling up to q ≤ 1 kW/m². With the onset of nucleate boiling and with increasing heat flux, the heat transfer coefficient increases significantly more for the mixture with M10k than for the mixture with M100k. The difference grows steadily with increasing heat flux, from Δα = 36 W/m²K at q = 4 kW/m² to Δα = 73 W/m²K at q = 12 kW/m². A flattening or even a decrease in the α(q)-trends can be recognized for the last three measuring points of each measurement series. The maximum heat transfer coefficient in the mixture with M100k was measured at lower heat fluxes than in the mixture with M10k.
The superheat ΔT between the test tube and the fluid is ΔT = 124 K at q = 12 kW/m² for the mixture with M100k. In contrast, the superheat for the fluid with M10k is only ΔT = 92 K at q = 16 kW/m². The higher viscosity inhibits the formation of bubbles and the mass-transport processes and thus reduces the heat transfer coefficient.
The results of heat transfer measurements in these mixtures (84 wt.-% M10k and 88 wt.-% M100k) at T ≈ 357 K are shown in Fig. 9b. The trend of the measurement series in Fig. 9b resembles the one in Fig. 9a, and the flattening in the range of high heat fluxes is again clearly visible. This measurement series indicates that a lower viscosity results in larger heat transfer coefficients in convective boiling as well; this effect was expected, but the difference hardly exceeds the combined uncertainty of the data for both mixtures. The transition from convective boiling to nucleate boiling takes place between 1 kW/m² ≤ q ≤ 2 kW/m². Compared to an evaporator temperature of T ≈ 324 K, the difference between the heat transfer coefficients is slightly lower, with Δα = 55 W/m²K at q = 12 kW/m². The difference Δα = 42 W/m²K at a heat flux of q = 4 kW/m² is of a similar order of magnitude as the one observed for T ≈ 324 K. When the fraction of PDMS is increased from 84/88 wt.-% to 94/95 wt.-%, respectively, at T ≈ 323 K, the transition from free convection to nucleate boiling shifts to higher heat fluxes, namely to 2 kW/m² ≤ q ≤ 4 kW/m² (Fig. 10). Over the entire measuring range, the heat transfer coefficient for the mixture with M10k is higher than for the fluid with M100k. For q = 12 kW/m², the difference in the heat transfer coefficients is Δα = 30 W/m²K; for q = 4 kW/m², it is Δα = 20 W/m²K. Measurements in a mixture with 95 wt.-% M1000k were carried out as well. For M1000k, all measuring points are in the range of nucleate boiling. At low heat fluxes, a slight increase of α with increasing heat flux can be seen in Fig. 11.
In Figs. 12, 13 and 14, further data series are presented. The mixtures have the same composition, but the kind of PDMS (M10k, M100k or M1000k) varies. All diagrams confirm that the heat transfer coefficient decreases significantly with higher viscosity at all heat fluxes, temperatures, and compositions. The trend of the measured data is similar, in particular the flattening of the α(q)-trend at high heat fluxes, which is observed for several measurement series.
In Fig. 12a, two measurement series for the polished copper tube are plotted in addition to the data already shown in Fig. 9a. The trends are very similar, and the measuring points agree within their reproducibility. Although the temperature and the compositions are not the same, the influence of the viscosity clearly prevails over the influence of the surface structure.
The heat transfer coefficient in these mixtures was investigated at T ≈ 352 K as well and compared with data for the polished tube. The results are plotted in Fig. 12b.
For 4 kW/m² ≤ q ≤ 10 kW/m², the difference in the heat transfer coefficient between the mixture with M10k and the mixture with M100k on the polished tube is Δα = 46 W/m²K. The temperature increase from T ≈ 326 K to 354 K has hardly any influence on the measurement results (not shown graphically); in nucleate boiling, the influence of the temperature is negligible in this case. Comparing the measurement series on the polished tube with those on the sanded tube (Fig. 12b) confirms the statement made with reference to Fig. 12a: the α(q)-trends are very similar, and the influence of the viscosity on the heat transfer coefficient is significantly higher than the influence of the surface structure.
Heat transfer measurements were conducted in mixtures with 95 wt.-% M10k, M100k and M1000k at T ≈ 323 K (Figs. 10 and 13) and 354 K (Fig. 13). A reduction of the heat transfer coefficient with increasing nominal viscosity of the PDMS is clearly observed. We conclude that the evaporator temperature has only a slight influence on the heat transfer coefficient at all heat fluxes in mixtures with a high fraction of PDMS; effects such as higher pressure and density in the bubbles are almost negligible in this case. The remaining influence of the pool temperature is related to its effect on the viscosity and the influence of the viscosity on the heat transfer coefficient.
The heat transfer was additionally investigated in mixtures with 7 wt.-% PDMS M10k and M100k at an evaporator temperature of T ≈ 350 K (Fig. 14). In mixtures with a large n-pentane fraction, nucleate boiling occurs at much lower heat fluxes. For the mixture containing M10k, nucleate boiling was observed for q > 0.4 kW/m²; for the mixture containing M100k, it was observed over the entire range of heat fluxes. The transition to nucleate boiling thus shifts towards lower heat fluxes with increasing viscosity. A clear maximum of the heat transfer coefficient, α max, is formed for both fluids; at higher heat fluxes, the heat transfer coefficient becomes smaller again. The offset between the maxima is clearly caused by the different viscosities of M10k and M100k. The measured heat transfer coefficients in the fluid with M100k would be expected to lie below those in the fluid with M10k at all heat fluxes; the atypical deviation at 0.4 kW/m² ≤ q ≤ 6 kW/m² could be due to different settings of the condenser temperature rather than to an effect of the viscosity.
Conclusion
Heat transfer measurements under pool boiling conditions in highly viscous mixtures over a wide temperature range were carried out using a modified so-called standard apparatus, and a comprehensive set of data was generated. The focus was set on the systematic investigation of the influence of the temperature, the surface structure, the viscosity and the fraction of the volatile component on the heat transfer coefficient. All measurement series show that a higher evaporator temperature enhances the heat transfer coefficient at constant heat flux. In general, α(q) increases with the heat flux; however, the α(q)-trend forms a maximum and declines again with further increasing heat flux, in particular for mixtures with low PDMS content. The characteristics of the maxima were observed to depend on the temperature and the composition. The impact of the temperature on the heat transfer coefficient is smaller in mixtures with a large PDMS fraction. To evaluate the influence of the fraction of the volatile component, six different binary mixtures were prepared with M10k and M100k. The measurement series show that a larger n-pentane fraction generally leads to a higher heat transfer coefficient, and the α(q)-trend is significantly influenced by the composition of the mixture. Changing the fraction of PDMS has a much higher impact on the heat transfer coefficient in mixtures consisting mainly of n-pentane. Measurement series on sanded and polished test tubes show that the surface structure has an influence on the heat transfer coefficient in highly viscous fluids; the absolute enhancement by the sanded surface is lower than in mixtures of lower viscosity, but the relative enhancement is significant. To evaluate the influence of the viscosity, mixtures of the same composition but with different nominal viscosities were produced. All measurement series confirm that a larger viscosity reduces the heat transfer coefficient, but the reduction seems to be limited. Overall, the heat transfer coefficient is influenced significantly more by the viscosity than by the surface structure. | 7,694.4 | 2021-06-03T00:00:00.000 | [
"Engineering",
"Physics"
] |
Gallium-Enhanced Aluminum and Copper Electromigration Performance for Flexible Electronics
Wide-range binary and ternary thin film combinatorial libraries mixing Al, Cu, and Ga were screened to identify alloys with an enhanced ability to withstand electromigration. Bidimensional test wires were obtained by lithographically patterning the substrates before simultaneous vacuum co-deposition from independent sources. Current–voltage measurement automation allowed for high throughput experimentation, revealing the maximum current density and voltage at the electrical failure threshold for each alloy. The grain boundary dynamics during electromigration are attributed to the resultant of the force corresponding to the electron flux density and the force corresponding to the atomic concentration gradient perpendicular to the current flow direction. The screening identifies Al-8 at. % Ga and Cu-5 at. % Ga as candidates for replacing pure Al or Cu connecting lines in high current/power electronics. Both alloys were deposited on polyethylene naphthalate (PEN) flexible substrates. The film adhesion to PEN is enhanced by alloying Al or Cu with Ga. Electrical testing demonstrated that Al-8 at. % Ga is more suitable for conducting lines in flexible electronics, showing an almost 50% increase in electromigration suppression compared to pure Al. Moreover, Cu-5 at. % Ga showed superior properties compared to pure Cu on both SiO2 and PEN substrates, where an increase of more than 100% in the maximum current density was identified.
■ INTRODUCTION
One of the prominent goals of modern electronics advancement is to address the need for flexible devices in various sectors of daily life, ranging from avoiding the replacement of forever-broken mobile phone displays to using thin sensor foils in direct contact with human skin. 1−3 Progress in this field is vital for the development of health monitoring devices, soft implants, soft robotics, and an overall notable increase in plastic bioelectronics. 4−9 Such applications make use not only of the flexibility but also of the ability to stretch the substrates, leading to extreme mechanical gradients. 10−13 A smart step in the further evolution of stretchable and soft electronics is defined by the use of liquid metals such as Ga. 14−17 The required flexibility of these devices originates primarily from the use of thin films in the fabrication of active and passive components, combined with the employment of flexible polymeric substrates such as polyethylene naphthalate (PEN). 18−20 Considering the bidimensional nature of thin and ultrathin films, even low electrical currents flowing through a device may induce electromigration in the conducting path through which the electron flux passes. 21−23 This type of atomic migration has been well known since the appearance of the first electronic circuits and is continuously present in modern thin film-based devices. 24,25 As an extension of Joule's law grounded in the electron−phonon interaction, high electron fluxes lead to a direct displacement of the atoms forming the conducting path, regardless of the nature of the substrate on which such a path is patterned. As a result, the conducting path is eventually interrupted by atomic migration in the direction of the electron flow and surface void migration in the opposite direction, leading to total device failure. 26 This type of device failure is currently recognized mainly in high current/power electronics. However, the broadening of the flexible device application range (due to the continuous trend of replacing Si-based devices with flexible ones), combined with the increased use of ultrathin films, may demand, in the near future, overcoming the limitations regarding the maximum current densities applicable in flexible electronics.
Both Al and Cu are widely used in the electronics industry for interconnecting various parts of circuitry. 27−29 Their high electrical conductivity and low cost define their technological relevance. Mixing the two metals in search of more stable interconnects has been studied in both bulk and thin film form, and no exceptional enhancement of the ability to withstand electromigration emerged. 30−32 An increase in the electrical resistivity of Al−Cu alloys raised concerns, requiring a trade-off between resistivity and electromigration reliability. 31 Alternatively, different alloying elements may be used in combination with Al and/or Cu to improve the electrical behavior of conducting lines. Species with large atomic sizes (such as rare earths) may be considered in an attempt to increase the entropy of Al or Cu thin film alloys, decreasing the total grain boundary area on the surface and thus restricting void formation. 32 Gallium can also be such an alloying element. Pure Ga has a very low melting temperature of 29.8 °C and is highly soluble in Al, up to 20 wt % (9 at. %) under standard conditions. 33 It influences the grain boundaries of polycrystalline Al 34,35 through a rather unique mechanism described as grain boundary wetting. 36,37 From this point of view, the use of Ga as an alloying element represents a conceptual advance for these special alloys with regard to the stability of polycrystalline metal films under continuous bending. Although Ga is in principle known to deteriorate the corrosion resistance of Al, for low to moderate concentrations of Ga, a reasonable corrosion resistance is achieved. 38−40 The Cu−Ga system seems to be promising as a low-temperature Pb-free solder 41 and is a crucial part of Cu−In−Ga−Se thin film solar cells. 42 Because Ga and most of its liquid metal alloys do not display any known toxic reactions, 43,44 their application in flexible and skin-contact electronics is already under study. 45−47 The electromigration effects in these materials when used as liquid interconnects have already come into scientific focus. 48,49 Gallium shows exceptional phase transformation behavior, as it has an extremely low melting point and a very high boiling point due to the formation of Ga-dimers, which are also responsible for its density anomaly. 50 The latter can compensate the alloying partner (Al or Cu) and reduce the temperature dependence of the density.
The current study focuses on the identification of Al-and Cu-based solid alloys with superior electrical properties, obtained by mixing them with Ga, employing high throughput screening of binary and ternary thin film combinatorial libraries on SiO 2 substrates. A direct current−voltage analysis of lithographically pre-patterned test wire thin film alloys enables the access to the maximum values of current density and voltage before total electrical failure as a measure of ability to withstand electromigration. The alloys with the best performance identified in the combinatorial screening on SiO 2 substrates (according to the Si technology) are applied on PEN substrates, and initial tests are performed evaluating the possibility of their future implementation in flexible electronics.
■ EXPERIMENTAL SECTION
Lithographical Patterning of Test Wires. For electrical screening of the thin film combinatorial libraries, test wires were patterned within areas of 5 × 5 mm² over the entire available surface of the substrates. The size of each wire was defined as 0.2 × 1 mm², coupled to larger pads for electrical connections (see Figure S1). The patterning of the wires was performed using two methods: the use of rigid 4″ SiO2 wafers as substrates allowed a lithographic process, while testing selected alloys on flexible PEN substrates involved a direct writing process. Both patterning steps were performed before metallic thin film deposition.
An EVG101 (EVG, Austria) spin coater was used at 2000 rpm for 60 s to coat the SiO2 wafers with a negative photoresist (AZ nLOF 2070). The chosen parameters resulted in a uniform resist film with an approximate thickness of 9.5 μm. Definition of the test wire structures for the lift-off process was done using a foil-based mask. The photolithographic process was performed on an EVG620 (EVG, Austria) mask aligner. Dedicated tooling for the foil-based mask was used to achieve defined negative sidewall angles and a proximity gap of 20 μm. The resist was exposed to UV light at 365 nm with a fluence of 300 mJ cm⁻². A post-exposure heat treatment at 110 °C for 60 s was applied in order to finalize the crosslinking induced by the UV irradiation. The resist was developed in an AZ 726 MiF bath for 240 s. More details about the wire sizing and design were previously reported. 51 Following this route, more than 300 pre-patterned wires were uniformly distributed on each processed SiO2 wafer (see Figure S1). After thin film deposition, the lift-off procedure was performed using NG101 (MicroChemicals, Germany) etchant/removal solution.
A direct writing approach was chosen for patterning the test wires on PEN substrates, mainly because of the difficulty of uniformly spin coating these flexible surfaces. For this purpose, a permanent Edding marker with a tip diameter below 0.5 mm was attached to an automated XYZ stage. A force sensor attached to the Z axis controlled the pressure of the marker tip on the polymer foil during drawing. Using LabVIEW, the stage was programmed to draw the contours of multiple test wires on areas the size of a microscope slide (25 × 76 mm²). Additional information can be found elsewhere. 51 After metallization, the lift-off procedure was performed using ethanol.
Thin Film Deposition and Characterization. Binary Al−Ga and Cu−Ga as well as the ternary Al−Cu−Ga thin film combinatorial libraries were deposited on SiO 2 and PEN substrates by physical vapor deposition at room temperature for avoiding thermally activated interspecies diffusion. Additionally, selected binary alloys and pure metals were separately deposited. The SiO 2 wafers were used directly after developing the photoresist, without further processing. The PEN substrates were first degreased and then rinsed with ethanol and deionized water before direct writing patterning. High purity Al, Cu (99.995%, Alfa Aesar), and Ga (99.999%, Alfa Aesar) were loaded into three independent W evaporation boats. Co-evaporation from these sources allowed formation of combinatorial alloys. Evaporation from single sources was performed as well for pure metal samples. The deposition chamber had a base pressure in the range of 10 −5 Pa. Each of the W boats was monitored by a quartz crystal microbalance (QCM-INFICON) for in situ feedback regarding individual evaporation rates. This information was used when defining the compositional spreads in binary and ternary libraries as well as the selected binary compositions. In order to obtain improved thickness uniformity and a single composition, selected binary alloys were deposited while rotating the substrate with 5 rpm. All libraries were deposited without rotation. For the main species Al and Cu in the libraries, deposition rates between 0.7 and 1 nm s −1 were used while lower amounts of Ga were controlled by using deposition rates at least 10 times smaller. During library deposition, the evaporation rates of constituent species were always kept constant for ensuring local indepth compositional uniformity. Overall, the deposited thin film thickness was in the range of 300 nm for alloys and pure elements on different substrates.
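As a rough cross-check of the compositional control via the QCM rates, nominal atomic fractions can be estimated from the individual deposition rates. The sketch below assumes bulk densities and atomic fluxes proportional to rate, density and inverse molar mass; the compositions reported in this work were always measured afterwards by SEDX.

```python
# Rough cross-check of the compositional control via the QCM rates: nominal
# atomic fractions estimated from the individual deposition rates, assuming
# bulk densities and atomic fluxes proportional to rate * density / molar mass.
# The compositions reported in this work were always measured afterwards by SEDX.

DENSITY = {"Al": 2.70, "Cu": 8.96, "Ga": 5.91}         # g/cm^3, bulk values
MOLAR_MASS = {"Al": 26.98, "Cu": 63.55, "Ga": 69.72}   # g/mol

def atomic_fractions(rates_nm_per_s):
    """rates_nm_per_s: dict element -> QCM deposition rate; returns atomic fractions."""
    flux = {el: r * DENSITY[el] / MOLAR_MASS[el] for el, r in rates_nm_per_s.items()}
    total = sum(flux.values())
    return {el: f / total for el, f in flux.items()}

# Example: 0.9 nm/s of Al co-deposited with 0.02 nm/s of Ga
print(atomic_fractions({"Al": 0.9, "Ga": 0.02}))       # roughly Al-2 at. % Ga
```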
After deposition, each thin film library was moved in vacuum by a robotic arm (Kurt J. Lesker) from the deposition chamber to a selfdeveloped scanning energy dispersive X-ray spectroscopy (SEDX) chamber. The system is designed to scan through the entire surface automatically and map the compositional spread on the substrates. IDFix software (remX GmbH) was used for quantitative analysis of the test wires uniformly distributed across the substrate. A Si drift detector (SDD, remX GmbH) was used for the detection of X-rays resulting from locally irradiating the surface with 20 keV electrons. Due to the large irradiation spot (500 μm), the compositional errors are in the range of ±0.5 at. %. The SEDX compositional mapping was used for identifying individual test wires on each SiO 2 wafer.
The thickness of individual test wires after metallization was measured by contact profilometry (Dektak XT Vision 64, Bruker). The surface morphology of the Ga-based alloys and pure elements on the different substrates was evaluated by scanning electron microscopy (SEM) using a ZEISS CrossBeam 1540 XB microscope with in-lens detection. Films deposited on SiO2 were imaged using acceleration voltages up to 10 kV. To avoid permanent substrate damage, films deposited on PEN were imaged using an acceleration voltage of 1 kV only. Crystallographic characteristics of individual alloys across the Ga libraries were studied by X-ray diffraction (XRD). For this purpose, a Philips X'Pert Pro system was used in the Bragg−Brentano geometry. For the binary libraries, a spot size of 5 × 20 mm² was employed, with the short length parallel to the direction of the concentration gradient and the long one perpendicular to it, in order to maximize the volume from which the signal is generated. For the ternary system, a spot size of 5 × 5 mm² centered on a wire was used, which also includes parts of the contacting pads found above and below it. This was the minimum area that led to a good signal-to-noise ratio from the thin film while ensuring a minimal lateral concentration variation. To increase the signal even more, a very low acquisition speed was chosen, with a step size of 4.2 × 10⁻³° and a time per step of 80 s, which summed up to 182 min of measurement time per diffraction pattern. In total, 25 diffractograms were acquired along and across the ternary library.
For deeper insights into the sample morphology and the distribution of the components, we used high resolution (HR) transmission electron microscopy (TEM) in conjunction with scanning (S)TEM EDX elemental mapping. The chosen Al−Ga and Cu−Ga alloy specimens with the best performance, synthesized on the PEN substrate, were first thinned down to electron transparency. This challenging step was performed by first dimpling the sample from the PEN substrate side (Dimple Grinder II, Gatan), while final thinning was gently conducted by Ar ion sputtering at an incidence angle of 5° (precision ion polishing system PIPS 691, Gatan). This approach yielded broad regions of free-standing Al−Ga (Cu−Ga) films suitable for in-depth TEM characterization. The investigation was performed in a JEOL JEM-2200FS TEM equipped with an in-column Ω-filter and operated at 200 kV. Images were recorded applying zero-loss filtering. The microscope was fitted with an EDX detector from Oxford Instruments, and elemental maps were constructed with the Aztec software. To identify possible beam influences, we routinely compared the structure of the investigated film area with unexposed neighboring regions after EDX map acquisition. The concentration measurement error was estimated at ±1 at. %. 52 Ultimately, the obtained data on the film structure and the peculiarities of the component distribution were used to explain the observed performance enhancement.
■ RESULTS AND DISCUSSION
Thin Film Combinatorial Libraries for Electromigration Testing. In order to assess the dynamic behavior of Al, Cu, and their alloys under high electron fluxes, a standardized test wire design needed to be implemented. 51 In this way, statistical measurements and reproducibility tests may be easily performed, and a comparison between the abilities of different alloys to withstand electromigration becomes straightforward. Different aspect ratios of test wires were explored, and the most suitable design was selected for this study. A projected area of 0.2 × 1 mm² provides a stable and reproducible surface, decreasing the patterning errors and optimizing the electrical resistance for attaining electromigration at reasonably low voltages. Using this design, test wires were patterned by a lithographic step preceding the metallic alloy deposition. The entire available surface of the substrate (e.g., rigid 4″ SiO2 wafers) was covered during lithography to maximize the final number of identically shaped test wires obtained in one library. Once the lithographic step for test wire definition is performed, the substrate is ready for the next step of combinatorial thin film deposition.
Co-evaporation from up to three different sources is schematically described in part (a) of Figure 1. The drawing describes the principles of thermal co-evaporation where three W boats are connected to external high current power supplies through thick Cu rods for providing the necessary power for evaporation of each metal in vacuum. The sources are positioned with their center tangent to the substrate in order to maximize the compositional gradient for a larger library. Through the vapor phase mixing of all contributing species, a continuous compositional gradient is achieved. This is directly related to the thickness gradient naturally obtained from a single source due to the cosine law governing thermal evaporation. Schematically, the thickness uniformity dependence on deposition angle φ is described in part (b) of Figure 1. One can clearly see that directly above each source, the film thickness d increases as compared to the substrate edges, following the cosine law. In an obvious manner, the cosine of the deposition angle is related to the deposition distance r for various locations along the substrate. When used independently (one at a time), each source produces a 3D thickness profile on the substrate surface and a cross-section through this profile is, in particular, visible in Figure 1b. When used concomitantly, all deposition sources mix their individual profiles in a manner suggested by the simulated 3D surfaces in Figure 1c. The through-thickness composition at a given location remains constant, as defined by the individual evaporation rates and deposition angle, while the lateral composition across the library changes. For simplicity reasons, the exemplary surfaces were calculated here with identical deposition rates, identical deposition distances, and a cosine coefficient n = 1 describing an ideal point source.
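A minimal numerical sketch of the described geometry is given below: it superimposes the cosine-law profiles of two point-like sources (n = 1) to produce the combined thickness and the lateral composition gradient discussed above. Source positions, source-substrate distance and rates are made-up values chosen only to illustrate the principle.

```python
# Minimal numerical sketch of the co-deposition geometry: the cosine-law
# profiles of two point-like sources (n = 1) are superimposed to give the
# combined thickness and the lateral composition gradient. Source positions,
# source-substrate distance and rates are made-up values for illustration only.
import numpy as np

def thickness_map(sources, x, y, h=0.20, n=1.0):
    """Relative film thickness on a substrate plane a height h above the sources.

    sources : list of (x_s, y_s, rate) tuples
    x, y    : 2-D coordinate grids of the substrate
    """
    d = np.zeros_like(x)
    for xs, ys, rate in sources:
        r2 = (x - xs) ** 2 + (y - ys) ** 2 + h ** 2   # squared source-to-point distance
        cos_ang = h / np.sqrt(r2)                     # phi = theta for parallel planes
        d += rate * cos_ang ** (n + 1) / r2           # cos^n(phi)*cos(theta)/r^2
    return d

xx, yy = np.meshgrid(np.linspace(-0.05, 0.05, 101), np.linspace(-0.05, 0.05, 101))
al_source = (0.04, 0.04, 1.0)     # main species, higher rate
ga_source = (0.04, -0.04, 0.1)    # Ga, roughly ten times lower rate
total = thickness_map([al_source, ga_source], xx, yy)
ga_fraction = thickness_map([ga_source], xx, yy) / total   # lateral composition spread
```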
In summary, Figure 1 describes the entire co-deposition process and evidences the fact that alloys obtained by mixing two or three species will have different thicknesses, a fact which cannot be directly controlled during the deposition process but needs to be evaluated afterward. During co-deposition, only the deposition rate is controlled via the electrical power supplied to each source and this allows a compositional control across the library. However, for electromigration testing, the maximum current density passing through one wire before its electrical failure is relevant. For this reason, each electrically tested wire from libraries in this study has its thickness measured by contact profilometry in order to obtain the accurate cross sectional area.
An optical image of a SiO 2 wafer with patterned Al−Ga test wire alloys is presented as Supporting Information in Figure S1. Additionally, an enhanced view is given for better observing the details of each wire together with the contact positions of the 4-point electrical testing (current density j vs voltage U) measuring head. High throughput experimentation is achieved by automatically positioning the j−U head using an automatized XYZ stage combined with LabVIEW programming and data acquisition. Since the in-plane dimensions of one test wire are much larger than its thickness, all wires may actually be considered as bidimensional (2D), which is a prominent feature of any modern electronic device where conducting paths are in the form of thin and ultrathin films.
Screening of Gallium-Based Binary and Ternary Libraries on SiO 2 Substrates. In this study, two technically relevant metals for interconnecting lines in electronic circuitry (Al and Cu) were used in combination with Ga for searching alloys with improved ability to withstand electromigration. For this purpose, Al−Ga, Cu−Ga and Al−Cu−Ga thin film combinatorial libraries were deposited on lithographically pre-patterned SiO 2 wafers. The first mapping performed is always the compositional one. Using scanning EDX, each library is mapped before electrical measurements. In this way, each surface of a 2D test wire having a given XY set of coordinates is directly linked to a certain composition allowing further mapping of properties.
In the Supporting Information, Figure S2 presents a typical set of j−U electrical measurements performed on the Al−Ga library along the compositional gradient. For each tested wire, the current is increased until open circuit conditions set in. This electrical failure occurs due to the interruption of the electron conducting path caused by electromigration of metallic species on the surface. All j−U curves show a typical shape. At low current densities, the j−U relationship remains linear according to Ohm's law, and no electromigration can be detected. The linear regime usually depends on the wire composition and may indicate a current density threshold for safe electrical use of a given alloy. The shape of the j−U curves changes due to the Joule effect when the current density increases beyond this threshold: increasing the temperature of the wire leads to an increase in the electrical resistance, and the curves start to flatten. Electromigration starts to affect the capability of the wire to transport electrons, and eventually, the wire fails. The moment of failure gives the maximum values of j and U that are used to characterize the ability of the wire to withstand electromigration in a compositional mapping. Such a linear mapping is also presented as an example in Figure S2 for the Al−Ga library. Screening such a mapping refers to identifying a composition where, for example, the j max value is the highest. However, since the thickness of the test wires plays a crucial role during the electrical testing and the previously discussed cosine law affects the thickness profile, a surface mapping across the entire wafer is desirable. In a complete surface mapping, the image obtained is formed by successive line mappings similar to those described in Figure S2.
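The extraction of j max and U max from one current-controlled sweep can be summarized as follows; the open-circuit detection criterion and all the numbers are illustrative assumptions, and the actual measurement automation is described in ref 51.

```python
# Sketch of how j_max and U_max could be extracted from one current-controlled
# sweep. The open-circuit detection criterion and all numbers are illustrative
# assumptions; the actual measurement automation is described in ref 51.
import numpy as np

def sweep_until_failure(currents_A, voltages_V, width_m, thickness_m,
                        v_open_circuit=10.0):
    """Return (j_max in A/m^2, U_max in V) of one test wire."""
    currents = np.asarray(currents_A, dtype=float)
    voltages = np.asarray(voltages_V, dtype=float)
    failed = np.flatnonzero(voltages >= v_open_circuit)      # first open-circuit reading
    last = failed[0] - 1 if failed.size else len(currents) - 1
    area = width_m * thickness_m                              # wire cross-section
    return currents[last] / area, voltages[last]

# Made-up sweep on a 0.2 mm wide, 250 nm thick wire failing at the last step
I = [0.10, 0.30, 0.50, 0.55, 0.56]
U = [0.20, 0.70, 1.40, 1.90, 10.0]
j_max, U_max = sweep_until_failure(I, U, 0.2e-3, 250e-9)
print(j_max / 1e10, "MA cm^-2 at", U_max, "V")   # 1 MA cm^-2 = 1e10 A m^-2
```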
The results of the properties screening obtained in the Al− Ga thin film library are presented in Figure 2 as surface color maps. Identifying the composition of the test wires across the surface of the Al−Ga library is easily performed using the EDX mapping presented in Figure 2a. The position of the evaporation sources used can be observed as Ga on the lower-right corner and Al on the upper-right corner of the image. The minimum Ga content in the library was slightly below 4 at. %. Complementarily, the Al amount ranges between 96 and 71 at. %. A total compositional spread of 25 at. % was obtained along the library, which gives a convenient compositional resolution of 0.25 at. % mm −1 across the entire wafer. Because each 2D test wire is patterned within 5 × 5 mm 2 , compositional identification of one single wire can be safely done with a precision of ±0.5 at. %, as typical in such studies. This precision also matches well the precision of the EDX analysis, and it will be considered for all values further reported in this study.
High throughput measurements of test wire thicknesses performed by contact profilometry allowed the thickness mapping presented in Figure 2b. Here, the position of the material sources is even better evidenced through the increase in the film thickness. Close to each evaporator (ideally directly above it, as Figure 1b shows), the cosine law indicates the thickest films and the surface mapping confirms this. In the vicinity of the Ga source, the library thickness reaches 500 nm while in the vicinity of the Al source, the films are even thicker. Interaction between both species in the vapor phase results in the combined surface presented in the figure, with a thickness around 350 nm at the center of the wafer. On the left side of the Al−Ga library, at the furthest distance from both sources, the thin film thickness drops to 200 nm. This natural thickness profile obtained during the film formation is beneficial to the current study, allowing to trace and discuss the effect of film thickness on achievable test wire performance.
In Figure 2c, the maximum current density (j max ) that each tested 2D wire withstands before total failure is mapped across the Al−Ga library surface. The smallest values are observed close to the Ga source. Alloys with the Ga content above 20 at. % are easily affected by electromigration effects with their j max values dropping below 0.5 MA cm −2 . These values are of no real interest for actual high current applications because a current density as high as 1.3 MA cm −2 could be achieved on pure Al test wires. 32 However, diagonally across this region, values of j max exceeding 1.1 MA cm −2 are observable in the mapping. The Al−Ga alloy with the highest ability to withstand electromigration was identified as Al-8 at. % Ga, having a j max value of 1.15 MA cm −2 . On observing the compositional mapping in Figure 2a, it can be easily seen that there are other positions across the library that also have the same alloy concentration of Al-8 at. % Ga, but j max measured is smaller, falling well below 1 MA cm −2 closer to the evaporation sources.
The reason for this behavior may be found in the thickness mapping discussed previously. The increase of j max with decreasing film thickness at the same composition can be linked to a transition from 3D to 2D effects for the selected test wire design (Figure 2b,c). It may be inferred that only below approximately 250 nm does the Al-8 at. % Ga thin film behave as a bidimensional entity. In such a case, only surface effects have to be considered when discussing electromigration at the nanoscale. In contrast, above 250 nm, the films are thick enough for additional volume effects to set in: multiple slip planes in the depth of the film may accelerate the atomic movement in the electron flux direction, resulting in premature electrical failure. In any case, the dependence of the electrical performance of Al−Ga thin films on their thickness is a clear fact that needs to be acknowledged for further implementation in real-life applications. Overall, according to our observations, when deposited on SiO2 substrates, Al-8 at. % Ga remains more susceptible to electromigration than pure Al.
Mapping the maximum voltage U max at the moment of wire failure across the Al−Ga library allows the alloy conductivity to be visualized indirectly. This mapping is presented in Figure 2d. An increase in the maximum voltage may be interpreted as a decrease in conductivity, as directly suggested by the shape of the j−U curves discussed in Figure S2. Typically, after the Joule effect occurs, the current density nearly reaches a plateau while U increases. The highest conductivity among the Al−Ga alloys is suggested close to the Al source, likely due to the direct influence of the conductivity of Al, which is higher than that of Ga. However, the lowest Al−Ga conductivity is not found in the vicinity of the Ga source but further away from it along the compositional gradient. This may again be related to the thickness profile of the library, since thinner films result in higher wire resistances and thus lower apparent conductivities. In addition, by combining Figure 2c,d, one can estimate the maximum power density that an Al−Ga wire can withstand before electromigration failure.
In a similar manner to the one used for Al−Ga, the properties of Cu−Ga test wire alloys were mapped, and the obtained results are shown in Figure 3. In part (a) of the figure, the compositional mapping is presented as provided by scanning EDX. The positions of the deposition sources are hinted as Ga on the left side and Cu on the right. With Ga concentration approximately ranging from 2 to 29 at. %, a total compositional spread of 27 at. % is concluded. Similar to the Al−Ga case, this spread safely allows for compositional precisions of ±0.5 at. % when selecting one particular Cu− Ga alloy.
The mapping of the Cu−Ga library thickness is presented in Figure 3b. When mixing Cu with Ga, only one strong thickness gradient is immediately observed, with the thickest films (above 500 nm) in the vicinity of the Cu deposition source. On the left side of the mapping, close to the Ga source, the library thickness rises slightly above 200 nm. The thinnest films across the Cu−Ga library are found diagonally opposite the Cu source, with values below 150 nm. Similar to the previous library, the thickness in the middle of the wafer is in the range of 350 nm. The upper-left side of the thickness mapping shows a decrease in thickness below 250 nm, the threshold previously identified as the 3D-2D transition for Al−Ga films.
To determine whether this observation also holds for the Cu−Ga library, the j max mapping presented in Figure 3c needs to be analyzed. More than half of the entire Cu−Ga library surface corresponds to alloys with a maximum current density below 1 MA cm⁻², independent of their thickness. The smallest j max value, around 0.5 MA cm⁻², is found in the lower part of the mapping, while the highest is observed at the top. Here, a rather narrow region of the Cu−Ga library reaches j max values in excess of 1.6 MA cm⁻², while the value measured on pure Cu test wires remains slightly below 1.2 MA cm⁻². 51 This increase in the ability to withstand electromigration can again be linked to the same 2D transition threshold discussed before for alloys with thicknesses below 250 nm. Screening the j max mapping resulted in the identification of Cu-5 at. % Ga as more resistant to electromigration effects when its thickness remains below 250 nm. Similar to the case of the Al−Ga library, when the thickness of this selected alloy is increased by moving vertically down across the Cu−Ga library, the j max value of the thicker Cu-5 at. % Ga alloys decreases as well. Consequently, this alloy is identified as a possible replacement for pure Cu conducting lines in thin and ultrathin film high current applications. Figure 3d shows the U max mapping of the entire Cu−Ga thin film library. Close to the Cu evaporation source (on the right side), values slightly above 1 V indicate good electrical conductors. This is to be expected when comparing the electrical conductivity of pure Cu, which is one order of magnitude higher, with that of pure Ga. The values of U max generally increase with the Ga amount, reaching values above 3.5 V for the highest Ga concentrations. A region that does not follow any of the previously observed trends may be identified at the bottom of the U max mapping, where the values remain very high independent of thickness or composition variations. However, in this zone, the library thickness remains above 250 nm, and thus 3D effects need to be considered. It is very common to encounter synergetic effects when screening thin film libraries, and this may be a good example of such a situation, likely due to thickness changes. However, this compositional region of the Cu−Ga library is not of much interest from the point of view of the ability to withstand electromigration, as indicated by the current density mapping. The Ga concentrations remain above 10 at. % for the entire lower part of the wafer, which is most likely too much for a good conducting alloy with high endurance under electrical current stress.
In order to complete the entire series of screening for possible alloys to be used in future electronic applications based on Al and Cu mixed with a liquid metal, the ternary Al− Cu−Ga was also fully analyzed and mapped in a similar manner to the presented binaries. The results are summarized in Figure 4. Due to mixing three elements, the compositional mapping is now split in three color-coded images corresponding to each element presented in Figure 4a−c. The positions of each deposition source are immediately observable, matching the configuration presented in Figure 1a. Across the library, the Al concentration varied between 35 and 90 at. %, Cu varied between 1 and 61 at. %, while Ga amount changed between 3 and 22 at. %. Even though the composition spread of Ga falls below the ranges obtained in the binary libraries, requirements for applying the proposed compositional precision of ±0.5 at. % are still met.
Thin film thickness mapping across the surface of the Al− Cu−Ga library was performed as before, and the results are shown in Figure 4d. The influence of each deposition source on the thickness variation is easily observable. The strongest influence is attributed to the Ga source in this case, test wire thicknesses reaching 550 nm being measured in the vicinity of the source. The second strongest influence on the film thickness is attributed to the Al source. Close to it in the lowerright corner of the mapping, values above 400 nm are observable. The weakest influence on the total thickness profile in the Al−Cu−Ga system is attributed to the Cu source. In its vicinity, the test wires showed thicknesses slightly above the 2D threshold of 250 nm. However, the thinnest films were obtained at a location slightly away from the Cu source position toward the top of the wafer, where the film thickness dropped below 200 nm. The behavior of the thickness profile is based on a complex vapor phase interaction between all three species, combined with surface adsorption/desorption phenomena during film formation. Test wires that may be considered bidimensional cover almost one-third of the entire library between Cu and Ga sources.
The ability to withstand electromigration was also tested in the ternary library, and the resulting mapping is provided in Figure 4e. Most of the ternary alloys studied here showed j max values between 0.6 and 0.7 MA cm⁻², thus performing worse than both Al and Cu test wires. An increase in the electromigration effects is observed directly next to the Ga source, matching the trend previously observed in both the Al−Ga and Cu−Ga binary libraries. Additionally, on the left side of the wafer, a compositional region may be observed where electromigration effects are again stronger and j max decreases toward 0.4 MA cm⁻². Here, the amount of Ga in the library ranges between 3 and 8 at. %, a compositional range that proved to have a positive influence on the ability to withstand electromigration in the binary libraries. Unfortunately, in this region of the ternary library, the thickness of the wires remained mainly above the 2D threshold, possibly affecting the electromigration behavior. Even though no ternary alloy with an improved ability to withstand electromigration could be clearly identified in this study, a positive influence of Ga can be concluded from all libraries investigated.
Mapping of the maximum voltage across the ternary Al−Cu−Ga library is provided in Figure 4f. Mixing all three elements resulted in a larger U max span (compared to the binary libraries), between 1 and 5 V. The lowest U max, indicating the highest conductivity, is found in the vicinity of the Al source. The ternary alloys with the highest U max are found where the Al content is lowest, below 40 at. %. Interestingly, here the Cu content is above 45 at. %, but the Ga concentration is also high, reaching up to 15 at. %. However, a few alloys at the left edge of the wafer show high values for both maximum current density and voltage. On this edge, an impressive maximum power density (before electrical failure) close to 3.5 MW cm−2 can easily be calculated. This suggests that such ternary alloys can be considered for applications requiring high power densities rather than high currents, such as thin film heating elements.
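The quoted maximum power density follows from multiplying the maximum current density by the maximum voltage at failure (power per unit of wire cross-section). A minimal sketch, assuming the roughly 0.7 MA cm−2 and 5 V values reported for the wafer's left edge:

```python
def max_power_density(j_max_MA_per_cm2: float, u_max_V: float) -> float:
    """Power density per wire cross-section at the failure point.

    With j_max in MA/cm^2 and U_max in V, the product is directly in MW/cm^2
    (1 MA x 1 V = 1 MW).
    """
    return j_max_MA_per_cm2 * u_max_V

# Values representative of the left edge of the ternary Al-Cu-Ga library
print(max_power_density(0.7, 5.0))  # ~3.5 MW/cm^2
```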
XRD measurements were performed on all three thin film libraries. Supporting Information Figure S3 presents the diffractograms for the binaries Al−Ga and Cu−Ga together with the database patterns of the constituent metals. For both systems, the only diffraction peaks detected belong to Al and Cu, respectively, both having an fcc structure. As the Ga content increases, the peaks shift toward lower 2θ angles owing to the formation of substitutional Al−Ga and Cu−Ga solid solutions. The shift (clearly observable for the (111) peak) is more pronounced in the Cu−Ga system because the size of the replacing atom (187 pm for Ga) differs more from that of the Cu host than from that of Al. Figure S4 presents the diffractograms acquired on the ternary library. From the comparisons along the change of Ga concentration (Figure S4a−c), it is evident that the main intermetallic phases are CuAl2 and CuAl, which are present together with pure Al in the Al-rich areas. For a better visualization of the Ga influence on phase formation, separate graphs were analyzed for constant Cu concentrations with variable Al and Ga contents (Figure S4d−f) and for constant Al concentrations with variable Cu and Ga contents (Figure S4g−i). Additionally, the patterns of the intermetallics containing Cu and Ga (CuGa2, Cu3Ga, and Cu9Ga4) were also plotted, both in Figures S3 and S4d−f, while Figure S4j provides the location and nomenclature information for the XRD measurement spots. As indicated by the Al−Ga phase diagram, no Al−Ga intermetallic phases exist. 33 The appearance, disappearance, and change from one Al−Cu intermetallic compound to another are related to the ratio of the main elements, Cu and Al. 55 No intermetallic phases containing Ga were found, not even in regions where the Ga concentration was relatively high (>10 at. %). For example, at a Cu concentration of approximately 17 at. %, the phase found was CuAl2, and Al peaks also emerged for Al concentrations above 70 at. %. For approximately 36 at. % Cu, independent of the Al (and Ga) content, only a mixture of CuAl and CuAl2 was identified, whereas for regions with approximately 51 at. % Cu, mainly the CuAl intermetallic was found, similar to the patterns for regions with 45 at. % Al. For these areas, the Ga content was up to 10 at. %. Similarly, for regions with Al contents higher than 45 at. %, independent of the Ga concentration, only CuAl and CuAl2 were identified.
Upon Joule heating, structural changes are expected to occur inside the alloys. The literature reports the precipitation of intermetallic Cu−Ga phases even at room temperature when liquid Ga is brought into contact with a Cu plate. With additional heating up to 200 °C, the CuGa2 intermetallic formed was found to be very stable, and its properties (lower hardness and Young's modulus compared to Cu−Sn alloys) make it extremely attractive for interconnects. 56 In a similar study, in which the Ga/Cu system was allowed to react for a long time above approximately 260 °C, a very thin Cu9Ga4 intermetallic was also found, formed by the decomposition of CuGa2. 57,58 In agreement with the binary phase diagram, for the Al−Ga system no intermetallic phases form upon heating and subsequent cooling, so the most probable outcome is phase/material segregation. In the case of the binary Cu−Ga system, intermetallic phases can be expected to form upon Joule heating. For the Al−Cu−Ga library, Al−Cu intermetallic phases are already present immediately after deposition; therefore, depending on composition, some mixed Al−Cu−Ga intermetallics, or Al−Cu intermetallics together with Ga segregation, might occur during testing. In the current study, test wires were deliberately exposed to extreme current densities until mechanical failure occurred. From the SEM examination, in many cases the material appeared to have melted at the failure region. Consequently, the influence of the intermetallic phases on the failure mechanism is difficult to assess accurately post-failure, because of the design of experiments imposed by the objective of the study.
One approach to predicting the intermetallic phases that might develop during Joule heating is to overlay the compositional maps with the current density maps in Figures 2−4 while simultaneously analyzing the phase diagrams and diffractograms. For both binaries, the highest j max values are found for Ga concentrations of up to 15 at. %; for alloys with higher Ga content, failure occurs at lower current densities. Since no intermetallics are present in the Al−Ga system, it can be expected that at this Ga concentration segregation and wetting of the grain boundaries of the polycrystalline Al film take place, suppressing electromigration effects. 33−37 When the Ga content is higher, though, the Ga exodus from the Al matrix and its accumulation at grain boundaries upon heating might be excessive and detrimental, leading to a mechanically unstable structure due to grain "sliding" and eventual continuous film dewetting. In the case of the Cu−Ga system, a Ga content of up to 17 at. % is soluble in Cu according to the phase diagram, from room temperature up to 1000 °C. For these alloys, upon heating and cooling, most probably no intermetallics, or only a very limited volume, form. For higher Ga concentrations, precipitation of intermetallic phases might occur, and upon cooling these phases might undergo transformations from high-temperature to room-temperature crystal structures. Formation of intermetallic phases and crystal structure transformations lead to material embrittlement, which in turn may detrimentally affect the resistance to electromigration. 48 For the ternary Al−Cu−Ga system, the highest current density is found in a broad Cu concentration range, up to 31 at. %, with Al contents higher than 50 at. %, whereas Ga seems not to have a dramatic effect. Analyzing the XRD patterns, within these compositional ranges a mixture of the CuAl2 and CuAl intermetallics is already present in the as-deposited samples, with CuAl2 being the predominant phase. For higher Cu contents, where failure occurs at lower j max, the main phase is CuAl. Upon heating above ≈560 °C (which might occur during Joule heating of the wires), a change in the crystal structure occurs that might lead to mechanical damage of the wire, embrittlement, and eventually early failure.
Compared to previously studied Al−Cu alloys, 32 using Ga as an alloying element for either Al or Cu improved the electromigration resistance of the test wires. In Al−Cu thin film libraries, maximum current density (j max) values of up to 0.9 MA cm−2 were found on similar test wire designs. 32 In the present work, both Al−Ga and Cu−Ga showed better performance, with j max values above 1 and 1.5 MA cm−2, respectively. However, the values obtained from the Al−Cu−Ga ternary library screening reached only a maximum of 0.7 MA cm−2.
Electromigration in Selected Gallium Alloys on Flexible Polymer Substrates. Following the screening of the Al−Ga and Cu−Ga thin film combinatorial libraries on SiO2, the most promising alloys (Al-8 at. % Ga and Cu-5 at. % Ga) were deposited on PEN substrates to test their electromigration behavior for flexible electronics. No ternary alloy was selected, mainly because of their low j max values compared to pure Al or Cu. The surface microstructure of these alloys was compared to that of Al and Cu, and the results are presented in Figure S5. Both Al and Cu microstructures on SiO2 are well researched in the literature,
nowadays being considered common knowledge in the thin film community. As expected, when thermally evaporated on SiO2, both Al and Cu films show a fine grain structure in the 100 nm range, and the Al film also contains larger secondary grains protruding from the surface. The addition of Ga changes these microstructures. In the case of the Al alloy, Ga leads to the formation of a rougher surface and the complete disappearance of the larger secondary grains, while Ga alloyed with Cu leads to a smoother surface decorated with slightly larger secondary grains. When deposited on PEN, the film formation of both Al-8 at. % Ga and Cu-5 at. % Ga is affected by the much lower surface energy of the polymer compared to SiO2. As a result, very smooth surfaces are observed in both cases in Figure S5; the Cu alloy additionally shows secondary grains with a size of approximately 1 μm. Also, the fine scratches common to the PEN surface are reproduced in the metallic alloy microstructure, confirming previous observations on pure Al. 59 In order to observe the influence of Ga at the atomic scale, TEM was performed on both selected Ga alloys. Figure 5 presents a bright-field (BF) TEM image obtained on the surface of Al-8 at. % Ga. Several grains with distinct grain boundaries are observable in part (a) of the figure. Compositional analysis mapping of the selected region, performed via EDX, is presented in part (b) of the figure as a color-coded image. Even though Ga atoms are distributed over the entire analyzed area, slight Ga enrichment is observed predominantly at grain boundaries. The boundary regions (marked in the figure with I) contain most of the Ga present in the TEM specimen, while the Ga amount within the metallic grains (marked with II) decreases, with values around 2.5 ± 1 at. % obtained from quantitative analysis. This spatial distribution of Ga matches the previous conclusion regarding nonexistent Al−Ga intermetallics. During co-deposition, Al and Ga atoms condense together on the PEN surface, and during film growth, grain development leads to a weak accumulation of Ga in grain boundaries even at low (room) temperatures. This accumulation may be responsible for the noted increase in electromigration resistance, since the presence of Ga may inhibit the continuous grain growth under high electron fluxes previously imaged in pure Al films. 32 In a similar manner, the Cu-5 at. % Ga thin film deposited on PEN was imaged via TEM, and the results are summarized in Figure 6. In part (a), a BF image is provided in which grains and grain boundaries are visible. The selected area is color-coded via EDX compositional mapping, which again indicates Ga located mainly at the grain boundaries (regions I), while only very small amounts may be hinted at within the metallic grains (regions II). Also in this case, the absence of intermetallics suggested by the XRD studies is confirmed. Together with the Ga segregation at low (room) temperature, this is likely the reason for the observed enhancement in electromigration resistance. HR TEM of the same specimen is presented in Figure 6c, and the selected area is compositionally mapped. Representative regions I and II are color-coded, and the presence of Ga mainly in the grain boundaries is evidenced. Quantitative analysis indicated that the Ga concentration in region I locally reaches 11 ± 1 at. %, while in region II it is negligible (<0.5 at. %).
As in the case of the Al−Ga alloy, the Cu and Ga surface distribution likely developed during film nucleation and growth, with Ga effectively pinning the Cu grains and thus inhibiting their electron-flux-induced modification during electromigration testing. The microstructure changes induced by high electron fluxes in Al-8 at. % Ga were investigated by imaging the damaged areas of test wires deposited on both SiO2 and PEN substrates; these are presented together in Figure 7. The electron flow direction is from right to left, as indicated in the figure. The test wire deposited on SiO2 shows the features typical of atomic displacement under high electron fluxes. 21−25 In the central failure zone responsible for the final open-circuit condition, the film migrated completely, revealing the underlying substrate. Droplets of molten material can be observed around the failure line, most likely a result of the local temperature increase through the Joule effect. 51 Both anodic (left side) and cathodic (right side) regions are clearly visible, each with its own damage front, more accentuated toward the middle of the test line. The magnified image of the anodic damage front presents a snapshot of the grain-evolution dynamics occurring during electromigration. On the right side of the image, very long grain boundaries are observable that define elongated grain domains when approaching the damage front. Beyond the front line, almost round grain domains may be seen forming the remaining surface of the Al alloy. This behavior is schematically modeled in the center drawing of Figure 7.
The surface grain evolution is triggered by both the force associated with the electron flux Φe (also termed the "electron wind" force) and the force associated with the gradient of the atomic concentration n perpendicular to the current flow direction. 21−23 The first force is stronger in the middle of the test line, where the electron density is higher, and its intensity decreases toward the wire edges. Since electrons prefer the "easiest" path (i.e., the middle of the wire), the wire edges always show an increased local resistance due to boundary effects. In the direction perpendicular to the current flow, an atomic concentration gradient appears because more atoms are dislodged at the middle of the wire (which interacts with higher electron fluxes) than at the edges. The same behavior is observed during electromigration in pure Al. 32 The combined influence of these two forces changes the growth direction of the grain boundaries, as schematically described in Figure 7. However, unlike the case of pure Al, the presence of Ga triggers a fragmentation of the grain boundaries, leading to the grain behavior modeled here. The segregation of Ga in the grain boundaries is likely linked to the formation of elongated domains along the damage front in the current flow direction. 34−37 This may inhibit the overall mobility of the grain boundaries, leading to an enhanced ability to withstand electromigration.
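For a sense of the magnitude of the "electron wind" contribution, the standard Huntington−Blech expression F = Z*·e·ρ·j can be evaluated. This relation is not taken from the present paper, and the effective valence and resistivity used below are illustrative assumptions only.

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wind_force(z_eff: float, resistivity_ohm_m: float, j_A_per_m2: float) -> float:
    """Huntington-Blech electron-wind force per atom, F = Z* e rho j (in newtons)."""
    return z_eff * E_CHARGE * resistivity_ohm_m * j_A_per_m2

# Illustrative values (assumptions, not measurements from this study):
# effective valence Z* ~ -10 (Al-like), resistivity ~ 3e-8 Ohm*m,
# current density 1 MA/cm^2 = 1e10 A/m^2
print(electron_wind_force(-10, 3e-8, 1e10))  # ~ -5e-16 N per atom
```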
The atomic displacement in the direction of the electron flow leads to hillock and, consequently, void formation. 21−25 Hillocks are usually better observed on the anodic side, while void formation can be found in the magnified cathodic front image presented in Figure 7a. The front line is clearly visible, and a thickness change is also apparent due to atomic movement toward the anodic side. When changing the substrate from SiO2 to PEN, the electromigration behavior of the Al-8 at. % Ga thin film test wire also changes. Neither the anodic nor the cathodic damage front is directly visible anymore, as indicated in Figure 7b. Here, only the central failure zone is visible, and no special surface morphology was observed when analyzing the wire after electrical testing. Additionally, on PEN the Al alloy film delaminates during electromigration, and film buckling and wrinkling may be observed in the figure. Because the film adhesion is poorer on PEN than on SiO2, the delamination is likely caused by the temperature increase from Joule heating. Once the film starts to lose its adhesion to the substrate, a new surface is created that was originally pinned by the substrate. This allows localized atomic rearrangement in the vicinity of the failure zone, leading to the disappearance of the anodic and cathodic damage fronts. Additionally, because the temperature increase may be in excess of 800 K, delamination of the alloy film prevents thermal destruction of the PEN by allowing faster cooling of the metallic surface. 51 Since grain changes characteristic of electromigration (arrowhead-shaped and elongated grains, voids and hillocks) are observable in the tested wires, significant thermal effects are likely very localized in the breakdown region.
The microstructure changes during electromigration in Cu-5 at. % Ga were also investigated as a function of the substrate. In Figure 8, images of the test wires deposited on SiO2 and PEN substrates are provided. In the case of the selected Cu alloy, both substrates led to very aggressive film failures, with electromigration accompanied in both cases by strong film delamination. While delamination on PEN can be understood in terms of poorer film adhesion, as discussed previously for the Al alloy, the delamination on SiO2 may be attributed to stresses built up during film formation in vacuum. This stress is released at the moment of electrical failure, when the wire is completely ejected from the surface. Similar to the Al alloy case, the film delamination prevents a good observation of the anodic and cathodic damage fronts. During electromigration tests on pure Cu on SiO2, these fronts were poorly observable even in the absence of delamination. 51 The more aggressive failure of Cu-5 at. % Ga can be attributed to the Ga addition to Cu, leading to grain boundary wetting and faster deterioration of the mechanical properties.
Magnified images of the anodic and cathodic regions of Cu-5 at. % Ga wires, presented in Figure 8a, show rather similar surface structures. A mixture of voids and hillocks of varying densities is observable on both sides. Denser microstructural features are present on the cathodic side, with sizes increasing steadily toward the failure zone. The influence of Ga is suggested in both anodic and cathodic regions by a lack of grain structuring compared to the behavior of pure Cu. 51 When tested on PEN substrates, Cu-5 at. % Ga shows even fewer surface features. A high number of hillocks with a size of approximately 10 μm may be observed in the anodic region in the magnified parts of Figure 8b. Additionally, sub-μm darker spots/grains, randomly distributed over the surface, are visible on both sides.
Apart from withstanding tremendous current densities when deposited on PEN, the Al and Cu alloys selected by screening the libraries should also be mechanically stable enough to be suitable for flexible electronics circuitry. Since stress build-up was shown to play a role in wire failure during electrical testing, the wire behavior under additional mechanical stress should be investigated. To assess this aspect, a simple bending test was performed before electromigration testing. The bending was, however, rather extreme: the PEN foil was bent 180° at the middle of the test wire, with a bending radius in the range of the PEN foil thickness. A schematic of the bending procedure is presented in Figure 9, together with details of the electromigration damage after bending for both Al-8 at. % Ga and Cu-5 at. % Ga test wires. As in the previous cases, the current flow direction is indicated in the figure to ease the identification of the anodic and cathodic regions.
Observation of the electromigration behavior of the Al alloy deposited on PEN after the bending test (Figure 9a) reveals a microstructure similar to the one presented in Figure 7b. However, larger dark spots are evident on the surface here. Unfortunately, local chemical analysis of these spots by EDX/SEM was not possible, because the PEN substrate does not allow bombardment with electrons at energies in excess of 2 keV. The number of these large spots is higher on the anodic side, and they show circular symmetry, suggesting molten droplets of the material; this is easily observable in the magnified image of the anodic zone. In part (b) of Figure 9, the Cu-5 at. % Ga wire is imaged after the bending procedure and electromigration testing. Here, periodic lines along the test wires may be observed, suggesting a transversal delamination probably initiated during the bending procedure. Such delamination was not identified in the absence of supplemental mechanical stress, as shown in Figure 8b. In the magnified image of the anodic side, fine cracks are clearly visible, and their mechanical origin can be inferred. Obviously, these cracks along the test wire will affect the electrical performance of the Cu alloy under extreme mechanical stress.
Even though the Cu alloy showed supplemental microstructural damage due to the bending procedure, in most real-life applications of flexible electronics the substrates will not require such extreme bending. Mild mechanical movements, for example as in the case of skin electronics, are the normal requirement. 18 Such a situation is illustrated as a proof of principle in Figure S6, together with an optical image of the PEN substrates immediately after test wire patterning by direct writing. A row of Cu-5 at. % Ga wires is bent at different angles following the natural shape of a human finger. This may stress a test wire at angles in excess of 90°, but with very large bending radii, in the 1 cm range.
A visual summary of the present study is given in Figure 10, where the j−U curves measured on the selected Al and Cu alloys are compared to those of pure Al and Cu on different substrates. On the rigid SiO2 substrate, pure Al thin films are preferred over Al-8 at. % Ga because an approximately 18% higher maximum current density can be transported through the Al wires before failure. However, for flexible electronics (i.e., PEN substrates), the situation changes in favor of Al-8 at. % Ga. Because the nature of the substrate is the only difference between the two cases, this approximately 47% increase in the maximum current density of the alloy on PEN must be due mainly to the notoriously poor adhesion of Al on polymers. If the thin film can easily delaminate from the substrate during electromigration testing, the back side of the film also becomes an active surface for atomic migration, enhancing the overall effect in the test wire and leading to premature damage. The importance of free surfaces for electromigration resistance was previously emphasized by using anodic oxides on top of the test wires. 32 Adding Ga improves the Al adhesion, avoiding premature delamination and keeping (for a longer time) only the top surface active for electromigration, thus increasing the maximum current density values for flexible electronic applications. Compared to the performance of the pure Cu 2D wire on SiO2, the identified Cu-5 at. % Ga is superior, with a maximum current density approximately 25% higher. This increase is related to the grain boundary wetting by Ga discussed for Figure 6c. Since no adhesion promoters of any kind were applied in this study prior to thin film deposition on PEN, the pure Cu test wire on the polymer delaminated immediately after deposition: no electrical testing was possible on pure Cu due to its poor adhesion to the substrate, the film being removed already in the lift-off phase of test wire patterning. However, the addition of 5 at. % Ga changed this situation. Not only was the Cu alloy more adherent on PEN substrates, but its electrical performance was also superior to that of Al-8 at. % Ga or of pure Al films.
CONCLUSIONS
This study provides the electrical performance of a wide range of Al−Ga, Cu−Ga, and Al−Cu−Ga alloys for future use in electrical interconnects of electronic devices. This is achieved by screening binary and ternary thin film combinatorial libraries obtained by co-evaporation of Al, Cu, and Ga in vacuum on rigid SiO2 substrates. Lithographic patterning of 2D test wires, combined with a custom 4-point measurement head and LabVIEW automation, enabled high-throughput electrical experimentation by applying current and measuring the potential drop across each test wire. Screening for electromigration-resistant alloys was performed by recording the maximum current density and voltage at the moment of electrical failure for a given alloy. Two specific binary alloys (Al-8 at. % Ga and Cu-5 at. % Ga) emerged as promising for future use. Throughout the entire study, Ga concentrations in the range of 3−8 at. % were identified as having a positive influence on the ability to withstand electromigration. Analysis of the surface morphology of the selected alloys after electromigration testing allowed the processes occurring during atomic displacement to be modeled. The grain boundary dynamics during electromigration is attributed to the resultant of the force associated with the electron flux density and the force associated with the atomic concentration gradient perpendicular to the current flow direction. Thickness profiles mapped across the entire surface of the libraries indicated that, for this combination of materials, a transition from 3D to 2D phenomena occurs for films thinner than 250 nm. This reveals the strong potential of conducting lines with thicknesses below this limit for real-life applications. However, it remains important to highlight the effect of size (thickness, in our case) on the observed material properties, which could also play a crucial role in achieving superior performance; this will be addressed in upcoming investigations.
Both Al and Cu alloys identified through the screening were deposited on PEN flexible substrates. Their electrical characteristics indicated that Al-8 at. % Ga is a better choice than pure Al for conducting lines in flexible electronics. Moreover, Cu-5 at. % Ga showed superior properties compared to pure Cu on both SiO2 and PEN substrates. Film adhesion to PEN is positively affected by alloying Al or Cu with Ga. After additional extreme mechanical stressing of test wires on PEN, the Al alloy behaved similarly to unstressed wires, while the Cu alloy suffered transversal delamination compared with the non-stressed situation. Overall, for flexible electronic applications using PEN substrates, future devices may consider Al-8 at. % Ga and Cu-5 at. % Ga alloys as replacements for pure Al, owing to their improved ability to withstand electromigration by almost 50% and more than 100%, respectively.
ABBREVIATIONS: SEDX, scanning energy dispersive X-ray spectroscopy; PEN, polyethylene naphthalate; XRD, X-ray diffraction; TEM, transmission electron microscopy
| 14,086.8 | 2021-01-25T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Research on Statistical Innovation Based on the Game between Central Government and Local Government
Much of state statistical work is done by local statistical organizations, and the reliability of local statistical data is at the core of statistical innovation. However, the interests of local governments and enterprises may not coincide with the goals of statistical work. Statistical products affect the utilities of some actors, so those actors may have motives to interfere with statistical affairs. Starting from the notion of dominant rationality, this paper builds game models of the relationship between the central government and local governments. Analysis of these models shows that a new statistical system should be built in which benefit-related parties are separated from statistical work. Ensuring that benefit-related parties have no way to interfere with statistical work is essential for statistical data quality.
Introduction
In a game, players who always choose a dominant strategy are said to have dominant rationality. In real society, dominant rationality is often used by players, which is why people fall into dilemmas. A strategy profile in which every player plays a dominant strategy is called a dominant equilibrium. Traditional game theory states that "if all the players have a dominant strategy, the dominant equilibrium is the only equilibrium that can be forecast."
Dominant-strategy and dominant-equilibrium
In fact, a dominant equilibrium is often not efficient. Choosing the dominant equilibrium is not necessarily what players with complete rationality would do: such players would not be so short-sighted as to drive themselves into a dilemma. Dominant rationality is therefore a distinct, more limited kind of rationality.
Dominant rationality and complete rationality
Every equilibrium is a game state that players reach by applying some form of rationality, so a dominant equilibrium is a game outcome induced by the dominant rationality of the players: each player chooses the strategy that is best regardless of the other players' uncertain strategies s_{-i}. In real society, dominant rationality is often used by players, and that is the main reason why people fall into dilemmas.
Many studies treat dominant rationality as complete rationality; in fact, it is not. In new classical economics and game theory, complete rationality is a basic assumption under an ideal state.
Definition 4 (complete rationality). In a game, if player i can always find the strategy that yields the highest utility, s_i* = arg max_{s_i} u_i(s_1, ..., s_i, ..., s_n), this player has complete rationality in the game. Complete rationality requires the following conditions: (1) the player knows the game environment completely; (2) the player can analyze the information correctly and choose the right strategy; (3) the player can reach the optimal game result. These conditions are very difficult to meet. In the prisoner's dilemma game, the prisoners' rationality is not complete rationality: the game result is so bad that we cannot believe the prisoners have full rationality.
Obviously, dominant rationality is not a complete basis for players to choose strategies; it can only be applied when a dominant strategy exists. But how should players make decisions in a game when they have no dominant strategy? This is an important question that needs further study.
Impendent-dominant-strategy and impendent-equilibrium
In the following game, D is a dominant strategy for player 1, but player 2 has no dominant strategy. Turning to the central-local government game, the cheating behavior of the local government is driven by local benefits. The effect of cheating is negative for the whole society, so the cheating behavior of the local government brings more loss to the central government than benefit to the local government.
Case 1: d1>d2 and d3>d4. This is the most common case. (1) d1>d2: the punishment of the local government must be higher than the income the local government gains by cheating; otherwise, the punishment places no limitation on the local government. (2) d3>d4: when the central government is cheated by the local government, its loss must be higher than the central government's cost of supervising; only in this way can supervising be efficient. Otherwise, if the central government's cost of supervising is high enough, the central government will choose NotSupervise and the local government will choose cheating, which may lead to a Nash equilibrium (AC2, AL1) that is neither fair nor efficient.
From the point of view of the central government, (AC2, AL2) is best for the whole society, but at this point the local government has an incentive to deviate.
So, in this case, neither player of the game has a dominant strategy. Case 2: d1<d2 and d3>d4. If the punishment for cheating is lower than the income from cheating, the local government has a strong motivation to cheat, and cheating becomes its dominant strategy. The central government is then forced to supervise, and (AC1, AL1) becomes an impendent-dominant-equilibrium. Case 3: d1>d2 and d3<d4. When the central government's cost of supervising is larger than its loss from being cheated by the local government, the central government tends to choose NotSupervise.
Case 4: d1<d2 and d3<d4. For the central government, the cost of supervising is higher than the loss from being cheated, so it chooses NotSupervise as its dominant strategy. For the local government, the cheating punishment is lower than the cheating income, so cheating is the optimal choice. The game reaches a dominant equilibrium (AC2, AL1).
Case 5: d2<0. d2 can be divided into two parts, d2 = d21 − d22, where d21 is the cheating income and d22 is the cheating cost. If d22 > d21, then d2 < 0. That is to say, even if the cheating is not detected by the central government, the local government gains nothing by cheating. The result is that the local government does not cheat, and the central government does not supervise.
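To make the case analysis concrete, the sketch below enumerates the pure-strategy Nash equilibria of a 2×2 supervise/cheat game. The exact payoff matrix is not reproduced in this excerpt, so the payoffs used are illustrative assumptions built only from the parameters d1−d4 as described in the text (AC1 = Supervise, AC2 = NotSupervise, AL1 = Cheat, AL2 = NotCheat).

```python
from itertools import product

def equilibria(d1, d2, d3, d4):
    """Pure-strategy Nash equilibria of an illustrative 2x2 supervise/cheat game.

    Assumed payoffs (central, local): supervision always costs the central
    government d4; undetected cheating costs it d3; cheating earns the local
    government d2 but incurs the punishment d1 when caught.
    """
    central, local = ["AC1", "AC2"], ["AL1", "AL2"]
    payoff = {
        ("AC1", "AL1"): (-d4, d2 - d1),  # supervised, cheating caught and punished
        ("AC1", "AL2"): (-d4, 0.0),      # supervision cost only
        ("AC2", "AL1"): (-d3, d2),       # undetected cheating
        ("AC2", "AL2"): (0.0, 0.0),      # no supervision, no cheating
    }
    eqs = []
    for c, l in product(central, local):
        best_c = all(payoff[(c, l)][0] >= payoff[(c2, l)][0] for c2 in central)
        best_l = all(payoff[(c, l)][1] >= payoff[(c, l2)][1] for l2 in local)
        if best_c and best_l:
            eqs.append((c, l))
    return eqs

print(equilibria(d1=1, d2=2, d3=1, d4=2))  # Case 4: [('AC2', 'AL1')]
print(equilibria(d1=2, d2=1, d3=2, d4=1))  # Case 1: [] (no pure-strategy equilibrium)
```

Under these assumed payoffs the enumeration reproduces the cases above, including the absence of a pure equilibrium in Case 1 and the (AC2, AL2) outcome when d2 < 0.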
Conclusion
Under general conditions, the local government tends to cheat on statistical data, which leads to faulty statistics. To realize the optimal equilibrium (AC2, AL2), d2 must be lower than zero; only then will the local government refuse to cheat in statistical data processing.
Here, we have two suggestions: (1) Raise the technical cost of cheating for the local government. An efficient way is to form a local statistical cooperation system in which data processing and submission are done by a team instead of a single person or organization. This reduces the probability that cheating succeeds and makes cheating easier to detect.
(2) Effectively reduce the local government's income from cheating. Benefit-related parties should be prevented from interfering in local statistical affairs, so that in the process of producing statistical data the operators have no motivation to cheat. | 1,472.8 | 2012-09-01T00:00:00.000 | [
"Economics"
] |
In silico analysis of single nucleotide polymorphism (rs34377097) of TBXA2R gene and pollen induced bronchial asthma susceptibility in West Bengal population, India
Introduction: The prevalence of asthma has been increasing steadily in the general population of developing countries over the past two decades. One of the causative agents of broncho-constriction in asthma is the thromboxane A2 receptor (TBXA2R); however, few studies of TBXA2R polymorphisms have been performed so far. The present study aimed to assess the potential association of the TBXA2R rs34377097 polymorphism, which causes a missense substitution of arginine to leucine (R60L), among 482 patients diagnosed with pollen-induced asthma and 122 control participants from West Bengal, India. We also performed in-silico analysis of the mutated TBXA2R protein (R60L) using homology modeling. Methods: Clinical parameters such as forced expiratory volume in 1 second (FEV1), FEV1/forced vital capacity (FVC) and peak expiratory flow rate (PEFR) were assessed using spirometry. Patients' sensitivity was measured by skin prick test (SPT) against 16 pollen allergens. Polymerase chain reaction-based restriction fragment length polymorphism was used for genotyping. A structural model of the wild type and a homology model of the polymorphic TBXA2R were generated using AlphaFold2 and MODELLER, respectively. The electrostatic surface potential was calculated using the APBS plugin in PyMOL. Results: Genotype frequencies differed significantly between the study groups (P=0.03). There was no significant deviation from Hardy-Weinberg equilibrium in the control population (χ2=1.56). Asthmatic patients had a significantly higher frequency of the rs34377097 TT genotype than control subjects (P=0.03). SPT of patients showed maximum sensitivity to A. indica (87.68%), followed by C. nucifera (83.29%) and C. pulcherrima (74.94%). Significant differences in pollen sensitivity existed between adolescents and young adults (P=0.01) and between young and old adults (P=0.0003). A significant negative correlation was found between the FEV1/FVC ratio and the intensity of SPT reactions (P<0.0001). Significant associations of FEV1, FEV1/FVC and PEFR with pollen-induced asthma were observed. Furthermore, the risk allele T was found to be clinically correlated with a lower FEV1/FVC ratio (P=0.015) in patients. Our data showed that the R60L polymorphism, at a position conserved across mammals, significantly reduced the positive electrostatic charge of the polymorphic protein in the cytoplasmic domain, thereby altering the downstream pathway and inducing an asthma response. Discussion: The present in-silico study is the first to report an association of the TBXA2R rs34377097 polymorphism in an Indian population. It may be used as a prognostic marker of clinical response to asthma in West Bengal and a possible target of therapeutics in the future.
Introduction
Asthma is correctly termed the epidemic of the 21st century and a disease of the modern age, and it represents a substantial public health burden in many countries, including India (1-3). The prevalence of asthma has been increasing dramatically in both developed and underdeveloped countries across the world. According to Zvezdin 2015 (4), about 300 million people of all ages suffer from asthma. The disease severity is unevenly distributed, being mild in most cases, while severe forms of asthma are registered in a minority of patients (around 15%). It is estimated that another 100 million people will be affected by the disease by 2025 (5, 6). This lower airway disorder is not a new discovery, as it was recognized over two hundred years ago; it is therefore remarkable that it took centuries to realize the importance of disease diagnosis and adequate treatment. Estimates suggest that in India alone 6% of children and 2% of adults suffer from asthma (3, 7). According to the Global Burden of Disease (GBD, 1990-2019) study, India has 34.3 million asthmatics, which accounts for 13.09% of the global burden. Evidence suggests that in the Indian population asthma accounted for 27.9% of disability-adjusted life years (DALYs) (8). One of the most significant causes of asthma was found to be natural exposure to pollens (9), which play a major role in its pathogenesis (10). An estimate suggested that the prevalence of pollen-induced asthma in India ranges between 10 and 40% of the total asthmatic population (11-14). Early detection of individuals genetically predisposed to asthma is important for better management of the disease. Bronchial asthma is characterized by several inflammatory mediators, including histamine, bradykinin and arachidonic acid metabolites such as prostaglandins, thromboxane A2 (TBXA2) and cysteinyl leukotrienes (LTC4, LTD4, LTE4), which are responsible for the clinical and pathological events (15, 16). Over the last few decades, there has been steadily growing interest in exploring the important roles of TBXA2 in the pathogenesis of asthma. TBXA2 binds to its receptor TBXA2R, which is primarily expressed in tissues targeted by TBXA2, including bronchial and vascular smooth muscle and platelets (17). TBXA2R stimulates broncho-constriction in human small airways (18) in response to pollen-induced mast cell activation in atopic asthma patients (19). It is a G protein-coupled receptor (GPCR) encoded by the TBXA2R gene, located on chromosome 19 at position p13.3 (20). Very few studies of the genetic association of TBXA2R with bronchial asthma have been reported so far.
So, in the present study, attempts have been made to investigate the association of the TBXA2R rs34377097 polymorphism with pollen-induced bronchial asthma in the population of West Bengal, India. rs34377097 is the only non-synonymous polymorphism within the TBXA2R gene reported to date (21). In this study, computational analyses were performed on the TBXA2R missense polymorphism (R60L) to examine changes in protein structure and function. The change of amino acid from arginine (R) to leucine (L) may affect the protein conformation, leading to changes in the downstream signaling pathway. Therefore, the present study performed in-silico analyses of the mutated TBXA2R protein using homology modeling to explore and confirm this outcome. Our study also attempted to determine the signaling molecules of the TBXA2R pathway that are responsible for the elevated asthmatic response in individuals carrying the risk allele.
Ethics approval and consent to participate
The Clinical Research Ethics Committee of the Allergy and Asthma Research Center, West Bengal, India approved the present study protocols (CREC-AARC Ref: 62/21). Written informed consent was obtained from each patient and control subject of this study and preserved for future reference.
Participants
A total of 632 individuals (both male and female) were selected for this case-control study between March and December 2021. Pregnant and lactating women (n=5), patients suffering from illnesses other than asthma (n=11), and those who did not provide written informed consent (n=12) were excluded from this study. Finally, 482 patients and 122 control subjects were included. Epidemiological details, viz. age, gender, habitat, age of onset of symptoms, lifestyle, family history (paternal or maternal), clinical features, aggravating factors and non-specific stimuli such as cold, exercise and other irritant factors, were recorded in a self-prepared questionnaire. The 482 participants diagnosed with aero-allergen-induced bronchial asthma, who reported suffering from different asthmatic manifestations such as airway hyperresponsiveness, recurring episodes of airway obstruction, wheezing, dyspnoea and cough, either alone or in combination, were considered cases. Both patients and controls were classified by age as children (5-12 years), adolescents (>12-19 years), young adults (>19-40 years) and old adults (>40-70 years). Peripheral blood samples were collected from all participants, centrifuged to separate the serum, and kept at −20°C for further analysis.
Spirometry
The recruited subjects were screened for the presence of asthma by spirometry following the Global Initiative for Asthma (GINA) 2017 guidelines (22). The following parameters were measured: forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), the FEV1/FVC ratio and peak expiratory flow rate (PEFR). A reduction of FEV1 and PEFR of 20% or more below the predicted value is considered asthmatic (23), as is an FEV1/FVC ratio below 0.75-0.80 (22).
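A minimal sketch of these screening criteria is given below; the thresholds follow the text, while the function name and the use of a single 0.75 cut-off within the quoted 0.75-0.80 range are illustrative choices, not part of the study protocol.

```python
def flag_asthma(fev1: float, fev1_predicted: float,
                pefr: float, pefr_predicted: float,
                fvc: float, ratio_cutoff: float = 0.75) -> bool:
    """Flag a subject as asthmatic using the screening criteria described above:
    FEV1 and PEFR reduced by >=20% versus predicted, or FEV1/FVC below the cutoff."""
    reduced_fev1 = fev1 <= 0.8 * fev1_predicted
    reduced_pefr = pefr <= 0.8 * pefr_predicted
    low_ratio = (fev1 / fvc) < ratio_cutoff
    return (reduced_fev1 and reduced_pefr) or low_ratio

# Example: FEV1 2.1 L vs 3.0 L predicted, PEFR 320 vs 450 L/min predicted, FVC 3.4 L
print(flag_asthma(2.1, 3.0, 320, 450, 3.4))  # True
```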
DNA extraction and genotyping
Genomic DNA was extracted from the blood samples of the 482 patients and 122 control subjects using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). The TBXA2R rs34377097 polymorphism was identified using polymerase chain reaction (PCR) followed by the restriction fragment length polymorphism (RFLP) technique (Figure 1). Details of the primers, PCR conditions, PCR product and RFLP fragments are given in Table 1.
DNA sequencing
Representative PCR products for each of the three genotypes of the selected polymorphism underwent Sanger sequencing to validate the RFLP results (Figure 1).
In silico analysis
A structural model of wild-type TBXA2R was generated using AlphaFold2, and a homology model of TBXA2R R60L was generated using MODELLER (24, 25). Figures were designed using PyMOL (26). The electrostatic surface potential was evaluated using the APBS plugin in PyMOL. The T-Coffee server was used for multiple sequence alignment, and the alignment figures were prepared using ESPript (27).
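A minimal sketch of the standard MODELLER automodel workflow is shown below. The alignment and sequence names are hypothetical placeholders; the authors' actual script and settings are not given in the text, and class names may differ slightly between MODELLER versions.

```python
# Sketch of a MODELLER automodel run (classic documented workflow; names are hypothetical).
from modeller import environ
from modeller.automodel import automodel

env = environ()
env.io.atom_files_directory = ['.']

a = automodel(env,
              alnfile='tbxa2r.ali',     # alignment of the mutant sequence to the template
              knowns='tbxa2r_wt',       # template: wild-type structural model
              sequence='tbxa2r_r60l')   # target: R60L variant
a.starting_model = 1
a.ending_model = 5                      # build five candidate models
a.make()
```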
Statistical analysis
Continuous data are described by the mean ± standard error (SE) and categorical data by numbers with percentages. Patients and controls were compared by independent t test for continuous variables or Pearson chi-square test for categorical variables. Genotype and allele distributions in the patient and control groups were evaluated using the contingency chi-square or Fisher's exact test, where applicable. Risk analysis was performed by odds ratio (OR) under additive, recessive and dominant models. Hardy-Weinberg equilibrium (HWE) was checked for the genotypes by a goodness-of-fit chi-square test with one degree of freedom (df). SPT sensitivity to pollens among different age groups was analyzed using one-way ANOVA followed by Tukey's multiple comparison test. The association between the FEV1/FVC ratio and atopy was analyzed using linear regression. The FEV1/FVC ratios of patient and control individuals residing in different habitats were analyzed using a one-way ANOVA test. The association of the TBXA2R polymorphic allele with the FEV1/FVC ratio was analyzed using an independent t test. The association of the TBXA2R polymorphism with atopy was analyzed using a one-way ANOVA test. All statistical calculations were performed using GraphPad Prism ver. 7 (San Diego, CA). The significance level was taken as P<0.05.
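Two of the calculations described above can be sketched as follows; the genotype counts are hypothetical, chosen only to illustrate the procedure, and are not taken from the study.

```python
from scipy.stats import chi2

def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int):
    """Goodness-of-fit chi-square (1 df) for Hardy-Weinberg equilibrium
    from genotype counts (AA, AB, BB); returns (statistic, p-value)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)           # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

def odds_ratio(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed):
    """Odds ratio for a 2x2 table, e.g. TT vs non-TT genotype in cases vs controls."""
    return (case_exposed * ctrl_unexposed) / (case_unexposed * ctrl_exposed)

# Hypothetical control genotype counts (GG, GT, TT) and a hypothetical 2x2 table
print(hwe_chi_square(80, 38, 4))
print(odds_ratio(25, 457, 1, 121))
```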
Characteristics of the subjects
The present study comprises 482 asthmatic patients and 122 control individuals. Demographic and clinical characteristics are described in Table 2. The majority of the study population, both patients and controls, resides in urban areas rather than suburban or rural ones. For FEV1, the FEV1/FVC ratio and PEFR, a significant deviation from the predicted value was observed in both male and female patients in all age groups compared to controls (Table 2).
Sensitization of patients against pollens by SPT
The highest prevalence of sensitization was obtained with Azadirachta indica (87.68%), followed by Cocos nucifera (83.30%), Caesalpinia pulcherrima (74.95%), Brassica nigra (72.23%), etc. The sensitivity of patients to different pollen allergens, categorized by age, is depicted in Figure 2. One-way ANOVA showed a significant difference in sensitivity among the different age groups (F=7.156, P=0.0003). Pairwise Tukey's multiple comparison tests also showed significant differences in the number of sensitive patients between adolescents and young adults (P=0.01) and between young and old adults (P=0.0003).
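A minimal sketch of this analysis using SciPy and statsmodels is given below; the per-group sensitization values are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-allergen sensitization percentages for three age groups
adolescent = np.array([60, 55, 70, 65, 58])
young_adult = np.array([80, 85, 78, 88, 82])
old_adult = np.array([50, 48, 55, 52, 47])

# One-way ANOVA across the age groups
f_stat, p_value = f_oneway(adolescent, young_adult, old_adult)
print(f"F={f_stat:.2f}, P={p_value:.4f}")

# Post hoc Tukey HSD for pairwise comparisons
values = np.concatenate([adolescent, young_adult, old_adult])
groups = ["adolescent"] * 5 + ["young adult"] * 5 + ["old adult"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```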
FEV 1 /FVC association with atopy and habitat
The association of the FEV1/FVC ratio with atopy is depicted in Figure 3. A significant negative correlation was found between the FEV1/FVC ratio and the intensity of SPT reactions, denoted by wheal diameter, in all age groups and both sexes (P<0.0001). The association of the FEV1/FVC ratio with the habitat of both patients and controls is depicted in Figure 4.
Genotype distribution in patients and controls
The genotype frequencies differed significantly between the study groups (χ2=7.03; P=0.03) (Table 3A). The allele frequencies showed no significant deviation between the case and control groups (χ2=2.67; P=0.10). There was no significant deviation from HWE in the control population (χ2=1.56 at df=1). The frequency of the rs34377097 TT genotype was significantly higher in asthma patients than in controls (OR=5.81, P=0.03). Both the recessive and dominant models showed a significant difference in rs34377097 TT genotype frequency between the study groups (P=0.04). Table 3B shows the genotype frequency distribution of patients and controls according to age group (5-12 yrs, 13-19 yrs, 20-40 yrs and 41-70 yrs). The genotype frequencies differed significantly among all the age groups of patients and controls (χ2=57.02; P<0.0001). The rs34377097 GT and TT genotypes bear a significant risk of asthma in children aged 5-12 years (OR=2.34, P=0.006 and OR=49.40, P=0.007, respectively). Table 3C shows the frequency distribution of patients and controls according to sex. The genotype frequencies differed significantly between both sexes of patients and controls (χ2=16.98; P=0.009). The rs34377097 TT genotype showed a significant odds ratio in females (OR=12.77, P=0.02). Table 3D shows the frequency distribution of patients and controls residing in different habitats (urban, semi-urban and rural). The genotype frequencies differed significantly between patients and controls across all habitats (χ2=29.92; P=0.0009). The rs34377097 TT genotype showed a significant risk of asthma in urban and semi-urban dwelling subjects (OR=10.72, P=0.03 and OR=4.89, P=0.02, respectively).
Correlation of polymorphic genotypes with FEV1/FVC ratio and atopy
Asthma patients bearing the rs34377097 T allele had a significantly lower FEV1/FVC ratio than T allele-bearing controls (t=8.004, P=0.015), whereas the difference between G allele-bearing patients and controls was non-significant (t=2.425, P=0.136) (Figure 5). Furthermore, the FEV1/FVC ratio differed significantly between G- and T-bearing asthma patients (P=0.04), while no significant difference was observed in controls (P=0.91) (Figure 5). One-way ANOVA revealed that individuals bearing the TT genotype showed a higher intensity of SPT reactions, in terms of wheal diameter, than GG and GT bearing individuals (F=56.25, P<0.0001) (Figure 6).
Mechanistic and structural analysis of asthmatic response induced by rs34377097
Multiple sequence alignment of the TBXA2R gene among mammalian orthologs (Mus musculus, Rattus norvegicus, Bos taurus, Mesocricetus auratus, Macaca mulatta, Pan troglodytes, Felis catus, Chlorocebus aethiops) showed that the R60 amino acid is conserved across these species (Figure 7A). The crystal structure of human TBXA2R (PDB: 6IIU) revealed that the structures share a canonical seven-transmembrane helical bundle similar to other known GPCR structures (Figure 7B) (29). The N-terminal extracellular domain functions as a receiver domain, whereas the C-terminal cytoplasmic domain functions as a signal transmitter domain. The guanine nucleotide-binding Gq protein binds at the C-terminal cytoplasmic domain of TBXA2R and relays the signal. In the crystal structure of human TBXA2R, a number of important cytoplasmic loops, including the first cytoplasmic loop that contains the R60 residue, lacked electron density (shown by dotted lines in Figure 7C). In order to model the cytoplasmic loops, we prepared a structural model of TBXA2R using the artificial intelligence (AI)-based tool AlphaFold2 (30). Superimposition of the model and the crystal structure (PDB: 6IIU) yielded a root-mean-square deviation of 0.049 Å. In order to understand the effect of R60L, we prepared a homology model of the mutated TBXA2R protein. Figure 7D shows the presence of the L60 residue in the first cytoplasmic loop. We generated the electrostatic potential surfaces of wild-type TBXA2R (Figure 7E) and the TBXA2R R60L protein (Figure 7F). It is interesting to note that the wild-type TBXA2R protein exhibits a strong positive charge potential (depicted in blue) on its cytoplasmic surface (Figure 7E). Hence, it can be assumed that proteins that interact with the cytoplasmic domain of TBXA2R must possess negative charge potential. Further, the cytoplasmic domain of TBXA2R R60L displays a much lower positive charge potential (location marked with a red arrow) compared to the wild-type protein (Figure 7F). This is because Arg is a positively charged amino acid, whereas Leu is a hydrophobic amino acid. The interaction between TBXA2R and the Gq protein may therefore be inhibited if the positive charge potential at the cytoplasmic domain of TBXA2R is reduced.
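The charge argument above can be made explicit with a toy calculation of the formal side-chain charge lost by the substitution. The charges are standard physiological-pH values; the loop sequence shown is a made-up placeholder, not the actual TBXA2R first cytoplasmic loop.

```python
# Approximate formal side-chain charges at physiological pH
SIDE_CHAIN_CHARGE = {"R": +1, "K": +1, "H": 0, "D": -1, "E": -1}

def net_charge(seq: str) -> int:
    """Sum of formal side-chain charges; uncharged/hydrophobic residues count as 0."""
    return sum(SIDE_CHAIN_CHARGE.get(aa, 0) for aa in seq)

loop_wt = "ARLQK"                          # hypothetical loop segment containing R60
loop_r60l = loop_wt.replace("R", "L", 1)   # the R60L substitution

print(net_charge(loop_wt), net_charge(loop_r60l))  # +2 -> +1: one positive charge lost
```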
Discussion
Asthma is a heterogeneous, chronic airway disease estimated to affect on average 4.5% of individuals across the globe, with substantial variation between countries (31). Bronchial asthma is influenced by both environmental and genetic components (32). There is substantial interest in improving our understanding of the biological mechanisms of bronchial asthma through expanding genomic approaches. The current investigation addresses the association of the TBXA2R rs34377097 polymorphism with asthma in an Indian population.
Earlier reports indicated that TBXA2 is a negative regulator of LTC4 synthase, which produces leukotriene (33). Evidence suggested that in the absence of TBXA2, LTC4 synthase activity is enhanced through its receptor (TBXA2R), resulting in increased leukotriene biosynthesis and asthma symptoms (34). These results clearly signify that TBXA2R polymorphisms might play an important role in bronchial asthma pathogenesis. TBXA2 is synthesized from arachidonic acid by the enzyme cyclooxygenase (COX) (35). It causes constriction of bronchial smooth muscle in asthma (36-38). Pollen grains are important aeroallergens; therefore, knowledge of locally prevalent pollen allergens is essential for better diagnosis and therapy of pollen-induced asthma (11, 39). The allergenic potentials of common pollens have been investigated throughout India, viz. Cynodon in the northern region, Fusarium solani and Curvularia in the southern region, Azadirachta in the central region, and Cocos and Phoenix in the eastern region (40). In our study, more than 87% of patients were sensitive to Azadirachta indica, one of the major pollens found in West Bengal. This study also identified Cocos nucifera, Caesalpinia pulcherrima and Brassica nigra as top sensitizers, as depicted in Figure 2, in accordance with reports from Dey et al. (2019) (14). Podder et al. (2009, 2010) reported that the occurrence of allergic symptoms to various inhalants is similar for both sexes. We also found significant differences in SPT sensitivity between adolescents and young adults and between young and old adults. Dey et al. (2019) (14) reported similar observations among the atopic population of West Bengal, India. This difference in sensitivity may be due to the increased exposure to various pollen allergens in the residential/occupational area encountered by young adults compared to adolescents and old adults. Figure 3 showed a significant negative correlation between the FEV1/FVC ratio and the intensity of SPT reactions (in terms of wheal diameter), indicating that a higher degree of atopy is associated with greater airway
obstruction (46). Figure 4 depicted a significant difference in the FEV1/FVC ratio of both male and female patients residing in different habitats (urban, semi-urban and rural), while that in control subjects remained non-significant. Previous studies reported that asthma patients residing in urban and semi-urban habitats with high indoor and atmospheric pollution show a reduced FEV1/FVC ratio compared with their rural counterparts (47, 48), which supports our observation. Table 2 showed that 56.43% of our study population inherited asthma from their family (either maternal or paternal or both), which supports the role of genetic predisposition in asthma. The literature suggests that the TBXA2R pathway plays a significant role in asthma (21, 33). In our study, we found a significant association of the rs34377097 polymorphic TT genotype in the exon-2 region of the TBXA2R gene with an elevated pollen-induced asthma response, such as acute bronchial and tracheal constriction, in the West Bengal population, India. Table 3A clearly demonstrated that the TBXA2R rs34377097 TT genotype was significantly associated with pollen-induced asthma (OR=5.81, P=0.03). Table 3B depicted that the GT and TT genotypes bear a significant risk of asthma in children aged 5-12 years, which may be a probable cause of the higher asthma risk in children. Besides this, Kim 2004 (51) reported that children spend more time outdoors than adults, mostly in summer during the late afternoon. This long-term exposure to various outdoor air pollutants contributes significantly to the development of asthma, since their immune system and lungs are still immature. Table 3C showed that females bearing the TT genotype have a significantly higher risk of asthma. Pignataro et al. 2017 (52) reported that females have a higher risk of developing asthma than males, which is in accordance with our result. This difference in asthma risk between females and males may be due to the expression of female sex hormone receptors on mast cells (53). In Table 3D, we observed that urban and semi-urban dwelling patients bearing the TT genotype have a higher risk of asthma. Priftis et al. 2009 (54) reported that children living in urban areas have a significantly higher risk of asthma than their rural counterparts, which supports our result. This may be due to increased exposure to indoor and outdoor air pollutants in urban and semi-urban habitats. Emissions of harmful gases from motor traffic contribute significantly to the development of asthma in urban and semi-urban dwelling patients (55). To the best of our knowledge, our study, which addresses both structural and functional aspects, is the first to report an association of the TBXA2R rs34377097 polymorphism in an Indian population. It is also the first extensive in silico analysis to combine a polymorphism study with a modeling approach to assess the TBXA2R rs34377097 polymorphism clinically. Another study, conducted in a Japanese population, reported a lack of association of the rs34377097 polymorphism with bronchial asthma (21). The present study also depicted significant associations of several clinical parameters, namely FEV1, the FEV1/FVC ratio and PEFR, with pollen-induced asthma in both male and female patients across all age groups. Furthermore, in patients the risk allele T was found to be clinically correlated with a lower FEV1/FVC ratio (P=0.015).
Figure 6 demonstrated that TBXA2R rs34377097 TT genotype bearing individuals showed a higher intensity of SPT reactions, in terms of wheal diameter, than GG and GT bearing individuals, which indicates an association of atopic status with asthma risk. STRING analysis showed that TBXA2R interacts with different known and predicted Gq proteins (guanine nucleotide-binding G protein, subunits alpha, group q) (Figure 8). Among the Gq proteins, GNA11 acts as an activator of phospholipase C (PLC). PLC hydrolyses phosphoinositides into two stimulatory second messengers: inositol 1,4,5-triphosphate (IP3) and diacylglycerol (DAG) (56). IP3 enhances the cytoplasmic free calcium level and DAG activates protein kinase C (PKC). Activated PKC either directly phosphorylates the LTC4 synthase enzyme and inactivates it, or this regulation may involve another regulatory protein that is yet to be discovered. Inactivation of LTC4 synthase leads to reduced leukotriene C4 (LTC4) biosynthesis in platelets. However, in risk allele rs34377097 T-bearing individuals, the non-synonymous mutation (R60L) in the TBXA2R protein inhibits the interaction between GNA11 and TBXA2R because of the change in positive charge at the cytoplasmic domain. This is in line with Hirata et al. 1996 (57), who demonstrated that the R60L mutation significantly reduces PLC activity. This leads to inactivation of PKC, which ultimately results in activation of the LTC4 synthase enzyme. Activation of this enzyme leads to conversion of LTA4 to LTC4, which results in an acute asthmatic response including bronchoconstriction, tracheolar constriction and increased mucus secretion, as shown in Figure 9.
FIGURE 8
Protein-protein interaction network of TBXA2R.
Conclusion
The present study provided several lines of evidence to support that the TBXA2R rs34377097 polymorphism acts as a potent risk factor for asthma in the West Bengal population, India. More studies are warranted in diverse ethnic groups to validate the above findings. The present in silico study will be useful for the advancement of gene-based therapy for better management and treatment of asthma in the near future.
FIGURE 9
Model for asthmatic regulation via TBXA2R. The left panel shows wild type TBXA2R prevents asthmatic responses by converting active LTC4 synthase into an inactive one. The right panel shows how a mutation (R60L) in TBXA2R causes an asthmatic response.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: SCV003761534 (ClinVar).
Ethics statement
The studies involving human participants were reviewed and approved by Clinical Research Ethics Committee, Allergy and Asthma Research Center, West Bengal, India (CREC-AARC Ref: 62/21). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Author contributions
Collection of samples was done by IG, SS and NS. IG and AL contributed to data compilation and wrote first draft of the manuscript. PB did the structural modeling. SM diagnosed the disease and provided the samples. Statistical analysis was done by IS, HB and AL. SP conceived the study design, supervised the work, critically revised the manuscript and made the final draft. All authors contributed to the article and approved the submitted version.
"Biology"
] |
Steel Surface Defect Detection Algorithm Based on Improved YOLOv8n
The traditional detection methods for steel surface defects suffer from weak feature extraction ability, slow detection speed, and subpar detection performance. In this paper, a YOLOv8-based DDI-YOLO model is proposed for effective steel surface defect detection. First, in the Backbone network, the dilation-wise residual module (DWR) is fused with the C2f module to obtain C2f_DWR; a two-step approach is used to effectively extract multiscale contextual information, and the feature maps formed from the multiscale receptive fields are then fused to enhance the capacity for feature extraction. On this basis, a dilated reparameterization block (DRB) is added to the C2f_DWR structure to make up for C2f's limited ability to capture small-scale pattern defects during training, enhancing the training of the model. Finally, the Inner-IoU loss function is employed to improve the regression accuracy and training speed of the model. The experimental results show that, on the NEU-DET dataset, DDI-YOLO improves the mAP by 2.4%, the accuracy by 3.3%, and the FPS by 59 frames/s compared with the original YOLOv8n. The proposed model therefore offers superior mAP, accuracy, and FPS in identifying surface defects in steel.
Introduction
In recent years, new industries have been developing rapidly, and industrial information has become a very important asset. In steel production processes, the detection of surface defects is a critical step, as it affects the quality of industrial products and production safety. Steel is extensively employed in many fields, including the rapidly developing aerospace and aviation industry, the oil industry, the automobile industry, the shipbuilding industry, the large-scale equipment industry, and other important industrial areas. Surface defects in steel used in these areas have an immediate impact on the quality of the finished industrial goods, and poor finished-product quality leads to economic losses. Therefore, improving the accuracy of steel surface defect detection is an urgent need.
Current surface defect detection methods fall into three categories: deep-learning-based, traditional-technology-based, and machine vision methods. These include manual inspection and inspection using magnetic particles, eddy currents, and visible light. Manual visual inspection checks the appearance or performance defects of the product by visual or tactile means; its shortcomings are obvious, e.g., subjectivity, low efficiency, and susceptibility to external factors. Eddy-current-based inspection is a non-destructive surface defect detection method: common defects on the steel surface change the distribution of eddy currents and the characteristics of the magnetic field, so defects can be identified and localized by detecting changes in the magnetic field. Among recent deep learning approaches, one work proposed a small-sample steel plate defect detection algorithm based on lightweight YOLOv8 to bypass the difficulty of applying deep learning to the detection of small-sample defects; the LMSRNet network was designed to replace the YOLOv8 backbone, and the CBFPN and ECSA modules were developed to keep the model lightweight. Guo et al. [18] proposed an algorithm based on the improved MSFT-YOLO model, integrating the TRANS module into the network and applying data enhancement to increase the accuracy of steel defect detection. Cui Kebin et al. [19] proposed MCB-FAH-YOLOv8 to address defect detection problems such as false detections and a high miss rate, by adding an improved convolutional attention mechanism module (CBAM) and a replaceable four-head ASFF predictor, so that the network could better detect tiny and dense targets and improve accuracy at the expense of speed. The above methods are all YOLO-based approaches for steel surface defects.
Zhou Yan et al. [20] proposed a method for detecting steel using multiscale lightweight attention, using a channel attention module for multilevel feature maps to reconstruct their channel-related information and improve the detection effect.Zhu et al. [21] used an efficient swin transformer to detect and classify steel surface defects, an approach which strengthened the connection between the feature mapping channels, reduced the resolution, and solved the image information retention problem.He [22] suggested that an MFN network can effectively improve the ability to obtain information about steel surface defects, and its features are able to integrate information from various levels to detect steel surface defects.A baseline convolutional neural network (CNN) is employed to produce feature maps for subsequent stages.The proposed network (MFN) is capable of fusing multiple features into a single feature and, based on these multilevel features, the region proposal network (RPN) is employed to produce the region of interest (ROI); finally, the baseline network ResNet 34 is used to achieve 74.8% mAP.The above methods have been designed and improved based on transformer and convolutional neural networks, respectively.Although all of the above methods improve the performance of the target detection algorithm to some degree, they still fall short in terms of accuracy and some other aspects.To summarize, we propose the steel surface defect detection algorithm DDI-YOLO, and the contributions made in this essay are the following: (1) The resolution of the issue of the original C2f module of YOLOv8 having insufficient ability to extract defective features on the steel surface and being unable to extract multiscale contextual information.Therefore, the dilation-wise residual module is added to the original C2f to further enhance the capacity of the network to extract multiscale contextual information.
(2) Since C2f in YOLOv8 cannot detect small-scale patterns during training, the dilated reparam block module is used to enable the detection of defects by C2f on small-scale patterns, thus enhancing the training ability of the model.
(3) The C2f_DWR module and the C2f_DRB module are combined to form the C2f_DWR_DRB module, which blends the benefits of every module to improve the model's comprehensiveness.
(4) A faster and more accurate convergence by making use of the Inner-IoU loss function to take the place of YOLOv8's original CIoU loss function.
Model Architecture of YOLOv8
YOLOv8 is a model built by Ultraytics based on the success of previous generations of YOLO, including upgrades and new features to further enhance performance and flexibility.YOLOv8 introduces innovations such as a new backbone network, an anchorless detection header, and a new loss function to support multi-platform operations from CPUs to GPUs.According to the model depth and feature map size, YOLOV8 is divided into different versions: YOLOv8-s, YOLOv8-n, YOLOv8-m, YOLOv8-l, and YOLOv8-x, with a total of five versions.The network structure diagram of YOLOv8 can be separated into the following four major parts, namely input, backbone, neck, and head.Among them, input is the input start, which is in charge of adjusting the scaling of the supplied image to the required training size in a certain proportion, and contains operations such as scaling, adjusting the image's color tone, etc.The backbone is the module used to extract the main information and is made up of a convolution module, C2f, a module (CSPLayer_2Cnov) which replaces the C3 module and the SPPF spatial pyramid pooling module used in YOLOv5 block.The neck module is employed to improve the merging of features of different dimensions through its structure containing an FPN feature pyramid and a path aggregation structure called dual stream FPN, which has the advantage of high efficiency, speed, etc.The head part is similar to that found in the previous YOLOv6 [23] and YOLOX [24].The decoupled head is used, while the coupled head is used in the previous YOLOv3 [25], YOLOv4 [26], and YOLOv5 [27].YOLOv8 also uses three output branches, but each output branch is subdivided into two parts which are used to categorize and regress the bounding box, respectively.Considering the model's detecting capabilities, this experiment used YOLOv8n as the baseline model and improved on it.
C2f-DWR Module
The network structure of YOLOv8 contains a large number of C2f modules, the full name of which is "Cross Stage Partial Feature Fusion", which improve the performance and efficiency of the model by partially fusing features at different stages of the network.Further, the C2f module performs feature fusion at different stages of the network to fully utilize the multilayer feature representation of the network.This feature fusion method helps increase the model's detection accuracy and cut down on the number of parameters while the model's computational effectiveness is also enhanced.However, because the defects on the steel surface are relatively large no matter the shape, location, or size differences, especially the cracking-type defects, rolling oxide skin defects, and pitting surface defects, the defects have a large distribution range, the size and shape are not uniform, and the original C2f module of YOLOv8 has insufficient ability to extract the defective features on the steel surface and it cannot extract the multiscale contextual information.Therefore, in this paper, to enhance the network's capacity of extracting multiscale contextual information, a brand-new module, the C2f-DWR module, is designed.The C2f-DWR structure is characterized by the addition of the DWR module to the original C2f module, a decision which strengthens the extraction of features from the extensible sensory field at the higher level of the network.
The DWR module is fully known as a dilation-wise residual [28], which is also known as a dilation residual module.The DWR module is designed in a residual manner.With the residual, a two-step approach is used to efficiently extract multiscale contextual information and then fuse the feature maps derived from multiscale sensory fields.Specifically, the earlier method of acquiring multiscale context information in a single step is decomposed into a two-step method to reduce the acquisition difficulty.
The network structure of the DWR module, which comprises two branches, is shown in Figure 3. The two branches are implemented as follows. The first branch generates the associated residual features from the input features of the steel surface defects, referred to as region residuals. In this branch, a series of concise feature maps in the form of regions of different sizes are generated as material for the subsequent morphological filtering. This step is achieved by a common 3 × 3 convolution paired with a BN layer and a ReLU. The 3 × 3 convolution is employed for initial feature extraction, and the ReLU activation function, used instead of the more common PReLU layer, has a significant impact on producing concise, well-activated region features. The second branch performs morphological filtering of the regional features of steel surface defects using multi-rate dilated depthwise convolution, which yields the semantic residuals of the steel defects. For every channel feature, just one intended receptive field is applied to avoid redundant receptive fields as far as possible. In practical steel surface defect detection, the desired concise region feature map can be judiciously formed in the initial phase based on the extent of the second step's receptive field, for fast matching with that receptive field.
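To make the two-step design above concrete, the following is a minimal, illustrative PyTorch sketch of a DWR-style block: a region-residual 3 × 3 convolution with BN and ReLU, followed by parallel depthwise dilated convolutions whose outputs are fused and added back residually. The channel widths, dilation rates and fusion scheme are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DWRBlock(nn.Module):
    """Sketch of a dilation-wise residual (DWR) block: step 1 builds concise
    region features, step 2 applies multi-rate depthwise dilated convolutions,
    and the fused result is added to the input as a residual."""
    def __init__(self, channels, dilations=(1, 3, 5)):
        super().__init__()
        # Step 1: region-residual branch (3x3 conv + BN + ReLU).
        self.region = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Step 2: morphological filtering with multi-rate depthwise dilated convs.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d,
                      groups=channels, bias=False)
            for d in dilations
        ])
        # Fuse the multi-receptive-field feature maps back to `channels`.
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * len(dilations), channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        region = self.region(x)
        multi = torch.cat([branch(region) for branch in self.branches], dim=1)
        return x + self.fuse(multi)  # residual connection

# Example: DWRBlock(64)(torch.randn(1, 64, 80, 80)).shape -> (1, 64, 80, 80)
```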
C2f-DRB Module
The DRB module, known in full as the dilated reparam block [29], is a dilated reparameterization module that uses compact kernels and multiple layers with varying dilation rates to enhance a larger-kernel convolutional layer. Its key hyperparameters include the size (K) of the large kernel, the sizes (k) of the parallel convolutional layers, and their dilation rates (r). Figure 4 depicts the scenario with four parallel layers, with K = 9, r = (1, 2, 3, 4), and k = (5, 5, 3, 3). For greater K, we can employ more dilation layers with bigger kernel sizes or dilation rates. The kernel sizes and dilation rates of the parallel branches are flexible, the sole restriction being (k − 1)r + 1 ≤ K.
To obtain a single large-kernel convolution layer for inference from the dilated reparam block, we first merge each BN into the preceding convolution layer, convert each layer with a dilation rate r > 1 into its equivalent non-dilated form, and sum all resultant kernels with appropriate zero padding. In the dilated reparam block, the non-dilated large-kernel layer is thus enhanced by the dilated small-kernel convolution layers.
From a parametric point of view, such a dilated layer is equivalent to a non-dilated convolutional layer with a larger sparse kernel, thus allowing the entire block to be converted into a single big-kernel convolution. The use of a parallel small-kernel convolution in conjunction with the large-kernel convolution is recommended, as it facilitates the training process by capturing small-scale patterns. Their outputs are summed after two corresponding batch normalization (BN) layers. After training, the BN layers are merged into the convolution layers using structural reparameterization, so that the large-kernel convolution and the small-kernel convolutions can be equivalently combined for inference. In addition to small-scale patterns, augmenting the capacity of the big kernel to detect sparse patterns (i.e., pixels on the feature map that might have a stronger correlation with certain distant pixels than with their neighbors) can produce higher-quality features. The need to capture such patterns is perfectly addressed by the mechanism of dilated convolution. From the viewpoint of a sliding window, the input channel is scanned by a dilated convolution layer with a dilation rate of r to identify spatial patterns in which each pixel of interest is r − 1 pixels away from its neighboring pixels. As a result, we employed dilated convolutional layers in parallel with the larger kernel and summed their outputs. Since C2f in YOLOv8 lacks the ability to capture small-scale patterns during training, the dilated reparam block module was used to compensate for this shortcoming, thus enhancing the model's training capability.
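The conversion described above can be illustrated with a short, hedged sketch: a k × k kernel with dilation r is equivalent to a non-dilated ((k − 1)r + 1) × ((k − 1)r + 1) sparse kernel, and the zero-padded equivalents of the parallel branches are summed into the large kernel. BN folding is omitted for brevity and the tensor shapes are illustrative only; this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def dilated_to_dense(kernel, r):
    """Insert r-1 zeros between taps: a (k x k) kernel with dilation r is
    equivalent to a non-dilated ((k-1)*r + 1)^2 sparse kernel."""
    out_c, in_c, k, _ = kernel.shape
    size = (k - 1) * r + 1
    dense = kernel.new_zeros(out_c, in_c, size, size)
    dense[:, :, ::r, ::r] = kernel
    return dense

def merge_into_large_kernel(large_kernel, small_kernels, dilations):
    """Sum the zero-padded equivalent kernels of the parallel dilated
    branches into the non-dilated large kernel (K x K)."""
    K = large_kernel.shape[-1]
    merged = large_kernel.clone()
    for w, r in zip(small_kernels, dilations):
        dense = dilated_to_dense(w, r)
        pad = (K - dense.shape[-1]) // 2
        merged += F.pad(dense, [pad, pad, pad, pad])
    return merged

# Toy check with the K = 9, k = (5, 5, 3, 3), r = (1, 2, 3, 4) configuration:
large = torch.randn(8, 8, 9, 9)
smalls = [torch.randn(8, 8, k, k) for k in (5, 5, 3, 3)]
merged = merge_into_large_kernel(large, smalls, dilations=(1, 2, 3, 4))
print(merged.shape)  # torch.Size([8, 8, 9, 9])
```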
Inner-IoU Loss Functions
The regression loss function used in YOLOv8 is CIoU [30]. The CIoU loss function considers the entire intersection between target frames and adds a correction factor to more precisely quantify the similarity between target frames. Its advantages are that it is more robust to differently shaped target frames, it more easily captures the exact shape of the target, and it takes into account several factors such as position, shape, and direction, all of which help enhance the model's performance in complex situations. However, in practical steel surface defect detection it cannot be adapted to different detection tasks and its capacity for generalization is weak, so it is poorly suited to the detection of steel surface defects. To address these shortcomings, the Inner-IoU regression loss function was introduced in this study. The Inner-IoU loss function was proposed by Zhang Hao et al. [31] in 2023. This loss function calculates the IoU loss using an auxiliary bounding box, and a scale factor, ratio, is introduced to regulate the size of the auxiliary bounding box used to determine the loss. To compensate for the shortcomings of the CIoU loss function, Inner-IoU uses the auxiliary bounding box to compute the loss and to accelerate bounding box regression, and the scale factor ratio overcomes the weak generalization ability of existing methods. The Inner-IoU is calculated as follows. The ground truth (GT) frame and the anchor are denoted B_gt and B, respectively; the centres of the GT frame and of the inner GT frame are denoted (x_c^gt, y_c^gt), and (x_c, y_c) denote the centres of the anchor and of the inner anchor. The width and height of the GT frame are denoted by w_gt and h_gt, respectively, and w and h are the width and height of the anchor. The variable ratio corresponds to the scale factor, which usually takes values in the range [0.5, 1.5]. The inner intersection (inter) and union are computed from these scaled auxiliary boxes, and
IoU_inner = inter / union (7)
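A minimal sketch of the Inner-IoU computation as described above is given below: both the GT box and the anchor are rescaled about their centres by the factor ratio, and the IoU of the resulting auxiliary (inner) boxes is computed as inter/union. The (x1, y1, x2, y2) box format and the chosen ratio are assumptions for illustration.

```python
import torch

def inner_iou(box_gt, box_pred, ratio=0.75, eps=1e-7):
    """Inner-IoU on auxiliary boxes scaled by `ratio` about the box centres.
    Boxes are (x1, y1, x2, y2) tensors of shape (N, 4)."""
    def inner_corners(box):
        cx, cy = (box[:, 0] + box[:, 2]) / 2, (box[:, 1] + box[:, 3]) / 2
        w, h = (box[:, 2] - box[:, 0]) * ratio, (box[:, 3] - box[:, 1]) * ratio
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

    gx1, gy1, gx2, gy2 = inner_corners(box_gt)
    px1, py1, px2, py2 = inner_corners(box_pred)

    inter_w = (torch.min(gx2, px2) - torch.max(gx1, px1)).clamp(min=0)
    inter_h = (torch.min(gy2, py2) - torch.max(gy1, py1)).clamp(min=0)
    inter = inter_w * inter_h
    union = (gx2 - gx1) * (gy2 - gy1) + (px2 - px1) * (py2 - py1) - inter
    return inter / (union + eps)  # Equation (7): IoU_inner = inter / union

# A regression loss would then be: loss = 1 - inner_iou(gt_boxes, pred_boxes)
```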
Image Dataset
The NEU-DET [32] dataset was utilized in this study; it is a steel surface defect dataset from Northeastern University on which training and validation were performed. This dataset contains 1800 images in total, covering six categories of defects, with 300 images per defect type: rolled-in scale (RS), patches (Pa), crazing (Cr), scratches (Sc), pitted surfaces (Ps), and inclusions (In), as shown in Figure 5.
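As a hedged illustration of the data preparation, the sketch below performs the random 8:1:1 split into training, test and validation sets described with Figure 5 later in the paper; the directory layout and file extension are assumptions.

```python
import random
from pathlib import Path

def split_neu_det(image_dir, seed=0):
    """Randomly split the 1800 NEU-DET images into train/test/val at 8:1:1."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    n_train, n_test = int(0.8 * n), int(0.1 * n)
    return {
        "train": images[:n_train],                 # 1440 samples
        "test": images[n_train:n_train + n_test],  # 180 samples
        "val": images[n_train + n_test:],          # 180 samples
    }
```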
Experimental Environment
The operating system used for the experiments in this paper was Windows 11, the CPU was a 13th Gen Intel(R) Core(TM) i5-13500HX, the GPU was an NVIDIA GeForce RTX 4060, and the RAM was 16 GB. The deep learning training framework used was PyTorch 1.13.1. The specific parameters of the experiment were as follows: the learning rate was 0.01, the image size was 640 × 640, the number of iteration rounds (epochs) was 200, and the optimizer chosen was SGD.
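For reference, the baseline configuration above could be reproduced with the Ultralytics training API roughly as follows; the dataset YAML name is a placeholder, and the DDI-YOLO modifications (C2f_DWR_DRB, C2f_DRB, Inner-IoU) are custom additions that are not part of the stock package.

```python
from ultralytics import YOLO

# Baseline YOLOv8n trained with the reported settings:
# SGD optimizer, learning rate 0.01, 640 x 640 input, 200 epochs.
model = YOLO("yolov8n.pt")
model.train(
    data="neu_det.yaml",  # hypothetical dataset config file for NEU-DET
    epochs=200,
    imgsz=640,
    optimizer="SGD",
    lr0=0.01,
)
```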
Experimental Metrics
The evaluation metrics used in this study were the precision-recall (P-R) curve, the number of parameters, the average precision (AP) of each defect category, the mean average precision (mAP), and the frames per second (FPS), with the AP determined by the precision (P) and the recall (R). The P-R curve is formed in the coordinate system of test precision and recall, and the area enclosed by the curve gives the AP. The related calculation formulas are:
Precision = TP / (TP + FP)
where TP is the number of predicted positive samples that are actually positive, i.e., positive samples correctly identified, and FP is the number of predicted positive samples that are actually negative, i.e., negative samples misreported.
Recall = TP / (TP + FN)
where FN is the number of predicted negative samples that are actually positive, i.e., positive samples missed.
AP = ∫ P(R) dR (integrated over R from 0 to 1)
where AP is the accuracy of each detection type, computed as the area under the P-R curve.
mAP = (1/n) Σ AP_i
where mAP denotes the mean of the AP values over all categories and n is the number of categories.
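The definitions above can be illustrated with a small sketch that computes precision, recall, AP as the area under the P-R curve (simple trapezoidal integration rather than the interpolated AP used by standard evaluators), and mAP as the per-class mean; it is illustrative only.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precisions, recalls):
    """AP as the area under the P-R curve (recalls sorted in ascending order)."""
    return float(np.trapz(precisions, recalls))

def mean_average_precision(ap_per_class):
    """mAP: mean of the per-class AP values over the n defect categories."""
    return sum(ap_per_class) / len(ap_per_class)
```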
Ablation Experiments
To demonstrate the enhancement effect of each of our improvements to YOLOv8, we conducted five sets of ablation experiments separately.First, the DWR module was added to the sixth and eighth layers of the C2f structure in the backbone network, and then the DRB module was added outside the second, fourth, sixth, and eighth layers of the C2f structure.The DRB module was also added to the neck and head layers.In addition to that, a combination of DRB and DWR modules were added to the sixth and eighth layers in the C2f structure in the backbone network, and, lastly, the Inner-IoU loss function took the place of the original loss function.Each part of the modules corresponding to the above was added to the YOLOv8 model accordingly, and we report the experimental results of the various combinations of the improved modules.Table 1 shows the outcomes of our ablation studies.As it can be seen from Table 1, after replacing C2f in the backbone network with C2f_DWR, except for a slight decrease in recall and FPS, the rest of the metrics, such as mAP and precision, improved, with the mAP item experiencing a notable upgrade of 2.5 percentage points which was attributed to the excellent ability of DWR to extract the multiscale contextual information and then fuse the wildly formed by the multiscale fields into the feature maps, with a decrease in the number of parameters and computational effort.After substituting the C2f structure in the main network with the C2f_DRB structure, the DRB reparameterization module used a non-inflated small kernel and multiple inflated small kernel layers to augment a non-inflated large kernel convolutional layer, an approach which greatly reduced the number of parameters and significantly increased mAP and FPS, as well as FPS to 104.This was because DRB compensated for the inability of Cf2 to capture small-scale patterns between training sessions, thus improving the model's training and detection capabilities.Then, after substituting the loss function of YOLOv8 with the Inner-IoU loss function alone, while not affecting the size of the model parameters, there was a notable increase in accuracy, mAP, and FPS, with a 5.4% increase in accuracy, a 1.5% increase in mAP, and a 9% increase in FPS.Therefore, our improvement not only optimized the speed of the model, but also enhanced the detection of defects by the model, the average accuracy rate, and the speed of defect detection, indicating that our improvement is effective.The fourth set of experiments entailed the DWR module and the DRB module being combined to form a new C2f_DWR_DRB structure and, from Table 1, we can see that, except for the recall rate having declined, all other indexes experienced a great improvement, with the precision rate, mAP, and FPS improving by 2.1%, 4.6%, and 27%, respectively.The most obvious improvement was that experienced by the mAP and FPS indexes, as well as the quantity of calculations and the number of parameters.There was a decrease in the number of calculations and the number of parameters, showing that the C2f_DWR_DRB structure combined the advantages of the two respective modules.Finally, our DDI-YOLO experienced a decrease in recall, but all other metrics remained optimal.In summary, the improvement method of the model put forth in this research works well.
Comparative Experiments
To confirm that our suggested DDI-YOLO modeling algorithm is effective, in this paper, we used the original YOLOv8n as the baseline and tested YOLOv8n and DDI-YOLO using the NEU-DET dataset.Table 2 provides a summary of the findings.The indicator "↑1.8" indicates that for the defect type In, the mAP of DDI-YOLO was 1.8% greater than the value associated with the baseline model YOLOv8n, as well as the other models.As shown in Table 2, the accuracy was improved for the remaining four defect types, except for Cr defects and Ps defects for which the map values experienced a slight decrease.In addition, the AP values of In defects and Rs defects for the original YOLOv8n were only 84.7 and 74.6, respectively, and our model improved the mAP values of these two defects by 1.8% and 10.6%, respectively.Our introduction of the C2f_DWR_DRB module in place of the C2f module enhanced the extraction of features from the scalable receptive fields at the higher level of the network.In addition to this, our method improved the accuracy of Cr defects from 35.1% to 49.7% and of Rs defects from 52.1% to 67.4%, that is, an increase of 14.6% and 15.3%, respectively.Our approach offers a notable enhancement of the detection accuracy and precision of these defects, thus demonstrating the efficacy of our approach.To further learn about the detection capability of our proposed DDI-YOLO and baseline model YOLOv8n, the P-R curves of the two methods are shown in Figure 6a,b.The region enclosed by our proposed DDI-YOLO was to be greater than the region enclosed by YOLOv8.The NEU-DET dataset's overall mAP was 78.3%, which was a significant improvement of 2.4 percentage points over YOLOv8n.
To confirm the efficacy of the network improvement proposed in this work, the improved YOLOv8n algorithmic model was contrasted with other algorithmic models for experimentation purposes.In this paper, the classical SSD, YOLOv3-tiny, YOLOv5n, YOLOv6, YOLOv7-tiny [33], YOLOv8n (baseline), and the newer RT-DETR [34] were selected for comparative studies.These experiments were carried out utilizing an identical environment, dataset, and equipment, as shown in the graphs for the outcomes of the studies conducted to compare various algorithms.
Table 3 shows that, compared to several other algorithms, this paper's algorithm achieved the best performance in terms of mAP and FPS (78.3% and 158 frames/s), was second only to YOLOv8n and YOLOv5n in terms of recall, and performed better than YOLOv8n with respect to all other metrics. This demonstrates that our improved model is superior to the other algorithmic models in terms of detection accuracy and detection speed. Regarding the number of model parameters, it was only slightly higher than that of YOLOv5 and much smaller than that of most other algorithmic models. Figure 7 displays the detection outcomes of the different algorithms; it can be observed that the other algorithmic models produced wrong detections, missed detections, etc., while the algorithmic model proposed in this paper had the best detection effect. To sum up, the improved algorithm presented in this paper not only performed better in terms of measurement precision, accuracy, and detection speed, but also reduced the amount of computation involved, which in turn improved detection efficiency. Therefore, the improved algorithm proposed in this work has a rather high generalization value and practicality.
Conclusions
To meet the need for detecting steel surface defects in actual production, this paper proposes the defect detection algorithm DDI-YOLO. First, the backbone layer of YOLOv8 was improved with the proposed C2f_DWR_DRB module structure, which strengthened the backbone's ability to extract features. Second, in the neck structure, the C2f_DRB module was proposed to compensate for the inability of the C2f module in the neck network to capture small-scale pattern defects during training, improving the model's training ability. Finally, the Inner-IoU loss function was used to replace the CIoU loss function of the initial YOLOv8n model, leading to a more accurate model and faster training. Based on the experimental results, the following conclusions can be drawn: compared with YOLOv8n, DDI-YOLO improved the mAP by 2.4%, the accuracy by 3.3%, and the FPS by 59 frames/s, which satisfies practical needs. In summary, the proposed algorithmic model can meet the requirements of industrial detection, offers advantages in practical industrial applications, and can be applied to a wide range of industrial detection scenarios.
2.2. Based on the Improved YOLOv8 Algorithm (DDI-YOLO)
2.2.1. DDI-YOLO
As shown in Figure 1, the structure of the improved YOLOv8n algorithmic model replaces the C2f modules in layers 6 and 7 of the backbone with the C2f_DWR and C2f_DRB modules, with the combined C2f_DWR_DRB module improving the model's overall ability by incorporating the respective advantages of C2f_DWR and C2f_DRB into the backbone network. Then, in the neck layer, the C2f module in the twelfth, fifteenth, and twenty-first layers was replaced with the C2f_DRB module, with C2f_DRB compensating for the inability of C2f to detect defects in small-scale patterns and enhancing the training ability of the model. Finally, the Inner-IoU loss function was used instead of YOLOv8n's CIoU loss function to make model training faster and more accurate. Figure 2 shows the whole process of detecting defects on the steel surface with the DDI-YOLO model.
Figure 2. The overall process of DDI-YOLO model steel defect detection.
The dataset images are 200 × 200 pixels, and the 1800 images were randomly divided in an 8:1:1 ratio to create the NEU-DET training, test, and validation sets, i.e., 1440 training samples, 180 test samples, and 180 validation samples.
Figure 5. Schematic diagram of six defect samples.
Figure 6. Comparison of different loss functions.
Figure 7. Comparison of the experimental results.
Table 2. The detection performance of DDI-YOLO using the NEU-DET datasets.
"Engineering",
"Materials Science",
"Computer Science"
] |
Impact of sanctions on agricultural policy in European Union and Russia
In the spring of 2014 the USA, the EU, and some other countries and international organizations applied sanctions against Russia. This process started after the accession of Crimea to the Russian Federation. In its turn, Russia imposed retaliatory sanctions, which banned the import of certain agricultural products, raw materials and food products from the United States of America, the European Union, Canada, Australia and Norway. The authors analyze how international trade between the Russian Federation and the European Union changed during the sanctions period in comparison with the previous one, with a forecast for 2016. The article also evaluates changes in the structure of Russian food product imports from Europe. The ban affected 4.2% of total EU agri-food exports. The European Commission's measures for compensation, storage and promotion of sales-market diversification for agricultural products are presented, and new destinations and alternative markets for European agri-food exports are considered. The impact of sanctions on the food supply in Russia is monitored through changes in internal market prices. By the end of 2015, compared with 2013, the embargoed products that suffered most were frozen fish (price increase of 52%), sunflower oil (43%), apples (38%), and butter and cattle meat (29%), while the inflation rate in Russia was 11.36% and 12.91% for 2014 and 2015, respectively. Finally, the authors review the priorities of contemporary Russian agricultural policy. The main priority is a focus on internal resources of agricultural development, which means broad federal support of agriculture. As a result, Russia can become a strong supplier of agricultural and food products on the world market.
Russia responded with a one-year ban on imports of beef, pork, poultry, fish, cheeses, fruit, vegetables and dairy products from Australia, Canada, the European Union, the United States and Norway.(Adriel Kasonta, 2015) In the current context, the most important is to react in a proportionate and rapid way should the situation arise.(European agriculture commissioner Dacian Ciolos, 2014) The EU has introduced three waves of restrictive measures against Russia, which are regularly updated.While they hurt the Russian economy, the EU sanctions also have a boomerang effect, especially in conjunction with the countersanctions imposed by the Kremlin on EU food imports.(Tatia Dolidze, 2015)
European sanctions against Russia
Although the European Union was one of Russia's biggest partners in the world economy, the European restrictive measures were severe, see Table 1. The measures under Decision 2014/512/CFSP (http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014D0512) included:
-embargo on arms and related materiel;
-embargo on dual-use goods and technology, if intended for military use or for a military end-user;
-ban on imports of arms and related materiel;
-(arms and related materiel related) ban on provision of certain services;
-(dual-use goods and technology related) ban on provision of certain services;
-(deep water, Arctic and shale oil related) controls on export of certain equipment for the oil industry;
-controls on provision of certain related services;
-prohibition of procurement from Russia of arms and related materiel;
-restrictions on issuance of and trade in certain 'bonds, equity or similar financial instruments';
-prohibition to satisfy certain claims made by certain persons, entities and bodies;
-valid until 31.7.2016.
It should be noted that on the European side there were two amendments in 2014 and two in 2015 that added to the first list of sanctions (No 512), as well as two amendments to the second list of restrictive measures (No 833) in 2014 and one in 2015.
Russian sanctions against Europe
In its turn, Russia imposed retaliatory sanctions on 6 August 2014. They were special economic measures for the security of the Russian Federation. The presidential edict banned or limited the import of certain agricultural products, raw materials and food products from countries that had applied sanctions against Russia. On the basis of this edict, the Government compiled the list of banned import products from the United States of America, the European Union, Canada, Australia and Norway, see Table 2. The list was valid for one year, but on 24 June 2015 it was prolonged until 6 August 2016.
Analysis of Russian-Europe food products trade for 2014-2016
As a result of the sanctions, international trade between Russia and Europe has decreased. The dynamics of exports from Russia to the EU, imports to Russia from the EU, trade turnover, and the EU's share in total Russian trade turnover are presented in Table 3. From 2013 to 2014 and from 2014 to 2015, the share of trade with Europe in total Russian turnover fell. The first drop was only 1.4 percentage points, while the second was 1.9 points larger and reached 3.3 points. This tendency could lead to a further fall of 3.3 points, down to 41.5%, by the end of 2016 (according to the extrapolation method). This would mean that the gap reaches about 8 percentage points over the three years from 2013, see Figure 1. There is no doubt that this process is a consequence of the sanctions. The decrease of Russian imports by product group was approximately:
-milk products: about -34.1%,
-meat- and fish-containing food products: about -33.8%,
-fruits and nuts: about -33.1%,
-vegetables: about -32.5%,
-other food products containing milk or vegetable fat: about -7.8%,
-milk- and flour-containing food products: about +10%.
From products group to group decrease became different.Last researching group already had a real growth in 10%.This changing reflects only how many products items were banned in the whole products group.
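As a back-of-the-envelope illustration of the extrapolation mentioned in the paragraph above, the sketch below uses approximate shares reconstructed from the percentage-point drops quoted in the text (about 49.5% in 2013, 48.1% in 2014 and 44.8% in 2015) and assumes the last observed drop repeats; the exact values are assumptions for illustration.

```python
# Approximate EU share of total Russian trade turnover, % (reconstructed from
# the drops quoted above: -1.4 pp in 2014, -3.3 pp in 2015).
shares = {2013: 49.5, 2014: 48.1, 2015: 44.8}

# Naive extrapolation: assume the most recent year-on-year drop repeats.
last_drop = shares[2015] - shares[2014]   # -3.3 pp
forecast_2016 = shares[2015] + last_drop  # about 41.5 %
total_gap = shares[2013] - forecast_2016  # about 8 pp since 2013
print(forecast_2016, total_gap)
```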
In comparing with 2015, reductions, mentioned above, seem not so enormous, see table 4.
Even considering data for only 9 months of the year, the decline is dramatic. After the second wave of sanctions the situation changed sharply: the share of product groups that include banned products fell rapidly in total Russian imports from Europe.
The main descending is accounted for: -fruits and nuts products group with share in total import 0,0005% for 9 months of 2015 in comparing with 0,9% for 2014; -fish and crustaceans, mollusks and other water invertebrates products group with share in total import 0,0009% for 9 months of 2015 in comparing with 0,1% for 2014; -meat products group with share in total import 0,0122% for 9 months of 2015 in comparing with 0,3% for 2014; -vegetables, edible roots and tuber crops products group with share in total import 0,0798% for 9 months of 2015 in comparing with 0,5% for 2014.
Impact of sanctions on EU market
In 2013, Russia was the second most important destination for EU agri-food exports, after the USA, representing a total value of about 11.8 billion euro (10% of all EU agri-food exports). The agri-food products covered by the Russian ban represented a value of 5.1 billion euro in 2013 exports (43% of EU agri-food exports to Russia). The ban affected 4.2% of total EU agri-food exports. These trade restrictions put serious pressure on the EU agriculture and food sector due to the temporary loss of this market. It should be noted that this happened in the middle of the harvest season.
For EU it was necessary to ensure 3 main targets: -to maintain stability of the internal market via effective and properly calibrated market crisis management at EU level (reducing overall supply of exposed products on the European market as and when price pressures become too great) -to address negative impacts of the restrictions on some vulnerable EU sector by compensation.
-to strengthen the resilience of our agricultural and food sector and encourage reorientation towards new markets and opportunities.
Since August, 2014 the European Commission has introduced a number of measures for the compensation, storage and promotion of sales market diversification for EU agricultural products: Market measures (market withdrawals especially for free distribution and compensation for non-harvesting and green harvesting): • 11.08.201442:Exceptional measures to assist peach and nectarine producers (EUR 32.7 million).
• 18.08.201443:Emergency market measures to support EU producers of perishable fruit & vegetables (up to EUR 125 million to fund withdrawals for free distribution or other destinations), green harvesting and non-harvesting of perishable fruit and vegetable most immediately impacted by the Russian measures, with a ceiling of EUR 82 million for apples & pears and EUR 43 million for the other fruit & vegetables).
Suspended on 10.09.2014, due to application problems and abuse (e.g.Poland).It is to be replaced by a more targeted and coordinated scheme.
• The European Commission has currently announced it will extend support measures for the dairy and fruit and vegetable sectors hit by Russia's ban on food imports from Europe, which has been in place for a year.Aid for the dairy sector will be prolonged until 29 February 2016 while support for fruit and vegetable producers will be continued until 30 June 2016, the Commission said on Thursday (30 July).Aid for farmers will be allocated to EU countries that have exported significant quantities to Russia over the past three years.
Over the entire period since the embargo started, the EU agri-food sector has managed to compensate the losses in export sales to Russia by increasing exports to other main destinations and alternative markets.Major gains in export values have been achieved in the US (+16%), China (+33%), Switzerland (+5%) and in a number of key Asian markets, such as Hong Kong (19%) and the Republic of Korea (+29%).European exporters have increased their exports also to certain Arab countries: Saudi Arabia (+10%), United Arab Emirates (+14%) and Egypt (+26%).
Analysing EU exports to third countries of the sectors affected by the embargo, in the period August 2014-July 2015, export values of bovine and poultry products increased by 23% and 5% respectively while the value of pig meat exports was in line with the previous year.Concerning dairy products, the value of butter exports fully recovered from the initial difficulties and achieved +3 % (compared to the 12 months period a year ago) due to an increase on Middle East markets.For cheese and milk powders, however, export values still lag behind: -14%, -10%, and -24% respectively.Also, concerning fruit and vegetables, the value of exports registered is still around 12% lower compared to the period one year before.
The EU countries potentially worst affected are the Baltic States, Finland and Poland Germany and the Netherlands (in absolute values).However, the share of agriculture in EU GDP (1.7 %) and in EU exports (6.6 %) is relatively low.
The worst affected countries are:
• Lithuania (vegetables, fruit, dairy products; EUR 927 million affected in terms of absolute value in 2013);
• Latvia (fruit and vegetables, dairy products);
• Cyprus (fruit and fish).
In the short term, economic damage was most severe in the perishable products sector, as the harvest was ongoing for many fruits and vegetables when the Russian food ban entered into force.
Impact of sanctions on food products supply and agricultural policy in Russia
The impact of the sanctions on the food supply in Russia can be monitored through price changes. The average growth of prices of banned imported food products on the internal Russian market was 116% for 2014 compared with 2013, 114% for 2015 compared with 2014, and 132% for 2015 compared with 2013, while the inflation rate in Russia was 11.36% and 12.91% for 2014 and 2015, respectively.
In the first year of sanctions (2014), prices of pork and chicken meat rose by 27% on the internal Russian market, the largest price increases; prices of fish, apples and cheese rose by 22%, 21% and 19%. After the second wave of sanctions, the items with the largest price increases were sunflower oil (up 38%), frozen fish (25%), salted, marinated and smoked fish (21%) and cattle meat (16%), see Table 5. Over the whole research period, at the end of 2015 compared with 2013, the embargoed products that suffered most were frozen fish (price increase of 52%), sunflower oil (43%), salted, marinated and smoked fish (40%), apples (38%), and butter and cattle meat (29%).
With the aim to cover banned directions Russia starts or expands collaboration with other partners of the world.European meet on the Russian market is changing on it from Brazil, Belorussia, Paraguay, Argentina and Iran etc.European fruits -on them from Ecuador, Pakistan, Morocco, China etc.European vegetables -on them from Egypt, Republic of South Africa, Israel, Azerbaijan etc.
On the other hand, there has been a large change in the internal agricultural policy of the Russian Federation, and the policy of import substitution has begun to expand. On 19 December 2014 the Government of the Russian Federation increased by 39% the federal budget expenditures on the agricultural development programme up to 2020; the sum was raised from 1.5 trillion rubles in 2012 to 2.1 trillion rubles in 2014.
Before sanctions period main objectives of Russian agricultural development were defined in the Food Security Doctrine of the Russian Federation (2010).Share of Russian agricultural raw materials and food products in total volume of these products on internal market were: [10] -grain -not less than 95%, -sugar -not less than 80%, -vegetable oil -not less than 80%, -meat and meat products -not less than 85%, -milk and milk products -not less than 90%, -fish products -not less than 80%, -potato -not less than 95%, -salt -not less than 85%.
However, in his annual presidential address to the Federal Assembly of the Russian Federation on 3 December 2015, V.V. Putin set the goal of fully supplying the internal market with Russian agricultural raw materials and food products by 2020 and of moving into external markets with high-quality natural products.
He said, that the Government has to support successful farmers and agricultural companies, first of all.All non-cultivated agricultural lands will be withdrawn and sold at auctions.Leading research institutes, the Russian Academy of Science must create Russian technologies of cultivating, storage and treatment of agricultural products.
This focus on internal resources of agricultural development means broad federal support of agriculture.As a result Russia will become really strong supplier of agricultural and food products on the world market in the nearest future.
Conclusion.
There is a large regrouping of food product trade flows in the world as a result of the mutual European-Russian sanctions of 2014-2016. Both sides must spend considerable resources to cover the lack of demand for agricultural production in the case of Europe, and the lack of supply of agricultural raw materials and food products in the case of Russia.
Almost two years on, it is clear that the EU agri-food sector has been remarkably resilient.In most regions, most of the affected sectors have been able to find alternative markets, either within the EU or beyond.Whereas Russia accounted for 10% of EU agri-food exports in 2013 (and the products banned amounted to 4 %), the value of overall exports have increased by 5 % from August 2014 to May 2015, compared to the same period of the previous year.
For the Russian side, the sanctions led to price increases. The average growth of prices of banned imported food products on the internal Russian market was 132% for 2015 compared with 2013, while the inflation rate in Russia was 11.36% and 12.91% for 2014 and 2015, respectively.
Besides prompting the search for new sources of agricultural supply, the embargo on imports from Europe became one of the driving forces behind the intensification of governmental support for agriculture.
Council Regulation (EU) No 833/2014 (OJ L 229, 31.7.2014, p. 1; corrigendum OJ L 246, 21.8.2014, p. 59), http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014R0833 and http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014R0833R(02), covers: an embargo on dual-use goods and technology, if intended for military use or for a military end-user; a ban on the provision of certain services related to arms and related materiel; a ban on the provision of certain services related to dual-use goods and technology; controls on the export of certain items for the oil industry (deep-water, Arctic and shale oil) and on the provision of certain related services; restrictions on the issuance of and trade in certain securities and money-market instruments; and a prohibition on satisfying certain claims made by certain persons, entities and bodies. Source: EU restrictive measures in force (updated 15.1.2016).
Figure 1: Dynamics of the EU's share in total Russian trade turnover, 2013-2016, with a forecast for 2016.
Figure 2: EU goods banned by the Russian Federation, EU-28 (million EUR).
Table 3: International trade of the Russian Federation with the European Union, 2012-2015, in million USD.
Source: Federal Customs Service of Russia. Trade turnover between Russia and Europe fell by 9% by the end of 2014, after the first wave of sanctions, and by 38% by the end of 2015, after the second. Over the same period, Russian exports to the European countries were 8% lower in 2014 and 36% lower in 2015, while Russian imports from the EU fell by 12% in 2014 and 41% in 2015.
Table 4: Russian imports of the main food products from the European Union, 2013-2015, by CU code.
Source: Federal Customs Service of Russia. * Data for 9 months. | 3,865.4 | 2016-06-24T00:00:00.000 | [
"Political Science",
"Agricultural and Food Sciences",
"Economics"
] |
'By faith alone' (undivided loyalty) in light of change agency theory:
This research is part of the research project, 'Biblical Theology and Hermeneutics', directed by Prof. Dr Andries van Aarde, professor emeritus and senior fellow in the Unit for Advancement of Scholarship at the Faculty of Theology of the University of Pretoria, South Africa.
the fields of economics and politics (see Caldwell 2003¹; Sewell 1992). A change agent relates information from a change agency to a specific group with the aim of mediating change. The change agent facilitates transformation as directed by the agency; the agent is an intermediary between the two entities or structures. Change agency pertains to 'structural change'. Presuppositions with regard to changing structures are a 'multiplicity of structures',² the 'transposability³ of schemas' and the 'intersection of structures' (Sewell 1992:16). According to Anthony Giddens' 'theory of structuration', structure is not a stagnant entity (cf. Sewell 1992:4); it entails a process. Analogous to de Saussure's ([1916] 1983) distinction between langue and parole, Giddens (1976:118-122) sees structure as consisting of abstract rules. It has a 'virtual' existence, like langue, and manifests in the enactment of these rules in space and time. Like parole, which refers to the production of actual sentences in the practice of speech, the enactment of the rules implies a constant transformation of structure. 'Structure', with its 'virtual' existence, does not place constraints on human agency; it enables human agents to transform the 'very structures that gave them the capacity to act' (Giddens 1976:161; Sewell 1992:4). Sewell (1992:8) prefers the term 'schemas' to 'rules'. Structures are made up of schemas that are 'actualized or put into practice in a range of different circumstances' other than the 'situation in which they are first learned or most conventionally applied' (Sewell 1992:8). For Bourdieu ([1972] 1977) 'habitus' is a durable and transposable structure: '[Habitus is] a system of lasting transposable dispositions which, integrating past experiences, functions at every moment as a matrix of perceptions, appreciations, and actions and makes possible the achievement of infinitely diversified tasks, thanks to analogical transfers of schemes permitting the solution of similarly shaped problems' (p. 83; emphasis original). The application of change agency theory to biblical exegesis provides a fresh view. Historical and literary interpretations can be enhanced by this focus on the sociological dynamics of the transmission of the Sache Jesu (cause of Jesus). Traditionally, this transmission was explained in terms of the stratification of literary strata and the historical criterion of multiple attestation (cf. Duling 2003:520-523). I would like to honour Malina and Pilch (2013:260-261) for their new direction in historical Jesus research, which complements historical-critical Jesus studies: '[T]here is a very valuable way of dating the sequence of Jesus-groups. This method might be called a generational approach in which … persons are prominent, not numbers. A generation is marked by new noncontemporary people in a Jesus-group chain. Generations here are not years but chains of people. For Jesus
1. Caldwell (2003:131-212) discusses organisational change from four perspectives: leadership, management, consultancy and team models.
2.Regarding 'multiplicity of structures', Sewell (1992) remarks: Structures tend to vary significantly between different institutional spheres, so that kinship structures will have different logics and dynamics than those possessed by religious structures, and so on.There is, moreover, important variation even within a given sphere.For example, the structures that shape and constrain religion in Christian societies included authoritarian, prophetic, ritual and theoretical modes.These may sometimes operate in harmony, but they can also lead to sharply conflicting claims and empowerments (pp. 16-17).
3.Anthony Giddens uses the expression 'generalisable' and Pierre Bourdieu, against the background of his notion 'habitus', the expression 'transposable'.
movement groups, we obviously begin with Jesus and those about him (Peter and the Twelve; their families; their followers); they mark a first generation. A second generation includes Paul and his co-workers, who followed upon the first Jesus generation but did not experience Jesus. There is nothing in the Pauline writings about what Jesus said and did. This second generation likewise included the other non-first-generation persons mentioned in Paul's letters such as Timothy, Titus, and others. These were second-generation Jesus-group members, or first-generation Pauline Jesus-group members.
From a historical-critical perspective, Bultmann ([1928] 1969:220-246) explained it in terms of discontinuity of content (inhaltliche Diskontinuität) and continuity of intent (sachliche Relation), by which he meant that, though Jesus-followers conceptualised his words differently, they remained true to his intent (cf. Bultmann 1965:191; Marxsen 1976:45-62). Since Ritschl ([1882] 1999:154-171), it has been generally accepted in Jesus studies that Jesus of Nazareth, ethnically an Israelite, crossed a variety of boundaries without being 'un-Jewish'.⁴ The Pauline expression 'by faith alone' is not an abstract truism. It has concrete historical roots (see Jüngel [1990] 1995:82-119) in Jesus' crossing of gender, ethnic and cultural boundaries. How this became a tradition which was conveyed to following generations is explained in this study by means of change agency theory. In the early Jesus groups, 'apostles', who were sent by the God of Israel as Jesus was, acted as 'change agents'. This was known as the 'apostolic tradition'. In their social-science commentary on the deutero-Pauline letters, Malina and Pilch (2013) explained the relationship between the Pauline and deutero-Pauline traditions. I aim to trace this tradition back to Jesus, who lived his entire life 'by faith alone'. No convention was needed. To live 'by faith alone' requires radical transformation, that is: change.
In ancient Greek literature, the πίστις-word group forms part of the semantic domain 'to trust' and 'to rely on' (Louw & Nida 1988:376-379). The object of 'confidence', according to Louw and Nida (1988:377), has the qualities of 'being trustworthy' and 'dependable'. Crook (2004:167-177) convincingly argues that the term 'loyalty', rather than 'faith', should be seen as a better equivalent for the 'πίστις-root words'. Should the denotation 'faith' then be abandoned from our religious vocabulary? This is not really necessary if the connotation 'faithfulness' is semantically and pragmatically embedded in Greco-Roman cultural codes of an honour-shame and patronage-clientage conceptualisation. 'Faith' would then be understood in a non-doctrinal way as 'fidelity' and 'faith alone' as 'undivided loyalty'.
As in Paul's letter to the Galatians, πίστις (Gal 3:2, 5) is also a key concept in the letter to the Romans.In Romans 1:5, Paul's task as God's client 7 among the non-Israelites 8 (ἐν πᾶσιν τοῖς ἔθνεσιν) is to bring them (τὰ ἔθνη) to rely on a πίστις similar to that of the Christ-followers (εἰς ὑπακοὴν πίστεως). 9Romans 1:5 5. Bruce (1952:71-72) points to the difference between the free translation of Habakkuk 2:3-4 (LXX -Codex Alexandrinus) in Hebrews 11:37-38 and Paul's use in Romans and Galatians of other LXX versions where 'faithfulness' pertains to God: 'The other LXX authorities read "But the righteous one shall live by faith in me" (ek pisteōs mou, taking mou as objective genitive)' (Bruce 1952:72, note 18).
6. Bruce ([1958] 1959:16) puts it as follows: 'It is plain, too, from the Habakkuk commentary that the Teacher was not only a spiritual leader but a figure of eschatological significance.Acceptance of his teaching, loyally keeping to the path which he marked out for his followers -this was the way to eternal life.So the wellknown affirmation of Habakkuk ii 4b, "the righteous shall live by his faith", is explained thus: "Its interpretation concerns all the doers of the law in the house of Judah, whom God will save from the house of judgment because of their labour ('āmāl) and their faith in (or faithfulness to) the Teacher of Righteousness"' (1Q p Hab. viii 1-3).
For Paul, 'faith in God', that is fidelity and loyalty to God, means trust in God as 'Totenerwecker' (Hahn [2002] 2005:268). In Romans 4:24, faith is trusting in God who resurrected Jesus from the dead (τοῖς πιστεύουσιν ἐπὶ τὸν ἐγείραντα Ἰησοῦν τὸν Κύριον ἡμῶν ἐκ νεκρῶν). Believers are inspired to accept this creatio ex nihilo, an act of re-creation, as a gift from God (cf. Rom 12:6, κατὰ τὴν χάριν τὴν δοθεῖσαν ἡμῖν). According to Crook (2005), seen from the perspective of patronage and clientage, it 'might be the case that Paul changed brokers from Moses (or perhaps from the priestly cult) to Jesus -though this would be difficult to prove …. The subsequent changes in Paul's behaviour (1 Cor 15:8; Gal 1:13-15; 22-24; Phil 3:8) were the natural extension of having a new broker as well as discovering that God was to be pleased and honoured in a new manner' (p. 18). Faith requires for Paul a 'renewal of mind' (Rom 12:2). On the one hand it is a 'remaking'/'metamorphosis' (μεταμορφοῦσθε) of one's inner convictions (τῇ ἀνακαινώσει τοῦ νοός), and on the other hand it is about a life in faith (in accordance with the 'measure of faith' God has given you [ἑκάστῳ ὡς ὁ Θεὸς ἐμέρισεν μέτρον πίστεως -Rom 12:3]) as a 'member of the body of Christ' (ἑνὶ σώματι -Rom 12:4). Those who trust God believe that they have been resurrected to live an ethos in spite of impending physical death. Faith (πίστις) is a relationship based on trust (Hahn [2002] 2005:269) and a gift from God.
10. Brown ([2012]:62-78) summarises the exegetical opinions on the meaning of 'the obedience of faith' as follows: Paul uses the phrase "the obedience of faith" to frame his letter (1:5, 16:26), forming an inclusio, which gives the phrase central importance for understanding Paul's rhetorical ends.There is much debate surrounding the genitival relationship of this phrase ('unto the obedience of faith').
There are three proposals that carry the most weight.First, faith may be understood as an adjectival genitive meaning 'faith's obedience' or 'the obedience characterized by faith'.Second, and the majority view, is that this is a genitive of apposition, 'the obedience which consists in faith'.A third option is a subjective genitive, 'obedience'.
Though Jesus' understanding of death as such differed from Paul's understanding of Jesus' death, his words and deeds were compatible with how Paul formulated it.For Jesus, undivided loyalty and obedience to the will of God can result in the tragedy that one could be killed.At least, such an absolute compliance with what God demands compels selfdenial, however, for the benefit of others.With retrospection, Paul interpreted Jesus' death as a deed of redemption; a deed 'for us' (pro nobis).By inference, one can say Jesus' understanding of his followers' deaths is compatible with Paul's understanding of the death of Jesus' followers.The life of Jesus was condensed in his death.Mark's understanding of Jesus' directive that his followers take up their cross and die in order to gain life (Mk 8:34-37) is the core of the message that God creates new life when one dies to this aiōn (Rom 6:2-11).According to Jesus, authentic life was not to be found in pleasing people, but in doing God's will (Mk 8:33).It cannot be found in conventions, culture, ethnicity or anything from this world.Jüngel (1962:266) describes Romans 1:17 as a summary of Jesus' entire message.The expression πίστις Χριστοῦ Ἰησοῦ (Gal 2:16; 3:2; Phil 3:9; Rom 3:22) -interpreting the genitive of Χριστοῦ as 'subjective' and not 'objective', that is the faithfulness of Christ Jesus and not faith in Christ Jesus -refers to the Jesus followers' understanding of Jesus' 'willingness to die, to follow on God's plan for him despite the cost' (Crook 2004:175).Seen that the πίστ-and ἄπιστwords are framed 'in categories of patronage and clientage', Crook (2004:175) interprets these semantic domains as, respectively, 'loyalty' (πίστις) and 'disloyalty' (ἄπιστος) which 'were the backbone of changes or continuity' in 'relationships between ancient Mediterranean people and their gods and/ or philosophical leaders'.
In his book, Paul & his world: Interpreting the New Testament in its context, Koester (2007:12-14) An active 'life in love' is, according to Koester (2007:14), more important to Paul than either passively: receiving the divine gift of righteousness as 'absolution' in the juridical sense of the word (according to Gal 5:6, 'faith works through love' [cf. Bruce [1963] 1985:103], or a self-righteous obedience to legalistic cultural and religious conventions. 'Faith alone' (i.e.undivided loyalty) is not the central theme of Paul's theology, but rather the dialectic 'flesh' (σάρξ) and 'spirit' (πνεῦμα) (cf. Bruce [1963Bruce [ ] 1985:38-39):38-39).'Faith alone' should be understood as a tenet (δόγμα -in the ancient Greek sense of the word) that is based on Paul's conviction that the death of Jesus means dying to the law -be it tradition, cultural or cultic -and that the resurrection is the beginning of the 'new person' (2 Cor 5:17).For me it is not a matter of either ... or, but rather both ... and.To transcend life κατὰ σάρκα and attain life κατὰ πνεῦμα is living ἐκ πίστεως.Furthermore, 'faith alone' is not limited to Pauline literature.It can also be found in the gospel tradition and beyond.
According to Bruce (1952), it appears in Mark, Hebrews and James.
Resurrection faith and its ethical implications featured prominently among 1st-century Christ-followers influenced by Paul.The crux of this faith is conveyed in the expressions ἐκ πίστεως and κατὰ πνεῦμα.How this ethos survived can be explained by means of change agency theory.In their commentary on deutero-Pauline literature, Pilch and Malina (2013:9) Romans 12 does not provide many clues as to the interdependence of the ethos of Jesus (as, e.g.expressed in the Sermon on the Mount and based on the Sayings Source Q) and Paul.Jewett (2007:779) sees Romans 12:9-21 as a 'discourse on "genuine love"', which represents a 'transformative framework that is universal in scope, but local in operation'.From a traditional historical critical perspective, the relationship between Jesus and Paul has been understood primarily in historical and literary terms.Exegetical traditions refer to it as a probable (Allison 1982:11-12;Dunn 1988Dunn :745, 1990:201;:201;Stuhlmacher 1983:240-250;Thompson 1991:96-105) or an improbable relationship (see Neirynck 1986:265-321;Walter 1985:501-502).Wenham (1995:251) speculates that Paul built on an unknown Jesus tradition and that Luke, who was influenced by Paul, contributed further to the Pauline tradition.Wilson (1991:165-171) and Zerbe (1992:207-208), on the other hand, 12.See, for example, the expression πρὸς τὸ συμφέρον in 1 Corinthians 12:7.
13.This analogy with an 'apartheid social structure' is also drawn by Bruce Where 'faith alone' (undivided loyalty) was not central to the thought of some early Jesus-groups, it led to two thought constructs.One is the revival of an apocalyptic retribution mentality regarding the out-group.Post-Pauline 2 Thessalonians is an example of this (cf.Van Aarde 1996:237-266).The other is the re-institutionalisation of the Jesus-group in terms of patriarchal and hierarchical 'in house' systems.In certain post-Pauline writings, both resurrection faith (the ethos of 'faith alone'/'undivided loyalty') and the opposing patriarchal household codes are to be found, for example, Colossians 3:18-4:1; Ephesians 5:22-6:9 and 1 Peter 2:18-3:7.In the Pastoral Letters, an ἐκ πίστεως motif is not prominent and household codes (1 Tim 2:8-15; 6:1-2; Tit 2:1-10) are not relativised by an emphasis on the value of 'faith alone'.Theissen (1982:107-108) describes this as 'moderate conservatism', which takes 'social differences' for granted, but 'ameliorates them through an obligation of respect and love, an obligation imposed upon those who are socially stronger'.Building on the work of Troeltsch (1912:67-83), he describes this departure from Paul (and Jesus) as 'lovepatriarchalism' (Liebespatriarchalismus) or 'love-communism' (Liebeskommunismus) (Theissen 1995:689-711). 14In his 1999 book, A theory of primitive Christian religion, Theissen (1999:98-99) calls this a 'reversal of power relations'. 15Traces of both mentalities (love-patriarchalism and an apocalyptic desire for vengeance) can be seen in interpolations into authentic Pauline letters, such as 1 Corinthians 14:33b-38 and Romans 5:6-7 16 (cf.Dewey et al. 2010:112, 253 resp).
14.In the German original of Theissen's 1982book, Theissen ([1979] 1989:102 note 67) cites Troeltsch (1912) I will now explain Colossians from the perspective of change agency theory.The discussion will conclude with an illustration from Colossians and Ephesians.Ephesians used Colossians as an important source. 17
Change is vital
A change agent creates awareness that there are alternatives to the problems of the community.In Galatians 1:7 and 1 Corinthians 10, Paul names the opposing groups directly and expresses his displeasure at their efforts to sabotage his work.
In Colossians, on the other hand, the opposing points of view are identified without reference to specific people.Colossians also does not refer to the financial needs of the Jesus-group to whom the letter is addressed. 18The author as change agent makes the group aware of what their problems are.The reason for his writing of the letter could have been their anxiety.
Exchange of information
The change agent assures the group that they, as heirs, will receive God's liberation.This includes liberation from the power of darkness and initiation into the kingdom of the Son of God (Col 1:12-14).What is said about the Messiah serves to strengthen the self-image of the group.In light of the values of the Messiah, cultural and traditional demands have 17.More than a third of the 155 verses in Ephesians have parallels in Colossians, both regarding syntactical order and semantic content.Collins (1988) refers to the relationship between these two post-Pauline letters as follows: The form in which Ephesians has been handed down is the nub of the problematic relationship between Ephesians and Colossians.In many respects, that relationship can be considered as a minor synoptic problem.The comparable structures, content, and vocabulary of the two epistles point to some sort of literary dependence of one upon the other (p.143).However (Lohse [1968] 1971), although: '[I]n certain passages Ephesians reads like the first commentary on Colossians, though admittedly it does more than explicate the thoughts of Colossians: it also expands them into concepts of its own' (p.4).
Due to the limited scope of this contribution the phases of the transformation that develops from Colossians through Ephesians cannot be discussed in detail.become irrelevant .This includes social prescriptions regarding 'food, drink, and the observance of certain days (Col.2.16)' (DeMaris 1994:57) as well as cultic purity (Col 2:20-23) -however, 'the drive for purity goes beyond the cultic and ritual practices typical of Judaism' (DeMaris 1994:58).
Motivation for change
The hymn about the Messiah (Col 1:15-20) motivates the group to remain strong in hope and trust.Lohse ([1968Lohse ([ ] 1971: 103-105, 182;: 103-105, 182;cf. Collins 1988:185) points out similarities between the Jesus-group in Colossae and Rome (Col 1-2 and 3-4; Rom 1-11 and 12-15).However, the letters were intended for different Jesus-groups which consisted of people from different generations (cf.Duling 2003:263-264).The theme of death and resurrection grounded in the baptism of Christ features in both Romans (6:1-11) and Colossians (2:11-13). 19 The change agent's conceptualisation of time (eschatology) is based on his view of baptism as resurrection with Christ (Col 2:12; 3:1), to the extent that the person who has been baptised is 'already in heaven' or at least in the sphere of liberation, the church, the body of Christ.In such a 'Christologicalcosmological ecclesiology', the exalted Christ is the head of the body (Col 1:18; 2:10, 19) who brings all believers together in fellowship (Col 1:18, 24).
Identification of matters of concern
The letter to the Colossians gives clues as to the readership, namely that the intended audience were non-Judeans (Col 1:21, 27; 2:13; 3:5, 7).The author did not know them personally (Col 2:1), since this 'house church' was not founded by him.
It was founded by Epaphras, a non-Israelite (Col 4:12), who had a different understanding of the gospel than the author of Colossians (1:23).This he conveyed to the congregation (Col 1:7).On the one hand the gospel message did bear fruit among them (Col 1:3-8), but there were also false teachers who were misleading them (Col 2:4,8,(16)(17)(18).The believers accepted the teachings of the false prophets and adhered to their prescriptions (Col 2:20-23).
Colossians, especially in Chapter 3, emphasises that following the Messiah means that their lives have changed radically.In Christ they have died to the world and what is of the world.The change agent author stresses that creation centres on the Messiah and reaches its highpoint in reconciliation.He is the head over everything because he existed before anything.He brought peace through his blood on the cross.Twice (Col 2:12 and 3:1) the change agent points out that believers have been resurrected with Christ and therefore their lives should reflect the values of the Jesus-group.
Initiating change
Characteristic of the change agent is his petition that believers should adapt to changed circumstances -the Messiah's 19.Other elements that are central to Pauline theology, such as the second coming of the Lord, the resurrection of the body and the final judgement are absent from Colossians.
presence in the world (Col 1:10; 2: 20).He emphasises that freedom is characteristic of the new lifestyle.This contrasts strongly with an ascetic life regulated by demands that are foreign to the values of the Jesus-group (Col 2:20-21).Collins (1988) puts it as follows: Now, there is a new situation, for Christians have been delivered from the dominion of darkness and share in the inheritance of the saints of light (Col 1:12-13).Light characterizes the domain in which God has placed these Christians because of the mediatorial action of Christ.(p. 198) The change agent in Colossians goes further than Paul in seeing the resurrection of the believer with Christ as already fulfilled.In Colossians 2:12-13, it is said with reference to baptism.In Colossians 3:1, it is repeated and qualified with 'hidden with Christ in God'.With this the author emphasises the reality of the new life as well as its mystery and incomparability.The new life is not just a given or a static characteristic of believers.Because of its mystery and incomparability, they should 'seek the things that are above' and not set their minds on 'things that are on earth' (Col 3:1-2).A further consequence is that believers' lives should correspond with this new reality.The content of Colossians 3:1-4 forms the basis for Colossians 3:5-17 in which the implications of being resurrected with Christ are worked out.
In Colossians 3:5-17, the old and new life are contrasted in a concrete way.This section consists of two parts.The first (Col 3:5-11) is about how the old life should be left behind and the second (Col 3:12-17) is about how the new life should be lived.The life of the Jesus-group is hidden with Christ in God (Col 3:3).There is no more room for earlier practices, specifically behaviour that harms other believers and threatens the unity of the Jesus-group.).The record of debt that stood against them with its legal demands has been cancelled (Col 2:13-14).
Stabilisation of continuity
One can argue that the household code in Colossians is but the stabilising rhetoric of the change agent. Except for the words 'the Lord', 'Christ' and 'the Lord in heaven', nothing recalls the values of the Jesus-group. There are, however, two differences from the common Hellenistic household codes of the time. The first is that the Hellenistic household codes refer to deities, country, family and friends. The second is that Hellenistic household codes addressed only the husband, father or owner as an adult and free person. Women, children and slaves were not regarded as such.
In Philo (De Decalogo 165-167) and Josephus (Contra Apionem 199-210), they are addressed as 'couples' in household codes along with the husband, father and owner for the first time.
Changes regarding the social value and dignity of women, children and slaves as could be seen in the Jesus-groups and were articulated in documents such as Colossians only became more general in the social world much later.
These later social changes can most certainly be attributed to the influence of the historical Jesus and his followers regarding how the lowly and marginalised should be treated.
Notably, there are no household codes in authentic Pauline letters.The change agent author of Colossians aims to convince the 'house church' that everything has changed.They were resurrected with Christ and should set their minds on things that are above, not on things that are on earth (Col 3:1-2).The old life has passed and the new life is hidden with the Messiah in God (3: 3).Their life now reflects the Messiah as an image of God.In this image they have been renewed (Col 3:10).The ethos that follows is that all people are equal in Christ and all forms of discrimination are to be removed (Col 3:11).Appeals for compassion (Col 3:12), forgiveness and love (Col 3:13-14) are characteristic of the Jesus-tradition.
Terminating relationship
The change agent in Colossians is, however, also the reason why this particular Jesus-group moves away from the values of the Messiah.This is the result of his uncritical tendency towards institutionalisation, which can be seen in his use of the Hellenistic household code (Col 3:18-4:1).Such household codes are not to be found in authentic Pauline writings.One could then argue that the change agent conformed to the world around him for the sake of peace and harmony, trying to assuage conflict in this particular Jesus-group.It is notable that the oppositional pair, 'man or woman', is absent from Colossians 3:11, which is clearly based on Galatians 3:28 -Greek or Judean circumcised or not circumcised, and the like.
The change agent in Colossians seems to have adapted Galatians so that its equal treatment of man and woman would not contradict the household codes, which required the submission of women.
Should this observation be accurate, it constitutes a post-Pauline reaction against the inclusivity of Paul, which ended up as 'love-patriarchalism'.The dominant position was yet again allocated to males.This is an anomaly with the change agent's message that they should not be misled or be dictated to by opposing Jesus-followers who could not or would not transcend the exclusivity of the in-group (Col 2:4,8,16).Such a thought construct is also contrary to his exhortation that his readers should be rooted and built up in Christ cf. 2:19).Such a deviation constitutes a termination of the relationship with the values of the Jesus-group that go back to Paul and the historical Jesus.According to Rogers and Shoemaker (1971:337;cf. Malina & Pilch 2013:238), change agents: are especially likely to make this mistake (that is, not anticipate consequences in meaning) when they do not empathize fully with the members of the recipient culture, as in cross-cultural contacts or in other heterophilous situations.
Epilogue
My view that 'faith alone'/'undivided loyalty' is the core of the trajectory from Jesus, to Paul, to the deutero-Pauline writings Colossians and Ephesians, differs from that of Malina and Pilch (2013). Though for them the resurrection belief is also fundamental, it does not constitute the nucleus of the transformation (change) inaugurated by the Jesus event. According to them, the innovation conveyed to the post-Pauline Jesus-groups consists of two elements: the preservation of a 'forthcoming theocracy' and (an eschatological) 'forthcoming redemption'. This was meant for Israel alone. They put it as follows (Malina & Pilch 2013): 'The innovation that Jesus proclaimed, on the one hand, was forthcoming Israelite theocracy or the kingdom of heaven/God. The innovation that Paul proclaimed was that the God of Israel raised Jesus from the dead, thus revealing Jesus to be Israel's Messiah (Christ) and cosmic Lord, with a view to the forthcoming Israelite theocracy (1 Thessalonians and frequently). According to these New Testament witnesses, then, the founder or change agency of Jesus-groups and their ideology is God, the God of Israel ... Paul was one of those who believed that God's raising of Jesus signaled Israel's forthcoming redemption ... And his message to his fellow Israelites was that God's redemption of Israel has dawned by means of Israel's Messiah raised by God. Further, his Jesus has been exalted by God. In these post-Pauline letters, this Jesus is now cosmic Lord' (pp. 8-10). From Malina and Pilch's perspective, Theissen's notions of 'love-patriarchalism' and 'obligation of respect and love' could be applied to the Jesus-group, but they would limit it to the Israelite in-group.
Two responses to the Liebesidee promoted by Gerd Theissen are notable.For Schottroff ([1985] 1999:275) Theissen's view of an almost universal 'love-patriarchalism' in the post-Pauline communities is 'an historically inaccurate impression of the character of the early Christian communities'.However, her characterisation of the post-Pauline Jesusgroups as 'communities of the poor' is also too generalised.
According to Syreeni (2003:419), Paul is not 'blameless' when it comes to the post-Pauline 'patriarchal' reception of Liebeskommunismus.He regards the hierarchically structured household codes (Haustafeln) as a Pauline heritage -not only of the deutero-Pauline Jesus-groups.Even when one accepts Paul as a child of his time, 20 as he does, the ethos of 'faith alone'/'undivided loyalty', 'dying with Christ' and a 'renewal of life' that he advocates is not congruent with 20.Schottroff ([1985] To me, the tenets 'faith alone', 'dying with Christ' and a 'renewal of life' which appear alongside patriarchal household codes in some post-Pauline letters could be regarded -to use Troeltsch's (1912:67)
Resumé
This article contends that the author of Colossians preserves - though modifies slightly, with far-reaching consequences - a key element of commonality between
23. The grammatical preposition 'behind' in the expression 'The Sache Jesu as the canon-behind-the-canon' alludes, according to Marxsen (1968:282-284), to both a material and a chronological priority (cf. Devenish 1992:xii; in Marxsen 1992:xix-xxv).
24. Lohse ([1968] 1971:103): 'Putting off the body of flesh, however' -and the author of Col makes this point clear -does not mean contempt for earthly life.Rather it means being active in this life in obedience to the Lord: 'put off the old man with his practices, and put on the new man, who is being renewed in knowledge according to the image of his creator'.and the recipients' sharing in Jesus' resurrection through baptism), identifies matters of concern, initiates change and stabilises continuity within the group (hereby appealing for all forms of discrimination to be removed, a characteristic of the Jesus tradition).Finally, the article discusses the termination of the relationship by noting that the author of Colossians hedges his bet to some extent by including the household code.This reinsertion of distinctions moves away from Paul and in the direction of 'love-patriarchalism'.The article's contention is that 'faith alone' (undivided loyalty) rejects both patriarchalism and love-patriarchalism.
find that Paul may have cited from the Didache or some unknown Judean (or Judaeo-Christian) wisdom tradition (cf.Jewett 2007:766, note 103).Over against such speculative exegesis, I find Pilch and Malina's use of 'change agency theory' and their innovative social-scientific critical understanding of the transmission of the Jesus tradition particularly valuable.The application of this social-scientific theory confirms the continuity between Jesus and Paul (including certain post-Pauline documents), specifically with regard to transcending an in-group mentality and adopting radical inclusivity and love as the distinctive ethical values of the Jesus-group.It is my conviction that the roots of such a transforming ethos which reaches across generations not only lie in the Pauline concept 'faith alone' but can also be traced back to the historical Jesus.
Jesus-followers who live a new life are renewed according to the image of their Creator.Here the thinking of the change agent in Colossians and that of Paul as the 'prophet of the Messiah' intersect.According to Paul, believers who behold the glory of Christ are being transformed into the same image (2 Cor 3:18; cf.Gal 4:19).What Paul had in mind here is in keeping with what the author of Colossians explains: followers of Jesus are recreated to become new persons who should live their lives accordingly.New life means sharing in the resurrection of Christ (Col 2:12-13 21.SeeSpjut (2013) who exposes the so-called 'Protestant historiographic myth' as if the 'opponents' in Colossae represented simply either an in-group or out-group heritage.22.Duling (2003:265): In the literature of the Jesus Movement, household codes are first found in the late first and early 2nd century, that is, in the generations after Paul.Such literature includes the Church Fathers (1 21:6-9; Polycarp To the Philippians 4:1-6:3; Ignatius To Polycarp 4:1-6:1; Didache 4:9-11; Barnabas 19:5-7) … (emphasis original).
Jesus and Paul.Many of the key figures in early Jesus movements are seen as change agents.The article suggests that a social-scientific model of change agency can help exegetes and theologians address questions that historical and literary interpretations cannot.It proposes that historical Jesus and Pauline scholars ought to consider adopting the sequence of generations to complement historical-critical approaches that utilise a chronological stratification of texts.It seeks a way of considering the continuity between Jesus and Paul.With the Lutheran reformation in the 16th century as background, the article introduces the concept of 'by faith alone' from the Pauline letters.By this expression is meant an undivided fidelity to an inclusive approach to understanding God's work, with concrete historical roots in Jesus' crossing of gender, ethnic and cultural boundaries.Living in this manner requires reformation, transformation and change.The article spells out in fuller detail what is understood 'by faith alone' by discussing the meaning of 'faith' within its semantic domain embedded in the codes of 1st-century Mediterranean culture.Living in faith is both a change of one's inner convictions and about a life in faith.The article aims to argue that both Paul and Jesus reject boundaries related to tradition, culture, cult and ethnicity.Following the work of Bruce Malina and John Pilch on change agents, it reads Paul as a 'Jesus-group prophet' who transcended the interests of individuals belonging to the in-group, but also the interests of the whole in-group which tended to exclude the out-group.This is a reading which differs substantively from Malina and Pilch.It is at this point that the article locates the commonality of Jesus and Paul, specifically with regard to transcending an in-group mentality and adopting radical inclusivity and love as the distinctive ethical values of the Jesus-group.Finally, the article argues that in those places where 'faith-alone' as undivided loyalty was not the regnant position within an early Jesus group, two alternative positions arose: an apocalyptic retribution mentality regarding the out-group and the deinstitutionalisation of the Jesus-group in terms of patriarchal and hierarchical 'in house' systems.It shows that Colossians fit within a limited number of texts (Col, Eph, 1 Pt) that demonstrate both Paul's resurrection faith and his opposing of patriarchal household codes.Seven characteristics of change agents are described.The article explains how each of them is relevant to the interpretation of Colossians.It demonstrates how the author of the letter to the Colossians convinces the group that change is vital, exchanges information with the group, motivates them to change (based on Jesus' having been resurrected reality among all those who venture to accept this offer by becoming members of the new worldwide community of those who love each other and care for each other regardless of any racial, ethnic, gender, sexuality, and socialstatus distinctions.Righteousness as personal piety and morality only creates divisions within a society and among nations.The justice of God cannot be realized in this way.It can become real already here and now in a society without hierarchs who try to enforce divisive moral obligations, and without the borderlines of traditions that are reinforced by pious self-righteousness.God's righteousness is the gift of freedom -even freedom from piety and particularly from moral self-righteousness.It requires the establishment of 
justice among people who are free to abide by the standards of mutual respect, equality, and carrying one another's burdens.
proposes criteria for the ethics and morality of the 'new community' of Christfollowers.Any community would employ legal means to prevent crimes such as murder, robbery, prostitution and the like.However, what is of central concern to Christ-followers specifically is the relationships among 'brothers and sisters' in the new 'regenerated' community.The question is whether they succeed in realising God's righteousness in this world.forPaul it is not about the development of a personal piety and moral sensitivity which could lead to an attitude of moral superiority.Koester explains Paul's intent as follows:It is a new just society that the apostle envisages.Personal righteousness, piety, and moral achievements no longer matter.Justice and righteousness belong to God ... God is love, and his justice becomes a describe Paul as a 'change agent' who articulated the resurrection faith for Christ-followers: this information into their traditional and ancestral kinship religion.(pp.9-10)InRomans12:1-13:14Paul, as a 'Jesus-group prophet' (Gal 1:15)(Malina 2002:608)expands on whatMalina and Pilch (2006:275-282)call 'Jesus-group values'.Paul describes the 'group task' that God expected them to fulfil as χαρισμάτα.This 'group task' was aimed at enhancing the well-being of the group as a whole. 12ch an advantage for all is a 'typical collectivistic appreciation of the primacy of group integrity over individualistic self-reliance'.According toMalina and Pilch (2006:177), this is what Paul had in mind when he used the expression 'one body in Christ and individually … members of one another' (οἱ πολλοὶ ἓν σῶμά ἐσμεν ἐν Χριστῷ, τὸ δὲ καθ' εἷς ἀλλήλων μέλη -Rom 12:4-6).As a 'Jesus-group prophet' Paul not only transcended the interests of individuals belonging to the in-group but also the interests of the whole in-group which tended to exclude the out-group (contra Malina).Such a transcending and transformative ethos is subversive with regard to the prevalent social tradition of 'separateness' (akin to 'apartheid') in Mediterranean and other cultures.
Hence, the people Paul approached were an Israelite minority living in Hellenistic societies.And the message to his fellow Israelites was that God's redemption of Israel has dawned by means of Israel's Messiah raised by God, Further, his Jesus has been exalted by God.In these post-Pauline letters, this Jesus is now cosmic Lord.The Israelites who found this message a solution to their problem of being Israelites in the first century http://www.hts.org.zaOpen Access would fit 13 Though a similar trend can be identified in 1st-century Stoic philosophy of universal inclusivity (cf.Van Aarde 2014:2 of 11), this ethos is distinctive to Jesus of Nazareth and his 'prophet', Paul.One of the most notable subversive pronouncements in both the Jesus tradition and the Pauline tradition is loving one's neighbour.A concrete example is turning the other cheek when slapped by someone from either the in-group or the out-group (Q 6:29 // Mt 5:39 // Lk 6:29).Paul's version is that when an aggressive act is met with love, that love becomes like burning coals -burning the enemy with shame (Rom 12:20; cf.Ecc 25:21-22).This points to an ethos that transcends the in-group (cf.Van Aarde 2012:43-68).
love-patriarchalism'. One can, however, agree withSyreeni (2003:419)that the Pauline and deutero-Pauline heritage is 'ambiguous' to say the least.21 1999:419) refers to Antoinette Clark Wire's experience when she came across the 1754 will of her American 'forefather' in which he bequeathed 'ten young negros' and 'my great Bible and all my law books' to his son.Wire (1998:291) responded: What do I do with such a heritage?Change my name?Hide this page and read the rest to my grandson?But what is shameful should be heard.This is family history, mine and that of others descended from those who were enslaved, and I must go through it rather than around it.Likewise Paul and the enslaved people whose lives shaped his writings are our collective family history.The shame and the glory are tangled, and this 'mess of pottage' is our precious heritage.' | 8,839.6 | 2017-04-26T00:00:00.000 | [
"Philosophy"
] |
Prostate Cancer Secretome and Membrane Proteome from Pten Conditional Knockout Mice Identify Potential Biomarkers for Disease Progression
Prostate cancer (PCa) is the second most common cause of mortality among men. Tumor secretome is a promising strategy for understanding the biology of tumor cells and providing markers for disease progression and patient outcomes. Here, transcriptomic-based secretome analysis was performed on the PCa tumor transcriptome of Genetically Engineered Mouse Model (GEMM) Pb-Cre4/Ptenf/f mice to identify potentially secreted and membrane proteins—PSPs and PMPs. We combined a selection of transcripts from the GSE 94574 dataset and a list of protein-coding genes of the secretome and membrane proteome datasets using the Human Protein Atlas Secretome. Notably, nine deregulated PMPs and PSPs were identified in PCa (DMPK, PLN, KCNQ5, KCNQ4, MYOC, WIF1, BMP7, F3, and MUC1). We verified the gene expression patterns of Differentially Expressed Genes (DEGs) in normal and tumoral human samples using the GEPIA tool. DMPK, KCNQ4, and WIF1 targets were downregulated in PCa samples and in the GSE dataset. A significant association between shorter survival and KCNQ4, PLN, WIF1, and F3 expression was detected in the MSKCC dataset. We further identified six validated miRNAs (mmu-miR-6962-3p, mmu-miR- 6989-3p, mmu-miR-6998-3p, mmu-miR-5627-5p, mmu-miR-15a-3p, and mmu-miR-6922-3p) interactions that target MYOC, KCNQ5, MUC1, and F3. We have characterized the PCa secretome and membrane proteome and have spotted new dysregulated target candidates in PCa.
Introduction
Prostate cancer (PCa) is the most frequent cancer and has the second-highest morbidity and mortality rate among men, with 1,276,106 (7.1%) new cases globally and 358,989 (3.8%) deaths by cancer [1,2]. In the United States, the estimated number of new cases of PCa diagnosed in 2021 was 248,530, with 34,130 deaths. PCa in the United States accounts for 26% of all new cancer cases [3]. Current statistics show that one in seven men will be diagnosed with prostate cancer during their life and that one in 39 men will die of the disease [4].
The introduction of novel androgen receptor (AR) antagonists for clinical treatment has improved outcomes; however, most metastatic castration-resistant prostate cancer (mCRPC) patients ultimately develop resistance to these therapies. Patients with localized and advanced prostate tumors are sensitive to androgen deprivation therapy (ADT) and are highly curable; patients with metastatic prostate cancer acquire resistance to ADT and succumb to the disease [5]. While a large number of prostate cancer cases are diagnosed at a localized stage and are curable, metastatic prostate cancer remains fatal. In the last decade, large-scale omics analysis has revealed well-established and new master regulators and pathways involved in the metastatic and lethal behavior of PCa [6]. mCRPC is incurable, with a median survival of two years from diagnosis, and available treatments extend life by only a few months [7]. mCRPC commonly exhibits genetic alterations involving the AR and the cell cycle and cell survival pathways such as phosphatidylinositol-3-kinase (PI3K) and protein kinase B (PKB/AKT) [8,9]. One of the most frequently deleted genes in PCa, which negatively regulates PI3K-AKT signaling, is the tumor suppressor phosphatase and tensin homolog (PTEN), whose loss is consistently associated with more aggressive forms and a worse prognosis of PCa [10,11]. Loss of PTEN function has been well documented in PCa, and PTEN mutations have been found in 40% of metastatic PCa tumors [12,13]. The Genetically Engineered Mouse Model (GEMM) Pb-Cre4/Pten f/f has been used since 2003 and exhibits pathological features similar to those of human prostate cancer, including the progression from intraepithelial neoplasia to invasive well- and poorly differentiated adenocarcinoma [14]. Moreover, this model has been used to produce new GEMMs by combining mutations and to explore the effects of dietary manipulation on prostate cancer progression [15,16].
Gene expression analysis is an important tool for understanding the behavior of tumors. Gene expression signatures have been successfully applied to define subclasses of different types of cancers with different biological behaviors and responses to therapies [17][18][19][20][21]. Several studies have revealed gene expression signatures of PCa tumors that correlate with poor prognosis in retrospective analyses [22][23][24]. Some of these molecular signatures help stratify patients with a Gleason score of 7, improve prognostic prediction, and provide appropriate management plans for patients after radical prostatectomy [23][24][25].
Comprehensive studies of histological, genomic, and transcriptomic analyses and their relationship with PCa are necessary. Abida et al. (2019) [26] presented an integrative analysis of genomic alterations with expression and histological evaluation of tumors from patients with mCRPC, representing the clinical spectrum of advanced disease, and with tissues collected before and after treatment with androgen signaling inhibitors [26]. However, most molecular signatures have not undergone the validation required before clinical use. Moreover, some signatures include too many genes, making them expensive and hard to use in the clinic [23]. In addition, the lists of genes generated by these signatures generally do not overlap between studies, and no gene sets have been validated for clinical use [27][28][29]. The search for molecular gene signatures is based on the assumption that gene expression profiles can clearly distinguish tumors that will relapse from those that will not [29]. Therefore, more studies are needed to identify and validate prognostic markers.
Therapeutic and diagnostic options for PCa are limited, and progress in drug development is delayed because most cancers are highly complex at different levels, including the cellular, genomic, and metabolic levels. The current challenge in PCa diagnosis is the lack of an alternative screening test to replace the existing PCa biomarker, prostate-specific antigen (PSA). Although PSA is widely used, it cannot distinguish between indolent and aggressive PCa [30][31][32]. Therefore, exploring new types of biomarkers beyond the conventional AR and PI3K pathways and/or altered genes, such as PTEN, P53, and RB1, is highly important in prostate cancer research.
The tumor microenvironment plays an important role in the initiation and progression of tumors [33][34][35]. Transcriptome analysis revealed that stromal regions adjacent to the tumor express genes that allow for re-stratification of the tumor microenvironment [36]. Secreted and membrane proteins play an important role in cancer metastasis by stimulating cancer cell migration and invasion, consequently increasing cancer metastasis [35][36][37][38]. Therefore, investigating potential targets for diagnosis and prognosis that are available in PCa tumor stroma provides an opportunity to reframe and help treat this disease.
In this study, we used available data to perform an integrative analysis of the PCa secretome and tumor membrane proteome. Our design consisted of identifying potential biomarker targets at different stages of PCa progression, demonstrated here by the early stages of mouse Prostatic Intraepithelial Neoplasia (mPIN), Middle-stage tumor (MT), and Advanced-stage tumor (AT), focusing on the tumor microenvironment of PCa. From the list of targets (genes and proteins) available in the Human Protein Atlas (HPA) secretome, we investigated a commonly deregulated gene network in the transcriptome of Pb-Cre4/Pten f/f mice. Enrichment analysis, protein-protein interaction (PPI) network, and in silico tools allowed us to identify nine membranes and secreted proteins that were either downregulated or upregulated in PCa. We also compared the transcriptomic profiles of prostate adenocarcinoma (PRAD) and normal tissue samples using The Cancer Genome Atlas (TCGA) and Genotype-Tissue Expression (GTEx) data, which revealed that four genes that encode secreted proteins were downregulated in PRAD. Finally, the gene expression patterns and prognosis of patients with PCa were analyzed by comparing four published datasets with disease outcomes (decreased relapse-free survival, overall survival, and probability of freedom from biochemical recurrence), with subsequent validation of HPA protein expression in human PCa and normal prostate samples.
Identification of Gene Expression Profile in Prostate Cancer of Pb-Cre4/Pten f/f Mice
We performed an integrative analysis of the prostate cancer secretome and membrane proteome data to identify clinically relevant diagnostic and prognostic biomarkers. According to the criteria used and from the list of genes included, our analysis identified upregulated and downregulated genes in the mPIN, MT, and AT distributed in all Anterior Prostate (AP), Dorsal Prostate (DP), Lateral Prostate (LP), and Ventral Prostate (VP) lobes.
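As a rough illustration of the target-selection step just described, the sketch below intersects a table of differentially expressed genes from the mouse dataset with the Human Protein Atlas lists of predicted secreted and membrane protein-coding genes. File names, column names, and cutoffs are illustrative assumptions, not the actual pipeline settings.

```python
# Minimal sketch of mapping mouse DEGs onto the HPA secretome/membrane lists.
# All file and column names below are hypothetical placeholders.
import pandas as pd

degs = pd.read_csv("gse94574_degs.csv")                 # columns: gene, log2fc, padj
secreted = set(pd.read_csv("hpa_secretome.tsv", sep="\t")["Gene"].str.upper())
membrane = set(pd.read_csv("hpa_membrane.tsv", sep="\t")["Gene"].str.upper())

degs["gene"] = degs["gene"].str.upper()                 # crude mouse-to-human symbol match
sig = degs[(degs["padj"] < 0.05) & (degs["log2fc"].abs() >= 1)]   # assumed cutoffs

psp = sig[sig["gene"].isin(secreted)]                   # potentially secreted proteins (PSPs)
pmp = sig[sig["gene"].isin(membrane)]                   # potentially membrane proteins (PMPs)
print(len(psp), "candidate PSPs;", len(pmp), "candidate PMPs")
```

Upper-casing the symbols is only a crude way to match mouse genes to human orthologues; a proper orthology mapping (for example via Ensembl BioMart) would be preferable in practice.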
After identifying the lists of upregulated and downregulated genes in each prostate lobe, we checked for shared genes present in the AP, DP, LP, and VP lobes and identified a list of genes common to all four. The numbers of commonly upregulated genes encoding membrane proteins were: mPIN = 98 genes, MT = 124 genes, and AT = 136 genes (Figure 2A-C). A list of secreted-protein genes common to all four prostate lobes was also compiled. The numbers of commonly upregulated genes encoding secreted proteins were: mPIN = 33 genes, MT = 52 genes, and AT = 63 genes (Figure 3A-C). The numbers of commonly downregulated genes encoding secreted proteins present in the four lobes were: mPIN = 25 genes, MT = 31 genes, and AT = 26 genes (Figure 3D-F). Our lists of downregulated and upregulated PCa genes in mPIN, MT, and AT were analyzed using the EnrichR platform to identify enriched ontological terms. For upregulated genes, the most significant categories were tyrosine kinase activity and integral component of the plasma membrane (Tables 1 and 2). The most enriched terms for downregulated genes were inorganic cation transmembrane transport, potassium channel activity, and ion transmembrane transport (Tables 3 and 4). This analysis also showed their involvement in biological processes such as glycolysis, carbohydrate biosynthetic processes, glycosaminoglycan metabolic processes, and extracellular organization.
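The "common to all four lobes" step and the subsequent EnrichR query can be expressed compactly as a set intersection followed by an enrichment call. The sketch below assumes the per-lobe gene lists are already in memory (the two genes shown are illustrative stand-ins for the ~100-gene lists) and uses the gseapy wrapper around Enrichr; the gene-set library and organism argument are assumptions about how one might reproduce the step, not the authors' exact settings.

```python
# Sketch: genes shared by all four prostate lobes at one stage, then enrichment.
import gseapy as gp

lobes = {
    "AP": {"Muc1", "F3"},   # anterior prostate  (illustrative subset only)
    "DP": {"Muc1", "F3"},   # dorsal prostate
    "LP": {"Muc1", "F3"},   # lateral prostate
    "VP": {"Muc1", "F3"},   # ventral prostate
}
common = set.intersection(*lobes.values())
print(f"{len(common)} upregulated membrane-protein genes shared by all lobes")

# Ontology enrichment of the shared genes (the EnrichR step described above).
enr = gp.enrichr(gene_list=sorted(common),
                 gene_sets=["GO_Molecular_Function_2021"],
                 organism="mouse",
                 outdir=None)
print(enr.results[["Term", "Adjusted P-value"]].head())
```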
Protein-Protein Interaction (PPI) Network of Membrane and Secreted Proteins Enriched in Prostate Cancer
Venn diagrams showing the common genes encoding predicted membrane proteins identified in the four prostatic lobes are presented in Figure 2; upregulated and downregulated genes are shown in Figure 2A-C and Figure 2D-F, respectively. The common genes encoding predicted secreted proteins are presented in Figure 3 for upregulated and downregulated genes. GO analysis of upregulated membrane proteins revealed significant enrichment in integral component of the membrane, cell surface, immune system process, and cell surface receptor signaling pathway (Supplementary Materials; Figure S1). The GO terms of downregulated membrane proteins revealed significant enrichment in sarcoplasmic reticulum membrane, transmembrane transporter activity, and ion transport (Supplementary Materials; Figure S1).
GO analysis of upregulated secreted proteins revealed significant enrichment in the categories of immune system process, glycosaminoglycan metabolic process, and cell migration (Supplementary Materials; Figure S2). The GO terms of downregulated secreted proteins revealed significant enrichment in extracellular region, glycosaminoglycan binding, and extracellular space (Supplementary Materials; Figure S2).
Differential Gene Expression of Transcripts Translated into Membrane and Secreted Proteins in Prostate Cancer
After identifying all proteins present in the prostatic PPI network in mPIN, MT, and AT and across lobes, we selected the protein clusters commonly upregulated and downregulated in the three stages of PCa progression. This analysis identified four downregulated membrane proteins: myotonin-protein kinase, or myotonic dystrophy protein kinase (DMPK); phospholamban (PLN); potassium voltage-gated channel subfamily KQT member 5 (KCNQ5); and potassium voltage-gated channel subfamily KQT member 4 (KCNQ4). We also found three downregulated secreted proteins: myocilin (MYOC); Wnt inhibitory factor 1 (WIF1); and bone morphogenetic protein 7 (BMP7). Tissue factor (F3) and mucin-1 (MUC1) were identified as commonly upregulated proteins present in both the secreted and membrane protein lists.
The expression levels of the targets in our final list were analyzed using the online Gene Expression Profiling Interactive Analysis (GEPIA) tool (Tang et al., 2017). This tool allows the comparison of transcriptome profiles from TCGA and GTEx using uniformly processed RNA sequencing data from the Toil pipeline. The expression profiles of genes encoding secreted and membrane proteins were analyzed with GEPIA to identify prognostic biomarkers of PRAD. The analysis showed that three genes (DMPK, KCNQ4, and WIF1) were significantly downregulated in PRAD compared with normal tissues (log2 fold-change cutoff = 1 and q-value cutoff = 0.01) (Figure 6).
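The GEPIA-style filter can be approximated offline as sketched below: log2(TPM + 1) expression is compared between tumor and normal samples, and genes pass when |log2 fold change| > 1 and the FDR-adjusted q-value is below 0.01. GEPIA itself applies its own differential-expression test to the Toil-recomputed data; the Mann-Whitney test, the pseudocount, and the input file here are stand-in assumptions for illustration only.

```python
# Sketch of a GEPIA-like tumor-vs-normal filter on a TPM matrix (genes x samples).
# The CSV file and the "TCGA"-prefix labelling of tumor samples are assumptions.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

expr = pd.read_csv("prad_gtex_tpm.csv", index_col=0)     # rows: genes, cols: samples
is_tumor = expr.columns.str.startswith("TCGA")           # assumed sample labelling

log2fc, pvals = [], []
for _, row in expr.iterrows():
    t = np.log2(row[is_tumor] + 1)                       # log2(TPM + 1), as in GEPIA
    n = np.log2(row[~is_tumor] + 1)
    log2fc.append(t.mean() - n.mean())
    pvals.append(stats.mannwhitneyu(t, n).pvalue)        # substitute test, see note above

qvals = multipletests(pvals, method="fdr_bh")[1]
res = pd.DataFrame({"log2fc": log2fc, "q": qvals}, index=expr.index)
hits = res[(res["log2fc"].abs() > 1) & (res["q"] < 0.01)]
print(hits.reindex(["DMPK", "KCNQ4", "WIF1"]).dropna())  # targets highlighted above
```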
Survival Analysis and Risk Assessment
The expression patterns of the KCNQ4, PLN, F3, and WIF1 genes and the prognosis of patients with PCa were analyzed by comparing four published datasets (MSKCC, Cambridge, Stockholm, and MCTP). In these analyses, KCNQ4 overexpression (cut-off >7.83, red line, versus ≤7.83, blue line) was associated with a reduced time to biochemical recurrence (p = 0.037) in the MSKCC dataset (Figure 7A). In the same dataset, PLN overexpression (cut-off >5.28, red line, versus ≤5.28, blue line) was likewise associated with a reduced time to biochemical recurrence (p = 0.0019; Figure 7B), while WIF1 expression was associated with biochemical recurrence in the Stockholm dataset and F3 alteration with overall survival in the MCTP dataset (Figure 7C,D). Figure 7. Kaplan-Meier analyses of the selected genes in published PCa datasets [42,43] and integrative studies. (A) Kaplan-Meier curve with the probability of freedom from biochemical recurrence of PCa with (red) or without (blue) KCNQ4 overexpression, cut-off = 7.83, from the MSKCC study [8]; the difference was statistically significant, p = 0.037. (B) Kaplan-Meier curve with the probability of freedom from biochemical recurrence of PCa with (red) or without (blue) PLN overexpression, cut-off = 5.28, from the MSKCC study [8]; the difference was statistically significant, p = 0.0019. (C) Kaplan-Meier curve with the probability of freedom from biochemical recurrence of PCa for patients with WIF1 expression with cut-off <6.02 (black line), WIF1 expression with cut-off <6.41 (red line), and WIF1 expression with cut-off >6.41 (blue line), from the Stockholm study [44]; the difference was statistically significant, p = 0.0046. (D) Kaplan-Meier curve with the probability of overall survival of PCa patients with (red) or without (blue) F3 alteration from the metastatic prostate adenocarcinoma (MCTP, Nature 2012) study [45]; the difference was statistically significant, p = 0.0005807.
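A minimal sketch of how such a Kaplan-Meier comparison could be reproduced is shown below, assuming the lifelines Python package and a per-patient table with an expression value, the follow-up time and a recurrence indicator; the file name, column names and the reuse of the 7.83 cutoff are illustrative assumptions, not the actual MSKCC analysis pipeline.

```python
# Hedged sketch: dichotomize patients by a KCNQ4 expression cutoff and compare
# freedom from biochemical recurrence with Kaplan-Meier curves and a log-rank test.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("mskcc_kcnq4.csv")      # assumed columns: KCNQ4_expr, time, event
high = df["KCNQ4_expr"] > 7.83           # cutoff taken from the text, for illustration

kmf = KaplanMeierFitter()
for label, grp in (("KCNQ4 high", df[high]), ("KCNQ4 low", df[~high])):
    kmf.fit(grp["time"], event_observed=grp["event"], label=label)
    kmf.plot_survival_function()

res = logrank_test(df[high]["time"], df[~high]["time"],
                   event_observed_A=df[high]["event"],
                   event_observed_B=df[~high]["event"])
print("log-rank p-value:", res.p_value)
```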
In Silico Validation of Protein Expression in Membrane and Secreted Proteins in Human Prostate Cancer
We analyzed protein expression in human PCa and normal prostate samples for two upregulated (F3 and MUC1) and two downregulated (MYOC and KCNQ5) targets found in the membrane and secreted protein lists. The expression of the F3 and MUC1 proteins in the HPA database showed increased (high or medium) immunostaining intensity in PCa tumor tissues, whereas the expression remained low or undetectable in normal prostate tissues (Figure 8). Other dysregulated proteins identified in the HPA analyses were KCNQ4, DMPK, and PLN. At the time of this study, no immunohistochemistry tissue data were available in the HPA database on PCa samples for the BMP7 and WIF1 proteins.
Discussion
In this study, we first analyzed the PCa transcriptome from a Pten knockout mouse model for genes encoding membrane and secreted proteins in PCa. We identified important DEGs in the PCa extracellular matrix (ECM) pertaining to hitherto unexplored pathways. From an integrative analysis of the secretome and membrane proteome, we identified nine altered targets across the stages of PCa progression. The gene expression profiles of these markers were altered in human PCa patients and were associated with worse overall survival and a higher probability of biochemical recurrence. Our strategy was to identify potential prognostic biomarker targets at different stages of PCa progression, focusing on the tumor microenvironment of PCa.
Tumors in patients with PCa present large histological, genetic, and molecular heterogeneity. A patient may harbor more than one genomically and phenotypically distinct prostate cancer; that is, these tumors arise independently and follow separate evolutionary trajectories. Such clonally independent tumors exhibit biological differences and contribute differently to disease progression and clinical outcomes [46][47][48]. Currently, data on PCa proteomes and transcriptomes, from different GEMMs and human patient samples, are being explored and integrated to identify potential targets against this disease.
The tumor microenvironment is a dynamic network of cells and structures, including tumor cells. The surrounding stroma comprises cancer-associated fibroblasts (CAFs), immune cells, mesenchymal stem cells (MSCs), ECM, and the cytokines, chemokines, and growth factors secreted by these cells [33,49]. It is already known that the tumor microenvironment plays an important role in the formation and progression of metastasis. CAFs deposit and degrade ECM components and thus remodel the ECM during cancer progression, promoting immune cell infiltration and cancer cell proliferation, migration, and invasion [33,34]. CAFs can significantly promote the proliferation and migration of prostate cancer cell lines [50,51]. Studies seeking to understand and identify precise biomarker signatures are necessary to develop effective targeted therapeutics and to reduce the clinical burden and lethality associated with inflammation-related PCa progression [52,53]. It is important to note that studies that identified biomarkers for PCa derived from markers of stromal infiltration or from stromal transcriptomic and proteomic profiles have not pointed to any of the markers found in our study [54][55][56][57]. Our targets are not directly related to the inflammatory profile but rather to another class of proteins related to the tumor microenvironment and ECM.
Of note, these targets were identified from the PCa transcriptome of animals with conditional knockout of Pten (Pb-Cre4/Pten f/f GEMM), a model with important histopathological characteristics [58,59]. In addition to prostatic intraepithelial neoplasia (PIN) lesions, Pb-Cre4/Pten f/f GEMM presents larger heterogeneous areas of fully invasive, both well- and poorly differentiated adenocarcinomas associated with reactive stroma. This model also presents loss of the basal membrane structure and disorganization of the smooth muscle cells but shows rare metastasis. Additionally, infiltrating inflammatory cells are commonly identified in these tumors [58][59][60].
We believe that we have found a set of proteins downregulated in PCa that are biologically important in (sub)types of human cancers. Myotonin-protein kinase, or myotonic dystrophy protein kinase (DMPK), is a serine/threonine-protein kinase necessary for the maintenance of muscle structure and function [61]. DMPK is mainly expressed in smooth, skeletal, and cardiac muscle, and p53-mediated overexpression of DMPK promotes contraction of the actomyosin cortex, which leads to the activation of caspases and concomitant cell death by apoptosis [61,62]. DMPK also phosphorylates phospholamban (PLN), another downregulated protein identified in our analyses. PLN is a small, reversibly phosphorylated transmembrane protein found in the sarcoplasmic reticulum. Depending on its phosphorylation state, PLN binds to and regulates the activity of Ca2+ pumps [63]. These two proteins are downregulated during PCa progression, most likely because of the loss of smooth muscle cells [58]. Our enrichment analysis also showed changes in the sarcoplasmic reticulum membrane and ion transport, which may be related to autophagy processes, calcium homeostasis, and endoplasmic reticulum stress, as previously reported [64,65].
We also identified potassium voltage-gated channel subfamily KQT members 5 (KCNQ5) and 4 (KCNQ4), both of which are important in regulating neuronal excitability. Voltage-gated potassium channels are responsible for the repolarization phase of the membrane action potential and play crucial roles in the excitability of neurons and other cells (Li et al., 2021). Several studies have proposed the use of the KCNQ5 gene for the early clinical detection of colorectal precancerous lesions and cancer [66][67][68]. Downregulated expression of KCNQ5 has also been observed in other diseases [69,70]. These proteins play an important role in potassium homeostasis and are related to the enriched terms of potassium channel activity and potassium ion transport at the different levels of PCa progression presented in our results. In our analysis, patients with altered KCNQ4 and PLN genes showed the shortest time to biochemical recurrence. The downregulation of these genes may be related to PCa progression.
In addition to membrane proteins, Myocilin (MYOC) was identified in our study. MYOC is a secreted glycoprotein that regulates the activation of different signaling pathways in adjacent cells to control diverse processes, including cell adhesion, cell-matrix adhesion, cytoskeleton organization, and cell migration [71]. Mutations in the MYOC gene are an important cause of glaucoma with dominant inheritance [72,73]. However, other types of cancer, such as thymoma, exhibit MYOC downregulation, thereby corroborating our results [74]. Wnt inhibitory factor 1 (WIF1) is a secreted protein that binds to WNT proteins and inhibits their activities. WNT signaling mainly controls cell proliferation, differentiation, and maintenance of stem cells (β-catenin-dependent pathway), as well as cell polarity and migration (β-catenin-independent signaling). The WNT/Ca2+ signaling pathway is also associated with the release of Ca2+ from intracellular stores [75,76]. A large body of evidence has shown that activation of the WNT signaling pathway contributes to the proliferation and transformation of malignant cells with metastatic activity [77,78]. WNT proteins are regulated by a variety of secreted extracellular proteins that interfere with the formation of WNT-receptor complexes. As an extracellular inhibitor of the WNT signaling pathway, WIF1 plays an important role in controlling cell proliferation and acts as a tumor suppressor [79]. Owing to its biological function, interest in using WIF1 as a biomarker for the early detection, diagnosis, and prognosis of cancer has increased in recent years [80][81][82][83]. As shown here, WIF1 was associated with favorable overall survival in PCa, corroborating other studies [84].
Bone morphogenetic protein 7 (BMP7) (https://www.uniprot.org/uniprot/P18075 (accessed on 21 December 2021)) is a growth factor of the TGF-β superfamily that plays important roles in various biological processes, including proliferation, differentiation, and apoptosis, in many different cell types [85,86]. Bone morphogenetic proteins can act as either tumor suppressors or oncogenes, depending on the cellular context and tumor type [87,88]. Studies have suggested that BMP7 inhibition may represent a target for overcoming resistance to cancer immunotherapies [85], and BMP7 overexpression is a strong predictor of the risk of tumor recurrence in gastric cancer [87].
The upregulated gene common to the membrane and secreted protein lists, tissue factor (F3) (https://www.uniprot.org/uniprot/P13726 (accessed on 21 December 2021)), encodes a transmembrane glycoprotein that is the primary initiator of the extrinsic blood coagulation cascade and ensures rapid hemostasis in case of organ damage [89]. F3 has been associated with strong enhancement of tumor growth and poor prognosis in cancer [90]. In our analyses, we found that PCa patients with altered F3 gene expression had a reduced survival rate. F3 expression is increased in tumors and is associated with tumor progression, particularly in pancreatic [91,92], cervical [93], breast [94], and prostate cancers [95].
The transmembrane glycoprotein Mucin-1 (MUC1) is highly glycosylated and is normally expressed in glandular and luminal epithelial cells. MUC1 provides protection and creates a physical barrier of negatively charged sugars, limiting accessibility and preventing pathogenic colonization [96,97]. MUC1 is overexpressed in most human cancers, has been identified as a potential target for diagnosis, prognosis, and therapy, and plays an important role in tumor progression [96][97][98][99][100]. Recently, we reported a family of deregulated mucins, including MUC1, in PCa progression, showing that mucin-producing cells (mucinous metaplasia) are located in AR-negative areas of proliferation and that mucin-associated genes are associated with a worse prognosis and have significant prognostic value for PCa patients [58]. miRNAs control gene expression by targeting mRNAs based on sequence complementarity and can serve as oncomiRs or tumor suppressor miRs by targeting mRNAs that encode oncoproteins or tumor suppressor proteins [101,102]. Using miRNA-mRNA tumor expression data, we identified deregulated miRNAs that were validated in the regulatory networks of the four target genes. Some miRNAs found in our analysis that regulate the MYOC, KCNQ5, MUC1, and F3 mRNAs have also been described in other cancers, such as colorectal cancer [103], and identified as biomarkers in lung cancer [104], osteosarcoma [105], ovarian cancer [106], and penile cancer [107].
In our analysis, miR-15a-3p was associated with three of the DEGs (KCNQ5, MUC1, and F3) in PCa. miR-15a-3p has been shown to suppress proliferation and migration by inhibiting the expression of BCL2 and MCL1 in epithelial cells [108] and to restrain the growth and metastasis of ovarian cancer cells by regulating Twist1 [106].
Evidence has also shown that miR-15a-3p overexpression suppresses cell proliferation by downregulating Wnt/β-catenin signaling in PCa cells [109]. Moreover, miR-671-5p has previously been described as a tumor suppressor that inhibits tumor proliferation by blocking the cell cycle in osteosarcoma and negatively regulates SMAD3 to inhibit the migration and invasion of osteosarcoma cells [110,111]. miR-671-5p interacts with MYOC and KCNQ5 and may downregulate the expression of these genes in PCa.
Here, our strategy consisted of selecting upregulated and downregulated membrane and secreted proteins identified in the PCa transcriptome analysis, which stratifies a group of unexplored proteins with high prognostic value and potential as therapeutic targets in a subgroup of patients. Although we tried to avoid bias in our study, certain limitations still need to be considered. Experimental in vivo and in vitro analyses should be performed to confirm our findings, and further investigation of the role and function of these miRNAs in PCa and of their regulation of these genes is required. Experimental validation of the membrane and secreted proteins identified in this study could help correlate the results obtained herein with the prognosis, diagnosis, and/or overall survival of other groups of patients. Despite these limitations, we have provided a thorough characterization of the secretome and membrane proteome and have identified new dysregulated candidate targets in PCa.
Analysis of RNA-Seq Data of the Genetically Engineered Mouse Model (GEMM) for PCa: The Pten Conditional Knockout
We used RNA-seq data derived from the analysis of samples of the four prostatic lobes obtained from the GEMM: Pten f/f (control) and Pb-Cre4/Pten f/f mice. We accessed RNA sequencing data derived from all prostate lobes using the NCBI Gene Expression Omnibus platform (GEO, https://www.ncbi.nlm.nih.gov/geo/ (accessed on 21 December 2021)), reference number GSE94574. Briefly, 72 samples were submitted for RNA-seq analysis, including 20 prostate samples from wild-type (WT) mice, 16 mouse prostatic intraepithelial neoplasia (mPIN) samples, 20 well-differentiated tumors (middle-stage tumor, MT), and 16 poorly differentiated tumors (advanced-stage tumor, AT). A minimum of four samples per prostatic lobe and pathological condition was used for the RNA-seq analysis. A detailed description of the histopathological aspects of each prostatic lobe and tumor stage of the mouse model has been published previously [59,112]. First, we explored genes that were differentially expressed in each lobe and at the different stages of tumor progression (mPIN, MT and AT), using |log2FC| ≥ 1 and an adjusted p-value < 0.05. The transcriptome used in this study was generated from animals provided by Dr. David
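A minimal sketch of the DEG filter described above is given below, assuming a DESeq2-style per-lobe results table with log2 fold-change and adjusted p-value columns; the file and column names are assumptions for illustration only.

```python
# Hedged sketch: keep genes with |log2FC| >= 1 and adjusted p-value < 0.05,
# and label them as up- or downregulated.
import pandas as pd

def filter_degs(path, lfc_cut=1.0, padj_cut=0.05):
    res = pd.read_csv(path)
    keep = (res["log2FoldChange"].abs() >= lfc_cut) & (res["padj"] < padj_cut)
    degs = res.loc[keep].copy()
    degs["direction"] = degs["log2FoldChange"].apply(
        lambda x: "up" if x > 0 else "down")
    return degs

degs_ap_mpin = filter_degs("AP_mPIN_vs_WT.csv")   # hypothetical input file
print(degs_ap_mpin["direction"].value_counts())
```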
Integration of Secretome and Membrane Proteome Analyses to Identify Prostate Cancer Biomarkers
The DEGs from the RNA-seq analysis were used to predict membrane and secreted proteins using the known lists of 5520 genes of Predicted Membrane Proteins and 1708 genes of Predicted Secreted Proteins available in The Human Protein Atlas secretome resource (https://www.proteinatlas.org/humanproteome/tissue/secretome) [113,114] (accessed on 21 December 2021). The Predicted Membrane Proteins are based on seven prediction algorithms combined in a majority decision-based method (MDM) to estimate the human membrane proteome [115]. The human secretome was predicted by a whole-proteome scan using three methods for signal peptide prediction, SignalP4.0, Phobius, and SPOCTOPUS, which have all been shown to produce reliable prediction results in comparative analyses. We then selected the genes that were altered in at least three prostatic lobes in mPIN, MT, and/or AT.
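The selection step can be expressed programmatically as follows; the gene symbols, the toy per-lobe DEG sets and the small candidate lists are invented stand-ins for the full HPA lists of predicted membrane and secreted proteins.

```python
# Hedged sketch: map DEGs onto candidate membrane/secreted gene lists and keep
# genes altered in at least three of the four lobes.
predicted_membrane = {"MUC1", "F3", "KCNQ4", "KCNQ5", "DMPK", "PLN"}
predicted_secreted = {"MUC1", "F3", "MYOC", "WIF1", "BMP7"}

degs_per_lobe = {          # invented example for one stage (e.g. AT)
    "AP": {"MUC1", "F3", "WIF1"},
    "DP": {"MUC1", "F3", "MYOC"},
    "LP": {"MUC1", "WIF1", "MYOC"},
    "VP": {"MUC1", "F3", "WIF1", "MYOC"},
}

def altered_in_n_lobes(degs_per_lobe, candidates, n=3):
    counts = {}
    for degs in degs_per_lobe.values():
        for gene in degs & candidates:
            counts[gene] = counts.get(gene, 0) + 1
    return {gene for gene, c in counts.items() if c >= n}

print("secreted:", altered_in_n_lobes(degs_per_lobe, predicted_secreted))
print("membrane:", altered_in_n_lobes(degs_per_lobe, predicted_membrane))
```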
Protein-Protein Interaction Network and Functional Enrichment Analysis
We used the EnrichR software from the Ma'ayan Lab (https://maayanlab.cloud/Enrichr/) (accessed on 21 December 2021) to determine the enrichment of ontological terms and molecular pathways related to the DEGs [116,117]. The cutoff criterion used for both analyses was an adjusted p-value ≤ 0.05. For the gene ontology (GO) enrichment analysis, the secreted and membrane proteins were grouped into a single list. The downregulated and upregulated ontological terms in the biological process, molecular function, and cellular component categories with the lowest adjusted p-values were selected.
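For readers who prefer to script this step rather than use the web interface, one possible route is the gseapy wrapper around the Enrichr service, sketched below; the library, the chosen gene-set collections and the input genes are assumptions and may differ from what the authors used.

```python
# Hedged sketch: programmatic Enrichr query and filtering at adjusted p <= 0.05.
import gseapy as gp

gene_list = ["MUC1", "F3", "KCNQ4", "KCNQ5", "DMPK", "PLN", "MYOC", "WIF1"]

enr = gp.enrichr(
    gene_list=gene_list,
    gene_sets=["GO_Biological_Process_2021", "GO_Molecular_Function_2021"],
    organism="human",
    outdir=None,                 # keep results in memory only
)

hits = enr.results[enr.results["Adjusted P-value"] <= 0.05]
print(hits[["Gene_set", "Term", "Adjusted P-value"]].head())
```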
We used the STRING database (https://string-db.org/) [39] (accessed on 21 December 2021) to build the protein-protein interaction (PPI) networks by analyzing the upregulated and downregulated genes separately. The minimum interaction score required was 0.700 (high confidence), and nodes disconnected from the network were hidden to simplify the display. The PPI enrichment p-value indicates the statistical significance provided by STRING.
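An equivalent query can also be sent to the STRING REST API, as sketched below with the same 0.700 confidence threshold; the endpoint and parameter names follow STRING's documented API but may change between database versions, and the gene list is illustrative.

```python
# Hedged sketch: request a high-confidence PPI network (score >= 0.700) from STRING.
import requests

genes = ["MUC1", "F3", "KCNQ4", "KCNQ5", "DMPK", "PLN", "MYOC", "WIF1", "BMP7"]

resp = requests.get(
    "https://string-db.org/api/tsv/network",
    params={
        "identifiers": "\r".join(genes),   # one identifier per line
        "species": 9606,                   # human; 10090 would be mouse
        "required_score": 700,             # 0.700 confidence, as in the analysis
    },
    timeout=30,
)
resp.raise_for_status()

lines = resp.text.rstrip().split("\n")
print(f"{len(lines) - 1} interactions above score 0.700")  # first line is the header
```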
The ShinyGO application (version 0.741) (http://bioinformatics.sdstate.edu/go/) [118] (accessed on 21 December 2021) was used to explore the enrichment of ontological terms in GO (http://geneontology.org/) (accessed on 21 December 2021) categories for the biological process of proteins from PPI. The cut-off criterion used for both analyses was a false discovery rate (FDR) p-value < 0.05.
Gene Expression Profile in Prostate Cancer
Differential expression levels were calculated using the web-based gene expression profiling interactive analysis (GEPIA) tool [40] (http://gepia.cancer-pku.cn/detail.php) (accessed on 21 December 2021) to assess whether the genes encoding secreted and membrane proteins are differentially regulated in PRAD. DEGs between tumor and normal samples were determined by one-way analysis of variance (ANOVA), applying a log2 fold-change > 1 and q-value < 0.01. Genes were considered positively or negatively regulated, indicated in red and green, respectively, in PRAD (n = 489-492) relative to normal tissue (n = 150-152).
Survival Analysis and Risk Assessment
After identifying the secretome and membrane proteome targets in the knockout mice, we performed analyses using data from publicly available databases. We investigated gene expression using the Cambridge Carcinoma of the Prostate App (CamcAPP) (https://bioinformatics.cruk.cam.ac.uk/apps/camcAPP/) [41] (accessed on 21 December 2021) and the cBioPortal for Cancer Genomics database (https://www.cbioportal.org/) [42,43] (accessed on 21 December 2021) to determine the association of gene alterations with patient clinical data, such as tumor risk, prognosis, and survival rates. Survival curves were constructed using the Kaplan-Meier method. Gene expression was associated with disease outcomes (decreased relapse-free survival and increased expression levels in advanced prostate cancer) in several published PCa datasets, namely the Memorial Sloan-Kettering Cancer Center (MSKCC) study [8] and the Cambridge and Stockholm integrative studies [44], analyzed using CamcAPP [41]. The Metastatic Prostate Adenocarcinoma (MCTP, Nature 2012) dataset [45] was analyzed using the cBioPortal for Cancer Genomics database [42,43]. The grouping of samples found by recursive partitioning (RP) was used by CamcAPP to construct the Kaplan-Meier plots.
In Silico Validation of Differentially Expressed Genes (DEGs)
After gene expression analysis, the deregulated genes identified in our analysis of GEMM Pb-Cre4/Pten f/f PCa were assessed using the HPA (https://www.proteinatlas.org/) database (accessed on 21 December 2021) [113,114] to identify the distribution and localization of proteins in normal and tumor prostate samples via immunohistochemistry.
Prediction of Commonly Dysregulated miRNAs-mRNA Targets
We used the miRWalk 3.0 tool (http://mirwalk.umm.uni-heidelberg.de/) [119] to predict regulatory interactions between miRNAs and the mRNAs of MYOC, KCNQ5, MUC1, and F3; the target-validation algorithm was cross-checked against other available Homo sapiens databases. A miRNA was considered significant when it was involved with at least three of the four selected genes. An alluvial plot diagram was generated using the online tool SankeyMATIC (http://sankeymatic.com/) to display the interaction networks between the miRNAs and mRNAs. | 6,944 | 2022-08-01T00:00:00.000 | [
"Biology"
] |
Electroweak breaking and neutrino mass: "invisible" Higgs decays at the LHC (Type II seesaw)
Neutrino mass generation through the Higgs mechanism not only suggests the need to reconsider the physics of electroweak symmetry breaking from a new perspective, but also provides a new theoretically consistent and experimentally viable paradigm. We illustrate this by describing the main features of the electroweak symmetry breaking sector of the simplest type-II seesaw model with spontaneous breaking of lepton number. After reviewing the relevant "theoretical" and astrophysical restrictions on the Higgs sector, we perform an analysis of the sensitivities of Higgs boson searches at the ongoing ATLAS and CMS experiments at the LHC, including not only the new contributions to the decay channels present in the Standard Model (SM) but also genuinely non-SM Higgs boson decays, such as "invisible" Higgs boson decays to majorons. We find sensitivities that are likely to be reached at the upcoming run of the experiments.
I. INTRODUCTION
The electroweak breaking sector is a fundamental ingredient of the Standard Model, many of whose detailed properties remain open even after the historic discovery of the Higgs boson [1,2]. The electroweak breaking sector is subject to many restrictions following from direct experimental searches at colliders [3,4], as well as from global fits [5,6] of precision observables [7][8][9]. Moreover, its properties may also be restricted by theoretical consistency arguments, such as naturalness, perturbativity and stability [10]. The latter have long provided strong motivation for extensions of the Standard Model, such as those based on the idea of supersymmetry.
Following the approach recently suggested in Refs. [11,12], we propose to take seriously the hints that the neutrino mass generation scenario provides about the structure of the scalar sector. In particular, the most widely accepted scenario of neutrino mass generation associates the smallness of the neutrino masses with the charge neutrality of neutrinos, which suggests that they are of Majorana nature due to some, currently unknown, mechanism of lepton number violation. The latter requires an extension of the SU(3)c ⊗ SU(2)L ⊗ U(1)Y Higgs sector and hence the need to reconsider the physics of symmetry breaking from a new perspective. In broad terms this would provide an alternative to supersymmetry as a paradigm of electroweak breaking. Amongst its other characteristic features are the presence of doubly charged scalar bosons, compressed mass spectra of heavy scalars dictated by stability and perturbativity, and the presence of "invisible" decays of Higgs bosons into the Nambu-Goldstone boson associated with spontaneous lepton number violation and neutrino mass generation [13].
In this paper we study the invisible decays of the Higgs bosons in the context of a type-II seesaw majoron model [14] in which the neutrino mass is generated after spontaneous violation of lepton number at some low energy scale, Λ_EW ≲ Λ ∼ O(TeV) [15,16] (see footnote 1). This scheme requires the presence of two lepton-number-carrying scalar multiplets in the extended SU(3)c ⊗ SU(2)L ⊗ U(1)Y model, a singlet σ and a triplet ∆ under SU(2); this seesaw scheme was called the "123" seesaw model in [14], and here we take the "pure" version of this scheme, without right-handed neutrinos. The presence of the new scalars implies the existence of new contributions to "visible" SM Higgs decays, such as the h → γγ decay channel, in addition to intrinsically new Higgs decay channels involving the emission of majorons, such as the "invisible" decays of the CP-even scalar bosons. As a result, one can set upper limits on the invisible decay channel based on the available data which restrict the "visible" channels.
Footnote 1: The idea of the Majoron was first proposed in [17], though in the framework of the type-I seesaw, which is not relevant for our current paper. On the other hand, the triplet Majoron was suggested in [18] but has been ruled out since the first measurements of the invisible Z width by the LEP experiments. The idea of invisible Higgs decays was first given in Ref. [19], though the early scenarios have been ruled out.
The plan of this paper is as follows. In the next section we describe the main features of
the symmetry breaking sector of the "123" type II seesaw model. In section III we discuss the "theoretical" and astrophysical constraints relevant for the Higgs sector. Taking these into account, we study the sensitivities of Higgs boson searches at the LHC to Standard Model scalar boson decays in section IV. Section V addresses the non-SM Higgs decays of the model. Section VI summarizes our results and we conclude in section VII.
II. THE TYPE-II SEESAW MODEL
Our basic framework is the "123" seesaw scheme originally proposed in Ref. [14], whose Higgs sector contains, in addition to the SU(3)c ⊗ SU(2)L ⊗ U(1)Y scalar doublet Φ, two lepton-number-carrying scalars: a complex singlet σ and a triplet ∆. All these fields develop non-zero vacuum expectation values (vevs), leading to the breaking of the Standard Model (SM) gauge group as well as of the global symmetry U(1)L associated with lepton number. The latter breaking accounts for the generation of the small neutrino masses. The scalar sector therefore consists of the doublet Φ and the triplet ∆, with L = 0 and L = −2, respectively, and the scalar field σ with lepton number L = 2. Below we will consider the required vev hierarchies in the model.
A. Yukawa Sector
Here we consider the simplest version of the seesaw scheme proposed in Ref. [14], in which no right-handed neutrinos are added, and only the SU(3)c ⊗ SU(2)L ⊗ U(1)Y electroweak breaking sector is extended so as to spontaneously break lepton number, giving mass to neutrinos. Such a "123" majoron-seesaw model is described by the SU(3)c ⊗ SU(2)L ⊗ U(1)Y invariant Yukawa couplings of the lepton doublets to the triplet ∆. In this model the neutrino mass (see Fig. 1) is controlled by the induced triplet vev, where v1 and v2 are the vevs of the singlet and the doublet, respectively. Here κ is a dimensionless parameter that describes the interaction amongst the three scalar fields (see below), m∆ is the mass of the scalar triplet ∆, and µ is the dimensionful parameter responsible for lepton number violation, see eq. (3). Therefore, if yν ∼ O(1) and the mass m∆ lies in the 1 TeV region, one finds ⟨∆⟩ ∼ mν and µ ∼ 1 eV. Note that one may consider two situations: v1 ≫ Λ_EW (high-scale seesaw mechanism), in which case the scalar singlet and the invisible decays of the Higgs are decoupled [15]; the second interesting case is when Λ_EW ≲ v1 ≲ few TeV (low-scale seesaw mechanism). In this case the parameter κ lies in the range 10^−16 to 10^−14 for yν ∼ O(1), and one has new physics at the TeV region, including the "invisible" decays of the Higgs bosons. Therefore, led by the smallness of the neutrino mass, we can qualitatively determine that the analysis to be carried out is characterized by a vev hierarchy and by the smallness of the coupling κ, i.e. κ ≪ 1.
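Since the corresponding equations are not reproduced above, the schematic relations below summarize the standard type-II seesaw structure being described; the exact numerical factors and normalizations of eq. (3) in the paper are not reproduced, so this should be read as an illustrative sketch only.
\[
  m_\nu \;\simeq\; y_\nu\,\langle\Delta\rangle \;=\; y_\nu\, v_3 ,
  \qquad
  v_3 \;\sim\; \frac{\mu\, v_2^{2}}{m_\Delta^{2}}
      \;\sim\; \frac{\kappa\, v_1\, v_2^{2}}{m_\Delta^{2}} ,
  \qquad
  \mu \;\sim\; \kappa\, v_1 ,
\]
so that for $y_\nu \sim \mathcal{O}(1)$, $m_\Delta$ at the TeV scale and a tiny $\kappa$, the induced triplet vev $\langle\Delta\rangle$ naturally lies at the scale of the neutrino mass.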
B. The scalar potential
The scalar potential invariant under the SU(3)c ⊗ SU(2)L ⊗ U(1)Y ⊗ U(1)L symmetry is given in eq. (4) [15,16]. As mentioned above, the scalar fields σ, Φ and ∆ acquire non-zero vacuum expectation values, v1, v2 and v3, respectively, and can be shifted around these values accordingly. From the minimization conditions of eq. (4) one can derive a vev seesaw relation, where κ is the dimensionless coupling that generates the mass parameter associated with the cubic term in the scalar potential of the simplest triplet seesaw scheme with explicit lepton number violation, as proposed in [21] and recently revisited in [12].
Neutral Higgs bosons
One can now write the resulting squared mass matrix M²_R for the CP-even scalars in the weak basis (R1, R2, R3). From now on we follow the notation and conventions used in Ref. [16].
The matrix M²_R is diagonalized by an orthogonal matrix O_R. We use the standard parameterization O_R = R23 R13 R12, with c_ij = cos α_ij and s_ij = sin α_ij, so that the rotation matrix O_R is expressed in terms of the three mixing angles α12, α13 and α23. On the other hand, the squared mass matrix M²_I for the CP-odd scalars in the weak basis (I1, I2, I3) is diagonalized analogously; two of its eigenvalues vanish and correspond to the would-be Goldstone boson G⁰ and the Majoron J, while the third eigenvalue is the squared CP-odd mass m²_A. The CP-odd mass eigenstates are related to the original fields by the rotation matrix O_I.
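For concreteness, one common explicit form of the product R_{23}R_{13}R_{12} is given below; the sign conventions of Ref. [16] may differ from this choice, so it is shown only as an illustrative parameterization.
\[
O_R \;=\; R_{23}R_{13}R_{12} \;=\;
\begin{pmatrix}
 c_{12}c_{13} & s_{12}c_{13} & s_{13}\\[2pt]
 -s_{12}c_{23}-c_{12}s_{13}s_{23} & c_{12}c_{23}-s_{12}s_{13}s_{23} & c_{13}s_{23}\\[2pt]
 s_{12}s_{23}-c_{12}s_{13}c_{23} & -c_{12}s_{23}-s_{12}s_{13}c_{23} & c_{13}c_{23}
\end{pmatrix},
\qquad c_{ij}\equiv\cos\alpha_{ij},\; s_{ij}\equiv\sin\alpha_{ij}.
\]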
Charged Higgs bosons
The squared mass matrix for the singly charged scalar bosons is written in the original weak basis (φ±, ∆±) and is diagonalized by a rotation parameterized by c± and s±. The massless state corresponds to the would-be Goldstone bosons G±, while the orthogonal, massive state is the charged scalar H±. On the other hand, the doubly charged scalar ∆±± has its own mass, m²_{∆±±}.
C. Scalar boson mass sum rules
Notice that, since the smallness of the neutrino mass implies that the parameters κ and v3 are very small, one can to a good approximation simplify eq. (6) and eq. (11). As a result, the scalar H3 and the pseudo-scalar A are almost degenerate. In the same way, by using eqs. (11), (18) and (19), one can derive mass relations among the heavy scalars, which can be rewritten as a sum rule. This sum rule is also satisfied in the type-II seesaw model with explicit breaking of lepton number. Imposing the perturbativity condition, one finds that the squared mass difference between, say, the doubly and singly charged scalar bosons cannot be too large [12]. Explicit comparison shows that λ5 in eq. (4) corresponds to λ_H∆ in Ref. [12]. Therefore, when the couplings of the singlet σ in eq. (4) are small, λ5 is constrained to lie in the range [−0.85, 0.85], so that the remaining couplings are kept small up to the Planck scale and vacuum stability is guaranteed; see Figure 4 in Ref. [12]. Likewise, when one decouples the triplet, one also recovers the results found in Ref. [11].
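Schematically, and in analogy with the triplet model with explicit lepton number breaking, the near-degeneracy and the sum rule discussed above are expected to take the form below; the precise coefficients in terms of λ5 and the vevs follow from eqs. (11), (18) and (19), which are not reproduced here, so the expressions are indicative only.
\[
 m_{H_3}^{2}\;\simeq\; m_A^{2},
 \qquad
 m_{\Delta^{\pm\pm}}^{2}-m_{H^{\pm}}^{2}\;\simeq\; m_{H^{\pm}}^{2}-m_{A}^{2}
 \;\;\Longrightarrow\;\;
 2\,m_{H^{\pm}}^{2}\;\simeq\; m_{A}^{2}+m_{\Delta^{\pm\pm}}^{2},
\]
with the common splitting controlled by $\lambda_5 v_2^{2}$, which is why perturbativity of $\lambda_5$ bounds the mass differences.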
III. THEORETICAL CONSTRAINTS
Before analyzing the sensitivities of the searches for Higgs bosons at the LHC experiments, we first discuss the restrictions that follow from consistency requirements on the Higgs potential. We can rewrite the dimensionless parameters λ1,2,3 and β1,2,3 in eq. (4) in terms of the mixing angles α_ij and the scalar masses m_H1,2,3 by inverting the diagonalization relations. In addition, using eqs. (11), (18) and (19), we can write the dimensionless parameters λ4,5 and κ as functions of the vevs v1,2,3 and of the masses of the pseudo-scalar, singly- and doubly-charged scalar bosons (i.e. m_A, m_H± and m_∆±±, respectively).
From the theoretical side we have to ensure that the scalar potential in the model is bounded from below (BFB).
A. Boundedness Conditions
In order to ensure that the scalar potential in eq. (4) is bounded from below, we have to derive the conditions on the dimensionless parameters such that the quartic part of the scalar potential stays positive, V(4) > 0, as the fields go to infinity. The parameter κ satisfies κ ≪ 1 (due to the smallness of the neutrino mass) and is non-negative; this follows from the last expression in eq. (24) and the fact that v3 ≪ v2, v1. Then κ can be neglected with respect to the other dimensionless parameters λi and βj, i.e. λi, βj ≫ κ. As a result, the quartic part of the potential, V(4)|κ=0, turns out to be a biquadratic form λij ϕi² ϕj² of real fields. Therefore, in this strict limit, the copositivity criteria described in [22] may be applied, and the boundedness conditions for eq. (4) follow, where λ24 ≡ λ2 + λ4. In addition, all the dimensionless parameters in the scalar potential are required to be less than √(4π) in order to fulfill the perturbativity condition.
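For reference, the generic copositivity criteria of Ref. [22] for a biquadratic form V(4) = λ_ij ϕ_i² ϕ_j² in three field directions are reproduced below; the mapping of the λ_ij onto the specific combinations of λ_i, β_j (and λ24 ≡ λ2 + λ4) appearing in eq. (4) is left implicit.
\[
\lambda_{11}\ge 0,\qquad \lambda_{22}\ge 0,\qquad \lambda_{33}\ge 0,\qquad
\bar\lambda_{ij}\equiv \lambda_{ij}+\sqrt{\lambda_{ii}\lambda_{jj}}\;\ge\;0 \quad (i<j),
\]
\[
\sqrt{\lambda_{11}\lambda_{22}\lambda_{33}}
+\lambda_{12}\sqrt{\lambda_{33}}
+\lambda_{13}\sqrt{\lambda_{22}}
+\lambda_{23}\sqrt{\lambda_{11}}
+\sqrt{2\,\bar\lambda_{12}\,\bar\lambda_{13}\,\bar\lambda_{23}}\;\ge\;0 .
\]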
B. Astrophysical constraints
In our type-II seesaw model there are constraints on the magnitude of the SU(2) triplet vev ⟨∆⟩ = v3 that one must take into account. First of all, v3 is constrained to be smaller than a few GeV by the ρ parameter (ρ = 1.0004 ± 0.00024 [23]).
On the other hand, the presence of the Nambu-Goldstone boson associated with spontaneous lepton number violation and neutrino mass generation implies that the most stringent constraint on v3 comes from astrophysics, namely from supernova cooling. If the majoron is a strict Goldstone boson (or lighter than typical stellar temperatures), one has an upper bound on the Majoron-electron coupling; this is discussed, for example, in Ref. [24] and references therein. Taking into account the profile of the Majoron [14], one can translate this into a bound on the projection of the Majoron onto the doublet [16], and hence into an upper bound on the triplet vev v3 as a function of v1. Notice that this restriction on the triplet vev is stronger than the one stemming from the ρ parameter. The shaded region in Fig. 2 corresponds to the allowed region of v3 as a function of v1. To close this section we mention that our phenomenological analysis remains valid if the Nambu-Goldstone boson picks up a small mass from, say, quantum gravity effects.
IV. TYPE-II SEESAW HIGGS SEARCHES AT THE LHC
We now turn to the study of the experimental sensitivities of the LHC experiments to the parameters characterizing the "123" type-II majoron seesaw Higgs sector, as proposed in [14]. In the following we will assume that m_H1 < m_H2 < m_H3, where 1, 2, 3 refer to the mass ordering in the CP-even Higgs sector. Therefore, there are two possible cases that can be considered: (i) m_H2 = m_H and (ii) m_H1 = m_H, where m_H is the mass of the Higgs boson reported by the ATLAS [2] and CMS [25] collaborations, i.e. m_H = 125.09 ± 0.21(stat.) ± 0.11(syst.) GeV [26]. (We do not consider the case in which H3 is the lightest state: recall that m_H3 ≈ m_A, eq. (22), which implies that the mass of H3 must be close to that of the doubly-charged scalar; as we will see in the next section, the existing bounds from searches for the doubly-charged scalar therefore exclude the case where H3 is lighter than the other CP-even mass eigenstates.) For case (i), we have to enforce the constraints coming from LEP-II data on the coupling of the lightest CP-even scalar to the SM and those coming from LHC Run-1 on the heavier scalars. Such a situation has been discussed by us in Ref. [13] in the simplest "12-type" seesaw Majoron model. In case (ii), only the constraints coming from the LHC must be taken into account.
The couplings of the neutral Higgs bosons to the Standard Model states are modified with respect to those of the Standard Model Higgs doublet by the factors C_i ≡ O_R i2, where O_R ij are the matrix elements of O_R in eq. (9).
A. LEP constraints on invisible Higgs decays
The constraints on H1, when m_H1 < 125 GeV, stem from the process e+e− → Zh → Zbb̄, whose rate is written in Ref. [27] in terms of σ_SM^hZ, the SM hZ cross section, and R_hZ, the suppression factor related to the coupling of the Higgs boson to the gauge boson Z.
Here R_hZ is governed by C1 = cos α13 sin α12, cf. eq. (28). Notice that C1 ≈ sin α12 in the limit α13 ≪ 1, and then one obtains the same exclusion region depicted in Fig. 1 of Ref. [13]. The Feynman rules for the couplings of the Higgs bosons Hi to the Z boson follow from the mixing matrix O_R in eq. (9).
B. LHC constraints on the Higgs signal strengths
In addition, we have to enforce the limits coming from the Standard Model decay channels of the Higgs boson. These are given in terms of the signal strength parameters, where σ is the cross section for Higgs production, BR(h → f) is the branching ratio into the Standard Model final state f, and the labels NP and SM stand for New Physics and Standard Model, respectively. These can be compared with the values given by the experimental collaborations. The most recent results for the signal strengths from a combined ATLAS and CMS analysis [28] are shown in Table I. One can see with ease that the LHC results indicate that µ_VV ∼ 1. In our analysis, we assume that the LHC allows deviations of up to 20%, i.e. 0.8 < µ_XX < 1.2.
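Writing the definition out explicitly for a final state f,
\[
\mu_f \;=\;
\frac{\big[\sigma(pp\to h)\,\mathrm{BR}(h\to f)\big]_{\rm NP}}
     {\big[\sigma(pp\to h)\,\mathrm{BR}(h\to f)\big]_{\rm SM}},
\qquad
0.8<\mu_{XX}<1.2 \ \text{being the range assumed in our analysis.}
\]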
C. LHC bounds on the heavy neutral scalars
In our study we impose the constraints on the heavy scalars from the recent LHC scalar boson searches. We therefore use the bounds set by the search for a heavy Higgs in the H → WW and H → ZZ decay channels in the range [145-1000] GeV [29] and in the h → ττ decay channel in the range [100-1000] GeV [30]. We also adopt the constraints on the process h → γγ in the range [65-600] GeV [31] and in the range [150-850] GeV [32]. In addition, we impose the bounds on the A → Zh decay channel in the range [220-1000] GeV [33].
D. Summary of the searches of charged scalars
The type-II seesaw model with explicit breaking of lepton number contains seven physical scalars: two CP-even neutral scalars H 1 and H 2 , one CP-odd scalar A and four charged scalars ∆ ±± and H ± . Such a scenario has been widely studied in the literature and turns out to be quite appealing because it could be tested at the LHC [34][35][36][37][38][39][40][41][42][43][44]. For instance, the existence of charged scalar bosons provides additional contributions to the one-loop decays of the Standard Model Higgs boson. Indeed, they could affect the one-loop decays h → γγ [39,40] and h → Zγ [40] in a substantial way. In this case the signal strength µ γγ can set bounds on the mass of the charged scalars, ∆ ±± and/or H ± .
The doubly-charged scalar boson has the following possible decay channels: ℓ±ℓ±, W±W±, W±H± and H±H±. However, it is known that for an approximately degenerate triplet mass spectrum and a vev v3 ≲ 10−4 GeV, the doubly charged Higgs coupling to W± is suppressed (because it is proportional to v3, as can be seen from Table III), and hence ∆±± predominantly decays into like-sign dileptons [41,44,45]. In this case, CMS [46] and ATLAS [47] have currently excluded, at 95% C.L. and depending on the assumptions on the branching ratios into like-sign dileptons, doubly-charged masses between 200 and 460 GeV. From these doubly-charged scalar searches one can also constrain the lepton-number-violating processes pp → ∆±±∆∓∓ → ℓ±ℓ±W∓W∓ and pp → ∆±±H∓ → ℓ±ℓ±W∓Z [41], which may also shed light on the Majorana phases of the lepton mixing matrix [34][35][36]. For v3 ≳ 10−4 GeV, the Yukawa couplings of the triplet to leptons are too small, so that ∆±± dominantly decays to like-sign dibosons, in which case the collider limits are rather weak [43,48-50].
In the present "123" type-II seesaw model there are two additional physical scalars, a massive CP-even scalar H 3 and the massless majoron J. The latter, associated to the spontaneous breaking of lepton number, provides non-standard decay channels of other Higgs bosons as missing energy in the final state 7 .
V. INVISIBLE HIGGS DECAYS AT THE LHC
We now turn to the case of genuinely non-standard Higgs decays. We focus on investigating the LHC sensitivities to the invisible Higgs decays, taking into account how they are constrained by the available experimental data. In the previous section we mentioned that in our study the CP-even scalars obey the mass hierarchy m_H1 < m_H2 < m_H3. Furthermore, we will also assume that the masses m_H3, m_A, m_H± and m_∆±± are nearly degenerate. As a consequence, the decay of any CP-even Higgs Hi into the pseudo-scalar A is not kinematically allowed. Therefore, the new decay channels of the CP-even scalars are just Hi → JJ and Hi → 2Hj (when m_Hj < m_Hi/2, i ≠ j), the latter also contributing to the invisible decay channel of the Higgs. The Higgs-Majoron couplings are expressed in terms of the elements O_I ij of the rotation matrix in eq. (13), which determine the corresponding decay widths. Following our conventions, the trilinear coupling H2H1H1 can likewise be written in terms of the mixing angles and vevs (eq. (34)), and hence, for example when m_H2 > 2 m_H1, the decay width of H2 → H1H1 follows directly. As we already mentioned, a salient feature of adding an isotriplet to the Standard Model is that some visible decay channels of the Higgs receive further contributions from the charged scalars, namely the one-loop decays h → γγ and h → Zγ. That is, the scalars H± and ∆±± contribute to the one-loop couplings of the Higgs to two photons and to Z-photon, leading to deviations from the Standard Model expectations for these decay channels. The interactions between the CP-even and charged scalars are described by the trilinear vertices g_HaH+H− and g_Ha∆++∆−−. Note that the contributions of H± and ∆±± to the decays h → γγ and h → Zγ are functions of the singlet vev v1, in contrast to what happens in the type-II seesaw model with explicit violation of lepton number. According to eq. (26), the dimensionless parameters λi and βi can change the sign of the couplings g_HaH+H− and g_Ha∆++∆−−; hence the contribution of the charged scalars to h → γγ and h → Zγ may be either constructive or destructive.
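For completeness, the corresponding two-body widths take the standard form shown below, assuming that g_{H_iJJ} and g_{H_2H_1H_1} denote the full trilinear vertices; the overall normalization depends on the conventions used in the paper and is therefore indicative only.
\[
\Gamma(H_i\to JJ)\;=\;\frac{g_{H_iJJ}^{2}}{32\pi\, m_{H_i}},
\qquad
\Gamma(H_2\to H_1H_1)\;=\;\frac{g_{H_2H_1H_1}^{2}}{32\pi\, m_{H_2}}
\sqrt{1-\frac{4 m_{H_1}^{2}}{m_{H_2}^{2}}}\; .
\]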
For the computation of the decay widths h → γγ and h → Zγ we use the expressions and conventions given in Ref. [51]. The decay width Γ(Ha → γγ) is expressed in terms of the Fermi constant G_F, the fine structure constant α, and form factors X_i^j, with τ_x = 4 m_x²/m_Z². Here N_c^F and Q_F denote, respectively, the number of colors and the electric charge of a given fermion. The one-loop function f(τ) is defined in appendix B. The parameters Ca correspond to the Standard Model Higgs couplings in eq. (28).
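As a guide to the structure of this expression, a commonly quoted form of the two-photon width with fermion, W and charged-scalar loops is reproduced below; the form factors and normalizations of Ref. [51] used in the paper may differ, in particular for the charged-scalar couplings, so this is only a schematic reference.
\[
\Gamma(H_a\to\gamma\gamma)\;=\;
\frac{G_F\,\alpha^{2}\,m_{H_a}^{3}}{128\sqrt{2}\,\pi^{3}}
\left|\,
C_a\sum_f N_c^{F} Q_F^{2}\,A_{1/2}(\tau_f)
\;+\; C_a\,A_{1}(\tau_W)
\;+\sum_{S=H^{\pm},\,\Delta^{\pm\pm}} \tilde g_{aS}\,Q_S^{2}\,A_{0}(\tau_S)
\,\right|^{2},
\]
where $A_{1/2}$, $A_{1}$ and $A_{0}$ are the usual spin-1/2, spin-1 and spin-0 loop functions and $\tilde g_{aS}$ denotes the charged-scalar coupling normalized as in Ref. [51].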
The decay width Γ(Ha → Zγ), using the notation in Ref. [51], is expressed in an analogous form. In these expressions we have taken into account that v3 is very small, so that any contribution involving the triplet vev is neglected; for instance, the Feynman rule for the corresponding vertex simplifies accordingly (see Table III).
Here the form factors X_i^j are written in terms of the functions C_0^b and ∆B_0^b defined in appendix B, again assuming that v3 is negligible.
VI. TYPE-II SEESAW NEUTRAL HIGGS SEARCHES AT THE LHC
We stated above that in our study we assume m_H1 < m_H2 < m_H3; moreover, because of the ρ parameter and the astrophysical constraint on the triplet vev, we also have v3 ≪ v1, v2. We found that the smallness of v3 and the perturbativity condition of the potential lead to a very small mixing between the mass eigenstate H3 and the CP-even components of the fields σ and Φ; in other words, the angles α13 and α23 must lie close to 0 or π. As a result, we obtain an extra mass relation, derived from eq. (24) by using eq. (25) and the fact that α13,23 ≃ 0 (or π). In addition, also as a result of α13,23 ≃ 0 (or π), we find that the coupling of H3 to the Standard Model states is negligible. In Fig. 12 of appendix A we give a schematic illustration of the mass profile of the Higgs bosons in our model. The mass spectrum and composition are summarized in Table II and provide a useful picture for the following analyses.
A. Analysis (i)
In this case we have taken the isotriplet vev v3 = 10−5 GeV, automatically safe from the constraints stemming from astrophysics and the ρ parameter. We have also considered a fixed mass spectrum for the heavy scalars, with m_∆±± = 500 GeV (see below), and varied the parameters as v1 ∈ [100, 2500] GeV, α12 ∈ [0, π] and α13,23 = δα (or π − δα), where 0 ≤ δα < 0.1. As described in section IV, we must enforce the LEP constraints on the lightest CP-even Higgs H1 and the LHC constraints on the heavier scalars. The near mass degeneracy of H3, A, H± and ∆±± ensures that the oblique parameters are not affected.
In analogy to the type-II seesaw model with explicit lepton number violation we expect that, because of v 3 < 10 −4 GeV, the doubly-charged scalar predominantly decays into same sign dileptons [41,44,45] and that m ∆ ±± = 500 GeV is consistent with current experimental data, see subsection IV D.
We show in Fig. 3 the mass of the lightest CP-even scalar as a function of the absolute value of its coupling to the Standard Model states, |C1| in eq. (28). The blue region corresponds to the LEP exclusion region, and the green (red) one is the LHC allowed (excluded) region according to the signal strengths 0.8 < µ_XX < 1.2. The presence of light charged scalars can significantly enhance the diphoton channel of the Higgs [39]. Fig. 4 shows the correlation between the signal strength µ_ZZ and the signal strengths µ_γγ and µ_Zγ, with µ_γγ ≲ 1.2 for charged Higgs bosons of 500 GeV. Note that µ_γγ and µ_Zγ may exceed one due to the new contributions of the singly and doubly charged Higgs bosons. Figure 4: Analysis (i). On the left, we show the correlation between µ_ZZ and µ_Zγ; on the right, the correlation between µ_ZZ and µ_γγ. The color code is as in Fig. 3.
The invisible decays of the Higgs bosons, characteristic of the model, turn out to be correlated with the visible channels, represented in terms of the signal strengths, as shown in Fig. 5. Note that the upper bound on the invisible decays of a Higgs boson with a mass of 125 GeV is found to be BR(H2 → Inv) ≲ 0.2. This limit is stronger than those provided by the ATLAS [52] and CMS [53] collaborations: ATLAS has set an upper bound on BR(H → Inv) of 0.28, while CMS reported an observed (expected) upper limit on the invisible branching ratio of 0.58 (0.44), both at 95% C.L.
In Fig. 6 we depict the correlation between the invisible branching ratio of H2 and that of the lightest scalar boson H1. As can be seen, H1 can decay 100% into the invisible channel (majorons). Finally, as mentioned above, we obtained that the reduced coupling of H3 to the Standard Model states is C3 ∼ O(10−7), so that it is basically decoupled. As a result, its invisible branching ratio is essentially unconstrained, 10−5 ≲ BR(H3 → Inv) ≤ 1. On the other hand, we find that the constraint coming from the LHC on the pseudo-scalar A with a mass of 500 GeV is automatically satisfied as well.
B. Analysis (ii)
We now turn to the other case of interest, case (ii), again with v3 = 10−5 GeV. We scanned over v1 ∈ [100, 2500] GeV, α12 ∈ [0, π] and α13,23 = δα (or π − δα), with 0 ≤ δα < 0.1 (eq. (42)). As already mentioned, in this case we only have to take into account the constraints coming from Run 1 of the LHC at 8 TeV, see Table I; in practice we assume µ_XX = 1.0 ± 0.2. We show in Fig. 7 the correlation between µ_ZZ and µ_γγ (µ_Zγ) on the left (right). As before, the allowed region is shown in green and the excluded one in red. We can see that µ_γγ ≲ 1.2 for m_H± ≃ m_∆±± = 600 GeV. In this case we find that eq. (32) (for α13,23 ≃ 0 (or π) and v3 ≪ v1, v2) simplifies at leading order, with m_H1 = 125 GeV. BR(H1 → Inv) versus the Higgs-majoron coupling g_H1JJ is shown on the right of Fig. 9. Note also, from the left panel of Fig. 9, that BR(H1 → Inv) is anti-correlated with v1, as expected. In Fig. 10 we show the correlation between the invisible branching ratio of H2 (the Higgs with a mass in the range 150 GeV < m_H2 < 500 GeV) and that of H1. We have verified that the LHC constraints on the heavy scalars (H2, H3 and A) are all satisfied. As an example, the reader can see from Fig. 11 that H2 easily passes the restrictions stemming from σ(ggH2)BR(H2 → ττ) (top left) and σ(bbH2)BR(H2 → ττ) (top right). The black continuous lines in those plots represent the experimental results from Run 1 of the CMS experiment [30]. We also found that the square of the reduced coupling of H2 to the Standard Model states is C2² ≲ 0.1 for m_H2 = [150, 500] GeV. One then finds that the experimental upper bounds set by the searches for a heavy Higgs in the H → WW and H → ZZ decay channels in [3,29] are automatically fulfilled. However, improved sensitivities expected from Run 2 may provide a meaningful probe of the theoretically consistent region, depicted in green. Also in this case, H3 is decoupled, so the restrictions on H3 and the massive pseudo-scalar A are automatically fulfilled.
VII. CONCLUSIONS
In this paper we have presented the main features of the electroweak symmetry breaking sector of the simplest type-II seesaw model with spontaneous violation of lepton number. The Higgs sector has two characteristic features: a) the existence of a (nearly) massless Nambu-Goldstone boson and b) all neutral CP-even and CP-odd, as well as singly and doubly-charged scalar bosons coming mainly from the triplet are very close in mass, as illustrated in Fig. 12 of appendix A. However, one extra CP-even state, namely H 2 coming from a doublet-singlet mixture can be light. After reviewing the "theoretical" and experimental restrictions which apply on the Higgs sector, we have studied the sensitivities of the searches for Higgs bosons at the ongoing ATLAS/CMS experiments, including not only the new contributions to Standard Model decay channels, but also the novel Higgs decays to majorons. For these we have considered two cases, when the 125 GeV state found at CERN is either (i) the second-to-lightest or (ii) the lightest CP-even scalar boson. For case (i), we have enforced the constraints coming from LEP-II data on the lightest CP-even scalar coupling to the Standard Model states and those coming from the LHC Run-1 on the heavier scalars. In case (ii), only the constraints coming from the LHC must be taken into account. Such "invisible" Higgs boson decays give rise to missing momentum events. We have found that the experimental results from Run 1 on the search for a heavy Higgs in the H → W W and H → ZZ decay channels are automatically fulfilled. However, improved sensitivities expected from Run 2 may provide a meaningful probe of this scenario. In short we have discussed how the neutrino mass generation scenario not only suggests the need to reconsider the physics of electroweak symmetry breaking from a new perspective, but also provides a new theoretically consistent and experimentally viable paradigm. | 7,865.2 | 2015-11-23T00:00:00.000 | [
"Physics"
] |
Evaluation of Serological Methods and a New Real-Time Nested PCR for Small Ruminant Lentiviruses
Small ruminant lentiviruses (SRLVs), i.e., CAEV and MVV, cause insidious infections with life-long persistence and a slowly progressive disease, impairing both animal welfare and productivity in affected herds. The complex diagnosis of SRLVs currently combines serological methods including whole-virus and peptide-based ELISAs and Immunoblot. To improve the current diagnostic protocol, we analyzed 290 sera of animals originating from different European countries in parallel with three commercial screening ELISAs, Immunoblot as a confirmatory assay and five SU5 peptide ELISAs for genotype differentiation. A newly developed nested real-time PCR was carried out for the detection and genotype differentiation of the virus. Using a heat-map display of the combined results, the drawbacks of the current techniques were graphically visualized and quantified. The immunoblot and the SU5-ELISAs exhibited either unsatisfactory sensitivity or insufficient reliability in the differentiation of the causative viral genotype, respectively. The new truth standard was the concordance of the results of two out of three screening ELISAs and the PCR results for serologically false negative samples along with genotype differentiation. Whole-virus antigen-based ELISA showed the highest sensitivity (92.2%) and specificity (98.9%) among the screening tests, whereas PCR exhibited a sensitivity of 75%.
Introduction
Maedi-Visna virus (MVV) and caprine arthritis-encephalitis virus (CAEV) belong to the family Retroviridae and the genus Lentivirus. They are members of a heterogeneous group called the small ruminant lentiviruses (SRLVs), comprising five different genotypes (A, B, C, D and E), which infect goats and sheep and cause cross- and superinfections [1][2][3][4][5]. MVV-like and CAEV-like strains belong to genotypes A and B, respectively, and have a worldwide distribution. The transmission routes for SRLVs are principally vertical, through colostrum or milk, but also horizontal, through respiratory secretions, favored by overcrowding [6]. SRLV infections persist for life, and the specific immune response mounted does not protect against disease or superinfection [7]. Only one-third of infected animals show clinical signs [8,9]. Still, SRLVs have a significant economic impact on animal production and affect animal welfare [10]. The presence of specific antibodies is useful as an indicator of infection, and therefore eradication and surveillance programs are based on the serological detection of infected animals. The most frequently used assays are the agar gel immunodiffusion (AGID) test and the enzyme-linked immunosorbent assay (ELISA) (OIE Terrestrial Manual, https://www.oie.int/fileadmin/Home/eng/Health_standards/tahm/3.07.02_CAE_MV.pdf, accessed on 6 December 2021). For the AGID test, the two major SRLV antigens used routinely are the glycoprotein 135 (gp135) and the capsid protein (p28). The specificity of the AGID test is high, but its sensitivity has been reported to be low [11][12][13]. Still, a recent study carried out on Belgian sheep and goats [13] reported a sensitivity and specificity of 100% when combining the results of two AGID kits. Several ELISA tests have been developed and described in the literature using whole-virus preparations, recombinant proteins or synthetic peptides, mostly designed as indirect but also as competitive assays (according to the OIE Terrestrial Manual). High sensitivity, efficient handling and ease of interpretation of the results are the major advantages of ELISAs compared to AGID tests. For this reason, ELISA tests are preferred for surveillance and eradication programs. A major drawback of serology is the high genetic variability of the SRLVs, which translates into high antigenic diversity [14,15]. Therefore, the serological diagnosis of SRLV remains quite challenging [7,12,16] in the absence of a "gold standard" [17]. The high genomic variability also complicates the design of tools for the molecular detection of SRLV. Furthermore, apart from the low viral load, no free virus is detectable in the blood of naturally infected animals using PCR methods. Therefore, these methods mainly target the provirus in peripheral blood leukocytes [18,19].
The current SRLV diagnostic procedure in Switzerland is based on a protocol consisting of an array of four sequentially used serological tests [20] performed on individual samples rather than on herd samples. The sampling follows field criteria, e.g., animal purchase, animal transport or clinical suspicion. The serum samples are initially screened with the IDEXX CAEV/MVV Total Ab Test (Idexx Laboratories, Liebefeld, Switzerland), a whole-virus antigen-based indirect ELISA [21], and the Small Ruminant Lentivirus Antibody Test Kit VMRD (VMRD, Pullman, WA, USA), a gp135-based competitive ELISA [22]. Subsequently, an immunoblot based on whole-virus antigen is performed as a confirmatory assay for seropositive samples [23], followed by the SU5-ELISA set for serotype differentiation into the subtypes A1, A3, A4, B1 and B2 [24]. As far as the voluntary MVV control program in sheep is concerned, the sera are initially screened for SRLV antibodies using the Eradikit® SRLV Screening Kit (In3 diagnostic, Via B.S. Valfrè, 18, 10121 Torino, Italia) before confirmation as described above. This thorough diagnostic procedure is expensive, laborious and time-consuming, and still quite frequently leads to inconclusive or contradictory results. Therefore, the aim of this work was to evaluate more closely the performance of the single and combined tests used, with the goal of optimizing the overall procedure. Additionally, we designed a nested real-time PCR targeting highly conserved genomic regions [25] and evaluated the benefit of including this tool in the current diagnostic strategy.
Determination of SRLV True Positive Standard
The results obtained for the 290 samples used in the serological screening, confirmation and genotype differentiation tests and in the nested real-time PCR are visualized in a graphical heatmap format in Table 1. The colors reflect the intensity of each serological reaction, with values ranging from dark to light blue to white (negative) and from light to dark red (positive). In general, positive serological reactions were seen in all flocks investigated, indicating the presence of SRLV infections. Overall, however, a remarkable discordance between the results obtained by the different serological tests was evident (*), preventing the recognition of clear serological reactivity patterns within flocks. The highest agreement among the screening and confirmatory test results was seen in the samples of flocks 2, 3 and 5, whereas flock 10 exhibited discordance among the results of the screening tests as well as between screening and confirmation (Table 1).
In order to evaluate the performance of each test in the absence of a generally accepted gold standard, the SRLV true positive status was defined using a composite reference standard, as recommended by others [13,16,26-28]. For that purpose, the concordance of the serological results across the different screening tests was analyzed, as shown in the Venn diagram (Figure 1). A total of 137 samples were positive with all three commercial tests and were therefore considered serologically true positive. The IDEXX, VMRD and ERADIKIT ELISA tests detected 178, 177 and 170 positive samples, respectively. The results of the immunoblot (IB), formerly used as a serological confirmation test, and of the real-time PCR, used as an additional truth standard, were included for comparative purposes (Figure 2). In the non-overlapping areas, one of four single-positive IDEXX ELISA samples showed a positive IB result, and three of them showed a positive real-time PCR result. None of seven single-positive VMRD ELISA samples were positive by IB; however, three were positive by PCR. Finally, none of nine single-positive ERADIKIT ELISA samples were positive by IB, and only one was positive by PCR. The proportion of real-time PCR positive samples was highest among single-positive IDEXX ELISA samples. In the regions where the IDEXX ELISA overlapped with either the ERADIKIT ELISA or the VMRD ELISA, more IB and real-time PCR positive results were observed than in the overlapping region between the ERADIKIT and the VMRD ELISA. ELISA reactivity tended to be higher for samples in the overlapping areas of the Venn diagram than for samples in the non-overlapping regions (data not shown). Based on these results, we arbitrarily defined a positive reaction with at least two screening ELISAs as the composite standard rule for serological truth, complemented with real-time PCR as an additional, independent truth standard.
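As a minimal illustration of how such an "at least two of three" composite reference standard can be encoded, the following Python sketch uses hypothetical sample identifiers and results rather than the study data.

```python
import pandas as pd

# Hypothetical screening results (1 = positive, 0 = negative); not the study data.
screening = pd.DataFrame(
    {
        "IDEXX":    [1, 1, 0, 1, 0],
        "VMRD":     [1, 0, 0, 1, 1],
        "ERADIKIT": [1, 1, 0, 0, 0],
    },
    index=["S1", "S2", "S3", "S4", "S5"],
)

# Composite standard: positive in at least two of the three screening ELISAs.
screening["composite_positive"] = screening[["IDEXX", "VMRD", "ERADIKIT"]].sum(axis=1) >= 2

print(screening)
```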
Performance of the Serological Tests Based on the Composite Truth Standard vs. Real-Time PCR
Sensitivity and specificity were calculated by crossing the results of each test with the composite truth standard (Table 2). The IDEXX ELISA exhibited a sensitivity of 92.2% and a specificity of 98.9%, with a positive (PPV) and a negative (NPV) predictive value of 99.4% and 85.9%, respectively, whereas the VMRD ELISA's sensitivity, specificity, PPV and NPV were 90.1%, 95.7%, 97.7% and 82.2%, respectively. The lowest performance was observed with the ERADIKIT ELISA, with a sensitivity of 84.4%, a specificity of 91.3%, a PPV of 95.3% and an NPV of 73.7%. The distribution of the results of the commercial ELISA tests is shown as dot plots in Figure 2, revealing the IDEXX ELISA as the clearest discriminant between the expected negative and positive samples, followed by the VMRD and the ERADIKIT ELISA tests. The immunoblot, which was formerly used as a confirmatory test, showed a low sensitivity of 59.2%, a specificity of 92.4%, a PPV of 94.2% and an NPV of 52.2%. The performance of the nested real-time PCR as the new, independent truth standard exceeded that of the immunoblot, with a sensitivity of 75.5%, a specificity and PPV of 100%, and an NPV of 66.2% (Table 3). Comparing the results by animal host and by viral species (Tables 4 and 5), the immunoblot exhibited more false negative samples in sheep (50%) than in goats (5.9%, Table 4). From the viewpoint of the viral species, the immunoblot results showed a clear tendency toward more false negatives in samples identified as MVV than in samples identified as CAEV by PCR (Table 5). No significance tests were applied due to the small number of goat samples.
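The underlying 2x2 calculation can be reproduced with a few lines of code; the sketch below uses placeholder counts rather than the counts of this study.

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 table against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Placeholder counts for one test crossed against the composite standard.
metrics = diagnostic_performance(tp=90, fp=4, fn=10, tn=86)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```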
Serological and Molecular Differentiation between MVV and CAEV Infections
The SU5 ELISA has been used to date to differentiate between MVV and CAEV infections in seropositive samples. According to the composite truth standard, 24 samples were categorized as false positive. Because it serves as a differentiation test, we restricted the evaluation of the SU5 ELISA to the serological differentiation between MVV and CAEV. In total, 151 out of 184 serologically positive samples could be characterized as MVV- or CAEV-positive. Samples without positive SU5 test results were classified as "SRLV-positive" (Table 1). Using the nested real-time PCR for differentiation, a total of 147 samples were classified either as MVV or as CAEV, whereas in six samples coinfection with both MVV and CAEV was detected. For a total of 57 real-time PCR positive samples, the classification was confirmed by sequencing (Figure 3). The differentiation between MVV and CAEV by real-time PCR was 100% concordant with the classification of the sequences by both NCBI BLAST (basic local alignment search tool [29]) and phylogenetic analysis. To evaluate the agreement between the SU5 ELISA and the real-time PCR, the results of 116 samples available for both methods were compared. The real-time PCR results were considered true on the basis of the sequence confirmation. Fifty-nine and 51 samples were differentiated as CAEV and MVV, respectively, by real-time PCR (Figure 2d). Good concordance between the real-time PCR and SU5 ELISA results (48 out of 51 samples) was observed in the samples classified as MVV by real-time PCR (Figure 2d). However, 19 out of 59 samples classified as CAEV were misclassified as MVV by the SU5 ELISA. Furthermore, the real-time PCR was able to detect coinfection with both virus types in six samples (Table 1).
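The agreement between the two typing methods can be summarized with a simple cross-tabulation, as sketched below with hypothetical paired calls rather than the study data.

```python
import pandas as pd

# Hypothetical paired typing calls for the same samples (not the study data).
pcr_call = pd.Series(["MVV", "MVV", "CAEV", "CAEV", "CAEV", "MVV"], name="real-time PCR")
su5_call = pd.Series(["MVV", "MVV", "MVV",  "CAEV", "CAEV", "MVV"], name="SU5 ELISA")

# Cross-tabulation: rows = PCR call (treated as truth), columns = SU5 ELISA call.
table = pd.crosstab(pcr_call, su5_call)
concordance = (pcr_call == su5_call).mean()

print(table)
print(f"Overall concordance: {concordance:.1%}")
```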
Phylogenetic Analysis of Sequences
The genetic relatedness among the SRLVs was analyzed using a 200-bp LTR-gag sequence fragment located within the real-time PCR target sequence. A total of 57 sequences, supplemented with 14 closely related sequences retrieved from GenBank, were analyzed phylogenetically (Figure 3). Two main clusters with clear separation between genotype A and genotype B, according to the reference sequences, were observed in the phylogenetic tree; the clusters were therefore designated genotype A and genotype B (Figure 3). Moreover, as mentioned above, the classification as MVV or CAEV by real-time PCR was in complete agreement with the genotype classification in the phylogenetic tree.
The genetic relatedness of the sequences characterized was higher within flocks or within given geographic regions. The sequences from Italian samples in the genotype B cluster split into two different clades, closely related to Italian sequences previously characterized as subtype B2 (MG554402) and B3 (JF502417) strains [30], respectively. Additionally, four Italian samples were placed as separate groups in the genotype A cluster. These sequences were closely related to representative sequences of the subtypes A19 (MH374287) and A8 (MH374284), respectively, which similarly originated from Italy. The sequences of Croatian samples were also present in both the genotype A and B clusters of the tree. Two sequences located in genotype B were found to be related to the subtype B1 reference sequence of the strain CAEV-Co (M33677.1) and three to the representative subtype B1 sequence MG554410. Sequences of genotype A were located in three different clades and were closely related to the representative sequences of the subtypes A18 (MG554409) and A4 (AY445885). Interestingly, due to sequence divergence, two distant clades were related to the same subtype A18 (MG554409) sequence. This divergence was also reflected in the percentages of identity among the samples of the individual clades (data not shown) and, additionally, in the different ranges of identity observed between the two clades and the subtype A18 (MG554409) sequence (Figure 4). Given the most commonly accepted SRLV subtyping method, it would be interesting to sequence the gag gene of these samples, although this was beyond the focus of this work. The 17 German samples were sent for diagnostic and epidemiological purposes. The only two positive goat samples had repeatedly tested seropositive in a German SRLV surveillance program. Our serological and genotyping results were in complete concordance with those of the Friedrich-Loeffler-Institut (FLI, personal communication). The sequence of the German sample classified as genotype A was closely related to the subtype A2 sequence (MT993908), and that classified as genotype B was linked to the sequence of the subtype B1 CAEV-Co strain.
The Swiss samples were almost exclusively from sheep, with only two samples from goats, both of which tested negative with all the protocols used. Sequence analysis revealed a balanced distribution of the Swiss sheep samples between the genotype A and B clusters. Samples with genotype B sequences were exclusively found in the clade containing the subtype B1 CAEV-Co reference strain, whereas the genotype A samples were located in four different clades. The closest relatives of these four clades were sequences of the subtypes A18 (MG554409), A4 (AY445885) and A2 (MT993908), respectively (Figure 3).
Discussion
The diagnosis of SRLV infections is based on the serological detection of infected animals [3,12,31-37]. Due to low viral load and genomic heterogeneity, direct detection of the virus in blood is considered less efficient [3,12,16,32,33,38-45], though not negligible in view of delayed seroconversion [18,46-53]. In this work, using a panel of 290 sera from animals originating from different European countries, we aimed to improve and simplify the quite complex diagnostic protocol established in our diagnostic laboratory, which consists of several serological tests run in parallel. Heat-mapping of the results from three commercial screening ELISAs, the immunoblot as a confirmatory assay and five SU5 peptide ELISAs for genotype differentiation revealed a quite complex and challenging pattern of reactivity with a number of inconsistent results. It was therefore crucial to define, based on the given serological results, an overall truth standard. We based this decision on the well-accepted concept of a "composite standard" [13,16,26-28] defined by the three screening ELISAs used. A useful complement to this purely serological approach was the addition of a newly developed, highly sensitive PCR consisting of a primary round of conventional PCR followed by a nested real-time PCR able to differentiate between CAEV and MVV thanks to reliably discriminating probes with 100% specificity. On serial dilutions of plasmids containing the target sequences, the nested approach exhibited at least 10-fold higher sensitivity than simple real-time PCR (data not shown). This composite truth standard for determining the infection status of a given animal allowed us to re-evaluate the current diagnostic procedure performed in our laboratory. The IDEXX ELISA showed the highest sensitivity and specificity of the three screening tests, and the combination of the IDEXX and VMRD ELISAs detected the most "true positives" compared to the IDEXX/ERADIKIT and VMRD/ERADIKIT combinations. The immunoblot based on CAEV whole virus used for confirmation exhibited an unsatisfactory sensitivity of 59.2%. This is in accordance with its tendency toward inconclusive results due to an unexplained nonspecific reactivity of most sera with the capsid protein p25 of CAEV seen over the years (data not shown). Furthermore, we show a weaker reactivity of the immunoblot in MVV than in CAEV infections (Table 5), which does not appear to be due to either the host species or the antigen used (not shown). Given this rather poor performance, and since the immunoblot is a laborious, demanding and antigen-consuming confirmatory assay, we would recommend its exclusion from the diagnostic procedure.
The PCR exhibited an overall sensitivity (75.5%) nearly equal to that of the SU5 ELISA set (80.7%), combined with 100% specificity and, consequently, unambiguous differentiation of the infection source (CAEV or MVV). It therefore seems appropriate to replace the labor-intensive and costly SU5 peptide ELISA approach, which consists of five ELISAs, with the PCR protocol as a differentiation tool. The application of the PCR in the diagnostic procedure has additional advantages, namely the possibility of detecting seronegative animals and of sequencing the amplification products for molecular epidemiological analysis. Furthermore, in our laboratory experience the chance of achieving specific amplification in a seropositive sample can be considerably higher (up to 85%, not shown), apparently depending on sample quality. This point, together with the determination of the analytical sensitivity of the nested real-time PCR, should be elucidated in further applied studies.
As also shown by others [12,43,54-62], the LTR sequence fragments obtained in this work allowed quite a robust and plausible phylogenetic reconstruction of the taxonomy of the lentiviruses detected in samples originating from a number of Western European countries. Sequencing of the whole genome, or of other fragments thereof, to exclude homologous recombination could be a useful complement in future work.
In conclusion, given these results, the diagnosis of SRLV infection by a reference laboratory could be reduced to the combined application of well-established screening ELISAs complemented with the PCR described here as a tool for differentiating the source of infection, i.e., CAEV or MVV. Finally, with only a minor loss of overall performance (not shown), the screening could be reduced to two ELISA kits, e.g., the IDEXX ELISA combined with the VMRD ELISA, plus the PCR.
Samples
EDTA-anticoagulated whole blood samples from 290 animals originating from different countries, including Croatia, Germany, Italy and Switzerland, were analyzed (Table 1). Animals were sampled during 2019 and 2020 for serological and molecular diagnosis of SRLV infections. All flocks had previously tested positive for SRLV infection. The Croatian samples consisted of 50 samples from goats of 5 flocks (10 each). The Italian samples consisted of 51 samples from sheep of 3 flocks located in the south of Italy (11, 20 and 20 samples, respectively). The German samples consisted of 17 samples from goats of one flock. The samples originating from Switzerland had been sent to our laboratory for SRLV diagnosis and consisted of 170 sheep and 2 goat samples from 18 different flocks, with one or more samples per flock (Table 6).
Serology
Blood samples were analyzed for the presence of SRLV antibodies using three commercial ELISA kits in parallel, namely the IDEXX ELISA (IDEXX CAEV/MVV Total Ab Test, Idexx Laboratories, Liebefeld, Switzerland; whole-virus antigen-based indirect ELISA), the VMRD ELISA (Small Ruminant Lentivirus Antibody Test Kit, VMRD, Pullman, WA, USA; genotype B gp135 competitive ELISA) and the ERADIKIT ELISA (Eradikit® SRLV Screening Kit, In3Diagnostic, Torino, Italy; gag and env peptide-based indirect ELISA), followed by immunoblotting and an in-house SU5 peptide ELISA test [24]. The cut-off values for the commercial ELISA tests were determined according to the manufacturers' guidelines.
The immunoblot confirmation was based on the detection of the viral capsid (p25), matrix (p15) and nucleocapsid (p18) proteins [23]. Reactive bands were visually evaluated and scored from 1 to 3 according to band intensity. Samples showing staining of at least two bands, or a single band scored 2 or higher, were considered positive.
The SU5 peptide ELISA was used for genotype differentiation. It consisted of five ELISA tests containing synthetic peptides of the immunodominant SU5 region of the SRLV subgenotypes A1, A3, A4, B1 and B2. Samples with an OD value equal to or above 30% were considered positive.
Multiple Sequence Alignments and Design of Primers and Probes
Primers and probes for the nested real-time PCR system are shown in Table 7. A whole-genome sequence alignment comprising 52 published SRLV sequences retrieved from the National Center for Biotechnology Information (NCBI) was used for the selection of conserved genomic regions and the design of the nested real-time PCR system. Geneious Prime software version 2020 (https://www.geneious.com, accessed on 6 December 2021) was used for the alignment, and Primer Express® Software version 3.0.1 (Applied Biosystems) was used for the design of primers and probes.
The forward primers (outer primer-F) of the first amplification step were designed to anneal to the highly conserved lentiviral tRNA-Lys primer-binding site (PBS) located in the leader region of the lentiviral genome. The similarly conserved annealing regions of the reverse primers (outer primer-R) are located 87 and 108 bp downstream of the predicted gag ATG initiation codon of genotype A (GenBank Acc. No. M60610) and genotype B (GenBank Acc. No. M33677), respectively. The forward primers for the genotype-specific real-time PCRs are located in a previously described conserved region encompassing a stem-loop structure that contains the dimer initiation site (DIS), just upstream of the major splice donor (MSD) of the small ruminant lentiviruses [63,64]. The reverse primers and probes are located immediately downstream of the predicted gag start codon. The target sequences of the real-time PCR reverse primers and probes and those of the outer reverse primers used have been described previously [65,66]. A schematic diagram showing the localization of the nested real-time PCR target sequences is shown in Figure 3. Table 7. Sequences of primers and probes for the nested real-time PCR system. IUPAC codes were used to indicate degenerate primers. (a) F indicates forward primer, R reverse primer and P fluorescent probe; (b) GenBank accession number M33677 was used as a reference sequence to indicate the positions of the primer and probe binding sites; (c) GenBank accession number used as a reference sequence.
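A minimal sketch of how conserved regions can be screened for in such a whole-genome alignment is given below. It assumes an aligned FASTA file (hypothetical name) and a simple per-column conservation score; it does not reproduce the actual Geneious/Primer Express workflow used in this study.

```python
from collections import Counter
from Bio import AlignIO

# Hypothetical alignment of published SRLV genomes (FASTA); the file name is an assumption.
alignment = AlignIO.read("srlv_whole_genomes.aln.fasta", "fasta")

def column_conservation(column: str) -> float:
    """Fraction of sequences sharing the most common residue (gaps count against the score)."""
    counts = Counter(column.upper())
    counts.pop("-", None)
    return max(counts.values(), default=0) / len(column)

length = alignment.get_alignment_length()
scores = [column_conservation(alignment[:, i]) for i in range(length)]

# Report 25-bp windows whose mean conservation exceeds 0.95 as primer/probe candidates.
window = 25
for start in range(0, length - window + 1):
    mean_score = sum(scores[start:start + window]) / window
    if mean_score > 0.95:
        print(f"conserved window: {start + 1}-{start + window} (mean conservation {mean_score:.2f})")
```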
DNA Extraction and Two-Step Nested Real-Time PCR System
Seven hundred and fifty microliters of EDTA blood were treated with ammonium chloride/Tris buffer (0.14 M NH4Cl, 0.17 M Tris, pH 7.2) to obtain the buffy coats [67]. Buffy coats were stored at −20 °C or used directly for DNA extraction. DNA extraction was performed using the Qiagen DNeasy® Blood & Tissue kit (Qiagen, Hilden, Germany), and the DNA was finally eluted in 100 µL of elution buffer according to the manufacturer's protocol.
The nested real-time PCR was carried out in two successive amplification steps. The first-step reaction, a conventional qualitative PCR, contained 12.5 µL of HotStar Taq Master Mix (HotStarTaq DNA Polymerase kit, Qiagen GmbH, Hilden, Germany), 300 nM of each primer and 5 µL of the extracted DNA. Amplification started with activation of the polymerase at 95 °C for 15 min, followed by 40 cycles of 95 °C for 20 s and 60 °C for 30 s. All products of the first PCR were then tested in parallel with the genotype-specific real-time PCRs (second step of the nested real-time PCR protocol) for sensitive detection and discrimination between genotypes A and B.
Real-time PCR reactions contained 12.5 µL of TaqMan™ Universal PCR Master Mix (Applied Biosystems, Life Technologies), 900 nM of each primer, 200 nM probes and 5 µL of the PCR product of the first step. Amplification profiles consisted of a hold stage of 20 s at 95 °C and a PCR stage of 40 cycles of 95 °C for 15 s and 60 °C for 1 min. Thermal cycling was performed with a 7300 Real-Time PCR System (Applied Biosystems, Life Technologies).
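For clarity, the two cycling profiles described above can also be written out as a plain configuration structure; the Python sketch below simply transcribes the parameters given in the text.

```python
# Thermal-cycling profiles of the two-step nested real-time PCR, transcribed from the text above.
NESTED_PCR_PROFILES = {
    "step1_conventional_pcr": {
        "polymerase_activation": {"temp_c": 95, "time": "15 min"},
        "cycles": 40,
        "per_cycle": [{"temp_c": 95, "time": "20 s"}, {"temp_c": 60, "time": "30 s"}],
    },
    "step2_genotype_specific_real_time_pcr": {
        "hold": {"temp_c": 95, "time": "20 s"},
        "cycles": 40,
        "per_cycle": [{"temp_c": 95, "time": "15 s"}, {"temp_c": 60, "time": "1 min"}],
    },
}

for step, profile in NESTED_PCR_PROFILES.items():
    print(f"{step}: {profile['cycles']} cycles")
```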
Sequencing and Sequence Analysis
Samples with positive nested real-time PCR results were submitted for sequencing of the real-time PCR target region (Microsynth AG, Balgach, Switzerland). The sequences obtained were aligned using the Clustal Omega algorithm implemented in Geneious Prime software (Geneious Prime 2020.2.4). The phylogeny was inferred using the maximum likelihood method and the Tamura-Nei substitution model. Phylogenetic analyses were conducted using MEGA X [68]. The sequences from PCR products were deposited in GenBank under the following accession numbers: OL456240, OL456241 and OL449029 to OL449084.
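For illustration only, a simplified version of this sequence analysis can be sketched in Python with Biopython; note that the neighbor-joining tree below is a quick distance-based stand-in for the maximum likelihood/Tamura-Nei analysis performed in MEGA X, and the input file name is an assumption.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical file of aligned LTR-gag fragments (~200 bp); the file name is an assumption.
alignment = AlignIO.read("ltr_gag_fragments.aln.fasta", "fasta")

# Distance-based neighbor-joining tree: suitable for a first look, not a substitute
# for the ML/Tamura-Nei analysis used in the study.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

Phylo.draw_ascii(tree)
Phylo.write(tree, "ltr_gag_nj_tree.nwk", "newick")
```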
Data Analysis
Sensitivity and specificity of the serological tests and the PCR were calculated using the Single Sample Binary Diagnostic Tests procedure of the NCSS software (NCSS Statistical Software 2019, NCSS, LLC, Kaysville, UT, USA, ncss.com/software/ncss, accessed on 6 December 2021). Funding: This research received no external funding. This study was conducted as part of our core functions as the reference laboratory for the small ruminant lentiviruses in Switzerland.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: All sequencing data obtained were submitted to NCBI database. Accession numbers are given. | 6,680.8 | 2022-01-21T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
Do innate killing mechanisms activated by inflammasomes have a role in treating melanoma?
Abstract Melanoma, like many other cancers, undergoes a selection process during progression that limits many innate and adaptive tumor control mechanisms. Immunotherapy with immune checkpoint blockade overcomes one of the escape mechanisms, but if the tumor is not eliminated, other escape mechanisms evolve that require new approaches for tumor control. Some of the innate mechanisms that have evolved against infections with microorganisms and viruses are proving to be active against cancer cells but require a better understanding of how they are activated and what inhibitory mechanisms may need to be targeted. This is particularly so for inflammasomes, which have evolved against many different organisms and which recruit a number of cytotoxic mechanisms that remain poorly understood. Equally important is an understanding of where these mechanisms will fit into existing treatment strategies and whether existing strategies already involve these innate killing mechanisms.
| INTRODUCTION
The introduction of drugs that targeted the BRAFV600E mutation in the MAPK signal pathway and advances in immunotherapy based on blockade of immune checkpoints on lymphocytes ushered in a new era in treatment of melanoma (Luke, Flaherty, Ribas, & Long, 2017).
Despite these advances, not all patients respond to such treatments, and even among responders, therapy resistance eventually occurs in the majority of patients. Multiple causes of resistance to immunotherapy have been defined (Ribas & Wolchok, 2018; Sharma, Hu-Lieskovan, Wargo, & Ribas, 2017), including defects in antigen presentation (Sade-Feldman et al., 2017) and low T-cell infiltration into tumors. Similarly, resistance to targeted therapies involves a diverse array of causes, including reactivation of the MAPK pathway (Song, Piva, et al., 2017) and epigenetic changes (Hugo et al., 2015; Shaffer et al., 2017).
Recent studies on resistance mechanisms have placed increasing importance on the plasticity of some melanomas and on adaptive changes induced by treatment (Bai, Fisher, & Flaherty, 2019). These concepts have led to classifications based on different states of differentiation and to the recognition that treatment resistance can involve dedifferentiation of melanoma to relatively undifferentiated states. These effects of therapy with MAPKi had been shown previously in melanoma (Hugo et al., 2015). Similarly, resistance to immunotherapy had been shown in previous studies to be associated with dedifferentiation and loss of melanoma antigens due to TNF production by T cells responding to the tumor (Landsberg et al., 2012). Similar findings were reported in adoptive T-cell therapy in melanoma, with loss of the differentiation antigens MART1 and gp100 (Mehta et al., 2018).
An important finding in studies on melanoma resistance was that certain dedifferentiated resistant melanomas were nevertheless sensitive to anti-cancer drugs when their expression profiles were matched to drugs in a pharmacogenomics portal (Seashore-Ludlow et al., 2015). In particular, dedifferentiated melanoma was sensitive to ferroptosis-inducing drugs. These studies have raised the question of whether other innate cytotoxic mechanisms may have distinct resistance mechanisms that can be targeted, particularly in melanoma not responding to targeted therapy or immunotherapy. The following sections review the studies on ferroptosis and then review evidence that pyroptotic cell death induced by inflammasomes may also provide novel approaches against resistant melanoma.
| FERROPTOSIS AS A MODEL OF INNATE CELL DEATH MECHANISMS IN CANCERS
Ferroptosis is a non-apoptotic form of cell death resulting from iron-dependent lipoxygenase-mediated peroxidation of polyunsaturated fatty acids in cell membranes (Dixon, 2017). These enzymes are normally inhibited by glutathione-dependent anti-oxidants such as glutathione peroxidase 4 (GPX4; Yang et al., 2016). Ferroptosis can therefore be induced by inhibition of GPX4. The activity of GPX4 depends on membrane transporters that import the cysteine needed for production of glutathione. The transporters can be inhibited by drugs like erastin or sorafenib, and GPX4 itself by RSL3 and FIN56. Anti-oxidants like ferrostatin-1 also inhibit ferroptosis. The significance of GPX4 in treatment resistance of cancer cells was revealed in studies on drug-tolerant persister cells from several types of cancer that were found to be vulnerable to inhibition of GPX4 (Hangauer et al., 2017; Viswanathan et al., 2017). Further interest in ferroptosis was generated by the discovery that immunotherapy with anti-CTLA4 and anti-PD1 was inhibited by the anti-oxidant liproxstatin-1, indicating that although ferroptosis is an endogenous cytotoxic mechanism, it can also be induced by T cells. The mechanism of induction of ferroptosis by T cells appeared to involve interferon (IFN)-γ-mediated downregulation of the glutamate-cystine transporter system required for production of GPX4 (Zitvogel & Kroemer, 2019).
| INFLAMMASOME-INDUCED CELL DEATH IN CANCERS
These studies on ferroptosis-induced death in treatment-resistant cancer cells raise the possibility that other innate cell death mechanisms may also be recruited against melanoma. These include regulated pathways such as necroptosis and pyroptosis. Necroptosis is a programmed form of necrosis, showing morphological features similar to necrosis, that depends on activation of the receptor-interacting serine/threonine kinases RIPK1 and RIPK3 and of the mixed lineage kinase domain-like pseudo-kinase (MLKL). The latter is recruited to phosphatidylinositides and oligomerizes in the plasma membrane (Kaczmarek, Vandenabeele, & Krysko, 2013; Tang, Kang, Berghe, Vandenabeele, & Kroemer, 2019). Necroptosis does not involve caspases but can be induced by death receptor ligands such as TNF and Fas when caspase-8 is inhibited or expressed at low levels. RIPK3 mRNA and protein were found to be absent or poorly expressed in most metastatic melanomas (Geserick et al., 2015), which is a limitation for targeting necroptosis in new treatment initiatives.
Pyroptosis is also a regulated form of cell death resulting from activation of inflammasomes as outlined in several reviews (Lee & Kang, 2019;Moossavi, Parsamanesh, Bahrami, Atkin, & Sahebkar, 2018;Xia et al., 2019). Inflammasomes have been mainly of interest in defense of cells against infections. However, their association with inflammation and cell death has created interest in their involvement in a wide range of diseases such as obesity, dementia, diabetes, and cancers (Guo, Callaway, & Ting, 2015).
There are many different types of inflammasomes, but in general they are cytosolic protein complexes composed of sensors that recognize microbial components and products of cell injury, the adaptor protein apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC), and caspase-1, which binds to ASC. The formation of such a complex leads to the activation of caspase-1, which induces the cleavage and secretion of the pro-inflammatory cytokines interleukin-1β (IL-1β) and IL-18 (Lamkanfi & Dixit, 2014). The sensors are categorized according to their structural characteristics into nucleotide-binding domain-like receptors (NLRs) and absent in melanoma 2 (AIM2)-like receptors (ALRs). The NLR sensors can be specific for certain bacterial products, such as the lethal toxin of Bacillus anthracis in the case of NLRP1 and bacterial flagellin in the case of NLRC4. NLRP3 appears less specific and can be activated by crystalline structures such as uric acid crystals and by a range of bacterial products and viruses (Lamkanfi & Dixit, 2014).
Activation of NLRP3 is believed to occur in two stages, with an initial priming step resulting from activation of NF-kB, for example by toll-like receptors (TLRs), which increases the levels of proteins in the complex. Deubiquitination of NLRP3 by deubiquitinating enzymes (Py, Kim, Vakifahmetoglu-Norberg, & Yuan, 2013) and phosphorylation steps mediated by c-Jun N-terminal kinase (JNK1; Song, Liu, et al., 2017) are also involved. Following the priming step, activation of the complex can occur in response to a number of stimuli, including particulate matter, bacterial and viral products, and ATP. It has been proposed that the common mechanism may be potassium (K+) efflux and low intracellular K+ levels. This would be consistent with the proposed mechanism of activation by the antibiotic nigericin, which is believed to cause efflux of K+ in exchange for H+ (Próchnicki, Mangan, & Latz, 2016). Another protein involved in assembly of the inflammasome is the enzyme NEK7, which binds to the LRR domain of NLRP3. This forms a connection with adjacent NLRP3 proteins, allowing oligomerization to occur (Nozaki & Miao, 2019).
The AIM2 sensor was discovered in experiments designed to suppress tumorigenicity of melanoma cells by transfer of chromosome 6 from normal cells. One of the resulting differentially expressed genes was AIM2 which belonged to the family of interferon-inducible genes (DeYoung et al., 1997). Subsequent studies have shown that it binds to double-stranded DNA (dsDNA) in the cytosol either released from the nucleus or from bacteria and DNA viruses (Lugrin & Martinon, 2018).
Once bound to DNA, it forms a helical structure with ASC proteins via their pyrin domains (PYD), and caspase-1 then binds to the ASC proteins via the CARD domains (Wang & Yin, 2017). In addition to recognizing dsDNA, AIM2 can also recognize endogenous retroviruses (Sharma, Karki, & Kanneganti, 2019), which leads to activation of endogenous IFN pathways in immune responses (Chuong, Elde, & Feschotte, 2016). AIM2 is subject to degradation by autophagy once it is bound by tripartite motif protein 11 (TRIM11); this is viewed as an autoregulatory mechanism to control inflammation (Liu et al., 2016). Immunohistochemistry studies have shown high levels of AIM2 in inflammatory skin disorders (de Koning et al., 2012) and in primary melanoma (de Koning, van Vlijmen-Willems, Zeeuwen, Blokx, & Schalkwijk, 2014).
| CYTOKINES AND CELL DEATH RESULTING FROM ACTIVATION OF THE INFLAMMASOME
The key downstream events of inflammasome activation follow from activation of caspase-1, which converts pro-interleukin (IL)-1β and pro-IL-18 into the active cytokines IL-1β and IL-18. Caspase-1 also cleaves a protein called gasdermin D. Gasdermin D is a member of a family of conserved proteins that includes gasdermins A, B, C, D and E and DFNB59 (Orning, Lien, & Fitzgerald, 2019). They have an N-terminal pore-forming domain (PFD) of 242 amino acids (aa) connected by a 43-aa linker to a 199-aa carboxy-terminal domain. After cleavage by caspase-1, the N-terminal PFDs oligomerize and integrate into the cell membrane to form large pores of 10-15 nm diameter, which allow entry of solutes and disruption of the cell as well as release of IL-1β and IL-18. Most of the gasdermins have pore-forming ability that is held in check by their C-terminal domain (Kovacs & Miao, 2017). Gasdermin E can be cleaved by caspase-3 and can thereby increase apoptosis induced by intrinsic pathways.
Both gasdermin D and E also permeabilize mitochondrial membranes and provide a link between these two death pathways (Rogers et al., 2019). The central role of gasdermins in pyroptosis has led to the proposal that it be referred to as gasdermin mediated programmed necrotic cell death (Shi, Gao, & Shao, 2017).
| GOOD AND BAD ASPECTS OF INFLAMMASOME ACTIVATION IN CANCERS
The activation of inflammasomes, while potentially inducing cell death in cancers, may also have tumor-promoting properties due to the induction of chronic inflammation. The potential effects on individual tumors have been reviewed elsewhere (Moossavi et al., 2018; Xia et al., 2019). Inflammasomes were found to be inactive in primary melanoma but constitutively active in high-grade metastatic melanoma (Dunn, Ellis, & Fujita, 2012; Okamoto et al., 2010). Nodular melanoma was reported to be more common with certain polymorphisms of NLRP3 and NLRP1 (Verma et al., 2012). One of the main products of inflammasome activation is IL-1β, which has been implicated in the promotion of lung cancers. This was based on a significant reduction of lung cancers in the CANTOS trial of 10,061 patients with cardiovascular disorders and high C-reactive protein (CRP) levels who were randomized to placebo or different doses of canakinumab, an antibody against IL-1β. The trial was positive in terms of reduction of cardiovascular events. In addition, a retrospective analysis showed a highly significant reduction in the incidence of lung cancers in patients receiving the two higher doses of canakinumab (Ridker et al., 2017).
Retrospective analysis of an immunotherapy trial in melanoma also raised questions about the association of high CRPs with suppression of responses to immunotherapy with anti-CTLA4. This trial compared the results of immunotherapy with tremelimumab (anti-CTLA4) with those of standard chemotherapy. The intent-to-treat results were negative (Ribas et al., 2013), but a retrospective analysis showed that when patients with high CRPs were excluded, the patients treated with tremelimumab did have better survival than the chemotherapy-treated patients (Marshall, Ribas, & Huang, 2010). A small phase 2 study of 37 patients treated with interferon and tremelimumab showed that increased survival was associated with low baseline CRP levels (Tarhini et al., 2012). CRP is induced in the liver by IL-6, which is upregulated by activation of NF-kB by IL-1β, so this is again indirect evidence for an adverse effect of activated inflammasomes on immune responses against melanoma.
In contrast, the importance of inflammasomes in immune responses against cancers received strong support from studies on an innate immune checkpoint referred to as transmembrane protein 176B (TMEM176B). TMEM176B regulates Ca2+-dependent K+ channels and thereby prevents the development of low cytosolic K+ levels, which are a strong activator of NLRP3 inflammasomes. Blockade of this checkpoint in several murine tumor models allowed activation of NLRP3 inflammasomes and enhanced immune responses against the tumors.
Immunotherapy with anti-PD1 and anti-CTLA4 against EG7 lymphomas was also enhanced by blockade of TMEM176B. Importantly, human melanoma tumors responding to anti-CTLA4 and anti-PD1 showed upregulation of 15 of 16 inflammasome-related genes that was not detected in melanomas that did not respond to ICB (Segovia et al., 2019). These gene expression differences were not observed in pretreatment samples. Pharmacological inhibition of TMEM176B was associated with increased infiltration of tumors by CD8 T cells, and similarly enhanced anti-PD1 responses were seen in mice bearing a murine melanoma. The authors suggested that TMEM176B may be a useful marker to predict responses to ICB immunotherapy, with high levels being associated with poor responses (Segovia et al., 2019). Inflammasomes are known to be expressed in DCs, but whether they are critical for directing type 1 responses is not known (Ferreira et al., 2017). AIM2 in plasmacytoid DCs in lung carcinoma was considered responsible for immunosuppression associated with IL-1 alpha production (Sorrentino et al., 2015). The NLRC4 inflammasome in cDC1s was the target of dabrafenib activation (Hajek et al., 2018), but its role in treatment responses remains to be studied. The need for priming steps in the activation of inflammasomes may provide a strategy for selective activation of cells in the immune system and thereby increase immune responses against cancers. Recent studies have shown impressive responses to TLR9 agonists in patients who had failed anti-PD1 treatment. TLR9 expression is confined to plasmacytoid DCs and CD141 DCs, so that activation of inflammasomes by these agonists would be selective for the immune cells (Poh, 2018).
| DEVELOPING A UNIFYING HYPOTHESIS
These and previous studies indicate that inflammasomes have diverse roles in cancer, with IL-1β and IL-18 being beneficial in some cancers, whereas in others the IL-1 signaling pathway promoted cancer growth (Karki, Man, & Kanneganti, 2017). We suggest the best unifying hypothesis is that the outcome depends on whether inflammasome activation is predominantly within the tumor or within the immune cells. In the Segovia et al. study, the benefits appeared to be clearly focused on immune responses to the tumors, and the TMEM176B checkpoint did not work through changes in the tumor cells. It also appeared that the effects of blocking this checkpoint were mainly evident in immunogenic tumors and not in tumors unresponsive to immunotherapy (Segovia et al., 2019; Figure 1).
| CLUES FROM THE CANCER GENOME ATLAS (TCGA) MELANOMA DATA
Studies on information in the TCGA have proven useful in identifying subgroups of patients with different outcomes. We reasoned that high levels of proteins associated with inflammasomes might reveal their effect on survival. RNA-seq data from 458 patients with cutaneous melanoma were therefore interrogated for associations of inflammasome proteins with patient survival by comparing outcomes in patients with high (>median) versus low (<median) expression levels. The forest plots shown in Figure 2 indicate that expression of AIM2, NLRP3, NLRP1 and NLRC4 above the median shows a strong positive correlation with survival. Expression of the ASC (PYCARD) adaptor protein was not related to survival.
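A sketch of this type of median-split survival analysis is shown below, using the Python lifelines package; the input file, column names and the use of AIM2 are assumptions for illustration and do not reproduce the TCGA analysis itself.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical table with one row per patient; file and column names are assumptions.
df = pd.read_csv("tcga_skcm_clinical_expression.csv")   # columns: AIM2, time_days, event

# Dichotomize on the median expression, as described for the TCGA SKCM analysis.
df["AIM2_group"] = (df["AIM2"] > df["AIM2"].median()).map({True: "high", False: "low"})

high = df[df["AIM2_group"] == "high"]
low = df[df["AIM2_group"] == "low"]

# Log-rank test for the difference in overall survival between the two groups.
result = logrank_test(high["time_days"], low["time_days"],
                      event_observed_A=high["event"], event_observed_B=low["event"])
print(f"log-rank p = {result.p_value:.3g}")

# Kaplan-Meier curves for a quick visual comparison of the two groups.
ax = None
for label, group in df.groupby("AIM2_group"):
    kmf = KaplanMeierFitter()
    kmf.fit(group["time_days"], event_observed=group["event"], label=f"AIM2 {label}")
    ax = kmf.plot_survival_function() if ax is None else kmf.plot_survival_function(ax=ax)
ax.figure.savefig("aim2_km_plot.png")
```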
As discussed above, the outcome of inflammasome activation may depend on whether inflammasomes are activated in immune cells or in cancer cells. The TCGA analyses shown in Figure 2 did not discriminate between these effects, but we reasoned that if the results were due to activation in immune cells, this would be most evident in melanomas with high levels of tumor-infiltrating lymphocytes (TILs). The effects on survival were therefore compared in patients with <5% TILs and those with >5% TILs, using information reported elsewhere (Chen, Khodadoust, Liu, Newman, & Alizadeh, 2018; Saltz et al., 2018). The improvement in survival with high levels of the inflammasome receptors NLRP1 and NLRP3 was abrogated in patients with low TILs (Figure 3a). In contrast, the improved survival seen with high AIM2 and high NLRC4 was retained in patients with either high or low TILs scores, suggesting the beneficial effect might be intrinsic to melanoma cells and independent of TILs level (Figure 3b).
NLRP1 activation is of particular interest in that recent studies suggest that it is held in an inactive state by the serine dipeptidyl peptidases (DPP) 8 and 9 and that inhibitors of these dipeptidases result in activation of NLRP1 (Zhong et al., 2018). Preclinical studies with non-specific inhibitors of DPP8/9 (also known as PT100 or talabostat) had previously shown that these inhibitors had anti-tumor effects that appeared mainly related to effects on the immune system (Adams et al., 2004).
NLRC4 is believed to be activated by the NLR family apoptosis inhibitory protein (NAIP) sensor, which binds products of a number of different gram-negative bacteria. It can bind directly to caspase-1 rather than through ASC and can also activate caspase-8, and it has been associated with high levels of IL-18 (Duncan & Canna, 2018). In a murine model of melanoma, NLRC4 was found to suppress tumor growth through a non-inflammasome mechanism involving interferon-gamma production from tumor-associated macrophages. The latter were prominent in primary melanoma but not in metastatic melanoma (Janowski et al., 2016). AIM2 is activated by dsDNA from pathogens or damaged nuclei. It is also activated by IFN in a network involving endogenous retroviruses upstream of AIM2 (Chuong et al., 2016).
FIGURE 1. Possible mechanisms of pyroptotic cell death through inflammasome activation. Epigenetic drugs such as DNA methyltransferase inhibitors (DNMTi) and CDK9 inhibitors (CDK9i) can activate ERVs and dsDNA in the cancer cells. The inflammatory protein AIM2 binds dsDNA and subsequently activates caspase-1 by forming a complex with the ASC protein. DAMPs and PAMPs, in contrast, bind to TLRs on the plasma membrane and subsequently activate the inflammatory sensor NLRC4, which can directly activate caspase-1. Activated caspase-1 directs pyroptotic cell death and converts inactive pro-IL-1β and pro-IL-18 into IL-1β and IL-18, which are released from the cells to drive inflammation (left panel). Intrinsic inflammatory gene signatures in dendritic cells or macrophages augment the response to ICB. TMEM176B is a negative regulator of inflammatory proteins such as NLRP3 and NLRP1. Blocking TMEM176B with a small molecule inhibitor leads to activation of caspase-1 and IL-1β. This promotes the recruitment of CD8+ T cells into the tumor microenvironment, thus enhancing the therapeutic response to ICB (right panel) (modified from Segovia et al., 2019). DAMPs, damage-associated molecular patterns; ERVs, endogenous retroviruses; ICB, immune checkpoint blockade; PAMPs, pathogen-associated molecular patterns; TLR, toll-like receptor.
FIGURE 2. Low expression of inflammasome mediators is associated with poor prognosis in melanoma. RNA-seq data of skin cutaneous melanoma (SKCM) patients were retrieved from the TCGA database (N = 458). Patients were dichotomized based on median expression (>median = high; <median = low) of the corresponding genes. (a) A forest plot was generated based on the computed hazard ratio (HR) and 95% confidence interval (CI) of survival for each gene. The log-rank p value refers to the significance of overall survival adjusted to 10 years. (b) A KM plot showing overall survival of SKCM patients based on AIM2 median expression.
| C-reactive proteins as markers of harmful inflammasome activation
IL-1β is a highly pleiotropic cytokine with effects that can promote or inhibit the growth of tumors (Bent, Moll, Grabbe, & Bros, 2018). It can also have downstream effects resulting in activation of NF-kB and production of cytokines like IL-6 that activate acute phase proteins like CRP (Slaats, Ten Oever, van de Veerdonk, & Netea, 2016). CRP levels are recognized markers of inflammation, and several studies have identified high CRP levels (>10 mg/L) as being associated with an adverse prognosis in melanoma. A comprehensive study of 1,144 patients found that approximately 10% of patients had elevated levels, and this was an adverse prognostic indicator at all stages of the disease (Fang et al., 2015). Sequential studies also showed that increased levels could predict recurrences.
Certain forms of metastatic melanoma, when they involve subcutaneous sites, can be associated with clinical appearances of marked inflammation and systemic symptoms of fever, anorexia and weight loss. The clinical appearance of one such metastasis is shown in Figure 4a. Melanoma cultures from this patient (patient 7) were shown to produce high levels of a range of cytokines, including IL-1β, IL-6, IL-8 and VEGF, which were associated with constitutive activation of NF-kB (Gallagher et al., 2014). The BET protein inhibitor I-BET151 was very effective in inhibiting the cytokine production in vitro, and this or more recently developed BET protein inhibitors (Xu & Vakoc, 2017) may have a role against the severe forms of inflammatory melanoma (Gallagher et al., 2014). We have subsequently shown that such melanomas can be associated with marked global hypomethylation of DNA and expression of inhibitory ligands such as PD-L1 (Chatterjee et al., 2018; Emran et al., 2019). These studies are consistent with an adverse effect of inflammasome activation in melanoma due to both growth-promoting effects and immunosuppression from low-grade chronic inflammation.
| DIFFERENTIATION STATE OF MELANOMA AS DETERMINANTS OF INFLAMMASOME ACTIVATION
Part of the paradigm introduced by the studies of Tsoi et al. was that different states of differentiation of melanoma have different sensitivities to particular treatments. In the case of ferroptosis, the susceptible melanoma cells appeared to have a neural crest-like phenotype, and the triggers for cell death were agents that reduced or inhibited anti-oxidants such as GPX4 that hold the lipoxygenase enzymes in check. In the case of NLRP3 in normal cells, a priming step increases the concentration of proteins in the complex and also involves the deubiquitination and phosphorylation steps referred to above. Whether a priming step is needed in cancer cells is not clear, as the malignant state may itself provide this priming. This is supported by studies showing that constitutive activation of NALP (NLRP3) inflammasomes in melanoma occurred in late-stage metastatic melanoma but not in intermediate- or early-stage melanoma; the inflammasomes in intermediate-stage melanoma could, however, be activated by exposure of the melanoma to IL-1 (Okamoto et al., 2010). Studies on melanoma cell lines suggest that sensitivity to the nigericin activator of NLRP3 was associated with high levels of NLRP3 complex proteins and other inflammasome sensors (unreported data), but whether these levels are associated with different states of differentiation remains to be studied.

FIGURE 3. Association of TILs level and gene expression of the inflammasome mediators with overall survival in melanoma. (a) Skin cutaneous melanoma (SKCM) patients were separated based on the median expression of the corresponding gene and the TILs level (>5% refers to high TILs). TILs proportions in the tumors were derived from deep-learning analysis of pathology images (Saltz et al., 2018). (b) Similarly, SKCM patients were stratified based on low TILs level (<5% refers to low TILs) and median expression of the selected gene. Forest plots show the HR with 95% CI; log-rank p values were calculated for overall survival. Statistical analysis was performed in GraphPad Prism, and p < .05 refers to significance of overall survival.
| REPURPOSING DRUGS TO ACTIVATE INFLAMMASOMES
Recent studies showing that the serine dipeptidases DPP8/9 inhibit activation of NLRP1 have renewed interest in past studies of inhibitors of these peptidases (such as talabostat) as anti-tumor agents (Eager et al., 2009). In particular, studies on AML have shown that, with appropriate selection, a large proportion of AML lines can be killed by these and more specific inhibitors of DPP8/9 (Johnson et al., 2018). In addition, a number of chemotherapy agents that have modest activity against cancers such as melanoma may have ancillary effects on inflammasomes. For example, paclitaxel was shown to activate TLR signaling in macrophages and to prime macrophages for NLRP3 inflammasome activation by ATP or nigericin (Son, Shim, Hwang, Park, & Yu, 2019); IL-1β production was totally dependent on the presence of NLRP3. Doxorubicin was found to induce pyroptosis in several melanoma lines in vitro, particularly when autophagy was inhibited by chloroquine.
Autophagy is a critical regulator of inflammasome activation through the removal of endogenous signals that would otherwise activate inflammasomes. It is also critical for the degradation of inflammasome components and may be a physiological feedback mechanism to control inflammation (Harris et al., 2017; Seveau et al., 2018). Chloroquine and several derivatives are currently in clinical trials in combination with chemotherapy, but whether inflammasomes are involved is not known (Amaravadi, Kimmelman, & Debnath, 2019; Rebecca et al., 2019).
Temozolomide was shown to induce responses in 3 patients who had failed immunotherapy with pembrolizumab (Swami et al., 2019); whether activation of inflammasomes contributed is an intriguing possibility. A particularly interesting report was the activation of inflammasomes by the BRAFV600E-targeting drug dabrafenib (Hajek et al., 2018). As reviewed elsewhere, the cause of the fevers induced by dabrafenib has long been a puzzle, and this report provides a plausible explanation as well as opening up new areas of research, such as whether off-target effects of dabrafenib on inflammasomes may increase immune responses against melanoma.
Drugs that demethylate DNA in the nucleus, such as decitabine or azacitidine, may activate AIM2 inflammasomes. Endogenous retroviral elements (ERVs) in particular were shown to activate AIM2, and further studies are needed to examine whether cell death induced by these drugs is associated with activation of AIM2 (Chuong et al., 2016). Inhibitors of CDK4/6 were also reported to expose ERV elements and may be involved in the activation of immune responses by these drugs (Goel et al., 2017). CDK9 inhibitors are proving to be another class of epigenetic regulators that can reactivate genes silenced in heterochromatin to an active state in euchromatin through phosphorylation of BRG1 in the SWI/SNF complex (Zhang et al., 2018). These brief references indicate that there is much scope for examining the role of inflammasomes in the activity of these agents (Figure 1).
| CONCLUSION
Pyroptosis induced by activation of inflammasomes is potentially an additional cell death mechanism that is not subject to the obstacles that limit apoptosis and other forms of cell death.
Nevertheless, the role of inflammasomes in cancer, and in melanoma in particular, is complicated in that the inflammatory response generated may promote tumorigenesis, as suggested for lung carcinoma in the CANTOS trial. There is also indirect evidence that inflammation from activation of inflammasomes may be immunosuppressive and inhibit responses to ICB immunotherapy. In contrast, data from the TCGA analyses can be interpreted to suggest that inflammasomes in melanoma may be associated with improved prognosis.
We suggest these different interpretations may depend on whether there is chronic activation of inflammasomes in the tumor itself, giving rise to tumor promotion and immunosuppression, or whether the site of inflammasome activation is in the immune system (particularly DCs), as shown in Figure 1. The latter may amplify the inherent cytotoxic mechanisms and lead to tumor control. The TCGA data, though indirect, would be consistent with this interpretation and are supported by studies in murine models.
Should these interpretations prove valid, a two-pronged treatment approach might be considered. One prong would involve selection of patients with evidence of chronic activation of inflammasomes in their tumor, such as high CRP levels, and treatment with agents that limit inflammation, such as inhibitors of circulating IL-1β and inhibition of NF-kB by BET protein inhibitors. On the other hand, should favorable outcomes depend on activation of inflammasomes in the cells of the immune system, patients might be selected from those not responding to ICB and treated with agents that activate the inflammasome. There is very limited understanding of the role of inflammasomes in relation to different subsets of DCs and of whether activation of different inflammasomes leads to different treatment outcomes. New initiatives that target inflammasomes in the immune system might include blockade of the TMEM176B checkpoint that limits activation of inflammasomes in T cells, as reviewed above. There may also be scope for treatments with drugs that have selectivity for particular inflammasomes; the targeting of NLRP1 inflammasomes by DPP8/9 inhibitors or of NLRC4 inflammasomes by dabrafenib may be examples of such agents, as may the well-known TLR9 agonists mentioned above. Blocking autophagy to increase activation of inflammasomes might also be worth investigating for drugs shown in past studies to have only modest benefits against melanoma.
Conflict of Interest
The authors have no conflict of interest in relation to this work. | 6,816.6 | 2020-02-06T00:00:00.000 | [
"Medicine",
"Biology"
] |
Estimation for a Class of Lipschitz Nonlinear Discrete-Time Systems with Time Delay
Introduction
In the control field, nonlinear estimation is considered an important and challenging task, and it has been a very active area of research for decades [1–7]. Many kinds of estimator design methods have been proposed for different types of nonlinear dynamical systems. Generally speaking, three approaches are widely adopted for nonlinear estimation. In the first, an extended (non-exact) linearization of the nonlinear system is used, and the estimator is designed by employing classical linear observer techniques [1]. The second approach, based on a nonlinear state coordinate transformation that renders the dynamics driven by nonlinear output injection and the output linear in the new coordinates, uses quasilinear approaches to design the nonlinear estimator [2–4]. In the last, methods are developed to design nonlinear estimators for systems consisting of an observable linear part and a locally or globally Lipschitz nonlinear part [5–7]. In this paper, the problem of H∞ estimator design is investigated for a class of Lipschitz nonlinear discrete-time systems with time delay and disturbance input. Notation: Euclidean-space elements will be denoted by normal letters; R^n denotes the real n-dimensional Euclidean space; ‖·‖ denotes the Euclidean norm; θ_k ∈ l_2[0, N] means Σ_{k=0}^{N} θ_k^T θ_k < ∞; the superscripts "−1" and "T" stand for the inverse and transpose of a matrix, respectively; I is the identity matrix with appropriate dimensions; for a real matrix, P > 0 (P < 0, resp.) means that P is symmetric and positive (negative, resp.) definite; ⟨∗, ∗⟩ denotes the inner product in the Krein space; diag{···} denotes a block-diagonal matrix; L{···} denotes the linear space spanned by the sequence {···}.
System Model and Problem Formulation
Consider a class of nonlinear systems described by the following equations:
x_{k+1} = A x_k + A_d x_{k_d} + f(k, F x_k, u_k) + h(k, H x_{k_d}, u_k) + B w_k,
where k_d = k − d, and the positive integer d denotes the known state delay; x_k ∈ R^n is the state, u_k ∈ R^p is the measurable information, w_k ∈ R^q and v_k ∈ R^m are disturbance inputs belonging to l_2[0, N], y_k ∈ R^m is the measurement output, and z_k ∈ R^r is the signal to be estimated; the initial condition x_0(s), s = −d, −d+1, …, 0, is unknown; the matrices A ∈ R^{n×n}, A_d ∈ R^{n×n}, B ∈ R^{n×q}, C ∈ R^{m×n} and L ∈ R^{r×n} are real and known constant matrices.
In addition, f(k, F x_k, u_k) and h(k, H x_{k_d}, u_k) are assumed to satisfy the Lipschitz conditions (2.2), where α > 0 and β > 0 are known Lipschitz constants and F, H are real matrices with appropriate dimensions. The H∞ estimation problem under investigation is stated as follows: given the desired noise attenuation level γ > 0 and the observations {y_j}_{j=0}^{k}, find an estimate z_{k|k} of the signal z_k, if it exists, such that inequality (2.3) is satisfied. Remark 2.1. For the sake of simplicity, the initial state estimate x_0(k), k = −d, −d+1, …, 0, is assumed to be zero in inequality (2.3). Remark 2.2. Although the system given in [20] is different from the one given in this paper, the problem mentioned in [20] can also be solved by using the presented approach. The approach first converts the system given in [20] into a delay-free one by using the classical system augmentation approach, and then designs the estimator by employing a similar, but simpler, technical line to that of this paper.
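The explicit expressions of conditions (2.2) and criterion (2.3) do not appear above; the block below is only a standard, assumed reconstruction written from the surrounding definitions (the constants α and β, the matrices F and H, the weighting Π(k) and the attenuation level γ), not necessarily the authors' exact formulas.

```latex
% Assumed form of the Lipschitz conditions (2.2) -- a reconstruction, not verbatim:
\|f(k, Fx_k, u_k) - f(k, F\hat{x}_k, u_k)\| \le \alpha\,\|F(x_k - \hat{x}_k)\|, \qquad
\|h(k, Hx_{k_d}, u_k) - h(k, H\hat{x}_{k_d}, u_k)\| \le \beta\,\|H(x_{k_d} - \hat{x}_{k_d})\|.

% Assumed form of the H-infinity performance criterion (2.3):
\frac{\sum_{k=0}^{N} \|z_k - \check{z}_{k|k}\|^2}
     {\sum_{k=-d}^{0} x_0(k)^{T}\,\Pi^{-1}(k)\,x_0(k)
      + \sum_{k=0}^{N}\left(\|w_k\|^2 + \|v_k\|^2\right)} < \gamma^{2}.
```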
Main Results
In this section, the Krein space-based approach is proposed to design the H∞ estimator for Lipschitz nonlinear systems. To begin with, the H∞ estimation problem (2.3) and the Lipschitz conditions (2.2) are combined into an indefinite quadratic form, and the nonlinearities are assumed to be obtainable from {y_i}_{i=0}^{k} at time step k. Then, an equivalent Krein space problem is constructed by introducing an imaginary Krein space stochastic system. Finally, based on the projection formula and an innovation analysis approach in the Krein space, the recursive estimator is derived.
Construct a Partially Equivalent Krein Space Problem
It is proved in this subsection that the H ∞ estimation problem can be reduced to a positive minimum problem of indefinite quadratic form, and the minimum can be obtained by using the Krein space-based approach.
Since the denominator of the left side of (2.3) is positive, the inequality (2.3) is equivalent to (3.1), where z_f(k | k) and z_h(k_d | k) denote the optimal estimates of z_f(k) and z_h(k_d) based on the observations {y_j}_{j=0}^{k}, respectively. And, let
3.3
From the Lipschitz conditions (2.2), we derive (3.4). Note that the left side of (3.1) and (3.4), J_N, can be recast into the form (3.5), where
3.6
Since J_N ≤ J*_N, it is natural to see that if J_N > 0 then the H∞ estimation problem (2.3) is satisfied, that is, J*_N > 0. Hence, the H∞ estimation problem (2.3) can be converted into finding an estimate sequence such that J_N has a minimum with respect to {x_0, w} and the minimum of J_N is positive. As mentioned in [25, 26], the formulated H∞ estimation problem can be solved by employing the Krein space approach.
Introduce the following Krein space stochastic system (3.7), and let
3.8
Definition 3.1. The estimator y(i | i−1) denotes the optimal estimate of y(i) given the observation L{{y_z(j)}_{j=0}^{i−1}}; the estimator z_m(i | i) denotes the optimal estimate of z_m(i) given the observation L{{y_z(j)}_{j=0}^{i−1}; y(i)}; the estimator z_h(i_d | i) is defined similarly from the corresponding observations. Furthermore, introduce the following stochastic vectors and the corresponding covariance matrices (3.9), and denote
3.10
For calculating the minimum of J N , we present the following Lemma 3.2.
Lemma 3.2. {{ỹ_z(i)}_{i=0}^{k}} is the innovation sequence, which spans the same linear space as that of L{{y_z(j)}_{j=0}^{k}}.
Proof. From Definition 3.1 and (3.9), y(i | i−1), z_m(i | i) and z_h(i_d | i) are linear combinations of the observation sequences {{y_z(j)}_{j=0}^{i−1}; y(i)}, {{y_z(j)}_{j=0}^{i−1}; y(i), z_m(i | i)}, and {{y_z(j)}_{j=0}^{i}}, respectively. Conversely, y(i), z_m(i | i) and z_h(i_d | i) can be given by linear combinations of {{ỹ_z(j)}_{j=0}^{i−1}; …}. It is also shown by (3.9) that
3.13
This completes the proof.
Now, an existence condition and a solution to the minimum of J N are derived as follows.
3.14
In this case the minimum value of J N is given by
3.15
where
Proof. Based on the definitions (3.2) and (3.3), the state equation in system (2.1) can be rewritten as (3.17). In this case, it is assumed that
3.18
By introducing an augmented state we obtain an augmented state-space model
3.21
Additionally, we can rewrite J_N as follows, where
3.23
Define the following state transition matrix
3.25
Using (3.20) and (3.24), we have the following, where
3.27
The matrix Ψ_wN is derived by replacing B_{u,a} in Ψ_uN with B_a. Thus, J_N can be re-expressed as follows, where
3.29
Considering the Krein space stochastic system defined by 3.7 and state transition matrix 3.24 , we have where matrices Ψ 0N , Ψ uN , and Ψ wN are the same as given in 3.26 , vectors y zN and u N are, respectively, defined by replacing Euclidean space element y z and u in y zN and u N given by 3.25 with the Krein space element y z and u, vectors w N and v z,aN are also defined by replacing Euclidean space element w and v z,a in w N and v z,aN given by 3.23 with the Krein space element w and v z,a , and vector x a 0 is given by replacing Euclidean space element x in x a k given by 3.19 with the Krein space element x when k 0.
Using the stochastic characteristic of x a 0 , w N and v z,a , we have
3.32
On the other hand, applying the Krein space projection formula, we have the following, where
3.34
where ϕ m,ij is derived by replacing C z,a in ϕ ij with
3.36
Since the matrix Θ_N is nonsingular, it follows from (3.35) that R_{ỹ_zN} and R_{y_zN} are congruent, which also means that R_{ỹ_zN} and R_{y_zN} have the same inertia. Note that both R_{ỹ_zN} and Q_{v_{z,a}N} are block-diagonal matrices. Therefore, J_N subject to system (2.1) with Lipschitz conditions (2.2) has a minimum if and only if R_{ỹ_zN} and Q_{v_{z,a}N} have the same inertia. Moreover, the minimum value of J_N can be rewritten as
3.38
The proof is completed.
Remark 3.4. Due to the innovation sequence {{ỹ_z(i)}_{i=0}^{k}} built in Lemma 3.2, the form of the minimum of the indefinite quadratic form J_N is different from the one given in [26–28]. It is shown from (3.15) that the estimation errors y(k | k−1), z_m(k | k) and z_h(k_d | k) are mutually uncorrelated, which makes the design of the H∞ estimator much easier than in [26–28].
Solution of the H ∞ Estimation Problem
In this subsection, the Kalman-like recursive H∞ estimator is presented by using orthogonal projection in the Krein space. Denote y^(0)(i) = y(i),
3.39
Observing (3.8), we have the following. Based on the above definitions, we introduce the following stochastic sequences and the corresponding covariance matrices
Applying the projection formula in the Krein space, x(i, 2), i = 0, 1, …, k_d, is computed recursively as (3.42)
3.43
Note that
3.44
where
3.46
Moreover, taking into account 3.7 and 3.46 , we obtain
3.48
where Q w i I. Thus, P 2 i, i i 0, 1, . . ., k d can be computed recursively as
3.49
Similarly, employing the projection formula in the Krein space, the optimal estimator x i, 1 i k d 1, . . ., k can be computed by
3.51
Then, from (3.7) and (3.50), we obtain
3.52
Thus, we obtain that (1) if i − j ≥ k_d, we have
3.56
Next, according to the above analysis, z m k | k as the Krein space projections of zm k | k onto L{{y z j } k−1 j 0 ; y 0 k } can be computed by the following formula where
3.58
And, z h k d | k as the Krein space projections of zh k d | k onto L{{y z j } k−1 j 0 ; y 1 k } can be computed by the following formula
3.59
Based on Theorem 3.3 and the above discussion, we propose the following results.
3.61
R_{y^(0)}(k, 0), P_1(i, j), and R_{y^(1)}(j, 1) are calculated by (3.58), (3.56), and (3.51), respectively. Moreover, one possible level-γ H∞ estimator is given by the following, where E_0 = I, and z_m(k | k) is computed by (3.57).
Proof. In view of Definitions 3.1 and 3.5, it follows from (3.9) and (3.41) that R_y(k | k−1) = R_{y^(0)}(k, 0). In addition, according to (3.7), (3.9), and (3.57), the covariance matrix R_{z_m}(k | k) can be given by the second equality in (3.61). Similarly, based on (3.7), (3.9), and (3.59), the covariance matrix R_{z_h}(k_d | k) can be obtained by the third equality in (3.61). Thus, from Theorem 3.3, it follows that J_N has a minimum if (3.60) holds.
On the other hand, note that the minimum value of J_N is given by (3.15) in Theorem 3.3, and any choice of estimator satisfying min J_N > 0 is an acceptable one. Therefore, taking into account
A Numerical Example
Consider the system (2.1) with the parameter L = [−0.3243, 0.0945]^T of (5) in [20]. In this numerical example, we compare our algorithm with the one given in [20] in the case of γ = 1.6164. Figure 1 shows the true value of the signal z_k, the estimate using our algorithm, and the estimate using the algorithm given in [20]. Figure 2 shows the estimation error of our approach and the estimation error of the approach in [20]. Figures 1 and 2 show that the proposed algorithm performs better than the one given in [20]. Figure 3 shows the ratios between the energy of the estimation error and the input noises for the proposed H∞ estimation algorithm; the maximum energy ratio from the input noises to the estimation error is less than γ² when using our approach. Figure 4 shows the value of the indefinite quadratic form J_N for the given estimation algorithm; the value of J_N is positive when the proposed algorithm in Theorem 3.6 is employed.
Conclusions
A recursive H∞ filtering algorithm for discrete-time Lipschitz nonlinear systems with time delay and disturbance input is proposed. By combining the H∞-norm estimation condition with the Lipschitz conditions on the nonlinearity, the H∞ estimation problem is converted into a positive minimum problem of an indefinite quadratic form. Motivated by the observation that the minimum problem of the indefinite quadratic form coincides with Kalman filtering in the Krein space, a novel Krein space-based H∞ filtering algorithm is developed. Employing the projection formula and innovation analysis in the Krein space, the H∞ estimator and a sufficient condition for its existence are presented based on Riccati-like difference equations. A numerical example is provided in order to demonstrate the performance of the proposed approach. Future research will extend the proposed method to more general nonlinear system models with nonlinearity in the observation equations. Another interesting research topic is the H∞ multistep prediction and fixed-lag smoothing problem for time-delay Lipschitz nonlinear systems.
In (2.3), Π(k), k = −d, −d+1, …, 0, is a given positive definite matrix function which reflects the relative uncertainty of the initial state x_0(k), k = −d, −d+1, …, 0, relative to the input and measurement noises.
The initial state x_0(s), s = −d, −d+1, …, 0, and w_k, v_k, v_{z_f}(k), v_z(k) and v_{z_h}(k) are mutually uncorrelated white noises with zero means and known covariance matrices; z_h(k_d | k) and its counterparts are regarded as the imaginary measurements at time k for the linear combinations Fx_k, Lx_k, and Hx_{k_d}, respectively.
Definition 3.5.
Given k ≥ d, the estimator ξ(i | j, 2) for 0 ≤ j < k_d denotes the optimal estimate of ξ(i) given the observation L{{y^(2)(s)}_{s=0}^{j}}, and the estimator ξ(i | j, 1) for k_d ≤ j ≤ k denotes the optimal estimate of ξ(i) given the observation L{{y^(2)(s)}_{s=0}^{k_d−1}; {y^(1)(τ)}_{τ=k_d}^{j}}. For simplicity, we use ξ(i, 2) to denote ξ(i | i−1, 2), and ξ(i, 1) to denote ξ(i | i−1, 1) throughout the paper.
Consider the system (2.1) with time delay d = 3. Then we have α = β = 1.
Set x(k) = [−0.2k, 0.1k]^T for k = −3, −2, −1, 0, and Π(k) = I for k = −3, −2, −1, 0. Both the system noise w_k and the measurement noise v_k are assumed to be band-limited white noise with power 0.01. By applying Theorem 3.1 in [20], we obtain the minimum disturbance attenuation level γ_min = 1.6164 and the corresponding observer algorithm of [20].
Figure 1: Signal z_k (solid), its estimate using our algorithm (star), and its estimate using the algorithm in [20] (dashed).
Figure 2: Estimation error of our algorithm (solid) and estimation error of the algorithm in [20] (dashed).
Figure 3: The energy ratio between the estimation error and all input noises for the proposed H∞ estimation algorithm.
where ȳ_zN = y_zN − Ψ_uN u_N. In the light of Theorem 2.4.2 and Lemma 2.4.3 in [26], J_N has a minimum over {x_a(0), w_N} if and only if R_{ỹ_zN} = ⟨ỹ_zN, ỹ_zN⟩ and Q_{v_{z,a}N} = ⟨v_{z,a}N, v_{z,a}N⟩ have the same inertia. Moreover, the minimum of J_N is given accordingly, where x_a(k) is given by (3.23). It follows that R_{ỹ_zN} and Q_{v_{z,a}N} have the same inertia if and only if condition (3.60) holds, and one possible estimator can then be obtained by setting z_m(k | k) and z_h(k_d | k) equal to their Krein space projections. This completes the proof. Remark 3.7. It is shown from (3.57) and (3.59) that the projections are, respectively, the filtering estimate and the fixed-lag smoothing estimate of z_m(k | k) and z(k_d | k) in the Krein space. Additionally, it follows from Theorem 3.6 that z_m(k | k) and z_h(k_d | k) achieving the H∞ estimation problem (2.3) can be computed, respectively, by the right sides of (3.57) and (3.59). Thus, it can be concluded that the proposed results in this paper are related to both H_2 filtering and H_2 fixed-lag smoothing in the Krein space. Recently, robust H∞ observers for Lipschitz nonlinear delay-free systems with Lipschitz nonlinear additive uncertainties and time-varying parametric uncertainties have been studied in [10, 11], where the optimization of the admissible Lipschitz constant and the disturbance attenuation level are discussed simultaneously by using a multiobjective optimization technique. In addition, sliding mode observers with H∞ performance have been designed for Lipschitz nonlinear delay-free systems with faults, matched uncertainties and disturbances in [8]. Although a Krein space-based robust H∞ filter has been proposed for discrete-time uncertain linear systems in [28], it cannot be applied to solving the H∞ estimation problem given in [10], since the considered system contains Lipschitz nonlinearity and Lipschitz nonlinear additive uncertainty. However, it is meaningful and promising, as future work, to combine the algorithm given in [28] with the method proposed in this paper in order to construct a Krein space-based robust H∞ filter for discrete-time Lipschitz nonlinear systems with nonlinear additive uncertainties and time-varying parametric uncertainties. | 4,753 | 2011-07-17T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Fabrication of In(P)As Quantum Dots by Interdiffusion of P and As on InP(311)B Substrate
Herein, we report on our investigation of a fabrication scheme for self-assembled quantum dots (QDs), which is another type of Stranski–Krastanow (S–K) growth mode. The In(P)As QD structure was formed by the irradiation of As flux on an InP(311)B surface in a molecular beam epitaxy system controlled by substrate temperature and irradiation duration. These QDs show photoluminescence at around 1500 nm, which is suitable for fiber optic communication systems. The QDs formed by this structure had high As composition because they had size, density, and emission wavelength similar to those of QDs grown by the usual S–K growth mode.
Introduction
Semiconductor quantum dots (QDs) have attracted much attention because they can confine electrons and holes three-dimensionally. Therefore, QDs are expected to be used in high-performance semiconductor lasers and to be applied in quantum information and communication technologies [1][2][3][4][5]. The technique of self-assembling QDs is widely used for application to optical devices (such as semiconductor laser diodes) using the Stranski-Krastanow (S-K) growth mode because these applications do not require control of the QD position. In the crystal growth of a compound semiconductor, especially in lattice-mismatched material systems, the strained layer on the substrate forms a QD structure that is determined by the total energy stabilization, including strain energy and surface energy [6,7]. In III-V material systems, techniques for self-assembling InAs QDs on GaAs or InP substrates have been well investigated [8][9][10][11]. Control of the size, density, and emission wavelength of the QDs is achieved by optimizing the crystal growth conditions. In the growth of self-assembled QDs, molecular beams of group III atoms and group V atoms are usually supplied simultaneously. On the other hand, there are some special growth techniques such as droplet epitaxy [12,13], in which only group III atoms are supplied to the substrate at low temperature, followed by irradiation with group V atoms at high temperature. Droplet epitaxy has an advantage in that the nanoscale droplets of group III atoms form independently of lattice mismatch, which allows QDs to be fabricated in lattice-matched material systems. In addition, we found that, during the formation of QDs irradiated only by group V atoms, growth occurred on the substrate by molecular beam epitaxy (MBE), in which the group V atoms on the substrate were replaced by the irradiating atoms [14]. It is believed that the replacement of group V atoms occurred at equilibrium, so that high-quality crystals formed. However, detailed research has not been done because changing the group V atoms during irradiation is not easy due to the high vapor pressure of this group. In this study, we investigated a technique for the growth of QDs using interdiffusion of P and As on an InP(311)B substrate. Figure 1a shows the RHEED pattern after growth of the InP buffer layer along the [01−1] direction. The RHEED pattern shows a streak pattern, which means that a clean, flat surface was obtained after growth of the buffer layer. First, we examined the As irradiation process at 470 °C. Figure 1b,c show the change of the RHEED pattern, in which the streak pattern weakened after the start of irradiation with the As beam. A spotty pattern appeared at around 180 s, as shown in Figure 1b. After irradiating with As for 240 s, a clear spot pattern was observed, as shown in Figure 1c. These results indicate that a crystalline three-dimensional nanostructure was formed, because metallic or amorphous surfaces show a halo pattern with RHEED. Therefore, the surface phosphorus of the substrate was exchanged with the irradiating arsenic atoms, and interdiffusion occurred during this process [14]. This is quite different from the process used for the InP(001) substrate. On the InP(001) surface, one or two monolayers of phosphorus atoms were exchanged with arsenic, and a stabilized flat surface was obtained [17,18]. On the other hand, more than two monolayers of phosphorus atoms should be exchanged to form a three-dimensional structure in our case, because large mass transport occurred along the normal direction on the (311)B surface [15,16].
The incorporated As atoms form an InPAs layer with high As composition, which generates a strain field. This strain is the driving force that forms the structure of the QDs. This is the reason for the use of the (311)B surface in our process. Next, we examined the same process at different substrate temperatures, during which the As irradiation time was fixed to 240 s. Figure 1d,e show the RHEED pattern after 240 s of As irradiation at 450 °C and 430 °C, respectively. In the process at 450 °C, the RHEED pattern changed from a streak to a spotty pattern. Although it was similar to the one formed during the process at 470 °C, the three-dimensional structure on the surface was expected to be smaller at 450 °C than at 470 °C. In the process at 430 °C, the RHEED streak pattern was maintained or changed to slightly streaky, which indicates that the surface was flat and that there was insignificant formation of a three-dimensional structure.
To investigate the surface morphology, AFM measurements were conducted after the processing at the three temperatures. In this case, we kept the substrate in the growth chamber for 300 s after setting the substrate temperature to 430 °C for each process, because this better matched the growth process of the cap layer for the PL samples. Figure 2a shows an AFM image after As irradiation at 470 °C for 240 s. Three-dimensional QD structures were observed on the surface, which is consistent with the RHEED measurements. The average lateral size (L) along the [−233] direction, the height (H), and the density (D) were 35.7 nm, 4.51 nm, and 8.96 × 10^10 /cm^2, respectively. The histograms of the lateral size and height are shown in Figure 3a. The size and density of this sample were almost the same as for InAs QDs grown on InP(311)B substrate with the ordinary scheme [19][20][21]. Although it is difficult to determine accurately the composition of the thin surface layer, these QDs were expected to be formed as InPAs with high As composition (nearly InAs). In the AFM image, some larger islands were observed that were considered to be formed by the coalescence of smaller QDs. However, the fluctuation of size was not very large; it remained around 17%. Figures 2b and 3b show an AFM image of the surface morphology after 240 s of As irradiation at 450 °C and a histogram of QD size, respectively. In this sample, the surface morphology observed was similar to the one processed at 470 °C. The lateral size, height, and density were 31.7 nm, 3.70 nm, and 8.04 × 10^10 /cm^2, respectively, slightly smaller than those of the sample at 470 °C. Their smaller size is consistent with the spotty RHEED patterns. In this sample, fewer coalescent islands were observed than in the sample processed at 470 °C. In contrast, the sample processed at 430 °C showed different surface morphology and size distribution, as shown in Figures 2c and 3c. It is believed that the small change in the RHEED pattern mentioned above indicates that these small QD structures were formed with As irradiation at 430 °C. The average lateral size, height, and density were 25.8 nm, 1.82 nm, and 4.80 × 10^10 /cm^2, respectively. Since the size distribution in Figure 3c shows a significant shift to the smaller region, it is clear that the exchange reaction and interdiffusion of P and As were not promoted as well in this temperature range. In addition, the height distribution shows a bimodal shape. The larger peak (around 4 nm) is similar to the ones for the samples processed at 470 °C and 450 °C. Therefore, the group with greater height was distinct from the coalescent islands observed in Figure 2a,b. This is related to the self-size-limiting phenomena that occur during the formation of QDs [22,23]. In [18,19], the size of InAs QDs was limited under low As pressure or low growth rates, which can be considered a quasi-equilibrium condition even for the non-equilibrium growth technique of MBE.
In our case, a better equilibrium condition was achieved because we only supplied As atoms and exchanged P and As at the surface. However, the P and As atoms exchanged under our conditions did not achieve states of complete equilibrium, so that large coalescent islands were observed in the samples processed at 470 • C and 450 • C. These results show that the P and As exchange and interdiffusion process is strongly dependent on the substrate temperature.
Next, we evaluated the surface morphology by changing the As irradiation time at 470 °C. Two samples were fabricated using As irradiation for 120 s and 480 s. The rest of the process was the same as for the sample with As irradiation for 240 s at 470 °C, shown in Figure 2a. The AFM image and histograms of lateral size and height are shown in Figures 2d and 3d for 120 s of As irradiation, and in Figures 2e and 3e for 480 s of As irradiation. For the sample with 120 s of As irradiation, the average size, height, and density were 33.0 nm, 4.03 nm, and 6.84 × 10^10 /cm^2, respectively, which were slightly smaller than those of the sample with 240 s of As irradiation. The distributions of the lateral size and height were similar to those of the sample with 240 s of As irradiation, and shifted to smaller regions, which indicated that the formation of QDs was still proceeding at this stage. In addition, coalescent islands were not found in this region. On the other hand, the sample with 480 s of As irradiation shows many coalescent islands. The average size, height, and density were 38.6 nm, 5.26 nm, and 1.01 × 10^11 /cm^2, respectively. The size distribution became quite broad, which means that both the nucleation of small QDs and the formation of large islands by coalescence occurred at this stage. This is also supported by the shift of the average size to larger regions and by the increase in the QD density. Therefore, the size-limiting effect was not very large in this temperature region. The photoluminescence (PL) spectra of these samples correspond to the AFM images in Figure 2a-e. The sample with 240 s of As irradiation at 470 °C shows strong emission (even at room temperature) around 1500 nm, which is a higher intensity when compared with conventional S-K QDs. Therefore, the crystal quality of these quantum dots was very high. The wetting layer emission could not be detected, as it was expected to appear in a shorter wavelength region (out of the measurement range). The samples with 240 s of As irradiation in Figure 2b at 450 °C and Figure 2c at 430 °C show shorter wavelength emission at 1346 nm and 1259 nm, which is consistent with the smaller QDs observed in these samples compared to the sample processed at 470 °C. The full widths at half maxima (FWHMs) of these spectra were (a) 95 meV, (b) 82 meV, and (c) 102 meV. The wider spectrum was observed in the sample processed at 430 °C, where the size distribution of the QDs was wide. The samples processed at 470 °C for 120 s and 480 s showed emission centered at (d) 1446 nm and (e) 1608 nm, respectively. In this case, the center of the emission wavelength shifted depending on the average size of the QDs. The FWHM of each spectrum was (d) 93 meV and (e) 93 meV.
Furthermore, we could not observe the emission of the wetting layer (WL) in any of the samples, which either implies that the WL did not form or that it had vanished during the embedding process. Although the FWHM of spectrum (d) corresponds to almost the same size distribution of QDs as in the sample processed for 240 s at 470 °C, the FWHM of spectrum (e) was also the same, in spite of the wide size distribution in this sample. The reason for this is believed to be carrier transfer from smaller QDs to larger QDs. In this sample, a highly packed structure was observed, as shown in Figure 2e. The smaller QDs nucleated between larger QDs, so that the carriers in the smaller QDs moved to neighboring larger QDs and were emitted from there. In these cases, higher energy emission was suppressed so that the FWHM became smaller [24]. Although the details of the carrier dynamics still need to be investigated, it was found that this technique can be used to control the emission wavelength over a wide range, from 1346 nm to 1608 nm. Therefore, wide-range optical devices with these QDs could be realized using this technique after optimization. The PL results also show that these QDs were composed of InPAs with a high As composition (more than 90%), because the InAs QDs formed by S-K growth at 470 °C showed an emission wavelength centered at 1599 nm.
Summary
In summary, we fabricated self-assembled In(P)As QDs using IDE. The three-dimensional QD structures were obtained by only irradiating As flux onto an InP(311)B substrate at substrate temperatures ranging from 430 °C to 470 °C. The QD size and density were controlled by changing the substrate temperature and the duration of As irradiation. The QDs obtained by IDE show strong PL at room temperature, controllable over a range of 300 nm, in the region of 1500 nm.
Author Contributions: K.A. conceived and designed the study, performed the experiments and wrote the paper; A.M., T.U. and N.Y. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the program of MEXT Grant-in-Aid for Scientific Research on Innovative Areas "Science of hybrid quantum systems", a CREST project (JPMJCR17N2) funded by Japan Science and Technology Agency, and a research project "R&D to Expand Radio Frequency Resources" by the Ministry of Internal Affairs and Communications. | 4,046 | 2020-02-05T00:00:00.000 | [
"Materials Science",
"Physics"
] |
A Novel Two-Stage Heart Arrhythmia Ensemble Classifier
Atrial fibrillation (AF) and ventricular arrhythmia (Arr) are among the most common and fatal cardiac arrhythmias in the world. Electrocardiogram (ECG) data, collected as part of the UK Biobank, represent an opportunity for analysis and classification of these two diseases in the UK. The main objective of our study is to investigate a two-stage model for the classification of individuals with AF and Arr in the UK Biobank dataset. The current literature addresses heart arrhythmia classification very extensively. However, the data used by most researchers lack enough instances of these common diseases. Moreover, by proposing the two-stage model and the separation of normal and abnormal cases, we have improved the performance of the classifiers in the detection of each specific disease. Our approach consists of two stages of classification. In the first stage, features of the ECG input are classified into two main classes: normal and abnormal. At the second stage, the ECG features categorised as abnormal are further classified into the two diseases, AF and Arr. A diverse set of ECG features such as the QRS duration, PR interval and RR interval, as well as covariates such as sex, BMI, age and other factors, are used in the modelling process. For both stages, we use the XGBoost Classifier algorithm. The healthy population present in the data has been undersampled to tackle the class imbalance present in the data. This technique has been applied and evaluated using an ECG dataset from the UK Biobank ECG at rest repository. The main results of our paper are as follows: the classification performance for the proposed approach has been measured using F1 score, Sensitivity (Recall) and Specificity (Precision). The results of the proposed system are 87.22%, 88.55% and 85.95% for average F1 Score, average sensitivity and average specificity, respectively. Contribution and significance: the performance level indicates that automatic detection of AF and Arr in participants present in the UK Biobank is more precise and efficient if done in a two-stage manner. Automatic detection and classification of AF and Arr individuals in this way would mean early diagnosis and prevention of more serious consequences later in their lives.
Cardiovascular Disease
Approximately 7 million people in the UK live with heart problems, and heart disease causes around 150,000 deaths every year [1]. Therefore, the study of cardiovascular diseases and their early detection is of the utmost importance. Abnormalities present in the heart rhythm are known as arrhythmias. An electrocardiogram (ECG) records the heart rhythm using electrodes. Any deviation from the normal rhythm can indicate a potential underlying cardiovascular problem. Investigating and diagnosing these signals early can avoid hospitalisation or even sudden death.
Computer-Aided Diagnosis
Computer-aided detection or computer-aided diagnosis (CAD) refers to computer-based systems that help doctors make decisions. As the field of automatic heart arrhythmia classification uses computers to aid doctors in classifying arrhythmia types, it also falls under this category. Using CAD means that problems such as the probability of misdiagnosis, human error and lack of human expertise can be eliminated or at least reduced. As a result, long-term monitoring of patients using CAD systems is highly preferred. Two examples of approaches that have used CAD are cardiac arrhythmia disease classification using the LSTM deep learning approach [2], which used the UCI dataset for classification, and a CAD scheme for distinguishing between benign and malignant masses on breast images using a deep convolutional neural network with Bayesian optimization [3]. In both approaches, the main goal has been to use the computer as an aid to help doctors diagnose a disease in patients. In the first, arrhythmia is diagnosed, and in the second, breast cancer tissue is diagnosed.
Data Set
Our data come from the UK Biobank ECG at rest repository [4]. There are 52,213 healthy participants and 1916 participants across the two arrhythmia types: 162 participants with ventricular arrhythmias (Arr) and 1682 with atrial fibrillation (AF). This information was gathered in the follow-up period from the same individuals throughout their time of involvement. There are 72 cases that have both Arr and AF. Five ECG features were extracted from the 10 s resting ECG of individuals with AF and Arr, plus eight covariates. The features extracted include the following: RR Interval, QRS Duration, Tpe (T-peak-to-T-end) Interval, QTc (Corrected QT interval), PR interval, Sex, Age, BMI, Smoke, Diab, Chol, SBP, DBP.
Two-Stage Concept
The ability to detect and classify Arr and AF correctly and quickly by using classification models with high confidence is a goal that is yet to be achieved, at least in the real world. This is because, in theory and practice, accuracies as high as 99% have been achieved, but there is uncertainty about whether these will hold when applied in practice. The proposed two-stage model is derived from the idea of the general doctor and the specialist. When a person is suspicious about some condition and they visit their general doctor, called a General Practitioner (GP) here in the UK, the GP does an initial assessment and either refers the patient to a specialist or says there is nothing to be concerned about, depending on their diagnosis. The same concept has been introduced for our model. The first classifier acts as a GP and classifies whether a specific individual is healthy or not. The second classifier, acting like a specialist, then decides what disease type they have.
The major contribution of this paper is to propose a two-stage model for the classification of individuals with AF and Arr using the UK Biobank dataset, with improved performance and concept over already existing models. In this paper, we describe a two-stage heart arrhythmia classification, tested on the UK Biobank data using the XGBoost Classifier. At the first stage, the features pre-processed using an outlier detection algorithm are classified into two main groups: normal and abnormal. At the second stage, the abnormal cases are further classified into their subgroups: AF or Arr. (Those participants with both AF and Arr were removed from analysis and classification for this experiment.)
Paper Structure
In Section 2, we discuss the background literature that is similar to this work. In Section 3, we discuss the methods used in the experiment, including the data description and the pre-processing step. In Section 4, we discuss classifiers present in our experiment. In Section 5, we discuss implementation of our classifiers, the results and evaluation of our methods. In Section 6, we conclude this paper and discuss possible future work.
Available Data Sets
There are currently two publicly available datasets for heart arrhythmia, the MIT-BIH ECG dataset and the UCI Arrhythmia dataset, and both have been used by most researchers. The downsides of these two datasets are their small size, the lack of enough instances of different arrhythmia types and their huge class imbalance. Therefore, in our work, we have opted to use a larger dataset in order to build more accurate classifiers.
Related Work
Currently, there are numerous methods for heart arrhythmia classification. Some methods use machine learning algorithms such as support vector machines, neural networks, decision trees and K-means, while others use deep learning methods [5][6][7][8].
For example, Oster et al. [9] proposed machine learning models based on the support vector machine algorithm, as well as a combination of classical machine learning and deep learning approaches, for the classification of patients with atrial fibrillation (AF). They used a subset of the UK Biobank ECG dataset that was manually annotated by healthcare experts. Their combined classical machine learning and deep learning model achieved an F1 score of 84.8% on a test set and a Cohen's kappa coefficient of 0.83.
Chen et al. performed automatic heart arrhythmia classification using a combined network of Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM) using the MIT-BIH Arrhythmia dataset. Their methods demonstrated 99.32% accuracy [10].
Ashfaq Kha and Kim proposed a deep learning technique for heart arrhythmia classification using the UCI arrhythmia dataset. The approach included a noise removal method using principal component analysis (PCA) and then used LSTM for classification. They reached a classification accuracy of 93.5% using their model. This is an example of computer-aided diagnosis, as mentioned earlier in the paper [2]. Fujita and Cimr proposed a CAD system for the detection of AF and flutters using a deep CNN on the MIT-BIH arrhythmia dataset. They demonstrated 98.45% accuracy, 99.87% sensitivity and 99.27% specificity [11]. In a study carried out by Nishio et al., lung cancer classification was performed using a deep CNN (DCNN) and transfer learning. The best averaged validation accuracies for the DCNN with transfer learning were 60.7%, 64.7% and 68.0% [12]. Al-Antari et al. used deep learning for the classification of breast cancer in digital X-ray mammograms [13]. The breast lesions were detected accurately, and the system showed an F1-score of 99.28%. The system was also able to classify the breast lesions in less than 0.025. In another similar study of CAD, Komeda et al. used a CNN to classify digital polyp images. The decisions made by the classifier were correct 70% of the time [14].
In general, all the methods mentioned above show good performance for detecting and classifying ECG beats (and, in some cases, other diseases) on publicly available datasets such as UCI and MIT-BIH, but they would not perform well in practice because of certain factors, including an inability to generalise well, a lack of enough instances and the size of the data. If and when validated using an independent external dataset, the aforementioned systems show overfitting [15]. This means their reliability in practice could be questioned, and it is therefore difficult to assess the stated accuracy and performance, especially when it comes to large-scale databases. Therefore, in this study, the UK Biobank is used to rectify this, and the two-stage model was introduced to further specialise each classifier for the diseases present in the dataset.
There are also numerous approaches that have used the idea of multi-stage classification. For example, Kutlu and Kuntalp [16] proposed a three-stage model using KNN classifiers. At the first stage, the heartbeats were classified into subgroups based on selected optimal features. At the second stage, the main groups were classified further into subgroups. Finally, at the third stage, the unclassified beats were labelled. The study used MIT-BIH arrhythmia as its dataset. The demonstrated performances were 85.59%, 95.46% and 99.56% for average sensitivity, selectivity and specificity, respectively. In another study, Manju and Nair [17] used a combination of SMOTEENN and XGBoost classifiers for classifying arrhythmia present in the UCI dataset. Their method uses SMOTE to address the highly imbalanced dataset problem and the XGBoost classifier for feature reduction. Their demonstrated accuracy is stated as 97.48%.
Other examples of related work include using Neural Networks in a multi-stage manner [18][19][20], deep genetic ensembles [21], the concept of boosting [22], using LSTM neural network [23] and using Cartesian Genetic Programming for creating an optimal digital circuit [24].
By taking into consideration the advantages and disadvantages of existing state-of-the-art models, this study proposes a two-stage classification model based on the XGBoost classifier that takes in the pre-processed features. In the first stage, the classifier determines whether the person belongs to the healthy or unhealthy group, and in the second stage, the classifier decides on the type of arrhythmia, AF or Arr. In order to tackle the class imbalance problem present in the dataset, the XGBoost algorithm is used in both classifiers. The Isolation Forest algorithm is also used for outlier and noise removal. To evaluate the performance of the model, the method was validated on the UK Biobank ECG at rest arrhythmia dataset. The choice of the XGBoost classifier depended on a few factors: its ability to deal with data that have a class imbalance problem, its ability to deal with missing data if any are present, and its speed and performance [25].
Proposed Two-Stage Method
In this section, we describe the ECG dataset and the methods used in the proposed two-stage ensemble classifier.
Data Description
Our data come from the UKBiobank ECG at rest repository [4]. There were 52,213 healthy participants, and 1916 participants with both arrhythmia types, 162 participants with ventricular arrhythmias (Arr) and 1682 with Atrial Fibrillation (AF). This information was gathered in the follow-up period from the same individuals throughout their time of involvement. There are 72 cases that have both Arr and AF. Five ECG features were extracted from the 10 s resting ECG of individuals with AF and Arr plus eight covariates. The features extracted include the following: RR Interval, QRS Duration, Tpe (T-peak-to-Tend) Interval, QTc (Corrected QT interval), PR interval, Sex, Age, BMI, Smoke, Diab, Chol, SBP, DBP.
Pre-Processing
In this section, the pre-processing of the features is explained in different subsections. Table 1 shows the columns with missing data and their counts. We have also tackled the missing data problem in our dataset. An Imputer Python library was used to fill the missing values for the numerical and categorical data [26].
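As an illustration only, here is a minimal sketch of such an imputation step with scikit-learn; the toy DataFrame and its column names are assumptions made for the example, not the UK Biobank field definitions.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy stand-in for the feature table (column names are illustrative only).
df = pd.DataFrame({"BMI": [27.1, np.nan, 31.4],
                   "SBP": [132.0, 145.0, np.nan],
                   "Smoke": ["no", np.nan, "yes"]})

# Numerical gaps are filled with the column mean, categorical gaps with the most frequent value.
df[["BMI", "SBP"]] = SimpleImputer(strategy="mean").fit_transform(df[["BMI", "SBP"]])
df[["Smoke"]] = SimpleImputer(strategy="most_frequent").fit_transform(df[["Smoke"]])
print(df)
```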
Undersampling and Outlier Removal
Because of the significant class imbalance problem present in our dataset, we undersampled the healthy population and used the XGBoost Classifier as the main algorithm in both stages. We also used an outlier detection method based on the Isolation Forest algorithm. Isolation Forest is an unsupervised machine learning approach using decision trees. First, it isolates outliers by randomly selecting one of the features and then carrying out an arbitrary split between the minimum and maximum values of the selected feature. This process of random feature subdivision results in shorter paths being produced in the tree structure for the anomalies, therefore distinguishing them from the normal data [27]. This algorithm is relatively fast at detecting anomalies and requires less memory than other outlier detection methods; hence, it is the method we have used in our study. By adjusting the parameters of the Isolation Forest algorithm, we removed nearly half of our healthy population. Thus, we are left with more balanced classes between healthy and unhealthy samples, reducing the imbalance by nearly 4%; the minority (disease) class now makes up 7% of the total population.
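A minimal sketch of this outlier-based undersampling with scikit-learn's IsolationForest follows; the contamination value, the random data and the variable names are assumptions chosen for illustration, not the settings used in the study.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# X_healthy: toy stand-in for the feature matrix of the healthy participants
# (13 columns, mirroring the 5 ECG features + 8 covariates).
rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(1000, 13))

# Fit an Isolation Forest on the healthy class and keep only the inlier rows (label +1).
iso = IsolationForest(contamination=0.5, random_state=0)   # contamination chosen only for illustration
labels = iso.fit_predict(X_healthy)                        # +1 = inlier, -1 = outlier
X_healthy_kept = X_healthy[labels == 1]                    # undersampled healthy population
print(X_healthy_kept.shape)
```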
Classifier Description
In this section, the proposed model is explained and a flow chart of the classifier is shown in Figure 1.
Two-Stage Classifier
The proposed novel method in this paper is inspired by the diagnostic check system used by doctors in the UK, where patients visit a General Practitioner and, if the General Practitioner indicates the patient has arrhythmia, they then visit a specialist who tells them which form(s) of arrhythmia they suffer from. The main idea is to perform the classification in two steps using two classifiers. In the first classifier, which is the initial step of the process, the cases are separated into normal and abnormal. This is very similar to visiting a General Practitioner, who does a general check and tells the patient whether any abnormality is present and what their diagnosis is. If they believe an abnormality is present, they refer the patient to a specialist; this is what sparked the idea of the two-stage classifier. In the second stage, the second classifier determines what type of disease the patient has. The classifier could also determine that they are actually healthy and that the first classifier was wrong. This simple but effective idea has had a marked impact on the classification of arrhythmia subcategories in our dataset, and we have reason to believe it will be useful for other problems in the healthcare industry.
XGBoost Classifier
There is a huge class imbalance in our data, similar to many other datasets in the healthcare industry. Only 4% of patients have heart arrhythmia, and the rest are healthy. XGBoost has been rated as one of the best machine learning algorithms in competitions for its extraordinary speed and performance and its ability to deal with missing data, skewed class distributions and a large dataset, which is exactly the data type we have. Therefore, out of all the algorithms present, we have chosen the XGBoost ensemble method [28].
XGBoost stands for Extreme Gradient Boosting, and it belongs to the family of machine learning tree models. XGBoost is an ensemble of decision trees built by gradient boosting, in which each new tree is fitted to correct the errors made by the previously built trees, with regularisation and pruning used to control model complexity. Trees are added to the model until no further improvement can be seen [28,29].
XGBoost Parameters
In this section, the XGBoost parameters for both classifiers are shown in Table 2. The parameters have been tuned using the Python randomised search cross-validation (CV) utility. This method is well suited to finding the combination of parameters that results in the best performance of a model, which is in essence a method of hyperparameter optimisation, or hyperparameter tuning [30].
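A hedged sketch of this tuning step is given below, using scikit-learn's RandomizedSearchCV with the XGBoost classifier and a stratified 10-fold split; the synthetic data, parameter ranges and scoring choice are illustrative assumptions and not the values reported in Table 2.

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold
from xgboost import XGBClassifier

# Synthetic, imbalanced stand-in for the training data (~93% healthy vs ~7% diseased).
X_train, y_train = make_classification(n_samples=1000, n_features=13,
                                        weights=[0.93], random_state=0)

param_distributions = {
    "n_estimators": randint(100, 600),
    "max_depth": randint(3, 10),
    "learning_rate": uniform(0.01, 0.3),
    "subsample": uniform(0.6, 0.4),
    "scale_pos_weight": [1, 5, 10, 13],   # candidate weights for the minority (disease) class
}

search = RandomizedSearchCV(
    estimator=XGBClassifier(eval_metric="logloss"),
    param_distributions=param_distributions,
    n_iter=20,
    scoring="f1",
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_)
```

Using a stratified split inside the search keeps the minority class represented in every fold, which matters with a class distribution this skewed.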
Cross Validation and Overfitting
To check that our model is not overfitting, we used stratified k-fold cross-validation (CV) in our experiment [31]. This was carried out for both classifiers. Stratified k-fold CV preserves the class proportions in every fold and holds out a different portion of the data for testing in each split. We used 10 splits for our k-fold CV.
Evaluation Metric Measures
Both of the classifiers developed in this study have been implemented using Python 3 on an Intel(R) Core(TM) i7-8550U CPU 1.80 GHz laptop in the JupyterLab environment. The two-stage procedure is carried out using data from local storage. Since we used the Isolation Forest to remove outliers from our healthy population and reduce the class imbalance problem, we were left with an overall data size of 27,942 rows, each having 13 features. Of these 27,942 rows, 15,942 were used for training and 12,000 for the test set. The two-stage classifier was created, and at the first stage, the system classified the ECG feature input vectors into two main groups, normal and abnormal cases. At the second stage, the abnormal group was further classified into AF or Arr. For each classifier, the performance was measured using the F1 score, sensitivity and specificity, which are standard statistical measures. These results are acquired by drawing a confusion matrix for each classifier and then taking the average of the results. In terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), these measures are defined as Sensitivity (Recall) = TP/(TP + FN), Specificity = TN/(TN + FP), Precision = TP/(TP + FP) and F1 score = 2 × (Precision × Recall)/(Precision + Recall). (1)
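As a small illustration of how per-stage confusion-matrix metrics can be computed and then averaged across the two stages, here is a sketch; the toy labels are invented, and the averaging shown is only one plausible reading of the procedure described above.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    # Sensitivity, specificity and F1 score from a 2x2 confusion matrix.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Toy predictions: stage 1 (normal vs abnormal) and stage 2 (AF vs Arr).
stage1 = binary_metrics([0, 0, 1, 1, 1, 0], [0, 1, 1, 1, 0, 0])
stage2 = binary_metrics([0, 1, 1, 0], [0, 1, 0, 0])
avg = [np.mean(pair) for pair in zip(stage1, stage2)]
print("average sensitivity, specificity, F1:", avg)
```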
Classifiers Implementation
In this section, the implementation of both classifiers is discussed in detail. The overall classifier has two stages: in the first stage, it separates cases into two classes, normal or abnormal; in the second stage, the abnormal group is further separated into AF or Arr. In the first classification stage, an XGBoost classifier is used. This classifier separates the two main groups from each other. In case of any contradiction between the predicted and the true label, the label is corrected before feeding it to the second stage. In the second classification stage, the separated abnormal cases are classified into AF or Arr. At the end of the first and second stages, the final decision is evaluated.
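A condensed sketch of the two-stage flow described above, with XGBoost in both stages, is given here; the variable names, label encoding and the optional label-correction step are written as assumptions for illustration, not as the authors' exact implementation.

```python
import numpy as np
from xgboost import XGBClassifier

# Assumed inputs: X (feature matrix), y_disease (0 = normal, 1 = abnormal),
# y_type (0 = AF, 1 = Arr; only meaningful where y_disease == 1).
def train_two_stage(X, y_disease, y_type):
    stage1 = XGBClassifier(eval_metric="logloss").fit(X, y_disease)
    abnormal = y_disease == 1                       # stage 2 is trained on the true abnormal cases
    stage2 = XGBClassifier(eval_metric="logloss").fit(X[abnormal], y_type[abnormal])
    return stage1, stage2

def predict_two_stage(stage1, stage2, X, y_disease_true=None):
    pred_disease = stage1.predict(X)
    if y_disease_true is not None:                  # optional correction of stage-1 labels,
        pred_disease = y_disease_true               # mirroring the label-correction step above
    pred_type = np.full(len(X), -1)                 # -1 marks cases classified as normal
    abnormal = pred_disease == 1
    if abnormal.any():
        pred_type[abnormal] = stage2.predict(X[abnormal])
    return pred_disease, pred_type
```

In use, the stage-1 output (or the corrected label) simply gates which rows are passed on to stage 2.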
Comparison with Other Methods
The comparison of our method with other state-of-the-art models is problematic because of differences in classification methods, data used, evaluation metrics and the types of arrhythmia being classified. As a result, we compare the performance of this two-stage classifier with that of a single-stage classifier. This way, we can clearly determine whether a two-stage approach is significantly different from a single-stage approach. The sensitivity and specificity of the first-stage classifier are 0.785 and 0.81, respectively. The sensitivity and specificity of the second-stage classifier are 0.986 and 0.909, respectively. The classifier, having separated the normal and abnormal cases, performs very well in detecting the arrhythmia type. If we had used a one-stage classifier without proposing this novel method, the classification would have been different. With all classifications done using one multi-class classifier, the sensitivity for AF is 0.39 with a precision of 0.72, the sensitivity for Arr is 0.02 with a specificity of 0.12, and for normal cases the sensitivity is 0.99 with a specificity of 0.96. The one-stage classifier is incapable of classifying the arrhythmia types correctly. However, as mentioned earlier, because of the nature of the data, the way it has been used and the methods that apply to it, a fair comparison between the proposed two-stage classifier and the background literature is difficult [32][33][34][35][36][37][38][39][40].
Our results are shown in Tables 3 and 4 below, comparing the one-stage and two-stage classifiers. Table 5 shows the comparison between this model and other state-of-the-art models.
Conclusions
This study aims to construct a heart arrhythmia classification model in a two-stage manner. The correct detection of arrhythmias present in patients results in the right medication and treatment being offered, which is vital to the patient's health improvement. The two-stage classifier has been introduced using the set of five ECG features and eight covariates to correctly classify two types of arrhythmia, atrial fibrillation and ventricular arrhythmia. The XGBoost ensemble method was used as the main detection and classification algorithm in this study. In the majority of other similar studies in the heart arrhythmia classification field, the two datasets MIT-BIH and UCI Arrhythmia were used. These datasets are not large, and even though they include many of the arrhythmias present in the modern world, they do not contain enough examples of them. For example, the UCI arrhythmia dataset only has instances from 452 individuals. Even though it covers about 16 different arrhythmia types, there are, as an example, only five examples of Atrial Fibrillation or Flutter present in the dataset [42]. In Table 6, shown below, the class code, the name of the class and the number of instances for the UCI arrhythmia dataset can be seen.
Therefore, training a classifier on so few examples means the model would not generalise well when used in practice. This is one of the main reasons we are interested in the UK Biobank ECG at rest dataset. It covers two of the most common and dangerous types of arrhythmias (Arr and AF), and it has more than enough examples to construct a classification system that generalises well. The ECG features and the covariates have all contributed to creating this robust method of detecting different arrhythmia types [43]. The data used and the method proposed showed satisfactory performance in classifying arrhythmia types. As the features also include background information about the patient, such as whether they are diabetic or not, a smoker or not, their BMI and their blood pressure levels, this study is one of the rare studies to simulate the real environment of patients. Cardiologists, when looking at a patient's ECG, always take into account the underlying conditions of the patient. For example, if the patient is suffering from high blood pressure, a slight peak in their R wave would not be as significant as in a person who does not have any underlying conditions.
The success of this two-stage classifier is also owed to the way it has been designed. The first classifier acts as a General Practitioner and simply decides whether it believes the features presented to it belong to a healthy or an unhealthy individual. Then, in the second stage, those who have been classified as not healthy/abnormal are classified further into their correct arrhythmia type. In order to create a correct classification for the first stage, those who have been misclassified in the first stage are corrected for the second stage. This means that, for example, if the system mistakenly labelled an individual as having a disease when that person is actually healthy, the label is set to its correct classification before being fed to the second stage. In real life, this could correspond to the intervention of a doctor who weighs the algorithm's decision against their own and decides on the final verdict. Using XGBoost as the main algorithm for both stages has resulted in reduced classification time and better performance on the evaluation metrics. Another main reason for choosing the XGBoost classifier is its ability to deal with class imbalance. Even though we removed many outliers from the healthy population, there is still a skewed class distribution present in our dataset: the class distribution is 7% diseased and 93% healthy, and the XGBoost ensemble method has tackled this very well.
In conclusion, the proposed method can easily be put into practice using similar medical data in other fields. The classification happens in two stages. In the first stage, the first XGBoost classifier is trained on a proportion of the data using a label called "Disease" that has a binary value of 0 or 1. Those who are classified as 1 are considered to be abnormal. Then, in the second stage, these label-1 cases are classified further into the two main groups of AF and Arr. The Isolation Forest algorithm used for the detection and removal of outliers and the ensemble technique used in this two-stage method showed that the proposed model discriminates between the two different arrhythmia types very well. The first classifier has a sensitivity of 0.785 and a specificity of 0.8. As for the second classifier, the sensitivity is 0.986 with a specificity of 0.909.
Therefore, training a classifier on so few examples means the model would not generalise well in practice. This is one of the main reasons we are interested in the UK Biobank ECG-at-rest dataset: it covers two of the most common and dangerous types of arrhythmia (AV and AF), and it has more than enough examples to build a classification system that generalises well. The ECG features and the covariates have all contributed to this robust method of detecting different arrhythmia types [43]. The data used and the proposed method showed satisfactory performance in classifying arrhythmia types. Because the features also include background information about the patient, such as diabetic status, smoking status, BMI and blood pressure, this study is one of the few to approximate the real clinical environment of patients. When cardiologists examine a patient's ECG, they always take the patient's underlying conditions into account; for example, in a patient with high blood pressure a slight peak in the R wave is less significant than in a person with no underlying conditions. The success of the two-stage classifier is also owed to its design. The first classifier acts like a general practitioner and simply decides whether the presented features belong to a healthy or unhealthy individual. In the second stage, those classified as unhealthy/abnormal are further classified into their correct arrhythmia type. To give the second stage correct inputs, cases misclassified in the first stage are corrected before being passed on; for example, if the system mistakenly labels a healthy individual as diseased, the label is reset to its correct value before it is fed to the second stage. In practice this would correspond to a doctor reviewing the algorithm's decision and giving the final verdict. Using XGBoost as the main algorithm for both stages reduced classification time and improved the evaluation metrics. Another main reason for choosing XGBoost is its ability to deal with class imbalance: even after removing many outliers from the healthy population, the class distribution remains skewed at 7% diseased and 93% healthy, and the XGBoost ensemble handles this well. In conclusion, the proposed method can readily be applied to similar medical data in other fields. The classification happens in two stages. In the first stage, an XGBoost classifier is trained on a portion of the data using a binary label called "Disease" (0 or 1); cases classified as 1 are considered abnormal. In the second stage, these cases are classified further into the two main groups AF and Arr. Together with the Isolation Forest algorithm used to detect and remove outliers, this two-stage ensemble approach discriminates very well between the two arrhythmia types. The first classifier has a sensitivity of 0.785 and a specificity of 0.8; the second classifier has a sensitivity of 0.986 and a specificity of 0.909.
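As a hedged illustration of the outlier removal and class-imbalance handling described above (again not the authors' code; the contamination rate and model parameters are assumptions):

```python
# Illustrative sketch: Isolation Forest outlier removal on the healthy class and
# imbalance-aware XGBoost via scale_pos_weight (assumed parameters).
from sklearn.ensemble import IsolationForest
from xgboost import XGBClassifier

def remove_outliers(X_healthy, contamination=0.05):
    iso = IsolationForest(contamination=contamination, random_state=0)
    keep = iso.fit_predict(X_healthy) == 1        # 1 = inlier, -1 = outlier
    return X_healthy[keep]

def imbalance_aware_xgb(y_binary):
    # Weight the minority (diseased) class by the negative/positive ratio,
    # roughly 93/7 for the class distribution reported above.
    ratio = (y_binary == 0).sum() / max((y_binary == 1).sum(), 1)
    return XGBClassifier(n_estimators=300, scale_pos_weight=ratio,
                         eval_metric="logloss")
```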
The performance of the classifier is reliant on the availability of data and the features present in the data. The combined classification metrics for the proposed approach are 87.22%, 88.55% and 85.95% for average F1 score, average sensitivity and average specificity, respectively.
Data Availability Statement: The data used for this experiment come from the UK Biobank ECG-at-rest repository. These data are not available to the public; an application has to be made in order to use or download the data.
acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no 786833.
Abbreviations
The following abbreviations are used in this manuscript: | 6,050.8 | 2021-05-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Vibro-acoustic characteristics analysis of the rotary composite plate and conical – cylindrical double cavities coupled system
Purpose – The purpose of the study is to obtain and analyze vibro-acoustic characteristics. Design/methodology/approach – A unified analysis model for the rotary composite laminated plate and conical-cylindrical double cavities coupled system is established. The related parameters of the unified model are determined by isoparametric transformation. The modified Fourier series are applied to construct the admissible displacement function and the admissible sound pressure function of the coupled systems. The energy functionals of the structure domain and acoustic field domain are established, respectively, and the structure-acoustic coupling potential energy is introduced to obtain the total energy functional. The Rayleigh-Ritz method is used to solve the energy functional. Findings – The displacement and sound pressure responses of the coupled systems are acquired by introducing the internal point sound source excitation, and the influence of relevant parameters of the coupled systems is researched. Through this research, it is found that the impedance wall can reduce the amplitude of the sound pressure response and suppress the resonance of the coupled systems. Besides, the composite laminated plate has a good noise reduction effect. Originality/value – This study can provide theoretical guidance for vibration and noise reduction.
Introduction
Rotary laminated plate structures based on fiber-reinforced composite materials have been widely applied in aerospace, marine and other fields. In engineering practice, coupled systems between rotary composite laminated plates and acoustic cavities inevitably arise.
Taking a submarine as an instance, two cabins inside the hull can be regarded as a rotary acoustic cavity coupled system, with the two cabins separated by a cabin door composed of a composite laminated plate. Noise in each cabin causes variations in sound pressure, which lead to vibration of the cabin door. Meanwhile, this vibration also radiates noise, forming the composite laminated plate and double acoustic cavities coupled system. To understand how such a structure-acoustic coupled system differs from a pure acoustic cavity coupled system, it is important to conduct in-depth research on the vibro-acoustic characteristics of the rotary composite laminated plate and conical-cylindrical double acoustic cavities coupled system and reveal the vibro-acoustic coupling mechanism, which can provide a theoretical basis for structural optimization and low-noise design.
The acoustic field characteristics of acoustic cavity coupled system under different conditions have been investigated by many scholars in recent years. Tanaka et al. (2012) derived the eigenpairs, which verified the validity by an experiment of coupled rectangular cavity, and investigated the fundamental properties of the eigenpairs derived. Moores et al. (2018) demonstrated an acoustical analog of a circuit quantum electrodynamics system that leverages acoustic properties to enable strong multimode coupling in the dispersive regime, and the operating frequencies where this emission rate is suppressed are identified. Unnikrishnan Nair et al. (2010) proposed a simplified modeling approach for numerical simulation of a coupled cavity-resonator system, and the influence of damping and resonator volume fraction on the coupled system performance is shown. A high-order doubly asymptotic open boundary for modeling scalar wave propagation in two-dimensional unbounded media was presented by Birk et al. (2016), which can handle domains with arbitrary geometry by using a circular boundary to divide these into near field and far field. Through modal energy analysis (MODENA), Zhang et al. (2016) defined a dimensionless coupling quotient, which is equal to the ratio of the gyroscopic coupling coefficient and the critical coefficient at modal frequencies. Chen et al. (2017) presented a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces and revealed the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure. Based on the energy principle in combination with a 3D modified Fourier cosine series approach, Shi et al. (2018) studied the modeling and acoustic eigen analysis of coupled spaces with a coupling aperture of variable size. It can be seen from the existing literature that the current research on acoustic cavity coupled system is mostly for rectangular cavity. However, the research on the acoustic characteristic model of the rotary acoustic cavity is rarely involved, let alone the research on the modeling of rotary acoustic cavity coupled system. At the same time, most of the investigations are based on the coupling mechanism analysis between acoustic cavities, while no analysis model has been established specifically.
Unlike the acoustic cavity coupled system, the coupling of rotary composite laminated plate and conical-cylindrical double acoustic cavities belongs to structure-acoustic coupling. Many scholars have researched the coupling mechanism of structure-sound coupled systems. On the foundation of these investigations, the models of sectional structure-sound coupled systems are established, and the vibro-acoustic characteristics of the coupled system are analyzed. Van Genechten et al. (2011) presented a newly developed hybrid simulation technique for coupled structural-acoustic analysis, which applies a wave-based model for the acoustic cavity and a direct or modally reduced finite element (FE) model for the structural part. By invoking the energy distribution approach, De Rosa et al. (2012) proposed a similitude for the analysis of the dynamic response of acoustic-elastic assemblies and discussed the FE modeling of a flexural plate coupled to an acoustic room and an infinite cylinder containing a fluid. Wang et al. (2015) developed a coupled smoothed finite element method (S-FEM) to deal with the structural-acoustic problems consisting of a shell configuration interacting with the fluid medium; numerical examples of a cylinder cavity attached to a flexible shell and an automobile passenger compartment were conducted to illustrate the effectiveness and accuracy of the coupled S-FEM for structural-acoustic problems. Shu et al. (2014) proposed a level set-based structural topology optimization method for the optimal design of coupled structural-acoustic system with a focus on interior noise reduction. Some scholars have also studied the vibro-acoustic coupling characteristics of the composite laminated plate-double cavity coupled system and obtained some achievements. Larbi et al. (2012) demonstrated the theoretical formulation and the FE implementation of vibro-acoustic problems with piezoelectric composite structures connected to electric shunt circuits. Sarıgül and Karagözlü (2014, 2018) presented the results of modal structure-sound coupling analysis for rectangular plates with different composite parameters and compared with the performance of isotropic plate systems. Ferreira et al. (2014), Carrera et al. (2018) and Cinefra et al. (2021) established a theoretical model of rectangular composite laminated plate and constructed a rectangular cavity model based on the FE formula of sound field under standard pressure. In the existing literature, the research object of the composite laminated plate-cavity coupled system is mostly concentrated on the rectangular composite laminated plate-cavity coupled system, while the rotary composite laminated plate-cavity coupled system is rarely studied, much less the rotary composite laminated plate and conical-cylindrical double acoustic cavities coupled system.
In view of the shortcomings of existing research, the vibro-acoustic characteristics analysis of the rotary composite laminated plate and conical-cylindrical double acoustic cavities coupled system is conducted in this paper. First, the expressions of the admissible displacement function and sound pressure function of the laminated plate are constructed. Second, the energy functionals of the structure domain and sound field domain are established, respectively, and the coupling potential energy is added to acquire the energy functional of the whole coupled system. Then, the Rayleigh-Ritz method is applied to obtain the equations governing the vibro-acoustic characteristics of the system. Finally, based on the fast convergence and accuracy of the model, the effect of relevant parameters on the vibro-acoustic characteristics is analyzed, and the response of the coupled system is investigated by introducing internal point force and point sound source excitation.
2. Unified analysis model of the coupled systems
2.1 Model description
Figure 1 demonstrates the rotating cross-section geometrical parameters and coordinate systems of the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system. As shown in Figure 1a, the coordinate system of the conical-cylindrical acoustic cavity coupled system is composed of two local coordinate systems (o-s 1 , θ 1 , r 1 ) and (o-s 2 , θ 2 , r 2 ). R 1 and R 2 separately represent the short radius and long radius of conical cavity 1, and α and L 1 are the cone-apex angle and the length of the generatrix of conical cavity 1. R 2 and R 3 denote the inner radius and outer radius of cylindrical cavity 2, and the height of cylindrical cavity 2 is marked as L 2 . The thickness of the acoustic cavity coupled system is represented as H. Compared with the coupled system in Figure 1a, the rotary composite laminated plate and conical-cylindrical double acoustic cavities coupled system adds a composite plate between the two acoustic cavities. It can be found from Figure 1b that the coordinate system of the rotary composite laminated plate and conical-cylindrical double cavities coupled system is composed of three local coordinate systems (o-s 1 , θ 1 , r 1 ), (o-s 2 , θ 2 , r 2 ) and (o-s 3 , θ 3 , r 3 ). L p is the thickness of the composite laminated plate, and the other geometric parameters have the same meanings as in Figure 1a. The shapes of the rotary acoustic cavity coupled system at different rotation angles are given in Figure 2. The parameter ϑ is introduced here to express the rotation angle of the coupled systems, and whether the rotation angle is 2π is related to the number of acoustic walls in the coupled systems. In order to study the vibro-acoustic characteristics of the coupled systems, a monopole point sound source Q is placed.
In Figure 3, the general boundary conditions on the edge of the rotary composite laminated plate are represented by introducing three groups of linear springs k u , k v , k w along the u, v, w directions and two groups of torsion springs K r and K θ , which are continuously distributed along the boundary. u, v and w denote the r, θ and s directions, respectively. k u θ0 , k v θ0 , k w θ0 , K r θ0 and K θ θ0 are introduced to represent the five groups of boundary springs at the boundary θ = 0, and the boundary springs at θ = ϑ, r = 0 and r = H can also be expressed in a similar way. When the rotation angle ϑ = 2π, the coupling boundary shown in Figure 3b is generated for the rotary composite laminated plate. Three groups of linear coupling springs k uc , k vc and k wc and two groups of torsion springs K rc and K θc are uniformly set on the coupling boundary (θ = 0 and θ = 2π). Therefore, the coupling of the rotating composite laminated plate can be realized.
Related parameters of the unified model
To ensure that the systems can be coupled during the modeling process, it is necessary to transform the parallelogram section of the conical acoustic cavity into a square section. It can be found from Figure 4 that the plane ros coordinate system is transformed to the plane ξoη coordinate system by isoparametric transformation, where the vertices in the plane ξoη coordinate system are (0, 0), (1, 0), (1, 1) and (0, 1).
Using the four-node coordinate transformation in the FE method, the coordinate transformation equations are as follows, in which (r (i) , s (i) ) and (ξ (i) , η (i) ) separately denote the coordinates of the ith vertex in the plane ros coordinate system and the plane ξoη coordinate system, and N i (ξ, η) is the shape function of the ith vertex of the plane ros coordinate system. The specific coordinate system transformation process is expressed in matrix form, where |J| is the determinant of the Jacobian matrix. Table 1 shows the related parameters between the local coordinate systems of the conical and cylindrical acoustic cavities and the coordinate system of the double curvature cavity element (α, β, z). In addition, the Lamé coefficients of the cavities and the value ranges of the corresponding coordinate axes are also given.
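As a hedged illustration, the standard four-node bilinear mapping onto the unit square with vertices (0, 0), (1, 0), (1, 1) and (0, 1) takes the following generic finite-element form; this is a sketch of the usual shape functions and Jacobian, not necessarily the paper's exact expressions.

```latex
% Standard four-node bilinear shape functions and Jacobian (illustrative)
\begin{aligned}
N_1(\xi,\eta) &= (1-\xi)(1-\eta), \qquad N_2(\xi,\eta) = \xi(1-\eta),\\
N_3(\xi,\eta) &= \xi\eta, \qquad N_4(\xi,\eta) = (1-\xi)\eta,\\
r(\xi,\eta) &= \sum_{i=1}^{4} N_i(\xi,\eta)\, r^{(i)}, \qquad
s(\xi,\eta) = \sum_{i=1}^{4} N_i(\xi,\eta)\, s^{(i)},\\
\mathbf{J} &= \begin{bmatrix}
\partial r/\partial\xi & \partial s/\partial\xi\\
\partial r/\partial\eta & \partial s/\partial\eta
\end{bmatrix}, \qquad |\mathbf{J}| = \det\mathbf{J}.
\end{aligned}
```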
Construction of admissible displacement and sound pressure functions
Based on the first-order shear deformation theory (FSDT) and two-dimensional modified Fourier series theory, the admissible displacement function of the rotary composite laminated plate is established. The admissible sound pressure functions of the conical and cylindrical cavities are constructed by three-dimensional modified Fourier series. The expression is the superposition of a cosine function and six sine and cosine supplementary functions. The specific expressions can be written as follows (Zhang et al., 2019, 2020a), in which u(r, θ, t), v(r, θ, t) and w(r, θ, t) separately represent the admissible displacement functions of the surface in the r, θ and z directions of the rotary composite laminated plate, and f r (r, θ, t) and f θ (r, θ, t) denote the lateral rotations in the r and θ directions, respectively. p n (n = 1, 2) is the expression of the admissible sound pressure function for the nth cavity in the coupled system. The displacement supplement polynomials of the rotary composite laminated plate can be expressed as Φ M and Φ Nq (N q = 1, 2), and the sound pressure supplement polynomials of the nth cavity are written as P n Ω and P n Θq (Θ q = 1, 2, . . ., 6), while A mn , B mn , C mn , D mn and E mn represent the unknown two-dimensional and three-dimensional Fourier coefficient vectors. These parameters can be expressed as follows:
Stress-strain and displacement relationship
The normal strain and shear strain at any point on the composite laminated plate can be defined by the changes of strain and curvature in the midplane, where ε 0 r , ε 0 θ , γ 0 rθ , γ 0 rz and γ 0 θz are the strain components on the middle surface of the laminated plate, and χ r , χ θ and χ rθ are the curvature variation components on the middle surface. According to Hooke's law, the corresponding stress-strain relationship at the k-th layer can be obtained, where the stiffness coefficients of the laminated plate are denoted by Q k ij (i, j = 1, 2, . . ., 6). In Equation (35), T is the transformation matrix, where θ is the included angle between the main direction and the r direction of the layer, namely, the layer laying direction. Q k ij (i, j = 1, 2, . . ., 6) denotes the material coefficients of the k-th layer of the plate, and their values can be obtained from the engineering constants. The forces and torques applied to the laminated plate are obtained by integrating the stresses through the thickness, layer by layer, where N r , N θ and N rθ are the in-plane resultant forces, M r , M θ and M rθ represent the bending and torsional moments, and Q θ , Q r denote the transverse shear resultant forces. The shear correction factor, which accounts for the strain energy caused by transverse shear stress, is expressed as κ s , and N L represents the number of layers.
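As a hedged reminder of the standard first-order shear deformation relations underlying this subsection (textbook forms, not necessarily the paper's exact expressions), the strain decomposition, the ply transformation matrix and the stress resultants are usually written as:

```latex
% Standard FSDT/laminate relations (illustrative)
\begin{aligned}
\{\varepsilon_r,\ \varepsilon_\theta,\ \gamma_{r\theta}\}
  &= \{\varepsilon_r^{0},\ \varepsilon_\theta^{0},\ \gamma_{r\theta}^{0}\}
   + z\,\{\chi_r,\ \chi_\theta,\ \chi_{r\theta}\},\\[2pt]
\mathbf{T} &=
\begin{bmatrix}
\cos^{2}\theta & \sin^{2}\theta & -2\sin\theta\cos\theta\\
\sin^{2}\theta & \cos^{2}\theta & 2\sin\theta\cos\theta\\
\sin\theta\cos\theta & -\sin\theta\cos\theta & \cos^{2}\theta-\sin^{2}\theta
\end{bmatrix},\\[2pt]
(N_r, N_\theta, N_{r\theta}) &= \sum_{k=1}^{N_L}\int_{Z_k}^{Z_{k+1}}
 (\sigma_r, \sigma_\theta, \tau_{r\theta})\,\mathrm{d}z,\qquad
(M_r, M_\theta, M_{r\theta}) = \sum_{k=1}^{N_L}\int_{Z_k}^{Z_{k+1}}
 (\sigma_r, \sigma_\theta, \tau_{r\theta})\,z\,\mathrm{d}z,\\
(Q_r, Q_\theta) &= \kappa_s \sum_{k=1}^{N_L}\int_{Z_k}^{Z_{k+1}}
 (\tau_{rz}, \tau_{\theta z})\,\mathrm{d}z .
\end{aligned}
```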
Energy equation and solution procedure
The Lagrange equations of the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system can be expressed as follows, where T P and T Cn (n = 1, 2) are the total kinetic energies of the rotary composite plate and the nth acoustic cavity in the coupled systems, respectively. U P and U Cn separately denote the total potential energy in the plate and the nth acoustic cavity. U P-coupling and U Cn-coupling represent the coupling potential energies of the composite laminated plate and acoustic cavities when the rotation angle ϑ = 2π. The boundary spring potential energy of the plate can be expressed as U SP . W P&C 1 , W P&C 2 , W C 1 &P and W C 2 &P are the coupling potential energies generated when the composite plate is coupled with acoustic cavity 1 and acoustic cavity 2, and W P&C 1 = W C 1 &P , W P&C 2 = W C 2 &P . W Cn−wall represents the impedance potential energy caused by the impedance wall of the nth acoustic cavity in the coupled systems. W C 1 &C 2 and W C 2 &C 1 denote the coupling potential energy between acoustic cavity 1 and acoustic cavity 2, in which W C 1 &C 2 = W C 2 &C 1 . W F is the work done by the harmonic point force F on the composite laminated plate, and W Qn is the work done by the monopole point sound source in the nth acoustic cavity of the coupled systems.
For the two coupled systems, not all of the energy terms exist: (1) the conical-cylindrical acoustic cavity coupled system; (2) the rotary composite laminated plate and conical-cylindrical double cavities coupled system. The total kinetic energies of the rotating composite laminated plate and the nth acoustic cavity, represented by T P and T C n , can be written as follows, in which N l represents the number of layers of the rotary composite laminated plate. Z k is the thickness coordinate of the bottom surface of the k-th layer, and Z k+1 is that of its upper surface. The material density of the k-th layer can be expressed as ρ k , and ρ Cn denotes the density of the acoustic medium in the nth acoustic cavity.
The specific expressions of the total potential energies U P and U C n are as follows. In these formulas, the total potential energy of the rotary composite laminated plate includes the tensile potential energy U stretch , the bending potential energy U bend and the tensile and bending coupling potential energy U s-b , whose expressions have also been given, and c n is the propagation speed of sound in the acoustic medium of the nth cavity.
As the boundary conditions of the rotary composite laminated plate model established in this paper are determined by the boundary springs, the boundary spring potential energy U SP will be generated, and the expressions are as follows. When 0 < ϑ < 2π, the impedance wall dissipation energy of the nth sound cavity in the coupled systems is W Cn−wall = W n wall1 + W n wall2 + W n wall3 + W n wall4 + W n wall5 + W n wall6 (59), where j represents the pure imaginary unit. S r denotes the area of the rth uncoupled acoustic wall, and the corresponding acoustic wall impedance value is expressed by Z r . When ϑ = 2π, the coupling potential energy of the middle plate U P-coupling and that of the nth acoustic cavity of the coupled systems U Cn-coupling can be written as follows. The coupling potential energy between acoustic cavity 1 and acoustic cavity 2, W C 1 &C 2 , can be expressed as follows. W P&C 1 and W P&C 2 are the coupling potential energies when the laminated plate is coupled with acoustic cavities 1 and 2, respectively, and the expressions are as follows. The expressions of the work W F done by the harmonic point force F on the composite laminated plate and the work W Qn done by the monopole point sound source Q in the nth acoustic cavity in the coupled systems are as follows, where f i (i = u j , v j , w j ) is the external load distribution function, and the action position of the harmonic point force F is (r 0 , θ 0 ). Q n s represents the distribution function of the point sound source acting on the nth cavity, the amplitude of the point sound source is A (kg/s 2 ) and the specific position of the point sound source Q is (r e , θ e , s e ). δ and δ c are the two-dimensional and three-dimensional Dirac delta functions.
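A hedged, generic form of the artificial-spring boundary energy along one plate edge (an illustration of the virtual spring technique rather than the paper's exact expression) is:

```latex
% Generic artificial-spring boundary potential energy (illustrative)
U_{SP} \sim \frac{1}{2}\int_{\Gamma}
\left[\, k_u u^{2} + k_v v^{2} + k_w w^{2}
      + K_r f_r^{2} + K_\theta f_\theta^{2} \,\right]\mathrm{d}\Gamma ,
```

where Γ denotes the plate edge and f r , f θ are the lateral rotations restrained by the torsion springs.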
Each energy equation is substituted into Equations (41)-(43). According to the Rayleigh-Ritz method, the partial derivatives of the Lagrange equations with respect to the unknown two-dimensional and three-dimensional Fourier coefficients are taken and set equal to zero, e.g. for cavity 2: ∂L C 2 /∂G m t n t l t = ∂T C 2 /∂G m t n t l t − ∂U C 2 /∂G m t n t l t − ∂U C 2 -coupling /∂G m t n t l t − ∂W P&C 2 /∂G m t n t l t − ∂W C 2 &C 1 /∂G m t n t l t + ∂W C 2 -wall /∂G m t n t l t + ∂W Q 2 /∂G m t n t l t = 0. Transforming Equations (71)-(73) into matrix form, K p and K cn respectively represent the stiffness matrices of the rotary composite laminated plate and the nth acoustic cavity in the coupled systems, and M p , M cn are the mass matrices. The impedance matrix of the nth acoustic cavity in the coupled systems is denoted by Z cn . C Cn&P is the vibro-acoustic coupling matrix between the nth cavity and the composite laminated plate in the coupled systems, and C P&Cn = C Cn&P .
When F and Q n are equal to zero, Equations (75)-(77) can be combined to obtain the natural frequency and mode solving equations of the coupled systems. To facilitate the solution, the equations are converted into a linear eigenvalue problem in the state vector G = [ P mn , ωP mn , F mt nt lt , ωF mt nt lt , G mt nt lt , ωG mt nt lt ] T with system matrix S. Finally, the eigenvalue ω is the natural frequency of the coupled systems, and the eigenvector G is the corresponding mode. By substituting the harmonic point force and monopole point sound source into Equations (78)-(81), the steady-state response of the coupled systems can be obtained.
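A hedged sketch of the usual state-space linearization of a quadratic eigenvalue problem is given below; the generic form is assumed only for illustration, and the actual arrangement of the paper's matrix S and vector G may differ.

```latex
% Illustrative linearization of a quadratic eigenvalue problem (assumed generic form)
\left(\mathbf{K} + j\omega\,\mathbf{Z} - \omega^{2}\mathbf{M}\right)\mathbf{G}_0 = \mathbf{0}
\quad\Longleftrightarrow\quad
\begin{bmatrix}
\mathbf{0} & \mathbf{I}\\
\mathbf{M}^{-1}\mathbf{K} & j\,\mathbf{M}^{-1}\mathbf{Z}
\end{bmatrix}
\begin{Bmatrix}\mathbf{G}_0\\ \omega\,\mathbf{G}_0\end{Bmatrix}
= \omega\,
\begin{Bmatrix}\mathbf{G}_0\\ \omega\,\mathbf{G}_0\end{Bmatrix}.
```

The eigenvalues ω of the block matrix are then the natural frequencies, which matches the paired entries (P mn , ωP mn ), (F mt nt lt , ωF mt nt lt ) and (G mt nt lt , ωG mt nt lt ) in the state vector G above.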
Numerical discussion and result analysis
According to the unified analysis model of the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system, numerical discussion and result analysis are carried out to further research the vibro-acoustic characteristics. This section mainly verifies the convergence and accuracy of the coupled system model, analyzes the factors influencing the natural frequencies of the coupled systems under free vibration and studies the steady-state response of the coupled systems under the action of harmonic point force and point sound source excitation. In this section, the boundary conditions of the composite laminated plate in the coupled system are represented by the artificial virtual spring boundary technique, which can simulate complex boundary conditions. The spring stiffness settings of each boundary condition are presented in Table 2. Table 3 shows the material parameters of the composite laminated plate in the examples. The acoustic medium of the acoustic cavities is air, where the density of air is defined as ρ air = 1.21 kg/m 3 , and the speed at which sound waves travel through air is defined as c air = 340 m/s.
Unified analysis model validation
The convergence analysis and accuracy verification of the model of the coupled systems established above will now be carried out. Tables 4 and 5 present the convergence analysis; the geometric parameters of the coupled systems in Tables 4 and 5 are as follows: R 0 = 0.5 m, R 1 = 1 m, R 2 = 1.5 m, H = 0.5 m, α = π/6, L 2 = 1.5 m, h p = 0.02 m, ϑ = π/2. The material used is [Glass/epoxy | Boron/epoxy], and the layering angle is [0 | π/2]. It can be seen from Tables 4 and 5 that when M c × N c × Q c = 5 × 5 × 5 and M p × N p = 10 × 10, the natural frequencies of each order have basically converged. Compared with the FE simulation results, the maximum error of the natural frequency is less than 0.1%, indicating the accuracy of the proposed method. Therefore, it is reliable to select truncation values M c × N c × Q c = 5 × 5 × 5 and M p × N p = 10 × 10 in the numerical calculation of the present method. When the rotation angle of the coupled systems ϑ = 2π, the rotary composite laminated plate and acoustic cavities in the systems will generate additional coupling potential energy, exerting an influence on the vibro-acoustic characteristics of the systems. Therefore, the accuracy of the proposed method is verified again as shown in Table 6. The acoustic walls of the cavities in this example are all rigid walls, and the boundary conditions of the rotary plate are set as S-S-S-S. The geometric parameters of the coupled systems in Table 6 are as follows: R 0 = 0.7 m, R 1 = 1.4 m, R 2 = 2.1 m, H = 0.7 m, α = π/6, L 2 = 2.5 m, h p = 0.03 m. The material used is [Glass/epoxy | Boron/epoxy | Glass/epoxy], and the layering angle is [π/4 | 0 | π/4]. The maximum error between the numerical calculation results and the FE simulation results in Table 6 is less than 1%, which indicates that the present method still has good accuracy when the rotation angle ϑ = 2π.
Free vibration analysis
In this section, the free vibration analysis of the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system is conducted to investigate the effects of related parameters on the vibro-acoustic characteristics of the coupled systems. Figure 5 shows the variations of the natural frequencies of the coupled systems with different rotation angles. The acoustic walls of the cavities in this example are all rigid walls, and the boundary conditions of the rotary plate are set as S-S-C-C. The determined geometric parameters are as follows: R 0 = 0.6 m, R 1 = 1.2 m, R 2 = 1.7 m, H = 0.5 m, α = π/4, L 2 = 2 m, h p = 0.024 m. The material used is [Glass/epoxy | Boron/epoxy], and the layering angle is [−π/4 | π/4]. As shown in Figure 5, the natural frequency of the same order decreases with the increase of the rotation angle ϑ. However, the situation changes when the rotation angle ϑ = 2π. Taking the rotary composite laminated plate and conical-cylindrical double cavities coupled system as an instance, Table 7 gives the first eight natural frequencies of the coupled system with various rotation angles. It can be found from Table 7 that the natural frequency at ϑ = 2π is basically the same as that at ϑ = π.
In the case of coupling of the two acoustic walls, a closed loop will be formed, and repeated modes will be generated, resulting in continuous expansion of the natural frequencies of each order. These conditions explain why the natural frequency rises again when ϑ = 2π in Figure 5.
Table 6. Accuracy analysis of the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system at a rotation angle of 2π
For the conical-cylindrical acoustic cavity coupled system and rotary composite laminated plate and conical-cylindrical double cavities coupled system, it is necessary to analyze the effects of conical cavity cone-apex angle α and cylindrical cavity height L 2 on the natural frequency of the coupled systems. The variations of natural frequencies of the coupled systems with different cone-apex angle α and height L 2 are presented in Figure 6.
The acoustic walls of the cavities in this example are all rigid walls, and the boundary conditions of the rotary plate are set as S-S-C-C. The determined geometric parameters are as follows: R 0 = 0.5 m, R 1 = 1 m, R 2 = 1.5 m, H = 0.5 m, ϑ = π/2, h p = 0.018 m. The material used is [Glass/epoxy | Boron/epoxy], and the layering angle is [−π/2 | π/2]. As can be seen from Figure 6, the natural frequency of the coupled system decreases with the decrease of the cone-apex angle α and the increase of the height L 2 .
Compared with the conical-cylindrical acoustic cavity coupled system, the rotary composite laminated plate and conical-cylindrical double cavities coupled system is also affected by the relevant parameters of the composite laminated plate. The first eight natural frequencies of the coupled system under different boundary conditions are shown in Table 8. The acoustic walls of the cavities in this example are all rigid walls. The geometric parameters of the coupled system in Table 8 are as follows: R 0 = 0.5 m, R 1 = 1 m, R 2 = 1.5 m, H = 0.5 m, α = π/6, L 2 = 1.5 m, h p = 0.02 m, ϑ = π/2. The material used is [Glass/epoxy | Boron/epoxy], and the layering angle is [0 | π/2]. It can be found from Table 8 that the natural frequencies of the coupled system calculated by the present method are basically consistent with the FE simulation results for different boundary conditions, which again proves the accuracy of the proposed method. Meanwhile, the natural frequency of the coupled system increases with the increase of the stiffness value of the boundary springs, and the calculation results of elastic boundary 1 (E 1 ) and elastic boundary 2 (E 2 ) also verify the reliability of this conclusion. Besides the boundary conditions, the thickness of the plate also needs to be studied parametrically. Figure 7 shows the influence trend of the laminated plate thickness h p on the natural frequency of the coupled system under different size conditions. The acoustic walls of the cavities in this example are all rigid walls, and the boundary conditions of the rotary composite laminated plate are set as C-C-C-C. The determined geometric parameters are as follows: α = π/6, L 2 = 1.5 m, ϑ = π/2. The material used is [Glass/epoxy | Boron/epoxy], and the layering angle is [0 | π/2]. As can be seen from Figure 7, the natural frequency of the coupled system shows an upward trend with the increase of h p .
Steady state response analysis
In this section, the displacement response and sound pressure response of the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system under the excitation of a point sound source are researched. Figures 8 and 9 present these responses. It can be seen from Figures 8 and 9 that the response curve obtained by the proposed method is consistent with the results of the FE simulation, which proves the accuracy of the displacement and sound pressure response analysis model of the coupled systems under the excitation of a point sound source. The sound pressure response of the coupled systems under the excitation of a point sound source with different impedance values of the acoustic walls is presented in Figure 10. All six acoustic walls in the coupled systems are impedance walls, and the impedance values of the acoustic walls are rigid, Z 1 = ρ c c 0 (100j) and Z 2 = ρ c c 0 (30j). The boundary condition, geometric parameters, materials used and layering angle of the coupled systems are consistent with those in Tables 4 and 5. The action position of the point sound source is at (0.80, 0.71, 0.25 m) in cavity 1, observation point 1 is located at (0.85, 0.42, 0.33 m) in cavity 1 and observation point 2 is located at (1.15, 0.45, 0.52 m) in cavity 2.
As shown in Figure 10, the amplitude of the sound pressure response is reduced, and the effect of resonance suppression is achieved when the acoustic wall is changed to the impedance wall, whose influence increases with the increase of the impedance value. However, the variation of acoustic walls in the coupled systems does not affect the waveform of the response.
In the rotary composite laminated plate and conical-cylindrical double cavities coupled system, if the point source excitation and the observation point are not in the same cavity, the sound pressure response of the coupled system will be affected by the laminated plate between the two cavities. To investigate the influence of the plate thickness, Figure 11 gives the sound pressure response curves of the observation points in the upper conical cavity under the excitation of the point sound source in the lower cylindrical cavity for different thicknesses of the laminated plate, where h p = 0 m denotes that there is no plate between the two acoustic cavities, so that the system reduces to the cylindrical-conical acoustic cavity coupled system. The acoustic walls, boundary condition, geometric parameters, materials used and layering angle of the coupled systems are consistent with those in Tables 4 and 5. The action position of the point sound source is at (1.23, 0.41, 0.86 m) in cavity 2, observation point 1 is located at (0.77, 0.69, 0.29 m) in cavity 1 and observation point 2 is located at (0.91, 0.35, 0.38 m) in cavity 1. It can be seen from Figure 11 that the amplitude of the sound pressure response curves at the same observation point decreases with increasing plate thickness; in particular, the difference between the response without the plate and the response with the composite laminated plate is apparent. The results show that the composite laminated plate in the coupled system contributes to noise reduction, and the effect increases with the thickness of the composite laminated plate.
Conclusion
A unified analysis model of the rotary composite laminated plate and conical-cylindrical double cavities coupled system is constructed in this investigation. First, the admissible displacement and sound pressure functions of the rotary laminated plate and acoustic cavity are presented. Second, the energy functionals of the laminated plate structure domain and cavity sound field domain are proposed. Then, the coupling potential energy between the cavities and between the plate and the double cavities is introduced to obtain the total energy functional of the coupled systems. Finally, the energy functional is solved by the Rayleigh-Ritz method. The free vibration and steady-state response of the coupled systems are studied through numerical examples, and the following significant conclusions are obtained: (1) The unified analysis model established in this paper has good convergence and accuracy when the truncation values are M c × N c × Q c = 5 × 5 × 5 and M p × N p = 10 × 10.
(2) Under the condition of free vibration, the natural frequency of the rotary composite laminated plate and conical-cylindrical double cavities coupled system increases with the increase of the rotation angle, the spring stiffness, the thickness of the plate and the cone-apex angle of the conical acoustic cavity. It decreases as the height of the cylindrical acoustic cavity increases.
(3) In the conical-cylindrical acoustic cavity coupled system and the rotary composite laminated plate and conical-cylindrical double cavities coupled system, the impedance wall can reduce the amplitude of the sound pressure response and suppress the resonance of the coupled systems. The effect of the impedance wall increases with the increase of the impedance value but has no effect on the waveform of the response. In the rotary composite laminated plate and conical-cylindrical double cavities coupled system, the composite laminated plate has a good noise reduction effect, which increases with the increase of the plate thickness. | 8,528.8 | 2022-03-22T00:00:00.000 | [
"Engineering",
"Physics"
] |
Deep Learning Based Underwater Acoustic Target Recognition: Introduce a Recent Temporal 2D Modeling Method
In recent years, the application of deep learning models for underwater target recognition has become a popular trend. Most of these are pure 1D models used for processing time-domain signals or pure 2D models used for processing time-frequency spectra. In this paper, a recent temporal 2D modeling method is introduced into the construction of ship radiation noise classification models, combining 1D and 2D. This method is based on the periodic characteristics of time-domain signals, shaping them into 2D signals and discovering long-term correlations between sampling points through 2D convolution to compensate for the limitations of 1D convolution. Integrating this method with the current state-of-the-art model structure and using samples from the Deepship database for network training and testing, it was found that this method could further improve the accuracy (0.9%) and reduce the parameter count (30%), providing a new option for model construction and optimization. Meanwhile, the effectiveness of training models using time-domain signals or time-frequency representations has been compared, finding that the model based on time-domain signals is more sensitive and has a smaller storage footprint (reduced to 30%), whereas the model based on time-frequency representation can achieve higher accuracy (1–2%).
Introduction
Acoustic signals are the only energy form known to humans that can travel long distances underwater and are generally considered to be the best information carrier for underwater targets [1]. So, for a long time, underwater acoustic target recognition (UATR) technology has been an important auxiliary tool for marine resource development, playing an important role in both civilian and military applications. Affected by subsea reverberation [2], multipath effects [3], Doppler effects [4], etc., collected ship radiated noise is always accompanied by interference noise. Traditional UATR technology mainly relies on signal processing methods to reduce interference [5], extract features characterizing ship attributes [6,7], and design classifiers for recognition [8]. There are two difficulties: it is easy to eliminate the target signal information while filtering out some strong interference; and, with the upgrading of marine equipment, the interference is becoming increasingly sophisticated, making it difficult to find a universal feature extraction method [9]. Deep neural networks (DNNs) provide new ideas for researchers: they can achieve a mapping from acoustic signals to category labels by continuously iterating to fit the nonlinear function and can act as the classifier to achieve UATR [10].
The pipelines of UATR based on Machine Learning (ML) or DNNs are shown in Figure 1, and mainly include pre-processing, feature extraction, and classifier design. Pre-processing generally refers to audio down-sampling [11,12] and amplitude normalization [13,14]. Signal processing methods are also applied, and some specific filters are designed to meet the research needs [15] or improve signal quality [16][17][18]. Regarding features for classification and recognition, the power spectrum was often used in early research [19]. Detection of envelope modulation on noise (DEMON) and low-frequency analysis and recording (LOFAR) are commonly used spectral analysis methods that serve as manually designed features in UATR [20][21][22][23]. The Constant-Q transform (CQT) [24], Mel frequency cepstral coefficients (MFCC) [25], and Gammatone frequency cepstral coefficients (GFCC) [26], which simulate the auditory perception of the human ear, also perform well. In addition, some studies used multiple spectra [14] or information from multiple modalities [12] to obtain a fused feature of targets, which is helpful for improving robustness. Recently, neural networks have also been widely used in experiments to further extract features and compress feature spaces [10]. In addition to first transforming raw and non-stationary time domain signals into the frequency domain and then inputting them to the network, it is feasible to directly input time domain signals, treating them as time series or text sequences. This approach captures the correlation between sampling points and mines inter-class features using the hidden layers of the network model [27][28][29].
One study used bandpass filters to divide the raw signal into multiple frequency band components and then merged these components into a 2D tensor, which can be input for 2D convolution [30]. Hu et al. adopted another 2D-variation method that combined the 1D feature vectors of multiple channels and converted the vectors into 2D tensors, then used several 2D time-dilated convolutional blocks to capture deep features for classification [31].
Inspired by the periodicity of natural time series, such as changes in rainfall during the rainy and dry seasons, a new temporal 2D-variation model was proposed in [32] and performs well in some time series analysis tasks. The modeling method can be summarized as follows: use the frequency domain characteristics of a time series to discover its periods, then shape the time series into a stack of multiple periods, which transforms raw 1D temporal signals into a set of 2D tensors. The obtained 2D tensors contain intraperiod and interperiod variations of the signals. Considering that the underwater radiated noise of ships usually has obvious periodic characteristics, in this paper we introduce this recent 2D modeling method into UATR to combine conv1D and conv2D, and use TimesBlocks to capture periodic features of underwater acoustic time domain signals on the basis of current convolutional neural networks. Compared to current research that focuses solely on 1D or 2D convolution, our approach not only prevents information loss during time-frequency conversion and overcomes the limitations of pure 1D convolution, but also provides a new idea for model structure design. Our work mainly includes the following parts:
•
We train models using time domain signals and time-frequency representations, obtain a network structure suitable for UATR, and analyze the performance and applicable scenarios of these two types of inputs.
•
We adopt a recent temporal modeling method to transform the time-domain feature vectors of underwater acoustic signals extracted by 1D convolution into 2D tensors, and then use 2D convolution to further extract periodic characteristics. By adding TimesBlocks to two excellent model structures, the models break through bottlenecks in the original structures' recognition ability.
The remainder of this paper is organized as follows. The 2D modeling method and TimesBlock are described in detail in the Methods section. The dataset used, data pre-processing methods, comparison models, and training strategy are introduced in the Experiment section. The Results section presents the experimental results and some discussion. Finally, the Conclusion section presents a summary of this paper.
Temporal 2D-Variation Modeling
The core of the 2D modeling method is discovering the periodicity of the time series, which is achieved by searching for a series of periods. As shown in Figure 2a, the algorithm can be divided into the following five steps.
•
Perform a fast Fourier transform (FFT) on the time series X 1D to convert it into a frequency domain sequence Y 1D . Only the first half of the frequency domain sequence is retained, because the result obtained by the FFT is symmetric about its midpoint. The dimension of the original series is T × C, where T is the length of the time series and C is the number of channels.
•
Calculate the average amplitude Amp Y 1D of the frequency domain sequence Y 1D over all channels. The first amplitude is set to 0 considering the characteristics of the FFT.
•
Assuming that the raw time series X 1D has k types of periods, each with a different length, record the Top-(k + 1) amplitudes Amp f 1 , . . ., Amp f k+1 and the corresponding positions p f 1 , . . ., p f k+1 . Some modifications are made here, which are summarized in subsequent experiments.
The position is considered as the period length, and the number of periods in a signal is N i = Ceil(T / p f i ), where Ceil is the rounding-up operation, ensuring that all sampling points are counted.
•
Normalize the amplitudes Amp f 2 , . . ., Amp f k+1 using the softmax function to obtain weights {ω 1 , . . ., ω k } that represent the importance of each period. In summary, multiple periods of the series are discovered based on the frequency domain characteristics, and different periods are assigned corresponding weights.
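A minimal NumPy sketch of this period search is given below. It illustrates the steps above rather than reproducing the authors' implementation; in particular, it converts each dominant frequency index into a period length as T divided by that index, which is one common convention.

```python
# Illustrative FFT-based period discovery (assumed details, not the authors' code).
import numpy as np

def find_periods(x, k=3):
    """x: array of shape (T, C); returns k period lengths and their softmax weights."""
    T = x.shape[0]
    spec = np.fft.rfft(x, axis=0)                 # keep the non-redundant half
    amp = np.abs(spec).mean(axis=-1)              # average amplitude over channels
    amp[0] = 0.0                                  # ignore the DC component
    top = np.argsort(amp)[-k:][::-1]              # indices of the k largest amplitudes
    periods = np.maximum(1, T // top)             # frequency index -> period length
    w = np.exp(amp[top] - amp[top].max())         # numerically stable softmax
    return periods, w / w.sum()
```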
TimesNet and Timesblock
TimesNet is a neural network that accepts general time series. First, a 1D convolution layer is used as token embedding to adjust the number of channels, and the position embedding method from the Transformer [33] is adopted to record the sequential information of time points. The results of the embedding are passed through a dropout layer to prevent overfitting. Next, periodic features of each channel are extracted through several stacked TimesBlocks, which are shown in Figure 2b. The output of each TimesBlock is normalized to accelerate convergence. After activation through the GELU activation function, the obtained multichannel feature matrix is expanded into a long feature vector. Finally, the vector is compressed through one linear layer, and the softmax function is used to calculate the probability of belonging to a certain category.
TimesBlock is the backbone and the most critical component of TimesNet, which can extract spatiotemporal 2D features of intraperiod and interperiod variations after 2D modeling. The feature extraction process of TimesBlock is as follows:
•
Based on the temporal 2D-variation modeling mentioned in Section 2.1, k types of periods and weights of the input signal can be calculated.
Transform the raw time series into a set of 2D tensors {X i 2D ∈ R L i ×N i ×C }, where L i is the i-th period length, indicating that each column contains the time points within one period; N i is the number of the i-th period, representing that each row contains the time points at the same phase among different periods. In addition, T/p f i is often not an integer, which means that the number of sampling points in the last period is less than the period length. So, to obtain a complete 2D tensor, it is necessary to pad zeros for most signal sequences before reshaping.
•
Input these tensors into two inception blocks in series that contain multiple-scale convolutional kernels to extract feature maps of intraperiod and interperiod variations.
•
Reshape the extracted feature maps back into 1D feature vectors, removing the previously filled tails.
•
Calculate the weighted average feature vectors for all periods, with weights derived from the algorithm in the first step.
•
The final feature vector, i.e., the output of one Timesblock, is obtained by adding the weighted average feature vector from the previous step as a residual to the original series.
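The fold, convolve and unfold procedure described above can be illustrated with the following NumPy sketch. The 2D inception convolutions applied inside a real TimesBlock are represented only by a placeholder argument, and the remaining details are assumptions rather than the authors' code.

```python
# Illustrative TimesBlock-style reshape and weighted residual aggregation.
import numpy as np

def fold_to_2d(x, period):
    """x: (T, C) series -> (n_periods, period, C) tensor, zero-padded at the end."""
    T, C = x.shape
    n = int(np.ceil(T / period))
    padded = np.zeros((n * period, C), dtype=x.dtype)
    padded[:T] = x
    return padded.reshape(n, period, C)

def timesblock_aggregate(x, periods, weights, conv2d=lambda t: t):
    """Weighted residual aggregation over the discovered periods."""
    T, C = x.shape
    outputs = []
    for p in periods:
        folded = fold_to_2d(x, int(p))            # 1D -> 2D
        feat = conv2d(folded)                     # placeholder for 2D inception convolutions
        outputs.append(feat.reshape(-1, C)[:T])   # 2D -> 1D, drop the padded tail
    agg = sum(w * o for w, o in zip(weights, outputs))
    return x + agg                                # residual connection to the input series
```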
Data Source
There are two publicly available underwater sound databases to choose from: ShipsEar [34] and Deepship [35]. In past research, these two databases have been widely used, and their reliability for classification has been demonstrated. The ship radiated noise in ShipsEar is divided into four major categories based on the size of the ship. Deepship contains four classes of ships (Cargo, Passenger, Tanker and Tug), each with a longer duration. Compared to ShipsEar, the records in Deepship have complex background components, which results in a low signal-to-noise ratio (SNR) and makes it possible to verify the effectiveness of classification methods more clearly.
Dataset Manufacture
Regarding the manufacture of training and testing samples, research on classification based on underwater acoustic time-domain signals (T) usually divides the entire audio into a certain number of frames, each frame containing approximately thousands of sampling points (less than 1 s in duration), whereas features are often extracted from longer segments (approximately a few seconds) in research based on time-frequency representations (T-F). The basis for the first is that, compared to the background, target features are relatively stable within each frame and can be captured by deep learning models; this also yields a larger number of samples and reduces memory usage during training. The second is subjectively more reasonable, just as sonar operators need a period of listening to complete recognition tasks. Here, we consider both methods and attempt to compare them.
Because most of the characteristics of ship-radiated noise are concentrated in low-frequency components, the audio is down-sampled to 8000 Hz and then divided into frames and segments. There are 4000 sample points in each frame and no overlap between frames. Meanwhile, each segment has a duration of four seconds, and a short-time Fourier transform (STFT) is performed to obtain the corresponding time-frequency feature maps, whose size is 256 × 256. Tails that were not long enough were removed, and the results are shown in Table 1. Figure 3 shows the waveform and T-F representation of each category. It can be observed that there are significant differences in the amplitude, amplitude fluctuation, and local extreme frequencies of the waveforms. In addition, the low-frequency band components of ship-radiated noise vary greatly among different categories. These differences reflect the separability between categories.
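A preprocessing sketch along these lines is shown below. It is illustrative only: the text reports the sampling rate, the frame and segment lengths and the final 256 × 256 map size, so the STFT window, hop length and crop used here are assumptions.

```python
# Illustrative preprocessing: 8 kHz resampling, 4000-sample frames and
# 4 s segments converted to log-magnitude spectrogram maps (assumed STFT settings).
import numpy as np
from scipy import signal

TARGET_FS = 8000

def make_frames(audio, fs, frame_len=4000):
    x = signal.resample_poly(audio, TARGET_FS, fs)       # down-sample to 8 kHz
    n = len(x) // frame_len                               # drop the short tail
    return x[:n * frame_len].reshape(n, frame_len)

def make_tf_maps(audio, fs, seg_sec=4, nperseg=510, hop=125):
    x = signal.resample_poly(audio, TARGET_FS, fs)
    seg_len = seg_sec * TARGET_FS
    maps = []
    for start in range(0, len(x) - seg_len + 1, seg_len):
        _, _, Z = signal.stft(x[start:start + seg_len], fs=TARGET_FS,
                              nperseg=nperseg, noverlap=nperseg - hop)
        mag = np.log1p(np.abs(Z))                         # log-magnitude spectrogram
        maps.append(mag[:256, :256])                      # crop to 256 x 256 (assumed)
    return np.stack(maps) if maps else np.empty((0, 256, 256))
```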
Protocol
The entire experiment was divided into two parts. Experiments comparing the performance of models based on time-domain signals with that of models based on time-frequency representations were conducted first. The other main purpose of the experiment was to validate the recent 2D modeling method. TimesNet was originally designed for typical time-series analysis tasks; compared with general time-series classification tasks, ship-radiated noise samples have a higher sampling density, contain more sampling points, and suffer from significant background interference, which makes it difficult for the original TimesNet to be effective on its own. Therefore, we added Timesblocks to established high-performing model architectures in an attempt to obtain stronger performance.
Several classical deep-learning architectures with excellent performance, which are state-of-the-art and have been validated in the field of underwater acoustics, were selected to build recognition models, including ResNet [36], SE ResNet [37], CamResNet [15], DenseNet [13], and MSRDN [11]. Each model is described briefly below.
• ResNet is a well-known neural network that mitigates the degradation problem and greatly eases the training of very deep networks by adding shortcut connections. Activation defaults to ReLU.
• SE ResNet adds the squeeze-and-excitation (SE) module [37] to the basic residual block; the module mainly consists of two linear layers that compute per-channel weights, introducing a channel attention mechanism. Activation defaults to ReLU.
• CamResNet uses 1D convolutions instead of linear layers to adapt the SE module, and extends the attention mechanism by adding a spatial attention module as an independent branch alongside the SE module to synthesize signal characteristics across all channels. Activation defaults to ReLU.
• DenseNet is a variation of ResNet that converts the skip connection from addition to concatenation, and performs well on certain datasets. Every block in DenseNet contains three layers and uses ELU for activation.
• MSRDN is composed of stacked multi-scale residual units that contain four parallel convolutional layers with different kernel sizes to generate and combine feature maps at multiple resolutions. A soft-threshold learning module is added on top of the units to generate a threshold for every channel by nonlinear transformation, enhancing the effective channel components. The model uses SiLU for activation.
• The backbone of each model is a stack of several convolutional blocks: the bottom layer is a convolutional block that expands the channel count from 1 to the specified number, and the top layers include an average-pooling operation, which significantly reduces the number of parameters in the linear layer and effectively prevents overfitting. One linear layer is placed at the end to output the class probabilities (a minimal sketch of this layout is given after this list).
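As a rough PyTorch sketch of this shared layout (the kernel sizes, channel width, and stand-in residual block below are our own illustrative choices, not the exact configurations of the published models):

```python
import torch
import torch.nn as nn


class ResBlock1D(nn.Module):
    """Stand-in for the ResNet/SE/CAM/MSRDN unit: two 1D convolutions plus a shortcut."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(c, c, 3, padding=1), nn.BatchNorm1d(c), nn.ReLU(),
            nn.Conv1d(c, c, 3, padding=1), nn.BatchNorm1d(c),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class Backbone1D(nn.Module):
    """Shared layout: a stem conv lifts the channels from 1, a stack of blocks follows,
    global average pooling shrinks the feature vector, and one linear layer gives logits."""
    def __init__(self, channels=64, num_blocks=4, num_classes=4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, channels, 7, padding=3),
                                  nn.BatchNorm1d(channels), nn.ReLU())
        self.blocks = nn.Sequential(*[ResBlock1D(channels) for _ in range(num_blocks)])
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):                            # x: (batch, 1, 4000)
        x = self.blocks(self.stem(x))
        return self.head(self.pool(x).flatten(1))    # probabilities via softmax in the loss


logits = Backbone1D()(torch.randn(8, 1, 4000))       # (8, 4)
```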
In addition, these models were not copied exactly; they were adjusted during the experimental process because different datasets and data-processing methods were used.
The evaluation indicators follow convention, using accuracy and the confusion matrix. Accuracy reflects the proportion of the overall dataset that the model recognizes correctly, while the confusion matrix focuses on the class level, containing the precision and recall of each class. Because the number of samples in each class of the dataset is relatively balanced, the F1 score was not given much attention. In addition, the parameter quantity and inference time of the models were recorded as auxiliary evaluation indicators.
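For reference, these indicators can be computed from test-set predictions with scikit-learn along the following lines; the label arrays here are only a toy illustration, not data from the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])         # toy ground-truth class ids
y_pred = np.array([0, 1, 2, 2, 0, 1, 3, 3])         # toy model predictions

acc = accuracy_score(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred)                # rows: true class, columns: predicted class
recall = cm.diagonal() / cm.sum(axis=1)              # per-class recall
precision = cm.diagonal() / cm.sum(axis=0)           # per-class precision
```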
Data processing uses two Python packages: librosa, to read the audio files (.wav) and perform the STFT, and scikit-learn, to perform standardization. Training and testing were conducted on a regular rack server with an Nvidia GeForce RTX 3090 GPU (24 GB), and the network models were implemented with the open-source ML framework PyTorch 2.0.1 under Linux. The arrangement of model training and testing is as follows: data type = {T or T-F}, batch size = 64, optimizer = Adam, initial learning rate = 0.001, epochs = 50, training:testing = 4:1. The models were trained from scratch, and the network weights were updated after each batch. In addition, the learning rate is adaptively adjusted according to lr_{i+1} = lr_i × 0.5^{epoch−1}, where the initial learning rate lr_0 is set to 0.001.
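The training arrangement can be summarized in the skeleton below. The dataset object and model are hypothetical stand-ins, and since the extracted form of the learning-rate rule is ambiguous, the multiplicative decay used here is only one possible reading of it.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the real frame dataset and backbone model.
dataset = TensorDataset(torch.randn(640, 1, 4000), torch.randint(0, 4, (640,)))
loader = DataLoader(dataset, batch_size=64, shuffle=True)
model = torch.nn.Sequential(torch.nn.Conv1d(1, 8, 7, padding=3),
                            torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
                            torch.nn.Linear(8, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
# One reading of lr_{i+1} = lr_i * 0.5^(epoch-1): multiply the rate once per epoch.
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer, lr_lambda=lambda epoch: 0.5 ** max(epoch - 1, 0))

for epoch in range(50):
    for xb, yb in loader:                            # weights updated after every batch
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()                                 # adjust the learning rate per epoch
```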
From the overall experimental results, the T-F models achieved higher accuracy. Since both types of models share the same structure, this can be attributed to the fact that T-F representations highlight the differences between categories more effectively, resulting in better separability. However, the DNNs can also make reliable judgments from short-frame signals, and are not far behind. Moreover, the T models have the advantages of a smaller memory footprint and faster response time, with almost no need for preprocessing units, which makes them more suitable for deployment on edge detection devices with limited storage space. During the experiments, it was also found that, as the number of channels increased, the parameter quantity of the T-F models using 2D convolution gradually widened the gap with the T models, as shown in Table 3, and these additional floating-point operations also affected inference speed. As for the individual models, the SE module is quite powerful, as it improves the accuracy of ResNet without significantly affecting the other indicators. CamResNet did not outperform ResNet, which may be due to the lack of robust spatial characteristics in underwater acoustic signals, making it more difficult for the model to converge; the convolutions in its attention module have a negative effect and significantly increase the floating-point operations. It was also found that the parallel structures in MSRDN can cope with overfitting and offer the possibility of adapting to more complex structures. The unique connection mechanism in DenseNet causes the number of channels to expand rapidly, and continuous pooling causes the feature vectors (or maps) to shrink quickly, making the network difficult to deepen and resulting in poor performance.
Table 2. Hyperparameters of most models. The multi-scale convolution kernel size of the 1D MSRDN follows the setting in Ref. [11], and the 2D version adopts the settings of the second group in this table. The number of blocks was set to 3 in DenseNet.
Hyper-Parameters Considered Values Best Values
Kernel Size 1D k
Part II
The Timesblock is added to the original blocks of SE ResNet and MSRDN, respectively, to replace the 1D convolutions for further improvement, and the new structures are shown in Figure 6. The Timesblock has one unique hyperparameter, the Top-k value k. A set of controlled experiments was conducted to determine the optimal value of k for this task. The experimental configuration is as follows: k = {1, 2, 3, or 4}, Inception channels = {128 → 64 → 128}, kernel number = {1 or 2}, and kernel size = {1, 3}; a sketch of this convolutional core is given below.
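One possible form of the 2D convolutional core inside the Timesblock, using the kernel sizes and channel path listed above, is sketched here. Averaging the parallel branches follows the original TimesNet inception block; the activation between the two blocks is an assumption.

```python
import torch
import torch.nn as nn


class Inception2D(nn.Module):
    """Parallel 2D convolutions with kernel sizes 1 and 3 whose outputs are averaged."""
    def __init__(self, c_in, c_out, kernel_sizes=(1, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, k, padding=k // 2) for k in kernel_sizes)

    def forward(self, x):                                    # x: (B, C, L_i, N_i)
        return torch.stack([b(x) for b in self.branches]).mean(dim=0)


# Two blocks in series realize the 128 -> 64 -> 128 channel path.
timesblock_core = nn.Sequential(Inception2D(128, 64), nn.GELU(), Inception2D(64, 128))
out = timesblock_core(torch.randn(2, 128, 50, 80))           # grid from one detected period
```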
During the experiment, dropout was added at the output of the first convolutional block to deal with overfitting. The results are shown in Table 4, and it can be seen that the Timesblock does indeed play a role. To our surprise, introducing Timesblocks can reduce the parameter quantity of the model. Since k is independent of the network layers, its choice only affects inference speed, and a larger k is not recommended as it does not benefit testing. To verify whether the Timesblock captures the multi-periodicity of ship-radiated noise, heat maps of the feature matrices extracted from the last Timesblock were drawn. Figure 7 shows some cases, from which it can be seen that the transformed 2D tensors are informative and clearly distinguishable.
Part III
Specific to each category, some confusion matrices are shown in Figure 8, and the detailed data of the confusion matrix obtained by SE ResNet with 2D modeling are shown in Table 5. It can be seen that Passenger is easy to identify, with the highest recall and precision, while Cargo and Tanker are prone to confusion. After manually listening to the audio, we speculate that the cause of this confusion may be that such ships do not have relatively consistent rotational frequencies and axial frequencies. In addition, considering that the SNR of the audio in Deepship is relatively low, some frames may be submerged by interference, which affects the training of the model. To examine model performance and present the experimental results more intuitively, the t-distributed Stochastic Neighbor Embedding (t-SNE) [38] method was used to visualize the target features extracted by the model. As shown in Figure 9, the features of different categories are basically separated, and the main intersections are concentrated at the boundaries between Cargo and Tanker, which is consistent with what the confusion matrix reflects.
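The feature-space picture in Figure 9 can be reproduced along the following lines; the feature and label arrays are assumed to have been dumped from the layer before the classifier head, and the file names are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.load("test_features.npy")        # (n_samples, d) embeddings, hypothetical dump
labels = np.load("test_labels.npy")            # integer class ids

emb = TSNE(n_components=2, init="pca", perplexity=30, random_state=0).fit_transform(features)
for cls, name in enumerate(["Cargo", "Passenger", "Tanker", "Tug"]):
    pts = emb[labels == cls]
    plt.scatter(pts[:, 0], pts[:, 1], s=4, label=name)
plt.legend()
plt.show()
```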
Conclusions
The impact of the two types of inputs (time-domain waveform or time-frequency representation) on underwater acoustic target recognition models has been analyzed. Models based on time-frequency representations can achieve higher accuracy (by 1–2%), while requiring more network parameters and floating-point operations, which becomes more evident as the networks grow more complex; models based on time-domain signals have faster inference and smaller model files, making them suitable for deployment on edge devices with limited computing power and memory. SE ResNet and MSRDN are two high-performance model structures. This paper introduces a recent temporal 2D modeling method from TimesNet into these structures. The method finds multiple periods of the 1D time-domain signal from its frequency-domain characteristics and, on this basis, transforms the signal X_{1D} ∈ R^{T×C} into a set of 2D tensors X_i^{2D} ∈ R^{L_i×N_i×C}, i ∈ {1, …, k}. Then, 2D convolution blocks can capture features of the intraperiod and interperiod variations of the signal. By adding Timesblocks, the recognition rates of the models in real underwater acoustic experiments can be improved (by 0.7–0.9%).
By analyzing the samples in Deepship through the confusion matrices, we find that identifying Cargo is difficult, while identifying Tug is easier, which is also demonstrated by the t-SNE visualization of the feature vectors of the test samples of each category.
Author Contributions: J.T. played a leading role in defining the core ideas and hypotheses, and completed the conceptualization together with W.G. and J.M. W.G. developed the methodology, analyzed the theoretical feasibility, and designed the research framework and approach with J.T. and X.S. E.M., J.M. and X.S. conducted the investigation in related fields. Data processing and platform construction for the validation experiment were the responsibility of E.M. J.T. and J.M. provided supervision and guidance throughout the process. W.G. drafted the manuscript, and J.T., J.M. and X.S. reviewed and edited it. Visualization was performed by W.G. and E.M. Project administration was
Figure 1. Various UATR pipelines: (a) is based on machine learning (ML), (b) is based on a DNN adopting the pattern-recognition mode, and (c) is also based on deep learning but adopts an end-to-end pattern.
1D convolutional layers and recurrent neural network (RNN) layers are two kinds of layers suitable for processing time series. Inspired by residual networks (ResNets), Doan et al. introduced and designed a 1D dense convolutional neural network that replaces the addition operation in the original skip-connection technique with concatenation [13]. Tian et al. proposed a multi-scale residual unit to generate feature maps with multiple resolutions and avoid the inadequacy of small convolutional kernels [11]. Xue et al. added a channel attention mechanism to their model to extract more useful feature information [15]. Yang Jirui et al. improved the channel attention mechanism to better adapt to the characteristics of underwater acoustic signals [17]. However, both 1D convolutions and RNNs can only model changes between adjacent time points, and thus fail to discover long-term dependencies in sequences. Unlike the above works, Kamal et al. used a set of
Figure 3. Waveform (above, frames with durations of 0.1 s) and T-F representation (below, segments with durations of 4 s) of the four ship categories. From left to right: Cargo, Tanker, Tug, and Passenger. The sampling points were standardized.
4. Results and Discussion
Part I
The models mentioned in Section 3.3 were trained. Each model was fully trained; training mostly lasted 17 to 22 epochs before reaching the highest accuracy, as shown in Figure 4. During training, by adding and modifying network layers one by one, the performance gains from the improvements were gradually reflected. The final structures are shown in Figure 5. Meanwhile, some hyper-parameters of the models were adjusted to achieve the best performance; they are listed in Table 2. The storage size of the parameter file is used to represent the parameter quantity of a model, and the inference time is represented by the average training time for completing one batch. The values of the evaluation indicators are shown in Table 3.
Figure 4. Example of accuracy and loss variation curves. Each blue point represents the value for one epoch; it can be seen that the convergence of the model was satisfactory.
Figure 5. Block structures of each model used in the experiment. CAM represents the channel attention mechanism, and SAM represents the spatial attention mechanism.
Figure 6. Blocks with Timesblocks: (a) SE ResNet and (b) MSRDN. k = {1, 2, 3, or 4}, Inception channels C = {128 → 64 → 128}, kernel number n_k = {1 or 2}, and kernel size m_k = {1, 3}.
Figure 8. Confusion matrices. The darker the green, the higher the probability. The first to fourth rows are Cargo, Tanker, Tug, and Passenger, respectively, and from left to right are T-F-MSRDN, T-F-SE ResNet, T-SE ResNet with Timesblocks, and T-MSRDN with Timesblocks.
Table 3. Experimental results for two types of inputs.
Table 4. Experimental results for the Timesblocks.
| 9,013.2 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |